text stringlengths 8.19k–1.23M | summary stringlengths 342–12.7k |
---|---|
You are an expert at summarizing long articles. Proceed to summarize the following text:
Interior lacks adequate assurance that it is receiving the full royalties it is owed because (1) neither BLM nor OMM is fully inspecting leases and meters as required by law and agency policies, and (2) MMS lacks adequate management systems and sufficient internal controls for verifying that royalty payment data are accurate and complete. With regard to inspecting oil and gas production, BLM is charged with inspecting approximately 20,000 producing onshore leases annually to ensure that oil and gas volumes are accurately measured. However, BLM’s state Inspection and Enforcement Coordinators from Colorado, Montana, New Mexico, Utah, and Wyoming told us that only 8 of the 23 field offices in the 5 states completed both their (1) required annual inspections of wells and leases that are high-producing and those that have a history of violations and (2) inspections every third year on all remaining leases. According to the BLM state Inspection and Enforcement Coordinators, the number of completed production inspections varied greatly by field office. For example, while BLM inspectors were able to complete all of the production inspections in the Kemmerer, Wyoming, field office, inspectors in the Glenwood Springs, Colorado, field office were able to complete only about one-quarter of the required inspections. Officials in 3 of the 5 field offices in which we held detailed discussions with inspection staff told us that they had not been able to complete the production inspections because of competing priorities, including their focus on completing a growing number of drilling inspections for new oil and gas wells, and high inspection staff turnover. However, BLM officials from all 5 field offices told us that when they have conducted production inspections they have identified a number of violations. 
For example, BLM staff in 4 of the 5 field offices identified errors in the amounts of oil and gas production volumes reported by operators to MMS by comparing production reports with third-party source documents. Additionally, BLM staff from 1 field office we visited showed us a bypass built around a gas meter, allowing gas to flow around the meter without being measured. BLM staff ordered the company to remove the bypass. Staff from another field office told us of a case in which individuals illegally tapped into a gas line and routed gas to private residences. Finally, in one of the field offices we visited, BLM officials told us of an instance in which a company maintained two sets of conflicting production data—one used by the company and another reported to MMS. Moreover, OMM, which is responsible for inspecting offshore production facilities that include oil and gas meters, did not inspect all oil and gas royalty meters, as required by its policy, in 2007. For example, OMM officials responsible for meter inspections in the Gulf of Mexico told us that they completed about half of the required 2,700 inspections, but that they met OMM’s goal for witnessing oil and gas meter calibrations. OMM officials told us that one reason they were unable to complete all the meter inspections was their focus on the remaining cleanup work from hurricanes Katrina and Rita. Meter inspections are an important aspect of the offshore production verification process because, according to officials, one of the most common violations identified during inspections is missing or broken meter seals. Meter seals are meant to prevent tampering with measurement equipment. When seals are missing or broken, it is not possible without closer inspection to determine whether the meter is correctly measuring oil or gas production. 
With regard to MMS’s assurance that royalty data are being accurately reported by companies, MMS’s systems and processes for collecting and verifying these data lack both capabilities and key internal controls, including those focused on data accuracy, integrity, and completeness. For example, MMS lacks an automated process to routinely and systematically reconcile all production data filed by payors (those responsible for paying the royalties) with production data filed by operators (those responsible for reporting production volumes). MMS officials told us that before they transitioned to the current financial management system in 2001, their system included an automated process that reconciled the production and royalty data on all transactions within approximately 6 months of the initial entry date. However, MMS’s new system does not have that capability. As a result, such comparisons are not performed on all properties. Comparisons are made, if at all, 3 years or more after the initial entry date by the MMS compliance group for those properties selected for a compliance review or audit. In addition, MMS lacks a process to routinely and systematically reconcile all production data included by payors on their royalty reports or by operators on their production reports with production data available from third-party sources. OMM does compare a large part of the offshore operator-reported production data with third-party data from pipeline operators through both its oil and gas verification programs, but BLM compares only a relatively small percentage of reported onshore oil and gas production data with third-party pipeline data. When BLM and OMM do make comparisons and find discrepancies, they forward the information to MMS, which then takes steps to reconcile and correct these discrepancies by talking to operators. 
However, even when discrepancies are corrected and the operator-reported data and pipeline data have been reconciled, these newly reconciled data are not automatically and systematically compared with the reported sales volume in the royalty report, previously entered into the financial management database, to ensure the accuracy of the royalty payment. Such comparisons occur only if a royalty payor’s property has been selected for an audit or compliance review. Furthermore, MMS’s financial management system lacks internal controls over the integrity and accuracy of production and royalty-in-value data entered by companies. Companies may legally make changes to both royalty and production data in MMS’s financial management system for up to 6 years after the reporting month, and these changes may necessitate changes in the royalty payment. However, when companies retroactively change the data they previously entered, these changes do not require prior approval by, or notification of, MMS. As a result of the companies’ ability to unilaterally make these retroactive changes, the production data and required royalty payments can change over time, further complicating efforts by agency officials to reconcile production data and ensure that the proper amount of royalties was paid. Compounding this data reliability concern, changes made to the data do not necessarily trigger a review to determine their reasonableness or whether additional royalties are due. According to agency officials, these changes are not subject to review at the time a change is made and would be evaluated only if selected for an audit or compliance review. This is also problematic because companies may change production and royalty data after an audit or compliance review has been done, making it unclear whether these audited royalty payments remain accurate after they have been reviewed. 
Further, MMS officials recently examined data from September 2002 through July 2007 and identified over 81,000 adjustments made to data outside the allowable 6-year time frame. MMS is working to modify the system to automatically identify adjustments that have been made to data outside of the allowable 6-year time frame, but this effort does not address the need to identify adjustments made within the allowable time that might necessitate further adjustments to production data and royalty payments due. Finally, MMS’s financial management system could not reliably detect when production data reports were missing until late 2004, and the system continues to lack the ability to automatically detect missing royalty reports. In 2004, MMS modified its financial management system to automatically detect missing production reports. As a result, MMS has identified a backlog of approximately 300,000 missing production reports that must be investigated and resolved. It is important that MMS have a complete set of accurate production reports so that BLM can prioritize production inspections, and its compliance group can easily reconcile royalty payments with production information. Importantly, MMS’s financial management system continues to lack the ability to automatically detect cases in which an expected royalty report has not been filed. While not filing a royalty report may be justifiable under certain circumstances, such as when a company sells its lease, MMS’s inability to detect missing royalty reports presents the risk that MMS will not identify instances in which it is owed royalties that are simply not being paid. Officials told us they are currently able to identify missing royalty reports in instances when they have no royalty report to match with funds deposited to Treasury. However, cases in which a company stops filing royalty reports and stops paying royalties would not be detected unless the payor or lease was selected for an audit or compliance review. 
MMS’s increasing use of compliance reviews, which are more limited in scope than audits, has led to an inconsistent use of third-party data to verify that self-reported royalty data are correct, thereby placing accurate royalty collections at risk. Since 2001, MMS has increasingly used compliance reviews to achieve its performance goals of completing compliance activities—either full audits or compliance reviews—on a predetermined percentage of royalty payments. According to MMS, compliance reviews can be conducted much more quickly and require fewer resources than audits, largely because they represent a quicker, more limited reasonableness check of the accuracy and completeness of a company’s self-reported data, and do not include a systematic examination of underlying source documentation. Audits, on the other hand, are more time- and resource-intensive, and they include the review of original source documents, such as sales revenue data, transportation and gas processing costs, and production volumes, to verify whether company-reported data are accurate and complete. When third-party data are readily available from OMM, MMS may use them when conducting a compliance review. For example, MMS may use available third-party data on oil and gas production volumes collected by OMM in its compliance reviews for offshore properties. In contrast, because BLM collects only a limited amount of third-party data for onshore production, and MMS does not request these data from the companies, MMS does not systematically use third-party data when conducting onshore compliance reviews. Despite conducting thousands of compliance reviews since 2001, MMS has only recently evaluated their effectiveness. For calendar year 2002, MMS compared the results of 100 of about 700 compliance reviews of offshore leases and companies with the results of audits conducted on those same leases or companies. 
However, while the compliance reviews covered, among other things, 12 months of production volumes on all products—oil, gas, and retrograde, a liquid product that condenses out of gas under certain conditions—the audits covered only 1 month and one product. As a result of this evaluation comparing the results of compliance reviews with those of audits, MMS now plans to improve its compliance review process by, for example, ensuring that it includes a step to check that royalties are paid on all royalty-bearing products, including retrograde. To achieve its annual performance goals, MMS began using the compliance reviews along with audits. One of MMS’s performance goals is to complete compliance activities—either audits or compliance reviews—on a specified percentage of royalty payments within 3 years of the initial royalty payment. For example, in 2006 MMS reported that it had achieved this goal by confirming reasonable compliance on 72.5 percent of all calendar year 2003 royalties. To help meet this goal, MMS continues to rely heavily on compliance reviews, yet it is unable to state the extent to which this performance goal is accomplished through audits as opposed to compliance reviews. As a result, MMS does not have information available to determine the percentage of the goal that was achieved using third-party data and the percentage that did not systematically rely on third-party data. Moreover, to help meet its performance goal, MMS has historically conducted compliance reviews or audits on leases and companies that have generated the most royalties, with the result that the same leases and companies are reviewed year after year. Accordingly, many leases and companies have gone for years without ever having been reviewed or audited. In 2006, Interior’s Inspector General (IG) reviewed MMS’s compliance process and made a number of recommendations aimed at strengthening it. 
The IG recommended, among other things, that MMS examine 1 month of third-party source documentation as part of each compliance review to provide greater assurance that both the production and allowance data are accurate. The IG also recommended that MMS track the percentage of the annual performance goal that was accomplished through audits versus through compliance reviews, and that MMS move toward a risk-based compliance program and away from reviewing or auditing the same leases and companies each year. To address the IG’s recommendations, MMS has recently revised its compliance review guidance to include suggested steps for reviewing third-party source production data when available for both offshore and onshore oil and gas, though the guidance falls short of making these steps a requirement. MMS has also agreed to start tracking compliance activity data in 2007 that will allow it to report the percentage of the performance goal that was achieved through audits versus through compliance reviews. Finally, MMS has initiated a risk-based compliance pilot project, whereby leases and companies are selected for compliance work according to MMS-defined risk criteria that include factors other than whether the leases or companies generate high royalty payments. According to MMS, during fiscal year 2008 it will further evaluate and refine the pilot as it moves toward fuller implementation. Separately, representatives from the states and tribes who are responsible for conducting compliance work under agreements with MMS have expressed concerns about the quality of self-reported production and royalty data they use in their reviews. As part of our work, we sent questionnaires to all 11 states and seven tribes that conducted compliance work for MMS in fiscal year 2007. Of the nine state and five tribal representatives who responded, seven reported that they lack confidence in the accuracy of the royalty data. 
For example, several representatives reported that because of concerns with MMS’s production and royalty data, they routinely look to other sources of corroborating data, such as production data from state oil and gas agencies and tax agencies. Finally, several respondents noted that companies frequently report production volumes to the wrong leases and that they must then devote their limited resources to correcting these reporting problems before beginning their compliance reviews and audits. Because MMS’s royalty-in-kind program does not extend the same production verification processes used by its oil program to its gas program, it does not have adequate assurance that it is collecting the gas royalties it is owed. As noted, under the royalty-in-kind program, MMS collects royalties in the form of oil and gas and then sells these commodities in competitive sales. To ensure that the government obtains the fair value of these sales, MMS must make sure that it receives the volumes to which it is entitled. Because prices of these commodities fluctuate over time, it is also important that MMS receive the oil and gas at the time it is entitled to them. As part of its royalty-in-kind oversight effort, MMS identifies imbalances between the volume operators owe the federal government in royalties and the volume delivered and resolves these imbalances by adjusting future delivery requirements or cash payments. The methods that MMS uses to identify these imbalances differ for oil and gas. For oil, MMS obtains pipeline meter data from OMM’s liquid verification system, which records oil volumes flowing through numerous metering points in the Gulf of Mexico region. MMS calculates its royalty share of oil by multiplying the total production volumes provided in these pipeline statements by the royalty rates for a given lease. MMS compares this calculation with the volume of royalty oil that the operators delivered as reported by pipeline operators. 
When the value of an imbalance cumulatively reaches $100,000, MMS conducts further research to resolve the discrepancy. Using pipeline statements to verify production volumes is a good check on companies’ self-reporting of royalties due the federal government: companies have an incentive not to underreport the oil they put into the pipeline, since that is the amount they will have to sell at the other end. For gas, MMS relies on information contained in two operator-provided documents—monthly imbalance statements and production reports. Imbalance statements include the operator’s total gas production for the month, the share of that production that the government is entitled to, and any differences between what the operator delivered and the government’s royalty share. Production reports contain a large number of data elements, including production volumes for each gas well. MMS compares the production volumes contained in the imbalance statements with those in the production reports to verify production levels. MMS then calculates its royalty share based on these production figures and compares its royalty share with gas volumes the operators delivered as reported by pipeline operators. When the value of an imbalance cumulatively reaches $100,000, MMS conducts further research to resolve the discrepancy. MMS’s ability to detect gas imbalances is weaker than for oil because it does not use third-party metering data to verify the operator-reported production numbers. Since 2004, OMM has collected data from gas pipeline companies through its gas verification system, which is similar to its liquid verification system in that the system records information from pipeline company-provided source documents. Our review of data from this program shows that these data could be a useful tool in verifying offshore gas production volumes. 
Specifically, our analysis of these pipeline data showed that for the months of January 2004, May 2005, July 2005, and June 2006, 25 percent of the pipeline metering points had an outstanding discrepancy between self-reported and pipeline data. These discrepancies are both positive and negative—that is, production volumes submitted to MMS by operators are at times either under- or overreported. Data from the gas verification system could be useful in validating production volumes and reducing discrepancies. However, to fully benefit from this opportunity, MMS needs to improve the timeliness and reliability of these data. After examining this issue, in December 2007, the Subcommittee on Royalty Management, a panel appointed by the Secretary of the Interior to examine MMS’s royalty program, reported that OMM is not adequately staffed to conduct sufficient review of data from the gas verification system. We have not yet, nor has MMS, determined the net impact of these discrepancies on royalties owed the federal government. The methods and underlying assumptions MMS uses to compare the revenues it collects in kind with what it would have collected in cash do not account for all costs and do not sufficiently deal with uncertainties, raising doubts about the claimed financial benefits of the royalty-in-kind program. Specifically, MMS’s calculation showing that MMS sold the royalty oil and gas for $74 million more than MMS would have received in cash payments did not appropriately account for uncertainty in estimates of cash payments. In addition, MMS’s calculation that early royalty-in-kind payments yielded $5 million in interest was based on assumptions about payment dates and interest rates that could misstate the estimated interest benefit. Finally, MMS’s calculation that the royalty-in-kind program cost about $8 million less to administer than an in-value program did not include significant costs that, if included, could change MMS’s conclusions. 
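The uncertainty concern about the $74 million figure can be made concrete with the testimony's own numbers: royalty-in-kind sales of $8.15 billion against estimated cash payments of roughly $8.08 billion. A 1 percent error in the cash estimate is larger than the reported net benefit itself:

```python
# Reproducing the sensitivity arithmetic with figures from the testimony:
# royalty-in-kind sales of $8.15 billion, a reported net benefit of $74
# million, and a 1 percent error band on the estimated cash payments.

sales = 8.15e9                       # RIK sales, fiscal years 2004-2006
reported_benefit = 74e6              # sales minus estimated cash payments
estimated_cash = sales - reported_benefit

error = 0.01 * estimated_cash        # a 1 percent estimation error
low, high = reported_benefit - error, reported_benefit + error

# Roughly a $6.8M loss to a $154.8M gain, in line with the testimony's
# "loss of $6 million to a benefit of $155 million" range.
print(f"benefit range: ${low / 1e6:,.1f}M to ${high / 1e6:,.1f}M")
```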
MMS sold the oil and gas it collected during the 3 fiscal years 2004 through 2006 for $8.15 billion and calculated that this amount exceeded what MMS would have received in cash royalties by about $74 million—a net benefit of approximately 0.9 percent. MMS has recognized that its estimates of what it would have received in cash payments are subject to some degree of error but has not appropriately evaluated or reported how sensitive the net benefit calculations are to this error. This is important because even a 1 percent error in the estimates of cash payments would change the estimated benefit of the royalty-in-kind program from $74 million to anywhere from a loss of $6 million to a benefit of $155 million. Moreover, MMS’s annual reports to the Congress present oil sales data in aggregate and therefore do not reflect the fact that, in many individual sales, MMS sold the oil it collected in kind for less than it estimates it would have collected in cash. Specifically, MMS estimates that, in fiscal year 2006, it sold 28 million barrels of oil, or 64 percent of all the oil it collected in kind, for less than it would have collected in cash. The government would have received an additional $6 million in revenue if it had taken these royalties in cash instead. These sales indicate that MMS has not always been able to achieve one of its central goals: to select, based on systematic economic analysis, which royalties to take in cash and which to take in kind in a way that maximizes revenues to the government. According to a senior MMS official, the federal government has several advantages when selling gas that it does not have when selling oil, a fact that helps to explain why MMS’s gas sales have performed better than its oil sales. For example, MMS can bundle the natural gas production in the Gulf of Mexico from many different leases into large volumes that MMS can use to negotiate discounts for transporting gas from production sites to market centers. 
Because purchasers receive these discounts when they buy gas from MMS, they may be willing to pay more for gas from MMS than from the original owners. Opportunities for bundling are less prevalent in the oil market. Because MMS generally does not have this or other advantages when selling oil, purchasers often pay MMS about what they would pay other producers for oil, and sometimes less. Indeed, MMS’s policies allow it to sell oil for up to 7.7 cents less per barrel than MMS estimates it would collect if it took the royalties in cash. MMS told us that the other financial benefits of the royalty-in-kind program, including interest payments and reduced administrative costs, justify selling oil for less than the estimated cash payments because once these additional revenues are factored in, the net benefit to the government is still positive. However, as discussed below, we have found that there are significant questions and uncertainties about the other financial benefits as well. Revenues from the sale of royalty-in-kind oil are due 10 days earlier than cash payments, and revenues from the sale of in-kind gas are due 5 days earlier. MMS calculates that the government earned about $5 million in interest from fiscal years 2004 through 2006 from these early payments that it would not have received had it taken royalties in cash. We found two weaknesses in the way MMS calculates this interest. First, the payment dates used to calculate the interest revenue have the potential to over- or underestimate its value. MMS calculates the interest on the basis of the time between the actual date that Treasury received a royalty-in-kind payment and the theoretical latest date that Treasury would have received a cash payment under the royalty-in-value program. However, MMS officials told us that cash payments can, and sometimes do, arrive before their due date. As a result, MMS might be overstating the value of the early royalty-in-kind payments. 
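The interest benefit at issue is a simple time-value computation. In this sketch the 10-day (oil) and 5-day (gas) lead times and the flat 3 percent rate come from the testimony; the revenue amounts are hypothetical:

```python
# A simple time-value sketch of the early-payment interest benefit. The
# 10-day (oil) and 5-day (gas) lead times and the 3 percent rate are from
# the testimony; the revenue amounts are hypothetical.

def early_payment_interest(amount, days_early, annual_rate):
    """Simple interest earned by receiving `amount` that many days sooner."""
    return amount * annual_rate * days_early / 365

oil_revenue = 2.0e9   # hypothetical annual RIK oil sales revenue
gas_revenue = 1.0e9   # hypothetical annual RIK gas sales revenue

total = (early_payment_interest(oil_revenue, 10, 0.03)
         + early_payment_interest(gas_revenue, 5, 0.03))
print(f"interest from early receipt: ${total / 1e6:.2f} million")
```

As the testimony notes, both the assumed payment dates and the fixed rate can push a figure computed this way above or below the true benefit.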
Second, the interest rate used to calculate the interest revenue may either over- or understate its value because the rate is not linked to any market rate. From fiscal year 2004 through 2007, MMS used a 3 percent interest rate to calculate the time value of these early payments. However, during this time, actual market interest rates at which the federal government borrowed fluctuated. For example, 4-week Treasury bill rates ranged from a low of 0.72 percent to a high of 5.18 percent during this same period. Therefore, during some fiscal years, MMS likely overstated or understated the value of these early payments. MMS has developed procedures to capture the administrative costs of the royalty-in-kind and cash royalty programs and includes in its administrative cost comparison primarily the variable costs for the federal offshore oil and gas activities—that is, costs that fluctuate based on the volume of oil or gas received by MMS, such as labor costs. Although MMS also includes some department-level fixed costs, it excludes some fixed costs that it does not incur on a predictable basis (largely information technology costs). According to MMS, if it included these IT and other such costs, there would be a high potential of skewing the unit price used to determine the administrative cost savings. However, by excluding such fixed costs from the administrative cost comparison, MMS is not including all the necessary cost information to evaluate the efficacy of the royalty-in-kind program. MMS’s administrative cost analysis compares a bundle of royalty-in-kind program administrative costs divided by the number of barrels of oil equivalent realized by the royalty-in-kind program during a year, with a bundle of cash royalty program administrative costs divided by the number of barrels of oil equivalent realized by that program. The difference between these amounts represents the difference in cost to administer a barrel of oil equivalent under each program. 
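The per-barrel-of-oil-equivalent comparison can be sketched as below. All cost and volume figures are invented for illustration; only the structure of the calculation, and the point that excluded fixed costs move the result, reflects the testimony:

```python
# Hedged sketch of MMS's administrative cost comparison: per-BOE cost under
# each program, with the difference applied to the RIK volume. All dollar and
# volume figures are invented for illustration.

def admin_cost_savings(rik_costs, rik_boe, riv_costs, riv_boe):
    """Savings = (cost per BOE in value - cost per BOE in kind) x RIK volume."""
    return (riv_costs / riv_boe - rik_costs / rik_boe) * rik_boe

# e.g. $8M to administer RIK over 100M BOE vs. $60M for in-value over 600M BOE
savings = admin_cost_savings(8e6, 100e6, 60e6, 600e6)
print(round(savings))            # $2,000,000 "saving" with these inputs

# Folding in an excluded fixed cost (cf. the $3.4 million of RIK IT costs
# the testimony notes were left out) changes the result materially:
savings_with_it = admin_cost_savings(8e6 + 3.4e6, 100e6, 60e6, 600e6)
print(round(savings_with_it))    # the claimed saving can flip to a loss
```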
MMS then multiplies the difference in cost to administer a barrel of oil equivalent under the two programs by the number of barrels of oil equivalent realized by the royalty-in-kind program to determine the administrative cost savings. However, MMS’s calculations excluded some fixed costs that are not incurred on a regular or predictable basis from the analysis. For example, in fiscal year 2006, royalty-in-kind IT costs of $3.4 million were excluded from the comparison. Moreover, additional IT costs of approximately $29.4 million—some of which may have been incurred for either the royalty-in-kind or the cash royalty program—were also excluded. Including and assigning these IT costs to the programs supported by those costs would provide a more complete accounting of the respective costs of the royalty-in-kind and royalty-in-value programs, and would likely impact the results of MMS’s administrative cost analysis. Ultimately the system used by Interior to ensure taxpayers receive appropriate value for oil and gas produced from federal lands and waters is more of an honor system than we are comfortable with. Despite the heavy scrutiny that Interior has faced in its oversight of royalty management, we and others continue to identify persistent weaknesses in royalty collections. Given both the long-term fiscal challenges the government faces and the increased demand for the nation’s oil and gas resources, it is imperative that we have a royalty collection system going forward that can assure the American public that the government is receiving proper royalty payments. Our work on this issue is continuing along several avenues, including comparing the royalties taken in kind with the value of royalties taken in cash, assessing the rate of oil and gas development on federal lands, comparing the amount of money the U.S. 
government receives with what foreign countries receive for allowing companies to develop and produce oil and gas, and examining further the accuracy of MMS’s production and royalty data. We plan to make recommendations to address the weaknesses we identified in our final reports on these issues. We look forward to further work and to helping this subcommittee and the Congress as a whole to exercise oversight on this important issue. Mr. Chairman, this concludes our prepared statement. We would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. For further information about this testimony, please contact either Frank Rusco, at 202-512-3841, or [email protected], or Jeanette Franzel, at 202-512-9406, or [email protected]. Contact points for our Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Ron Belak, Ben Bolitzer, Lisa Brownson, Melinda Cordero, Nancy Crothers, Glenn C. Fischer, Cindy Gilbert, Tom Hackney, Chase Huntley, Heather Hill, Barbara Kelly, Sandra Kerr, Paul Kinney, Jennifer Leone, Jon Ludwigson, Tim Minelli, Michelle Munn, G. Greg Peterson, Barbara Timmerman, and Mary Welch. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Companies that develop and produce federal oil and gas resources do so under leases administered by the Department of the Interior (Interior). Interior's Bureau of Land Management (BLM) and Offshore Minerals Management (OMM) are responsible for overseeing oil and gas operations on federal leases. 
Companies are required to self-report their production volumes and other data to Interior's Minerals Management Service (MMS) and to pay royalties either "in value" (payments made in cash), or "in kind" (payments made in oil or gas). GAO's testimony will focus on whether (1) Interior has adequate assurance that it is receiving full compensation for oil and gas produced from federal lands and waters, (2) MMS's compliance efforts provide a check on industry's self-reported data, (3) MMS has reasonable assurance that it is collecting the right amounts of royalty-in-kind oil and gas, and (4) the benefits of the royalty-in-kind program that MMS has reported are reliable. This testimony is based on ongoing work. When this work is complete, we expect to make recommendations to address these and other findings. To address these issues, GAO analyzed MMS data, reviewed MMS and other agency policies and procedures, and interviewed officials at Interior. In commenting on a draft of this testimony, Interior provided GAO technical comments, which were incorporated where appropriate. Interior lacks adequate assurance that it is receiving full compensation for oil and gas produced from federal lands and waters because Interior's Bureau of Land Management (BLM) and Offshore Minerals Management (OMM) are not fully conducting production inspections as required by law and agency policies and because MMS's financial management systems are inadequate and lack key internal controls. Officials at BLM told us that only 8 of the 23 field offices in five key states we sampled completed their required production inspections in fiscal year 2007. Similarly, officials at OMM told us that they completed about half of the required production inspections in calendar year 2007 in the Gulf of Mexico. In addition, MMS's financial management system lacks an automated process for routinely and systematically reconciling production data with royalty payments. 
MMS's compliance efforts do not consistently examine third-party source documents to verify whether self-reported industry royalty-in-value payment data are complete and accurate, putting full collection of royalties at risk. In 2001, to help meet its annual performance goals, MMS moved from conducting audits, which compare self-reported data against source documents, toward compliance reviews, which provide a more limited check of a company's self-reported data and do not include systematic comparison to source documentation. MMS could not tell us what percentage of its annual performance goal was achieved through audits as opposed to compliance reviews. Because the production verification processes MMS uses for royalty-in-kind gas are not as rigorous as those applied to royalty-in-kind oil, MMS cannot be certain it is collecting the gas royalties it is due. MMS compares companies' self-reported oil production data with pipeline meter data from OMM's oil verification system, which records oil volumes flowing through metering points. While analogous data are available from OMM's gas verification system, MMS has not chosen to use these third-party data to verify the company-reported production numbers. The financial benefits of the royalty-in-kind program are uncertain due to questions and uncertainties surrounding the underlying assumptions and methods MMS used to compare the revenues it collected in kind with what it would have collected in cash. Specifically, questions and uncertainties exist regarding MMS's methods to calculate the net revenues from in-kind oil and gas sales, interest payments, and administrative cost savings. |
The Internet is a worldwide network of networks made up of servers, routers, and backbone networks. To send a communication from one computer to another, a series of addresses is attached to information sent from the first computer to route the information to its final destination. The protocol that guides the administration of the routing addresses is the Internet protocol. The most widely deployed version of IP is version 4 (IPv4). The two basic functions of IP include (1) addressing and (2) fragmentation of data, so that information can move across networks. An IP address consists of a fixed sequence of numbers. IPv4 uses a 32-bit address format, which provides approximately 4.3 billion unique IP addresses. By providing a numerical description of the location of networked computers, addresses distinguish one computer from another on the Internet. In some ways, an IP address is like a physical street address. For example, if a letter is going to be sent from one location to another, the contents of the letter must be placed in an envelope that provides addresses for the sender and receiver. Similarly, if data are to be transmitted across the Internet from a source to a destination, IP addresses must be placed in an IP header. Figure 1 is a simplified illustration of this concept. In addition to containing the addresses of sender and receiver, the header also contains a series of fields that provide information about what is being transmitted. Limited IPv4 address space prompted organizations that need large numbers of IP addresses to implement technical solutions to compensate. For example, network administrators began to use one unique IP address to represent a large number of users. In other words, to the outside world, all computers behind a device known as a network address translation router appear to have the same address. 
While this method has enabled organizations to compensate for the limited number of globally unique IP addresses available with IPv4, the resulting network structure has eliminated the original end-to-end communications model of the Internet. Because of the limitations of IPv4, in 1994 the Internet Engineering Task Force (IETF) began reviewing proposals for a successor to IPv4 that would increase IP address space and simplify routing. The IETF established a working group to be specifically responsible for developing the specifications and standardization of IPv6. Over the past 10 years, IPv6 has evolved into a mature standard. A complete list of the IPv6 documents can be found at the IETF Web site. Interest in IPv6 is gaining momentum around the world, particularly in parts of the world that have limited IPv4 address space to meet their industry and consumer communications needs. Regions that have limited IPv4 address space, such as Asia and Europe, have undertaken efforts to develop, test, and implement IPv6 deployments. As a region, Asia controls only about 9 percent of the allocated IPv4 addresses, and yet has more than half of the world’s population. As a result, the region is investing in IPv6 development, testing, and implementation. For example, the Japanese government’s e-Japan Priority Policy Program mandated the incorporation of IPv6 and set a deadline of 2005 to upgrade existing systems in both the public and private sectors. The government has helped to support the establishment of an IPv6 Promotion Council to facilitate issues related to development and deployment and is providing tax incentives to promote deployment. In addition, major Japanese corporations in the communications and consumer electronics sectors are also developing IPv6 networks and products. Further, the Chinese government has reportedly set aside approximately $170 million to develop an IPv6-capable infrastructure. 
The European Commission initiated a task force in April 2001 to design an IPv6 Roadmap. The Roadmap serves as an update and plan of action for development and future perspectives. It also serves as a way to coordinate European efforts for developing, testing, and deploying IPv6. Europe currently has a task force that has the dual mandate of initiating country/regional IPv6 task forces across European states and seeking global cooperation around the world. Europe’s Task Force and the Japanese IPv6 Promotion Council forged an alliance to foster worldwide deployment. The key characteristics of IPv6 are designed to increase address space, promote flexibility and functionality, and enhance security. For example, IPv6 dramatically increases the amount of IP address space available from the approximately 4.3 billion addresses in IPv4 to approximately 3.4 × 10^38. This large number of IPv6 addresses means that almost any electronic device can have its own address. While IP addresses are commonly associated with computers, they are increasingly being assigned to other items such as cellular phones, consumer electronics, and automobiles. In contrast to IPv4, the massive address space available in IPv6 will allow virtually any device to be assigned a globally reachable address. This change fosters greater end-to-end communications between devices with unique IP addresses and can better support the delivery of data-rich content such as voice and video. In addition to the increased number of addresses, IPv6 improves the routing of data, provides mobility features for wireless, and eases automatic configuration capabilities for network administration, quality of service, and security. These characteristics are expected to enable advanced Internet communications and foster new software applications.
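The address arithmetic behind these figures can be illustrated with a short Python sketch using the standard ipaddress module. This example is not part of the testimony; the addresses shown (192.0.2.1 and 2001:db8::1) are reserved documentation examples, not addresses discussed by GAO.

```python
import ipaddress

# An IPv4 address is one fixed 32-bit number; dotted-quad notation is just
# a readable rendering of it. 192.0.2.1 is a reserved documentation address.
v4 = ipaddress.IPv4Address("192.0.2.1")
print(int(v4))       # the underlying 32-bit value: 3221225985
print(v4.packed)     # the 4 raw bytes carried in an IPv4 header

# Address-space sizes follow directly from the field widths.
ipv4_space = 2 ** 32    # ~4.3 billion
ipv6_space = 2 ** 128   # ~3.4 x 10^38
print(ipv4_space)             # 4294967296
print(f"{ipv6_space:.1e}")    # 3.4e+38

# An IPv6 address is a single 128-bit number; 2001:db8::/32 is the
# documentation range.
v6 = ipaddress.IPv6Address("2001:db8::1")
print(v6.max_prefixlen)       # 128 bits per address
```

Run under any recent CPython, the sketch prints 4294967296 and 3.4e+38, matching the approximate figures cited above for IPv4 and IPv6.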
While applications that fully exploit IPv6 are still in development, industry experts have identified various federal functions that might benefit from IPv6-enabled applications, such as border security, first responders, public health, and information sharing. The transition to IPv6 is under way for many federal agencies because their networks already contain IPv6-capable software and equipment. For example, most major operating systems, printers, and routers currently support IPv6. Therefore, it is important for agencies to note that the transition to IPv6 is different from a software upgrade because, when it is installed, its capability is also being integrated into the software and hardware. Besides recognizing that an IPv6 transition is already under way, other key considerations for federal agencies to address in an IPv6 transition include significant IT planning efforts and immediate actions to ensure the security of agency information and networks. Important planning considerations include the following:
● Developing inventories and assessing risks—An inventory of equipment (software and hardware) provides management with an understanding of the scope of an IPv6 transition and assists in focusing agency risk assessments. These assessments are essential steps in determining what controls are required to protect a network and what level of resources should be expended on controls.
● Creating business cases for an IPv6 transition—A business case usually identifies the organizational need for the system and provides a clear statement of the high-level system goals. One key aspect to consider while drafting the business case for IPv6 is to understand how many devices an agency wants to connect to the Internet. This will help in determining how much IPv6 address space is needed for the agency. Within the business case, it is crucial to include how the new technology will integrate with the agency’s existing enterprise architecture.
● Establishing policies and enforcement mechanisms—Developing and establishing IPv6 transition policies and enforcement mechanisms are important considerations for ensuring an efficient and effective transition. Furthermore, because of the scope, complexities, and costs involved in an IPv6 transition, effective enforcement of agency IPv6 policies is an important consideration for management officials.
● Determining the costs—Cost benefit analyses and return-on-investment calculations can be used to justify investments. During the year 2000 (Y2K) technology challenge, the federal government amended the Federal Acquisition Regulation and mandated that all contracts for information technology include a clause requiring the delivered systems or service to be ready for the Y2K date change. This helped prevent the federal government from procuring systems and services that might have been obsolete or that required costly upgrades. Similarly, proactive integration of IPv6 requirements into federal acquisition requirements can reduce the costs and complexity of the IPv6 transition of federal agencies and ensure that federal applications are able to operate in an IPv6 environment without costly upgrades.
● Identifying timelines and methods for the transition—Timelines and process management can assist a federal agency in determining when to authorize its various component organizations to allow IPv6 traffic and features. Additionally, agencies can benefit from understanding the different types of transition methods or approaches that can allow them to use both IPv4 and IPv6 without causing significant interruptions in network services.
As IPv6-capable software and devices accumulate in agency networks, they could be abused by attackers if not managed properly. For example, IPv6 is included in most computer operating systems and, if not enabled by default, is easy for administrators to enable either intentionally or as an unintentional byproduct of running a program.
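A first step in managing such latent capability is simply finding out whether a host's network stack exposes IPv6 at all. The following Python sketch is an illustration added here, not a procedure from the testimony; it checks capability only and says nothing about whether firewalls or intrusion detection systems on the network filter IPv6 traffic.

```python
import socket

def ipv6_socket_available() -> bool:
    """Return True if the operating system will actually hand out an IPv6 socket."""
    if not socket.has_ipv6:
        # Python itself was built without IPv6 support.
        return False
    try:
        # Creating an AF_INET6 socket succeeds only if the OS exposes IPv6.
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    except OSError:
        return False
    s.close()
    return True

print("IPv6 compiled into the socket stack:", socket.has_ipv6)
print("IPv6 socket obtainable:", ipv6_socket_available())
```

A host where this returns True has latent IPv6 capability of the kind described above, which should be either managed deliberately or disabled.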
We tested IPv6 features and found that, if firewalls and intrusion detection systems are not appropriately configured, IPv6 traffic may not be detected or controlled, leaving systems vulnerable to attacks by malicious hackers. Further, in April 2005, the United States Computer Emergency Response Team (US-CERT), located at the Department of Homeland Security (DHS), issued an IPv6 cyber security alert to federal agencies based on our IPv6 test scenarios and discussions with DHS officials. The alert warned federal agencies that unmanaged or rogue implementations of IPv6 present network management security risks. Specifically, the US-CERT notice informed agencies that some firewalls and network intrusion detection systems do not provide IPv6 detection or filtering capability and that malicious users might be able to tunnel IPv6 traffic through these security devices undetected. Further, one feature of IPv6, known as automatic configuration (where a device that is IPv6 enabled will derive its own IP address from neighboring routers without an administrator’s intervention), could allow devices to automatically configure themselves with an IPv6 address without authorization. US-CERT provided agencies with a series of short-term solutions, including
● determining if firewalls and intrusion detection system products support IPv6 and implementing additional IPv6 security measures and
● identifying IPv6 devices and disabling them if not necessary.
The Department of Defense’s transition to IPv6 is a key component of its business case to improve interoperability among many information and weapons systems, known as the Global Information Grid (GIG). The IPv6 component of GIG facilitates DOD’s goal of achieving network-centric operations by exploiting the key characteristics of IPv6, including
● enhanced mobility features,
● enhanced configuration features,
● enhanced quality of service, and
● enhanced security features.
The department’s efforts to develop policies, timelines, and methods for transitioning to IPv6 are progressing. In 2004, Defense established an IPv6 Transition Office to provide the overall coordination, common engineering solutions, and technical guidance across the department to support an integrated and coherent transition to IPv6. The Transition Office is in the early stages of its work and has developed a set of products, including a draft system engineering management plan, risk management planning documentation, budgetary documentation, requirements criteria, and a master schedule. The management schedule includes a set of implementation milestones that include DOD’s goal of transitioning to IPv6 by fiscal year 2008. In parallel with the Transition Office’s efforts, the Office of the DOD Chief Information Officer has created an IPv6 transition plan. The Chief Information Officer has responsibility for ensuring a coherent and timely transition and for establishing and maintaining the overall departmental transition plan, and is the final approval authority for any IPv6 transition waivers. Although DOD has made substantial progress in developing a planning framework for transitioning to IPv6, the department still faces several challenges, including developing a full inventory of IPv6-capable software and hardware, finalizing its IPv6 systems engineering management plan, monitoring its operational networks for unauthorized IPv6 traffic, and developing a comprehensive enforcement strategy, including using its existing budgetary and acquisition review process. Unlike DOD, the majority of other federal agencies reporting have not yet initiated transition planning efforts for IPv6. For example, of the 22 agencies that responded to our survey, 4 agencies reported having established a date or goal for transitioning to IPv6. The majority of agencies have not addressed key planning considerations. 
For example,
● 22 agencies reported not having developed a business case,
● 21 agencies reported not having plans,
● 19 agencies reported not having inventoried their IPv6-capable software and hardware, and
● 22 agencies reported not having estimated costs.
Agency responses demonstrate that few efforts outside DOD have been initiated to address IPv6. If agency planning is not carefully monitored, it could result in significant and unexpected costs for the federal government. To address the challenges IPv6 presents to federal networks, in our report we recommended that federal agencies begin addressing key IPv6 planning considerations. Specifically, we recommended that the Director of OMB instruct agencies to begin developing inventories and assessing risks, creating business cases for the IPv6 transition, establishing policies and enforcement mechanisms, determining the costs, and identifying timelines and methods for transition, as appropriate. To help ensure that IPv6 would not result in unexpected costs for the federal agencies, we recommended that the Director consider amending the Federal Acquisition Regulation with specific language that requires that all information technology systems and applications purchased by the federal government be able to operate in an IPv6 environment. Finally, because poorly configured and unmanaged IPv6 capabilities present immediate risks to federal agency networks, we recommended that agency heads take immediate action to address the near-term security risks. Such actions could include determining what IPv6 capabilities they may have and initiating steps to ensure that they can control and monitor IPv6 traffic to prevent unauthorized access. In summary, transitioning to IPv6 is a pervasive, crosscutting challenge for federal agencies that could result in significant benefits to agency services and operations.
But such benefits may be diminished if action is not taken to ensure that agencies are addressing the attendant challenges, including addressing key planning considerations and acting to ensure the security of agency information and networks. If agencies do not address these key planning issues and do not seek to understand the potential scope and complexities of IPv6 issues—whether agencies plan to transition immediately or not—they will face potentially increased costs and security risks. Mr. Chairman, this completes our prepared statement. We would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information, please contact David Powner at (202) 512-9286 or Keith Rhodes at (202) 512-6412. We can also be reached by e-mail at [email protected] and [email protected] respectively. Key contributors to this testimony were Scott Borre, Lon Chin, West Coile, Camille Chaires, John Dale, Neil Doherty, Nancy Glover, Richard Hung, Hal Lewis, George Kovachick, J. Paul Nicholas, Christopher Owens, Eric Trout, and Eric Winter. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Internet protocol (IP) provides the addressing mechanism that defines how and where information such as text, voice, and video moves across interconnected networks. Internet protocol version 4 (IPv4), which is widely used today, may not be able to accommodate the increasing number of global users and devices that are connecting to the Internet. As a result, IP version 6 (IPv6) was developed to increase the amount of available IP address space.
The new protocol is gaining increased attention from regions with limited IP addresses. For its testimony, GAO was asked to discuss the findings and recommendations of its recent study of IPv6 (GAO-05-471). In this study, GAO was asked to (1) describe the key characteristics of IPv6; (2) identify the key planning considerations for federal agencies in transitioning to IPv6; and (3) determine the progress made by the Department of Defense (DOD) and other major agencies in the transition to IPv6. The key characteristics of IPv6 are designed to increase address space, promote flexibility and functionality, and enhance security. For example, by using 128-bit addresses rather than 32-bit addresses, IPv6 dramatically increases the available Internet address space from approximately 4.3 billion in IPv4 to approximately 3.4 x 10^38 in IPv6. Key planning considerations for federal agencies include recognizing that the transition is already under way, because agency networks already include IPv6-capable software and equipment. Other important agency planning considerations include developing inventories and assessing risks; creating business cases that identify organizational needs and goals; establishing policies and enforcement mechanisms; determining costs; and identifying timelines and methods for transition. Managing the security aspects of transition is also an important consideration because poorly managed IPv6 capabilities can put agency information and systems at risk. DOD has made progress in developing a business case, policies, timelines, and processes for transitioning to IPv6. Unlike DOD, the majority of other major federal agencies reported that they have not yet initiated key planning efforts for IPv6. 
In its report, GAO recommended, among other things, that the Director of the Office of Management and Budget (OMB) instruct agencies to begin to address key planning considerations for the IPv6 transition and that agencies act to mitigate near-term IPv6 security risks. Officials from OMB, DOD, and Commerce generally agreed with the contents of the report. |
Approximately 4 percent of discretionary spending in the United States’ federal budget is appropriated for the conduct of foreign affairs activities. This includes funding for bilateral and multilateral assistance, military assistance, and State Department activities. Spending for State, taken from the “150 Account,” makes up the largest share of foreign affairs spending. Funding for State’s Diplomatic and Consular Programs—State’s chief operating account, which supports the department’s diplomatic activities and programs, including salaries and benefits—comprises the largest portion of its appropriations. Embassy security, construction, and maintenance funding comprises another large portion of State’s appropriation. Funding for the administration of foreign affairs has risen dramatically in recent fiscal years, due, in part, to enhanced funding for security-related improvements worldwide, including personnel, construction, and equipment following the bombings of two U.S. embassies in 1998 and the events of September 11, 2001. For example, State received about $2.8 billion in fiscal year 1998, but by fiscal year 2003, State’s appropriation was approximately $6 billion. For fiscal year 2004, State is seeking approximately $6.4 billion, which includes $4 billion for diplomatic and consular affairs and $1.5 billion for embassy security, construction, and maintenance. In addition, State plans to spend $262 million over fiscal years 2003 and 2004 on information technology modernization initiatives overseas. Humanitarian and economic development assistance is an integral part of U.S. global security strategy, particularly as the United States seeks to diminish the underlying conditions of poverty and corruption that may be linked to instability and terrorism. USAID is charged with overseeing U.S. foreign economic and humanitarian assistance programs. 
In fiscal year 2003, Congress appropriated about $12 billion—including supplemental funding—to USAID, and the agency managed programs in about 160 countries, including 71 overseas missions with USAID direct-hire presence. Fiscal year 2004 foreign aid spending is expected to increase due, in part, to substantial increases in HIV/AIDS funding and security-related economic aid. I would like to discuss State’s performance in managing its overseas real estate, overseeing major embassy construction projects, managing its overseas presence and staffing, modernizing its information technology, and developing and implementing strategic plans. State manages an overseas real property portfolio valued at approximately $12 billion. The management of real property is an area where State could achieve major cost savings and other operational efficiencies. In the past, we have been critical of State’s management of its overseas property, including its slow disposal of unneeded facilities. Recently, officials at State’s Bureau of Overseas Buildings Operations (OBO), which manages the government’s real property overseas, have taken a more systematic approach to identifying unneeded properties and have significantly increased the sale of these properties. For example, in 2002, OBO completed sales of 26 properties totaling $64 million, with contracts in place for another $40 million in sales. But State needs to dispose of more facilities in the coming years as it embarks on an expensive plan to replace embassies and consulates that do not meet State’s security requirements and/or are in poor condition. Unneeded property and deteriorating facilities present a real problem—but also an opportunity to improve U.S. operations abroad and achieve savings. We have reported that the management of overseas real estate has been a continuing challenge for State, although the department has made improvements in recent years.
One of the key weaknesses we found was the lack of a systematic process to identify unneeded properties and to dispose of them in a timely manner. In 1996, we identified properties worth hundreds of millions of dollars potentially excess to State’s needs or of questionable value and expensive to maintain that the department had not previously identified for potential sale. As a result of State’s inability to resolve internal disputes and sell excess property in an expeditious manner, we recommended that the Secretary of State appoint an independent panel to decide which properties should be sold. The Secretary of State created this panel in 1997. As of April 2002, the Real Property Advisory Board had reviewed 41 disputed properties and recommended that 26 be sold. By that time, State had disposed of seven of these properties for about $21 million. In 2002, we again reviewed State’s processes for identifying and selling unneeded overseas real estate and found that it had taken steps to implement a more systematic approach that included asking posts to annually identify properties for disposal and increasing efforts by OBO and officials from State’s OIG to identify such properties when they visit posts. For example, the director of OBO took steps to resolve disputes with posts that have delayed the sale of valuable property. OBO has also instituted monthly Project Performance Reviews to review all aspects of real estate management, such as the status of acquisitions and disposal of overseas property. However, we found that the department’s ability to monitor property use and identify potentially unneeded properties was hampered by errors and omissions in its property inventory. Inaccurate inventory information can result in unneeded properties not being identified for potential sale. Therefore, we recommended that the department improve the accuracy of its real property inventory. 
In commenting on our report, OBO said that it had already taken action to improve its data collection. For example, State sent a cable to all overseas posts reminding them of their responsibilities to maintain accurate real estate records. State has significantly improved its performance in selling unneeded property. In total, between fiscal years 1997 through 2002, State sold 129 properties for more than $459 million. Funds generated from property sales are being used to help offset embassy construction costs in Berlin, Germany; Luanda, Angola; and elsewhere. State estimates it will sell additional properties between fiscal years 2003 and 2008 valued at approximately $300 million. More recently, State has taken action to sell two properties (a 0.4 acre parking lot and an office building) in Paris identified in a GAO report as potentially unneeded. After initially resisting the sale of the parking lot, the department reversed its decision and sold both properties in June 2003 for a total of $63.1 million—a substantial benefit to the government. The parking lot alone was sold conditionally for $20.7 million. Although this may be a unique case, it demonstrates how scrutiny of the property inventory could result in potential savings. The department should continue to look closely at property holdings to see if other opportunities exist. If State continues to streamline its operations and dispose of additional facilities over the next several years, it can use those funds to help offset the cost of replacing about 160 embassies and consulates for security reasons in the coming years. In the past, State has had difficulties ensuring that major embassy construction projects were completed on time and within budget. For example, in 1991 we reported that State’s previous construction program suffered from delays and cost increases due to, among other things, poor program planning and inadequate contractor performance. 
In 1998, State embarked on the largest overseas embassy construction program in its history in response to the bombings of U.S. embassies in Africa. From fiscal years 1999 through 2003, State received approximately $2.7 billion for its new construction program and began replacing 25 of 185 posts identified as vulnerable by State. To better manage this program, OBO has undertaken several initiatives aimed at improving State’s stewardship of its funds for embassy buildings, including cutting costs of planned construction projects, using standard designs, and reducing construction duration through a “fast track” process. Moreover, State hopes that additional management tools aimed at ensuring that new facilities are built in the most cost-effective manner, including improvements in how agencies determine requirements for new embassies, will help move the program forward. State is also pursuing a cost-sharing plan that would charge other federal agencies for the cost of their overall overseas presence and provide additional funds to help accelerate the embassy construction program. While State has begun replacing many facilities, OBO officials estimated that beginning in fiscal year 2004, it will cost an additional $17 billion to replace facilities at remaining posts. As of February 2003, State had begun replacing 25 of 185 posts identified by State as vulnerable after the 1998 embassy bombings. To avoid the problems that weakened the previous embassy construction program, we recommended that State develop a long-term capital construction plan that identifies (1) proposed construction projects’ cost estimates and schedules and (2) estimated annual funding requirements for the overall program. 
Although State initially resisted implementing our recommendation, OBO’s new leadership reconsidered this recommendation and has since produced two annual planning documents titled the “Long-Range Overseas Building Plan.” According to OBO, the long-range plan is the roadmap by which State, other departments and agencies, the Office of Management and Budget (OMB), the Congress, and others can focus on defining and resolving the needs of overseas facilities. In addition to the long-range plan, OBO has undertaken several initiatives aimed at improving State’s stewardship of its embassy construction funds. These measures have the potential to result in significant cost savings and other efficiencies. For example, OBO has
● developed Standard Embassy Designs (SED) for use in most embassy construction projects, which provide OBO with the ability to contract for shortened design and construction periods and control costs through standardization;
● shifted from “design-bid-build” contracting toward “design-build” contracts, which have the potential to reduce project costs and construction time frames;
● developed and implemented procedures to enforce cost planning during the design phase and ensure that the final designs are within budget; and
● increased the number of contractors eligible to bid for construction projects, thereby increasing competition for contracts, which could potentially result in lower bids.
OBO has set a goal of a 2-year design and construction period for its mid-sized, standard embassy design buildings, which, if met, could reduce the amount of time spent in design and construction by almost one year. We reported in January 2003 that these cost-cutting efforts allowed OBO to achieve $150 million in potential cost savings during fiscal year 2002. These savings, according to OBO, resulted from the application of the SEDs and increased competition for the design and construction of these projects.
Despite these gains, State will face continuing hurdles throughout the life of the embassy construction program. These hurdles include meeting construction schedules within the estimated costs and ensuring that State has the capacity to manage a large number of projects simultaneously. Because of the high costs associated with this program and the importance of providing secure facilities overseas, we believe this program merits continuous oversight by State, GAO, and the Congress. In addition to ensuring that individual construction projects meet cost and performance schedules, State must also ensure that new embassies are appropriately sized. Given that the size and cost of new facilities are directly related to agencies’ anticipated staffing needs, it is imperative that future requirements be predicted as accurately as possible. Embassy buildings that are designed too small may require additional construction and funding in the future; buildings that are too large may have unused space—a waste of government funds. State’s construction program in the late 1980s encountered lengthy delays and cost overruns in part because it lacked coordinated planning of post requirements prior to approval and budgeting for construction projects. As real needs were determined, changes in scope and increases in costs followed. OBO now requires that all staffing projections for new embassy compounds be finalized prior to submitting funding requests, which are sent to Congress as part of State’s annual budget request each February. In April 2003, we reported that U.S. agencies operating overseas, including State, were developing staffing projections without a systematic approach. We found that State’s headquarters gave embassies little guidance on factors to consider when developing projections, and thus U.S. agencies did not take a consistent or systematic approach to determining long-term staffing needs. 
Based on our recommendations, State in May 2003 issued a “Guide to Developing Staffing Projections for New Embassy and Consulate Compound Construction,” which requires a more serious, disciplined approach to developing staffing projections. When fully implemented, this approach should ensure that overseas staffing projections are more accurate and minimize the financial risks associated with building facilities that are designed for the wrong number of people. Historically, State has paid all costs associated with the construction of overseas facilities. Following the embassy bombings, the Overseas Presence Advisory Panel (OPAP) noted a lack of cost sharing among agencies that use overseas facilities. As a result, OPAP recommended that agencies be required to pay rent in government-owned buildings in foreign countries to cover operating and maintenance costs. In 2001, an interagency group put forth a proposal that would require agencies to pay rent based on the space they occupy in overseas facilities, but the plan was not enacted. In 2002, OMB began an effort to develop a mechanism that would require users of overseas facilities to share the construction costs associated with those facilities. The administration believes that if agencies were required to pay a greater portion of the total costs associated with operating overseas facilities, they would think more carefully before posting personnel overseas. As part of this effort, State has presented a capital security cost-sharing plan that would require agencies to help fund its capital construction program. State’s proposal calls for each agency to fund a proportion of the total construction program cost based on its respective proportion of total overseas staffing. OBO has reported that its proposed cost-sharing program could result in additional funds, thereby reducing the duration of the overall program. 
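The allocation rule in State’s cost-sharing proposal is simple proportionality: each agency’s share of construction costs equals its share of total overseas staffing. A minimal sketch of that rule; the agency names, staffing counts, and program cost below are hypothetical, chosen only for illustration:

```python
def cost_shares(total_program_cost, staffing_by_agency):
    """Allocate program cost in proportion to each agency's share of overseas staff."""
    total_staff = sum(staffing_by_agency.values())
    return {agency: total_program_cost * count / total_staff
            for agency, count in staffing_by_agency.items()}

# Hypothetical staffing counts and annual program cost, for illustration only.
staffing = {"State": 9_000, "Defense": 3_000, "Agriculture": 600, "Other": 2_400}
shares = cost_shares(1_400_000_000, staffing)
for agency in sorted(shares):
    print(f"{agency}: ${shares[agency]:,.0f}")
```

Under such a rule, an agency with 60 percent of overseas staff would bear 60 percent of program costs, which is the incentive effect the administration is seeking.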
State maintains a network of approximately 260 diplomatic posts in about 170 countries worldwide and employs a direct-hire workforce of about 30,000 employees, about 60 percent of them posted overseas. The costs of maintaining staff overseas vary by agency but in general are extremely high. In 2002, the average annual cost of placing one full-time direct-hire American family of four in a U.S. embassy was approximately $339,000. These costs make it critical that the U.S. overseas presence is sized appropriately to conduct its work. We have reported that State and most other federal agencies overseas have historically lacked a systematic process for determining the right number of personnel needed overseas—otherwise known as rightsizing. Moreover, in June 2002, we reported that State faces serious staffing shortfalls at hardship posts—in both the number of staff assigned to these posts and their experience, skills, and/or language proficiency. Thus, State has been unable to ensure that it has “the right people in the right place at the right time with the right skills to carry out America’s foreign policy”—its definition of diplomatic readiness. However, since 2001, State has directed significant attention to addressing weaknesses in its workforce planning and staffing that we and others have noted. Because personnel salaries and benefits consume a huge portion of State’s operating budget, it is important that the department exercise good stewardship of its human capital resources. Around the time GAO designated strategic human capital management as a governmentwide high-risk area in 2001, State, as part of its Diplomatic Readiness Initiative (DRI), began directing significant attention to addressing its human capital needs, adding 1,158 employees over a 3-year period (fiscal years 2002 through 2004). In fiscal year 2002, Congress allocated nearly $107 million for the DRI.
State requested nearly $100 million annually in fiscal years 2003 and 2004 to hire approximately 400 new staff each year. The DRI has enabled the department to boost recruitment. However, State has historically lacked a systematic approach to determine the appropriate size and location of its overseas staff. To move the rightsizing process forward, the August 2001 President’s Management Agenda identified it as one of the administration’s priorities. Given the high costs of maintaining the U.S. overseas presence, the administration has instructed U.S. agencies to reconfigure the number of overseas staff to the minimum necessary to meet U.S. foreign policy goals. This OMB-led initiative aims to develop cost-saving tools or models, such as increasing the use of regional centers, revising the Mission Performance Planning (MPP) process, increasing overseas administrative efficiency, and relocating functions to the United States. According to the OPAP, although the magnitude of savings from rightsizing the overseas presence cannot be known in advance, “significant savings” are achievable. For example, it said that reducing all agencies’ staffing by 10 percent could yield governmentwide savings of almost $380 million a year.

GAO’s Rightsizing Framework

In May 2002, we testified on our development of a rightsizing framework. The framework is a series of questions linking staffing levels to three critical elements of overseas diplomatic operations: security of facilities, mission priorities and requirements, and cost of operations. It also addresses consideration of rightsizing options, such as relocating functions back to the United States or to regional centers, competitively sourcing functions, and streamlining operations. Rightsizing analyses could lead decision makers to increase, decrease, or change the mix of staff at a given post. For example, based on our work at the U.S.
embassy in Paris, we identified positions that could potentially be relocated to regional centers or back to the United States. On the other hand, rightsizing analyses may indicate the need for increased staffing, particularly at hardship posts. In a follow-up report to our testimony, we recommended that the director of OMB ensure that our framework is used as a basis for assessing staffing levels in the administration’s rightsizing initiative. In commenting on our rightsizing reports, State endorsed our framework and said it plans to incorporate elements of our rightsizing questions into its future planning processes, including its MPPs. State also has begun to take further actions in managing its overseas presence—along the lines that we recommended in our June 2002 report on hardship posts—including revising its assignment system to improve staffing of hardship posts and addressing language shortfalls by providing more opportunities for language training. In addition, State has already taken some rightsizing actions to improve the cost effectiveness of its overseas operating practices. For example, State
- plans to spend at least $80 million to purchase and renovate a 23-acre, multi-building facility in Frankfurt, Germany—slated to open in mid-2005—for use as a regional hub to conduct and support diplomatic operations;
- has relocated more than 100 positions from the Paris embassy to the regional Financial Services Center in Charleston, South Carolina; and
- is working with OMB on a cost-sharing mechanism, as previously mentioned, that will give all U.S. agencies an incentive to weigh the high costs to taxpayers associated with assigning staff overseas.
In addition to these rightsizing actions, there are other areas where the adoption of industry best practices could lead to cost reductions and streamlined services. For example, in 1997, we reported that State could significantly streamline its employee transfer and housing relocation processes.
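The OPAP savings estimate quoted earlier is straightforward arithmetic over a per-position cost such as the $339,000 average cited above. A hedged sketch; the governmentwide position count here is an assumption chosen only to show how a 10 percent reduction approaches the reported $380 million figure:

```python
AVG_ANNUAL_COST = 339_000  # reported 2002 average for one direct-hire family posted overseas

def annual_savings(overseas_positions, reduction_fraction):
    """Estimate yearly savings from eliminating a fraction of overseas positions."""
    return overseas_positions * reduction_fraction * AVG_ANNUAL_COST

# Assumed governmentwide count of direct-hire positions, for illustration only;
# roughly 11,200 positions at the average cost reproduces OPAP's ~$380 million.
print(f"${annual_savings(11_200, 0.10):,.0f}")
```

The actual OPAP estimate rests on agency-by-agency cost data, so this back-of-the-envelope version only illustrates the order of magnitude involved.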
We also reported in 1998 that State’s overseas posts could potentially save millions of dollars by implementing best practices such as competitive sourcing. In light of competing priorities as new needs emerge, particularly in Iraq and Afghanistan, State must be prepared to make difficult strategic decisions on which posts and positions it will fill and which positions it could remove, relocate, or regionalize. State will need to marshal and manage its human capital to facilitate the most efficient, effective allocation of these significant resources. Up-to-date information technology, along with adequate and modern office facilities, is an important part of diplomatic readiness. We have reported that State has long been plagued by poor information technology at its overseas posts, as well as weaknesses in its ability to manage information technology modernization programs. State’s information technology capabilities provide the foundation of support for U.S. government operations around the world, yet many overseas posts have been equipped with obsolete information technology systems that prevented effective interagency information sharing. The Secretary of State has made a major commitment to modernizing the department’s information technology. In March 2003, we testified that the department invested $236 million in fiscal year 2002 on key modernization initiatives for overseas posts and plans to spend $262 million over fiscal years 2003 and 2004. State reports that its information technology is now in the best shape it has ever been, including improved Internet access and upgraded computer equipment. The department is now working to replace its antiquated cable system with a new integrated messaging and retrieval system, which it acknowledges is an ambitious effort. State’s OIG and GAO have raised a number of concerns regarding the department’s management of information technology programs. 
For example, in 2001, we reported that State was not following proven system acquisition and investment practices in attempting to deploy a common overseas knowledge management system. This system was intended to provide functionality ranging from basic Internet access and e-mail to mission-critical policy formulation and crisis management support. We recommended that State limit its investment in this system until it had secured stakeholder involvement and buy-in. State has since discontinued the project due to a lack of interagency buy-in and commitment, thereby avoiding additional costs of more than $200 million. Recognizing that interagency information sharing and collaboration can pay off in terms of greater efficiency and effectiveness of overseas operations, State’s OIG reported that the department recently decided to merge some of the objectives associated with the interagency knowledge management system into its new messaging system. We believe that the department should try to eliminate the barriers that prevented implementation of this system. As State continues to modernize information technology at overseas posts, it is important that the department employ rigorous and disciplined management processes on each of its projects to minimize the risks that the department will spend large sums of money on systems that do not produce commensurate value. Linking performance and financial information is a key feature of sound management—reinforcing the connection between resources consumed and results achieved—and an important element in giving the public a useful and informative perspective on federal spending. A well-defined mission and clear, well understood strategic goals are essential in helping agencies make intelligent trade-offs among short- and long-term priorities and ensure that program and resource commitments are sustainable. 
In recent years, State has made improvements to its strategic planning process both at headquarters and overseas that are intended to link staffing and budgetary requirements with policy priorities. For instance, State has developed a new strategic plan for fiscal years 2004 through 2009, which, unlike previous strategic plans, was developed in conjunction with USAID and aligns diplomatic and development efforts. At the field level, State revised the MPP process so that posts are now required to identify key goals for a given fiscal year, and link staffing and budgetary requirements to fulfilling these priorities. State’s compliance with the Government Performance and Results Act of 1993 (GPRA), which requires federal agencies to prepare annual performance plans covering the program activities set out in their budgets, has been mixed. While State’s performance plans fell short of GPRA requirements from 1998 through 2000, the department has recently made strides in its planning and reporting processes. For example, in its performance plan for 2002, State took a major step toward implementing GPRA requirements, and it has continued to make improvements in its subsequent plans. As we have previously reported, although connections between specific performance and funding levels can be difficult to make, efforts to infuse performance information into budget deliberations have the potential to change the terms of debate from simple outputs to outcomes. Continued improvements to strategic and performance planning will ensure that State is setting clear objectives, tying resources to these objectives, and monitoring its progress in achieving them—all of which are essential to efficient operations. Now I would like to discuss some of the challenges USAID faces in managing its human capital, evaluating its programs and measuring their performance, and managing its information technology and financial systems. 
I will also outline GAO’s findings from our reviews of USAID’s democracy and rule of law programs in Latin America and the former Soviet Union. Since the early 1990s, we have reported that USAID has made limited progress in addressing its human capital management issues and managing the changes in its overseas workforce. A major concern is that USAID has not established a comprehensive workforce plan that is integrated with the agency’s strategic objectives and ensures that the agency has the skills and competencies necessary to meet its emerging foreign assistance challenges. Developing such a plan is critical due to a reduction in the agency’s workforce during the 1990s and continuing attrition—more than half of the agency’s foreign service officers are eligible to retire by 2007. According to USAID’s OIG, the steady decline in the number of foreign service and civil service employees with specialized technical expertise has resulted in insufficient staff with needed skills and experience and less experienced personnel managing increasingly complex programs. Meanwhile, USAID’s program budget has increased from $7.3 billion in 2001 to about $12 billion in fiscal year 2003, due primarily to significant increases in HIV/AIDS funding and supplemental funding for emerging programs in Iraq and Afghanistan. The combination of continued attrition of experienced foreign service officers, increased program funding, and emerging foreign policy priorities raises concerns regarding USAID’s ability to maintain effective oversight of its foreign assistance programs. USAID’s lack of progress in institutionalizing a workforce planning system has led to certain vulnerabilities. For example, as we reported in July 2002, USAID lacks a “surge capacity” that enables it to quickly hire the staff needed to respond to emerging demands and post-conflict or post-emergency reconstruction situations.
We also reported that insufficient numbers of contract officers affected the agency’s ability to deliver hurricane reconstruction assistance in Latin America in the program’s early phases. USAID is aware of its human capital management and workforce planning shortcomings and is now beginning to address some of them with targeted hiring and other actions. USAID continues to face difficulties in identifying and collecting the data it needs to develop reliable performance measures and accurately report the results of its programs. Our work and that of USAID’s OIG have identified a number of problems with the annual results data that USAID’s operating units have been reporting. USAID has acknowledged these concerns and has undertaken several initiatives to correct them. Although the agency has made a serious effort to develop improved performance measures, it continues to report numerical outputs that do not gauge the impact of its programs. Without accurate and reliable performance data, USAID has little assurance that its programs achieve their objectives and related targets. In July 1999, we commented on USAID’s fiscal year 2000 performance plan and noted that because the agency depends on international organizations and thousands of partner institutions for data, it does not have full control over how data are collected, reported, or verified. In April 2002, we reported that USAID had evaluated few of its experiences in using various funding mechanisms and different types of organizations to achieve its objectives. We concluded that with better data on these aspects of the agency’s operations, USAID managers and congressional overseers would be better equipped to analyze whether the agency’s mix of approaches takes full advantage of nongovernmental organizations to achieve the agency’s purposes. USAID’s information systems do not provide managers with the accurate information they need to make sound and cost-effective decisions. 
USAID’s OIG has reported that the agency’s processes for procuring information technology have not followed established guidelines, which require executive agencies to implement a process that maximizes the value and assesses the risks of information technology investments. In addition, USAID’s computer systems are vulnerable and need better security controls. USAID management has acknowledged these weaknesses and the agency is making efforts to correct them. Effective financial systems and controls are necessary to ensure that USAID management has timely and reliable information to make effective, informed decisions and that assets are safeguarded. USAID has made progress in correcting some of its systems and internal control deficiencies and is in the process of revising its plan to remedy financial management weaknesses as required by the Federal Financial Management Improvement Act of 1996. To achieve this goal, however, USAID needs to continue efforts to resolve its internal control weaknesses and ensure that planned upgrades to its financial systems are in compliance with federal financial system requirements. Our reviews of democracy and rule of law programs in Latin America and the former Soviet Union demonstrate that these programs have had limited results and suggest areas for improving the efficiency and impact of these efforts. In Latin America, we found that U.S. assistance has helped bring about important criminal justice reforms in five countries. This assistance has also helped improve transparency and accountability of some government functions, increase attention to human rights, and support elections that observation groups have considered free and fair. In several countries of the former Soviet Union, U.S. agencies have helped support a variety of legal system reforms and introduced some innovative legal concepts and practices in the areas of legislative and judicial reform, legal education, law enforcement, and civil society.
In both regions, however, sustainability of these programs is questionable. Establishing democracy and rule of law in these countries is a complex undertaking that requires long-term host government commitment and consensus to succeed. However, host governments have not always provided the political support and financial and human capital needed to sustain these reforms. In other cases, U.S.-supported programs were limited, and countries did not adopt the reforms and programs on a national scale. In both of our reviews, we found that several management issues shared by USAID and the other agencies have affected implementation of these programs. Poor coordination among the key U.S. agencies has been a long-standing management problem, and cooperation with other foreign donors has been limited. U.S. agencies’ strategic plans do not outline how these agencies will overcome coordination problems and cooperate with other foreign donors on program planning and implementation to maximize scarce resources. Also, U.S. agencies, including USAID, have not consistently evaluated program results and have tended to stress output measures, such as the numbers of people trained, over indicators that measure program outcomes and results, such as reforming law enforcement practices. Further, U.S. agencies have not consistently shared lessons learned from completed projects, thus missing opportunities to enhance the outcomes of their programs.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the committee may have at this time. For future contacts regarding this testimony, please call Jess Ford or John Brummet at (202) 512-4128. Individuals making key contributions to this testimony include Heather Barker, David Bernet, Janey Cohen, Diana Glod, Kathryn Hartsburg, Edward Kennedy, Joy Labez, Jessica Lundberg, and Audrey Solis.

Overseas Presence: Conditions of Overseas Diplomatic Facilities. GAO-03-557T.
Washington, D.C.: March 20, 2003.
Overseas Presence: Rightsizing Framework Can Be Applied at U.S. Diplomatic Posts in Developing Countries. GAO-03-396. Washington, D.C.: April 7, 2003.
Embassy Construction: Process for Determining Staffing Requirements Needs Improvement. GAO-03-411. Washington, D.C.: April 7, 2003.
Overseas Presence: Framework for Assessing Embassy Staff Levels Can Support Rightsizing Initiatives. GAO-02-780. Washington, D.C.: July 26, 2002.
State Department: Sale of Unneeded Property Has Increased, but Further Improvements Are Necessary. GAO-02-590. Washington, D.C.: June 11, 2002.
Embassy Construction: Long-Term Planning Will Enhance Program Decision-making. GAO-01-11. Washington, D.C.: January 22, 2001.
State Department: Decision to Retain Embassy Parking Lot in Paris, France, Should Be Revisited. GAO-01-477. Washington, D.C.: April 13, 2001.
State Department: Staffing Shortfalls and Ineffective Assignment System Compromise Diplomatic Readiness at Hardship Posts. GAO-02-626. Washington, D.C.: June 18, 2002.
Foreign Languages: Human Capital Approach Needed to Correct Staffing and Proficiency Shortfalls. GAO-02-375. Washington, D.C.: January 31, 2002.
Information Technology: State Department-Led Overseas Modernization Program Faces Management Challenges. GAO-02-41. Washington, D.C.: November 16, 2001.
Foreign Affairs: Effort to Upgrade Information Technology Overseas Faces Formidable Challenges. GAO-T-AIMD/NSIAD-00-214. Washington, D.C.: June 22, 2000.
Electronic Signature: Sanction of the Department of State’s System. GAO/AIMD-00-227R. Washington, D.C.: July 10, 2000.
Major Management Challenges and Program Risks: Department of State. GAO-03-107. Washington, D.C.: January 2003.
Department of State: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-02-42. Washington, D.C.: December 7, 2001.
Observations on the Department of State’s Fiscal Year 1999 Performance Report and Fiscal Year 2001 Performance Plan.
GAO/NSIAD-00-189R. Washington, D.C.: June 30, 2000.
Major Management Challenges and Program Risks: Department of State. GAO-01-252. Washington, D.C.: January 2001.
U.S. Agency for International Development: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-721. Washington, D.C.: August 17, 2001.
Observations on the Department of State’s Fiscal Year 2000 Performance Plan. GAO/NSIAD-99-183R. Washington, D.C.: July 20, 1999.
Major Management Challenges and Program Risks: Implementation Status of Open Recommendations. GAO/OCG-99-28. Washington, D.C.: July 30, 1999.
The Results Act: Observations on the Department of State’s Fiscal Year 1999 Annual Performance Plan. GAO/NSIAD-98-210R. Washington, D.C.: June 17, 1998.
Major Management Challenges and Program Risks: U.S. Agency for International Development. GAO-03-111. Washington, D.C.: January 2003.
Foreign Assistance: Disaster Recovery Program Addressed Intended Purposes, but USAID Needs Greater Flexibility to Improve Its Response Capability. GAO-02-787. Washington, D.C.: July 24, 2002.
Foreign Assistance: USAID Relies Heavily on Nongovernmental Organizations, but Better Data Needed to Evaluate Approaches. GAO-02-471. Washington, D.C.: April 25, 2002.
Major Management Challenges and Program Risks: U.S. Agency for International Development. GAO-01-256. Washington, D.C.: January 2001.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In recent years, funding for the Department of State has increased dramatically, particularly for security upgrades at overseas facilities and a major hiring program. The U.S.
Agency for International Development (USAID) has also received more funds, especially for programs in Afghanistan and Iraq and HIV/AIDS relief. Both State and USAID face significant management challenges in carrying out their respective missions, particularly in areas such as human capital management, performance measurement, and information technology management. Despite increased funding, resources are not unlimited. Thus, State, USAID, and all government agencies have an obligation to ensure that taxpayer resources are managed wisely. Long-lasting improvements in performance will require continual vigilance and the identification of widespread opportunities to improve the economy, efficiency, and effectiveness of State's and USAID's existing goals and programs. GAO was asked to summarize its findings from reports on State's and USAID's management of resources, actions taken in response to our reports, and recommendations to promote cost savings and more efficient and effective operations at the department and agency. Overall, State has increased its attention to managing resources, and its efforts are starting to show results, including potential cost savings and improved operational effectiveness and efficiency. For example, in 1996, GAO criticized State's performance in disposing of its overseas property. During fiscal years 1997 through 2002, State sold 129 properties for more than $459 million, with plans to sell additional properties during fiscal years 2003 through 2008 for approximately $300 million. Additional sales would help offset costs of replacing about 160 unsecure and deteriorating embassies. State is now taking a more businesslike approach with its embassy construction program, which is estimated to cost an additional $17 billion beginning in fiscal year 2004. Cost-cutting efforts allowed State to achieve $150 million in potential cost savings during fiscal year 2002.
State should continue its reforms as it determines requirements for, designs, and builds new embassies. The costs of maintaining staff overseas are generally very high. In response to management weaknesses GAO identified, State has begun addressing workforce planning issues to ensure that the government has the right people in the right places at the right times. State should continue this work and adopt industry best practices that could reduce costs and streamline services overseas. GAO and others have highlighted deficiencies in State's information technology. State invested $236 million in fiscal year 2002 on modernization initiatives overseas and plans to spend $262 million over fiscal years 2003 and 2004. Ongoing oversight of this investment will be necessary to minimize the risks of spending large sums of money on systems that do not produce commensurate value. State has improved its strategic planning to better link staffing and budgetary requirements with policy priorities. Setting clear objectives and tying resources to them will make operations more efficient. GAO and others have also identified some management weaknesses at USAID, mainly in human capital management and workforce planning, program evaluation and performance measurement, information technology, and financial management. While USAID is taking corrective actions, better management of critical systems is essential to safeguard the agency's funds. Given the added resources State and USAID must manage, current budget deficits, and new requirements since Sept. 11, 2001, oversight is needed to ensure continued progress toward effective management practices. This focus could result in cost savings or other efficiencies.
Financial assistance to help students and families pay for postsecondary education has been provided for many years through student grant and loan programs authorized under title IV of the Higher Education Act of 1965, as amended. Examples of these programs include Pell Grants for low-income students, PLUS loans to parents and graduate students, and Stafford loans. Much of this aid has been provided on the basis of the difference between a student’s cost of attendance and an estimate of the ability of the student and the student’s family to pay these costs, called the expected family contribution (EFC). The EFC is calculated based on information provided by students and parents on the Free Application for Federal Student Aid (FAFSA). Statutory definitions establish the criteria that students must meet to be considered independent of their parents for the purpose of financial aid, and statutory formulas establish the share of income and assets that are expected to be available for the student’s education. In fiscal year 2005, the Department of Education made approximately $14 billion in grants, and title IV lending programs made available another $57 billion in loan assistance. Title IV also authorizes programs funded by the federal government and administered by participating higher education institutions, including the Supplemental Educational Opportunity Grant (SEOG), Perkins loans, and federal work- study aid, collectively known as campus-based aid. Table 1 provides brief descriptions of the title IV programs that we reviewed in our 2005 report and includes two programs—Academic Competitiveness Grants and National Science and Mathematics Access to Retain Talent Grants—that were created since that report was issued. Postsecondary assistance also has been provided through a range of tax preferences, including postsecondary tax credits, tax deductions, and tax- exempt savings programs. 
For example, the Taxpayer Relief Act of 1997 allows eligible tax filers to reduce their tax liability by receiving, for tax year 2006, up to a $1,650 Hope tax credit or up to a $2,000 Lifetime Learning tax credit for tuition and course-related fees paid for a single student. The fiscal year 2005 federal revenue loss estimate of the postsecondary tax preferences that we reviewed was $9.15 billion. Tax preferences discussed as part of our 2005 report include the following:

Lifetime Learning Credit—income-based tax credit claimed by tax filers on behalf of students enrolled in one or more postsecondary education courses.

Hope Credit—income-based tax credit claimed by tax filers on behalf of students enrolled at least half-time in an eligible program of study and who are in their first 2 years of postsecondary education.

Student Loan Interest Deduction—income-based tax deduction claimed by tax filers on behalf of students who took out qualified student loans while enrolled at least half-time.

Tuition and Fees Deduction—income-based tax deduction claimed by tax filers on behalf of students who are enrolled in one or more postsecondary education courses and have either a high school diploma or a General Educational Development (GED) credential.

Section 529 Qualified Tuition Programs—College Savings Programs and Prepaid Tuition Programs—non-income-based programs that provide favorable tax treatment to investments and distributions used to pay the expenses of future or current postsecondary students.

Coverdell Education Savings Accounts—income-based savings program providing favorable tax treatment to investments and distributions used to pay the expenses of future or current elementary, secondary, or postsecondary students.

As figure 1 demonstrates, the use of tax preferences has increased since 1997, both in absolute terms and relative to the use of title IV aid. 
Postsecondary student financial assistance provided through programs authorized under title IV of the Higher Education Act and the tax code differ in timing of assistance, the populations that receive assistance, and the responsibility of students and families to obtain and use the assistance. Title IV programs and education-related tax preferences differ significantly in when eligibility is established and in the timing of the assistance they provide. Title IV programs generally provide benefits to students while they are in school. Education-related tax preferences, on the other hand, (1) encourage saving for college through tax-exempt saving, (2) assist enrolled students and their families in meeting the current costs of postsecondary education through credits and tuition deductions, and (3) assist students and families repaying the costs of past postsecondary education through a tax deduction for student loan interest paid. While title IV programs and tax preferences assist many students and families, program and tax rules affect eligibility for such assistance. These rules also affect the distribution of title IV aid and the assistance provided through tax preferences. As a result, the beneficiaries of title IV programs and tax preferences differ. Title IV programs generally have rules for calculating grant and loan assistance that give different consideration to family income, assets, and college costs in the award of financial aid. For example, Pell Grant awards are calculated by subtracting the student’s EFC from the maximum Pell Grant award ($4,050 in academic year 2006-2007), or the student’s cost of attendance, whichever is less. Because the EFC is closely linked to family income and circumstances (such as the size of the family and the number of dependents in school), and modest EFCs are required for Pell eligibility, Pell awards are made primarily to families with modest incomes. 
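The Pell calculation just described can be sketched as follows. This is an illustrative sketch, not Education's actual award methodology: the $4,050 maximum is the academic year 2006-2007 figure cited above, and the zero floor and the absence of a minimum-award cutoff are simplifying assumptions.

```python
def pell_award(efc: float, cost_of_attendance: float,
               max_award: float = 4050) -> float:
    """Pell award = (lesser of the maximum award or the cost of
    attendance) minus the EFC.

    The zero floor is an illustrative assumption; actual eligibility
    rules also impose a minimum-award cutoff not modeled here.
    """
    return max(0.0, min(max_award, cost_of_attendance) - efc)

# A student with a modest EFC receives most of the maximum award:
print(pell_award(efc=1000, cost_of_attendance=12000))  # 3050.0
# A large EFC eliminates the award entirely:
print(pell_award(efc=6000, cost_of_attendance=12000))  # 0.0
```

Because the award is capped by the cost of attendance, a student at a very low-cost school can receive at most that cost, which is why Pell dollars concentrate among low-EFC (modest-income) families.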
In contrast, the maximum unsubsidized Stafford loan amount is calculated without direct consideration of financial need: students may borrow up to their cost of attendance, minus the estimated financial assistance they will receive. As table 2 shows, 92 percent of Pell financial support in 2003-2004 was provided to dependent students whose family incomes were $40,000 or below, and the 38 percent of Pell recipients in the lowest income category ($20,000 or below) received a higher share (48 percent) of Pell financial support. Because independent students generally have lower incomes and accumulated savings than dependent students and their families, patterns of program participation and dollar distribution differ. Participation of independent students in Pell, subsidized Stafford, and unsubsidized Stafford loan programs is heavily concentrated among those with incomes of $40,000 or less: from 74 percent (unsubsidized Stafford) to 95 percent (Pell) of program participants have incomes below this level. As shown in table 3, the distribution of award dollars follows a nearly identical pattern. Many education-related tax preferences have both de facto lower limits created by the need to have a positive tax liability to obtain their benefit and income ceilings on who may use them. For example, the Hope and Lifetime Learning tax credits require that tax filers have a positive tax liability to use them and income-related phase-out provisions in 2005 that began at $45,000 and $90,000 for single and joint filers, respectively. Furthermore, tax-exempt savings are more advantageous to families with higher incomes and tax liabilities because, among other reasons, these families hold greater assets to invest in these tax preferences and have a higher marginal tax rate, and thus benefit the most from the use of these tax preferences. 
Table 4 shows the income categories of tax filers claiming the three tax preferences available to current students and/or their families along with the reduced tax liabilities from those preferences in 2004. The federal government and postsecondary institutions have significant responsibilities in assisting students and families in obtaining assistance provided under title IV programs but only minor roles with respect to tax filers’ use of education-related tax preferences. To obtain federal student aid, applicants must first complete the FAFSA, a form which required students to complete up to 100 fields in 2006-2007. Submitting a completed FAFSA to the Department of Education largely concludes students’ and families’ responsibility in obtaining aid. The Department of Education is responsible for calculating students’ and families’ EFC on the basis of the FAFSA, and students’ educational institutions are responsible for determining aid eligibility and the amounts and packaging of awards. In contrast, higher education tax preferences require students and families to take more responsibility. Although postsecondary institutions provide students and IRS with information about higher education attendance, they have no other responsibilities for higher education tax credits, deductions, or tax-preferred savings. The federal government’s primary role with respect to higher education tax preferences is the promulgation of rules; the provision of guidance to tax filers; and the processing of tax returns, including some checks on the accuracy of items reported on those tax returns. The responsibility for selecting among and properly using tax preferences rests with tax filers. Unlike title IV programs, users must understand the rules, identify applicable tax preferences, understand how these tax preferences interact with one another and with federal student aid, keep records sufficient to support their tax filing, and correctly claim the credit or deduction on their return. 
According to our analysis of IRS data on the use of Hope and Lifetime Learning tax credits and the tuition deduction in our 2005 report, some tax filers appear to make less-than-optimal choices among them. The apparent suboptimal use of postsecondary tax preferences may arise, in part, from the complexity of these provisions. Making poor choices among tax preferences for postsecondary education may be costly to tax filers. For example, families may strand assets in a tax-exempt savings vehicle and incur tax penalties on their distribution if their child chooses not to go to college. They may also fail to minimize their federal income tax liability by claiming a tax credit or deduction that yields less of a reduction in taxes than a different tax preference or by failing to claim any of their available tax preferences. For example, if a married couple filing jointly with one dependent in his/her first 2 years of college had an adjusted gross income of $50,000, qualified expenses of $10,000 in 2006, and tax liability greater than $2,000, their tax liability would be reduced by $2,000 if they claimed the Lifetime Learning credit but only $1,650 if they claimed the Hope credit. In our 2005 report, we found that some people who appear to be eligible for tax credits and/or the tuition deduction did not claim them. About 77 percent of the tax year 2002 tax returns that we were able to review were apparently eligible to claim one or more of the three tax preferences. However, about 27 percent of those returns, representing about 374,000 tax filers, failed to use any of them. The amount by which these tax filers failed to reduce their tax averaged $169; 10 percent of this group could have reduced their tax liabilities by over $500. Suboptimal choices were not limited to tax filers who prepared their own tax returns. 
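The credit comparison in the example above can be checked with a short sketch. The bracket structure assumed here (Hope: 100 percent of the first $1,100 of qualified expenses plus 50 percent of the next $1,100; Lifetime Learning: 20 percent of up to $10,000) is our reading of the tax-year 2006 rules and is consistent with the $1,650 and $2,000 maximums cited earlier; income phase-outs and other eligibility rules are ignored.

```python
def hope_credit_2006(expenses: float) -> float:
    # 100% of the first $1,100 plus 50% of the next $1,100 (max $1,650);
    # bracket amounts are an assumption consistent with the stated maximum.
    return min(expenses, 1100) + 0.5 * min(max(expenses - 1100, 0), 1100)

def lifetime_learning_2006(expenses: float) -> float:
    # 20% of up to $10,000 in qualified expenses (max $2,000).
    return 0.2 * min(expenses, 10000)

expenses = 10000
print(hope_credit_2006(expenses))        # 1650.0
print(lifetime_learning_2006(expenses))  # 2000.0
```

With $10,000 in expenses, Lifetime Learning is worth $350 more, matching the example; at low expense levels the ordering reverses, which is one reason filers (and, as noted below, even paid preparers) make suboptimal choices.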
A possible indicator of the difficulty people face in understanding education-related tax preferences is how often the suboptimal choices we identified were found on tax returns prepared by paid tax preparers. We estimate that about 50 percent of the returns we found that appear to have failed to optimally reduce the tax filer’s tax liability were prepared by paid tax preparers. Generalized to the population of tax returns we were able to review, returns prepared by paid tax preparers represent about 223,000 of the approximately 447,000 suboptimal choices we found. Our April 2006 study of paid tax preparers corroborated the problem of confusion over which of the tax preferences to claim. Of the 9 undercover investigation visits we made to paid preparers with a taxpayer with a dependent college student, 3 preparers did not claim the credit most advantageous to the taxpayer and thereby cost these taxpayers hundreds of dollars in refunds. In our investigative scenario, the expenses and the year in school made the Hope education credit far more advantageous to the taxpayer than either the tuition and fees deduction or the Lifetime Learning credit. The apparently suboptimal use of postsecondary tax preferences may arise, in part, because of the complexity of using these provisions. Tax policy analysts have frequently identified postsecondary tax preferences as a set of tax provisions that demand a particularly large investment of knowledge and skill on the part of students and families or expert assistance purchased by those with the means to do so. They suggest that this complexity arises from multiple postsecondary tax preferences with similar purposes, from key definitions that vary across these provisions, and from rules that coordinate the use of multiple tax provisions. Twelve tax preferences are outlined in the IRS publication, Tax Benefits for Education, for use in preparing 2005 returns (the most recent publication available). 
The publication includes 4 different tax preferences for educational saving. Three of these preferences—Coverdell Education Savings Accounts, Qualified Tuition Programs, and U.S. education savings bonds—differ across more than a dozen dimensions, including the tax penalty that occurs when account balances are not used for qualified higher education expenses, who may be an eligible beneficiary, annual contribution limits, and other features. In addition to learning about, comparing, and selecting tax preferences, filers who wish to make optimal use of multiple tax preferences must understand how the use of one tax preference affects the use of others. The use of multiple education-related tax preferences is coordinated through rules that prohibit the application of the same qualified higher education expenses for the same student to more than one education-related tax preference, sometimes referred to as “anti-double-dipping rules.” These rules are important because they prevent tax filers from underreporting their tax liability. Nonetheless, anti-double-dipping rules are potentially difficult for tax filers to understand and apply, and misunderstanding them may have consequences for a filer’s tax liability. Little is known about the effectiveness of federal grant and loan programs and education-related tax preferences in promoting attendance, choice, and the likelihood that students either earn a degree or continue their education (referred to as persistence). Many federal aid programs and tax preferences have not been studied, and for those that have been studied, important aspects of their effectiveness remain unexamined. In our 2005 report, we found no research on any aspect of effectiveness for several major title IV federal postsecondary programs and tax preferences. 
For example, no research had examined the effects of federal postsecondary education tax credits on students’ persistence in their studies or on the type of postsecondary institution they choose to attend. Gaps in the research-based evidence of federal postsecondary program effectiveness may be due, in part, to data and methodological challenges that have proven difficult to overcome. The relative newness of most of the tax preferences also presents challenges because relevant data are just now becoming available. In 2002, we recommended that Education sponsor research into key aspects of effectiveness of title IV programs, that Education and the Department of the Treasury collaborate on such research into the relative effectiveness of title IV programs and tax preferences, and that the Secretaries of Education and Treasury collaborate in studying the combined effects of tax preferences and title IV aid. In April 2006, Education’s Institute for Education Sciences (IES) issued a Request for Applications to conduct research on, among other things, “evaluating the efficacy of programs, practices, or policies that are intended to improve access to, persistence in, or completion of postsecondary education.” Multiyear projects funded under this subtopic are expected to begin in July 2007. As we noted in our 2002 report, research into the effectiveness of different forms of postsecondary education assistance is important. Without such information federal policymakers cannot make fact-based decisions about how to build on successful programs and make necessary changes to improve less effective programs. The budget deficit and other major fiscal challenges facing the nation necessitate rethinking the base of existing federal spending and tax programs, policies, and activities by reviewing their results and testing their continued relevance and relative priority for a changing society. 
In light of the long-term fiscal challenge this nation faces and the need to make hard decisions about how the federal government allocates resources, this hearing provides an opportunity to continue a discussion about how the federal government can best help students and their families pay for postsecondary education. Some questions that Congress should consider during this dialogue include:

Should the federal government consolidate postsecondary education tax provisions to make them easier for the public to use and understand?

Given its limited resources, should the government further target title IV programs and tax provisions based on need or other factors?

How can Congress best evaluate the effectiveness and efficiency of postsecondary education aid provided through the tax code?

Can tax preferences and title IV programs be better coordinated to maximize their effectiveness?

Mr. Chairman and Members of the Committee, this concludes our statement. We welcome any questions you have at this time. For further information regarding this testimony, please contact Michael Brostek at (202) 512-9039 or [email protected] or George Scott at (202) 512-7215 or [email protected]. Individuals making contributions to this testimony include David Lewis, Assistant Director; Jeff Appel, Assistant Director; Shirley Jones, Sheila McCoy, John Mingus, Jeff Procak, Carlo Salerno, Andrew Stephens, and Michael Volpe. The federal government helps students and families save, pay for, and repay the costs of postsecondary education through grant and loan programs authorized under title IV of the Higher Education Act of 1965, and through tax preferences—reductions in federal tax liabilities that result from preferential provisions in the tax code, such as exemptions and exclusions from taxation, deductions, credits, deferrals, and preferential tax rates. 
Assistance provided under title IV programs includes Pell Grants for low-income students, the newly established Academic Competitiveness and National Science and Mathematics Access to Retain Talent Grants, PLUS loans, which parents as well as graduate and professional students may apply for, and Stafford loans. While each of the three grant types reduces the price paid by the student, student loans help to finance the remaining costs and are to be repaid according to varying terms. Stafford loans may be either subsidized or unsubsidized. The federal government pays the interest cost on subsidized loans while the student is in school, and during a 6-month period known as the grace period, after the student leaves school. For unsubsidized loans, students are responsible for all interest costs. Stafford and PLUS loans are provided to students through both the Federal Family Education Loan (FFEL) program and the William D. Ford Direct Loan Program (FDLP). The federal government’s role in financing and administering these two loan programs differs significantly. Under the FFEL program, private lenders, such as banks, provide loan capital and make loans, and the federal government guarantees FFEL lenders a minimum yield on the loans they make and repayment if borrowers default. Under FDLP, federal funds are used as loan capital and loans are provided through participating schools. The Department of Education and its private-sector contractors jointly administer the program. Title IV also authorizes programs funded by the federal government and administered by participating higher education institutions, including the Supplemental Educational Opportunity Grant (SEOG), Perkins loans, and federal work-study aid, collectively known as campus-based aid. To receive title IV aid, students (along with parents, in the case of dependent students) must complete a Free Application for Federal Student Aid form. 
Information from the FAFSA, particularly income and asset information, is used to determine the amount of money—called the expected family contribution—that the student and/or family is expected to contribute to the student’s education. Statutory definitions establish the criteria that students must meet to be considered independent of their parents for the purpose of financial aid, and statutory formulas establish the share of income and assets that are expected to be available for the student’s education. Once the EFC is established, it is compared with the cost of attendance at the institution chosen by the student. The cost of attendance comprises tuition and fees; room and board; books and supplies; transportation; miscellaneous personal expenses; and, for some students, additional expenses. If the EFC is greater than the cost of attendance, the student is not considered to have financial need, according to the federal aid methodology. If the cost of attendance is greater than the EFC, then the student is considered to have financial need. Title IV assistance that is made on the basis of the calculated need of aid applicants is called need-based aid. Key characteristics of title IV programs are summarized in table 5 below. Prior to the 1990s, virtually all major federal initiatives to assist students with the costs of postsecondary education were provided through grant and loan programs authorized under title IV of the Higher Education Act. Since the 1990s, however, federal initiatives to assist families and students in paying for postsecondary education have largely been implemented through the federal tax code. The federal tax code now contains a range of tax preferences that may be used to assist students and families in saving for, paying, or repaying the costs of postsecondary education. 
These tax preferences include credits and deductions, both of which allow tax filers to use qualified higher education expenses to reduce their federal income tax liability. The tax credits reduce the tax filers’ income tax liability on a dollar-for-dollar basis but are not refundable. Tax deductions permit qualified higher education expenses to be subtracted from income that would otherwise be taxable. To benefit from a higher education tax credit or tuition deduction, a tax filer must use tax form 1040 or 1040A, have an adjusted gross income below the provisions’ statutorily specified income limits, and have a positive tax liability after other deductions and credits are calculated, among other requirements. Tax preferences also include tax-exempt savings vehicles. Section 529 of the tax code makes tax free the investment income from qualified tuition programs. There are two types of qualified tuition programs: savings programs established by states and prepaid tuition programs established either by states or by one or more eligible educational institutions. Another tax-exempt savings vehicle is the Coverdell Education Savings Account. Tax penalties apply to both 529 programs and Coverdell savings accounts if the funds are not used for allowable education expenses. Key features of these and other education-related tax preferences are described below, in table 6. Our review of tax preferences did not include exclusions from income, which permit certain types of education-related income to be excluded from the calculation of adjusted gross income on which taxes are based. For example, qualified scholarships covering tuition and fees and qualified tuition reductions from eligible educational institutions are not included in gross income for income tax purposes. Similarly, student loans forgiven when a graduate goes into certain professions for a certain period of time are also not subject to federal income taxes. 
We also did not include special provisions in the tax code that also extend existing tax preferences when tax filers support a postsecondary education student. For example, tax filers may claim postsecondary education students as dependents after age 18, even if the student has his or her own income over the limit that would otherwise apply. Also, gift taxes do not apply to funds used for certain postsecondary educational expenses, even for amounts in excess of the usual $11,000 limit on gifts. In addition, funds withdrawn early from an Individual Retirement Account are not subject to the usual 10 percent penalty when used for either a tax filer’s or his or her dependent’s postsecondary educational expenses. For an example of how the use of college savings programs and the tuition deduction is affected by “anti-double-dipping” rules, consider the following: To calculate whether a distribution from a college savings program is taxable, tax filers must determine if the total distributions for the tax year are more or less than the total qualified educational expenses reduced by any tax-free educational assistance, i.e., their adjusted qualified education expenses (AQEE). After subtracting tax-free assistance from qualified educational expenses to arrive at the AQEE, tax filers multiply total distributed earnings by the fraction (AQEE / total amount distributed during the year). If parents of a dependent student paid $6,500 in qualified education expenses from a $3,000 tax-free scholarship and a $3,600 distribution from a tuition savings program, they would have $3,500 in AQEE. If $1,200 of the distribution consisted of earnings, then $1,200 x ($3,500 AQEE / $3,600 distribution) would result in $1,167 of the earnings being tax free, while $33 would be taxable. However, if the same tax filer had also claimed a tuition deduction, anti-double-dipping rules would require the tax filer to subtract the expenses taken into account in figuring the tuition deduction from AQEE. 
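The distribution calculation just described can be sketched as follows. The function and parameter names are illustrative, the figures come from the example in the text, and rounding to whole dollars follows the text; the optional deduction parameter applies the anti-double-dipping adjustment in which expenses used for the tuition deduction are subtracted from AQEE.

```python
def taxable_distribution(qualified_expenses: float, tax_free_aid: float,
                         distribution: float, earnings: float,
                         deduction_expenses: float = 0.0) -> float:
    """Taxable share of a section 529 distribution under the
    anti-double-dipping rules described above (illustrative sketch)."""
    # Adjusted qualified education expenses: qualified expenses minus
    # tax-free aid, and minus any expenses used for the tuition deduction.
    aqee = qualified_expenses - tax_free_aid - deduction_expenses
    # Earnings are tax free in proportion to AQEE over the distribution.
    tax_free_earnings = earnings * (aqee / distribution)
    return earnings - tax_free_earnings

# Base case from the text: $6,500 expenses, $3,000 scholarship,
# $3,600 distribution of which $1,200 is earnings -> about $33 taxable.
print(round(taxable_distribution(6500, 3000, 3600, 1200)))        # 33
# If $2,000 of expenses also fund a tuition deduction, AQEE falls
# and the taxable portion of the earnings rises to $700.
print(round(taxable_distribution(6500, 3000, 3600, 1200, 2000)))  # 700
```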
If $2,000 in expenses had been used toward the tuition deduction, then the taxable distribution from the section 529 savings program would rise to $700. For families such as these, anti-double-dipping rules increase the computational complexity they face and may result in unanticipated tax liabilities associated with the use of section 529 savings programs. We used two data sets for this testimony: Education’s 2003-2004 National Postsecondary Student Aid Study and the Internal Revenue Service’s 2002 and 2004 Statistics of Income. Estimates from both data sets are subject to sampling errors, and the estimates we report are surrounded by a 95 percent confidence interval. The following tables provide the lower and upper bounds of the 95 percent confidence interval for all estimate figures in the tables in this testimony. For figures drawn from these data, we provide both point estimates and confidence intervals. | Federal assistance helps students and families pay for postsecondary education through several policy tools--grant and loan programs authorized by title IV of the Higher Education Act of 1965 and more recently enacted tax preferences. This testimony summarizes and updates our 2005 report on (1) how title IV assistance compares to that provided through the tax code, (2) the extent to which tax filers effectively use postsecondary tax preferences, and (3) what is known about the effectiveness of federal assistance. This hearing is an opportunity to consider whether any changes should be made in the government's overall strategy for providing such assistance or to the individual programs and tax provisions that provide the assistance. This statement is based on previously published GAO work and reviews of relevant literature. Title IV student aid and tax preferences provide assistance to a wide range of students and families in different ways. 
While both help students meet current expenses, tax preferences also assist students and families with saving for and repaying postsecondary costs. Both serve students and families with a range of incomes, but some forms of title IV aid--grant aid, in particular--provide assistance to those whose incomes are lower, on average, than is the case with tax preferences. Tax preferences require more responsibility on the part of students and families than title IV aid because taxpayers must identify applicable tax preferences, understand complex rules concerning their use, and correctly calculate and claim credits or deductions. While the tax preferences are a newer policy tool, the number of tax filers using them has grown quickly, surpassing the number of students aided under title IV in 2002. Some tax filers do not appear to make optimal education-related tax decisions. For example, among the limited number of 2002 tax returns available for our analysis, 27 percent of eligible tax filers did not claim either the tuition deduction or a tax credit. In so doing, these tax filers failed to reduce their tax liability by $169, on average, and 10 percent of these filers could have reduced their tax liability by over $500. One explanation for these taxpayers' choices may be the complexity of postsecondary tax provisions, which experts have commonly identified as difficult for tax filers to use. Little is known about the effectiveness of title IV aid or tax preferences in promoting, for example, postsecondary attendance or school choice, in part because of research data and methodological challenges. As a result, policymakers do not have information that would allow them to make the most efficient use of limited federal resources to help students and families. |
The Commodity Futures Trading Act of 1974 (the Act) established the Commodity Futures Trading Commission (CFTC) as an independent agency to better enforce the Commodity Exchange Act and oversee and regulate what was at the time an increasingly complex futures market. The Act requires the agency to simultaneously submit its budget request to House and Senate Appropriations and oversight committees. The Act also grants independent leasing authority to CFTC. As such, the CFTC is not required to obtain its space through the General Services Administration (GSA). Commodity futures trading has grown increasingly complex since its 19th century origins, when agricultural commodities dominated the industry. During the 20th century, futures trading expanded to include greater diversity in commodities, such as metals, oil, and financial products, including stock indexes and foreign currency. Subsequently, the Dodd-Frank Act expanded CFTC’s regulatory jurisdiction to include the previously unregulated over-the-counter derivatives market, commonly known as the “swaps” market. CFTC, with a fiscal year 2015 budget of approximately $250 million, is responsible for administering and enforcing provisions of the Commodity Exchange Act, fostering open and transparent markets, and protecting futures markets from excessive speculation, commodity price manipulation, and fraud. The agency maintains four mission-related oversight divisions:

Market Oversight: conducts trade surveillance and oversees trading facilities such as futures exchanges;

Swap Dealer and Intermediary Oversight: oversees registration and compliance in the derivatives market;

Clearing and Risk: oversees derivatives clearing organizations and other major market participants; and

Enforcement: investigates and prosecutes alleged violations of the Commodity Exchange Act.

In addition, the agency maintains several divisions related to functional operations and support in its four locations. 
CFTC closed two regional offices in Los Angeles and Minneapolis in 2003 and 2007, respectively. CFTC began planning to substantially expand leased space prior to the enactment of the Dodd-Frank Act and then entered into leases that did not make efficient use of limited government resources. CFTC lease costs vary compared to other federal leases in the same markets. With the exception of its Washington, D.C., headquarters, CFTC’s rentable square foot (rsf) costs are lower than or about the same as lease costs among other federal agencies in the regional office locations. CFTC followed some elements of leading government-leasing practices; however, the agency lacked comprehensive policies and procedures to guide efficient and cost-effective decisions for lease procurement. As a result, CFTC currently has lease obligations for unused space that extend to 2021 and beyond. CFTC renewed leases and expanded space in its Washington, D.C., headquarters and three regional office locations, prior to receiving the funding necessary to hire staff to occupy the additional space. Anticipating the increased oversight that would result from regulating and monitoring the swaps market, CFTC began planning for the expansion of its leased space in the fiscal year 2009 time frame—more than a year before the enactment of the Dodd-Frank Act in July 2010. The resulting leasing decisions negatively impacted the CFTC’s space utilization and resulted in inefficient use of limited government resources. Federal standards for internal control call for agencies to identify and analyze relevant risks associated with achieving agencies’ objectives. According to these standards, management needs to comprehensively identify risks and should consider all significant interactions between the entity and other parties as well as internal factors at both the entity-wide and activity level, including considering economic conditions. 
Although CFTC’s leasing decisions from fiscal years 2009 to 2012 significantly increased its space, CFTC could not provide us with an analysis of risks related to these decisions. CFTC has incrementally amended leases and expanded space in its Washington, D.C., headquarters since first occupying the building in 1995, and in 2009, CFTC extended the lease that was set to expire in 2015 by 10 years (through the end of fiscal year 2025) and expanded its leased space by more than 78 percent from fiscal years 2010 through 2012. Similarly, CFTC amended existing leases and expanded space in its Chicago and New York regional offices in 2009 and 2011, respectively, and in 2011, CFTC relocated its Kansas City regional office to a larger space. As discussed below, these expansions resulted in reduced rates of occupancy and increased costs. As Table 1 shows, overall, the CFTC increased its leased space by 74 percent from fiscal year 2008 through fiscal year 2015. The greatest increase occurred in the Kansas City Regional Office where the volume of office space more than doubled. Also, during this period, the Kansas City Board of Trade closed and merged with the Chicago Mercantile Exchange. In the Chicago and New York regional offices, CFTC also increased leased space—adding approximately 20,000 square feet in Chicago and 22,000 square feet in New York. According to CFTC, the additional space currently gives the agency the capacity to accommodate 1,289 staff overall. The agency requested additional funding to cover its new regulatory responsibilities and, as figure 1 below demonstrates, in fiscal years 2009 and 2010 the CFTC was appropriated funding in excess of its request. Figure 1 also shows that this period was followed by 5 years of appropriations less than the amount requested. Therefore, the CFTC could not expand its staff at a rate that would allow for full utilization of the additional leased space. 
On average, the agency received about 109 percent of the funding it requested in fiscal years 2009 and 2010 and about 76 percent of its requests from fiscal years 2011 through 2015. Figure 1 illustrates that––while not always granting CFTC’s full funding request––Congress increased funding in nominal terms for CFTC every year from fiscal year 2008 through fiscal year 2015––with the exception of fiscal year 2013, when funding declined slightly. Overall, CFTC’s fiscal year 2015 appropriations represent an increase of nearly $138 million, or about 123 percent, when compared to fiscal year 2008. The amount CFTC allocated versus the amount it requested for staff follows a similar pattern. In fiscal years 2009 and 2010, based on its higher-than-requested appropriation, CFTC hired more staff than it had anticipated. In the following 5 fiscal years, with a lower appropriation, it hired fewer staff than requested. CFTC, on average, hired about 13 percent fewer staff than it originally requested between fiscal year 2008 and fiscal year 2015 (see fig. 2). CFTC federal employee staffing increased in absolute terms by about 53 percent from fiscal year 2008 through fiscal year 2015, according to CFTC data (see table 2 below). Moreover, CFTC greatly expanded the number of on-site contractors it employs, an increase of about 324 percent during the same period. According to CFTC, since commodity futures trading is increasingly electronic and data intensive, most of the CFTC’s on-site contractors are involved with operating and maintaining CFTC’s electronic data systems. This increase also reflects the $35 to $55 million Congress set aside for the purchase of information technology in the appropriations for fiscal years 2012, 2014, and 2015. When CFTC’s employees and on-site contractors are combined, aggregate agency staffing increased from 549 to 1,006, or more than 80 percent, from fiscal years 2008 through 2015. 
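The staffing growth figure above follows from simple percentage-change arithmetic; a minimal sketch using the combined staffing counts reported for fiscal years 2008 and 2015 (the other percentages in this paragraph can be checked the same way):

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Combined CFTC employees and on-site contractors, per CFTC data.
combined_fy2008 = 549
combined_fy2015 = 1006

growth = pct_change(combined_fy2008, combined_fy2015)
print(f"Combined staffing growth: {growth:.0f}%")  # about 83 percent, i.e., more than 80
```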
This increase, however, falls below the approximately 1,289 positions for which the CFTC leased additional office space. As discussed below, expanding leased space before obtaining an appropriation to fund additional staff has resulted in substantial space underutilization, which increased the space allocation per CFTC staff member (including CFTC employees and on-site contractors). According to our analysis of CFTC data, the overall allocation of useable square feet (usf) per CFTC staff member in fiscal years 2008 through 2010 was 303 square feet on average. From fiscal years 2011 through 2015, the allocation increased to 465 square feet on average in contrast to the approximately 300 usf per employee noted in CFTC’s 2009 Program of Requirements, a space-planning document for all four of the agency’s office locations. The total space utilization for all four CFTC offices combined was about 78 percent at the end of fiscal year 2015. However, each office had differing levels of space utilization, as figure 3 below illustrates, according to our analysis of CFTC data. As figure 3 above illustrates, the Kansas City Regional Office is the most underutilized of the four offices with a staff of 31, including contractors, housed in space intended to accommodate 72. When we visited the Kansas City office, officials told us that CFTC vacated approximately a third of its leased space in response to the CFTC’s OIG recommendation that the agency take steps to dispose of underutilized property in that location, including subleasing or returning the space to the landlord (see figure 4 below). According to CFTC officials, the only effective option to cease paying for the vacant space in Kansas City involves negotiating with the landlord to return the space. The landlord agreed to try to lease the vacant floor; however, there has been limited interest thus far, and CFTC continues to pay rent on the vacant space. 
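The utilization rates cited here are simply occupied seats divided by planned capacity; a small sketch using the Kansas City figures from the paragraph above:

```python
def utilization(staff: int, capacity: int) -> float:
    """Share of planned seats actually occupied, in percent."""
    return staff / capacity * 100

# Kansas City Regional Office: 31 staff (including contractors) in space
# intended to accommodate 72, per our analysis of CFTC data.
kc_rate = utilization(31, 72)
print(f"Kansas City utilization: {kc_rate:.0f}%")  # about 43 percent
```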
In our review of CFTC leases, we found that all of the leases include provisions for subleasing space. CFTC officials told us that the agency was only authorized to enter into subleases in circumstances where the sublease would further the purposes of the Commodity Exchange Act. According to CFTC, subleasing the space in a manner that furthers the purposes of the Act would, as a practical matter, be very difficult to accomplish. The CFTC’s OIG released additional reports in 2015 that found underutilized space in the Chicago and New York City Regional offices but not nearly to the extent found in the Kansas City Regional Office. The report on the Chicago Regional Office recommended better utilization of space. According to our analysis of CFTC’s data, space utilization in the Chicago office improved as CFTC increased the number of staff (including contractors) from 137 in fiscal year 2014 to 150 in fiscal year 2015. The Chicago office currently utilizes about 88 percent of its space (see fig. 3). With regard to the New York City Regional Office, the OIG recommended that CFTC sublet or negotiate returning the additional space it leased beginning in fiscal year 2012. Our analysis of CFTC data found that space utilization in the New York City office, similar to the Chicago office, also improved as staff increased (including contractors) from 80 to 91, or nearly 14 percent, from fiscal year 2014 to fiscal year 2015. When we visited the New York City office in January 2016, we observed vacant offices, some of which were unfinished, unventilated, and not adjacent to one another. As of the end of fiscal year 2015, the New York City office had a utilization rate of 68 percent (see fig. 3). CFTC officials said that they have notified the landlord that they would like to return some space on one floor, but the building currently has a vacancy rate of about 30 percent, so this space would likely be difficult to rent. 
According to CFTC data, combined lease costs for all CFTC offices reached about $20.6 million in fiscal year 2015—a 79 percent increase in nominal dollars over the combined fiscal year 2008 lease costs (see app. II for details on lease costs). All four of the CFTC office leases typically cover a period of 10 years. As such, the current leases will not expire until fiscal years 2021 through 2025. The Kansas City Regional Office lease will expire first in 2021, followed by New York in 2022, Chicago in 2022, and Washington, D.C., in 2025. According to CFTC, lease renewal planning typically begins about 2 years in advance of lease expiration, so it is reasonable to expect CFTC to begin planning around fiscal year 2019. CFTC officials told us that they converted certain Tenant Improvement Allowances (TIA) provided under leases into rent abatements in order to reduce rent in 2011, 2012, and 2013. CFTC used TIA to complete improvements and alterations to the CFTC office space in Washington, D.C., Chicago, New York, and Kansas City, as well as to cover such costs as architectural expenses, furnishings, equipment, cabling, and moving expenses for the CFTC offices. In addition, under the terms of certain leases, any unused portion of the TIA could be converted to a rental abatement and then used to offset rental payments. For example, the Kansas City Regional Office lease sets the TIA at $35 per rentable square foot. According to our analysis, at this rate, $852,670 was available for tenant improvements, and for this particular lease, any amount not expended in the first 6 months was available as a rebate against the rent expense. As appendix II shows, CFTC used $78,222 of TIA in fiscal year 2013 as a rent credit. CFTC did not say how it used the TIA in its space planning. 
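The Kansas City TIA figures are internally consistent: at $35 per rentable square foot, a total allowance of $852,670 implies the lease's rentable area, which the text does not state directly. A sketch of that back-of-the-envelope check:

```python
# Kansas City Regional Office lease terms, per our analysis.
tia_rate_per_rsf = 35.00        # Tenant Improvement Allowance, dollars per rsf
tia_total = 852_670.00          # total TIA available for improvements
fy2013_rent_credit = 78_222.00  # unused TIA converted to a rent credit (app. II)

# The implied rentable area follows directly from the lease's TIA rate.
implied_rsf = tia_total / tia_rate_per_rsf
print(f"Implied rentable area: {implied_rsf:,.0f} rsf")  # 24,362 rsf
```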
We compared CFTC’s lease costs for fiscal year 2013 through fiscal year 2015 to the average lease costs of other federal agencies that lease through GSA in privately owned buildings in the four markets where CFTC has offices. We also compared the cost of private-sector leases for 2013 and 2014, as measured by the Building Owners and Managers Association (BOMA), a widely recognized industry association. As table 3 below shows, with the exception of the Washington, D.C., headquarters––where CFTC 2015 lease costs are about 18 percent higher than the average lease costs for federal agencies leasing office space through GSA––CFTC’s rentable square foot costs are lower than or about the same as lease costs among other federal agencies in the regional office locations. More specifically, CFTC’s lease costs were lower than those of other federal agencies in Kansas City and New York and slightly higher than those of other federal agencies in Chicago (see table 3 below). As discussed previously, CFTC began planning space expansion more than one year before the Dodd-Frank Act was signed into law and made leasing decisions in response to anticipated requirements of Dodd-Frank without fully assessing the risk of not receiving appropriations sufficient to execute its plans. According to CFTC OIG estimates, the failure to consider this risk has resulted in the agency possibly spending as much as $74 million for vacant space, if current conditions persist through the end of the current leases in fiscal years 2021 through 2025. Thus, the CFTC is not carrying out its mission in an efficient and cost-effective manner. Both CFTC’s guidance and GSA guidance share a common purpose: to maximize the value for the government while also fulfilling the agency’s mission. 
CFTC’s Statement of General Principles, which outlines the actual lease acquisition process, states a goal of maximizing competition to the extent practicable and making reasonable decisions to obtain space that enables the Commission to accomplish its mission in an efficient and cost-effective manner. Similar to CFTC’s Statement of General Principles, GSA’s Leasing Desk Guide states that it aims to help ensure that GSA leases quality space that is the best value for the government. However, CFTC’s guidance is very high-level and lacks the detail of GSA’s guide, which provides more comprehensive leasing policies and procedures. According to federal standards for internal control, policies and procedures help ensure that actions are taken to address risks and are an integral part of an entity’s accountability for stewardship of government resources. When we applied this standard, we found that CFTC’s policies did not include guidance to assess the risk of not receiving its full budget requests. CFTC has two documents, the 2009 Program of Requirements and the 2011 Statement of General Principles, that comprise its leasing guidance. The Program of Requirements, according to CFTC officials, is a space-planning document for all four of its office locations. It provides information on projected employee and contractor staff size and requirements for offices, workstations, common-use areas, and other space needs. Based on the Statement of General Principles, CFTC follows select portions of leading government guidance and regulations that facilitate: maximizing competition to the extent practicable; avoiding conflicts of interest; adhering to the requirements of procurement integrity; and making reasonable decisions to obtain space that enables the Commission to accomplish its mission in an efficient and cost-effective manner. For example, CFTC officials told us that they followed select portions of leading government guidance when they began expanding space in 2009. 
Consistent with internal control standards, GSA’s guidance provides comprehensive details on ways to formulate, document, and operationalize lease procurement. For example, GSA’s Leasing Desk Guide specifically states that confirming space requirements includes verifying that the client has appropriate funding. By comparison, CFTC’s guidance does not include this level of detail. The lack of this type of specificity in CFTC’s guidance may have contributed to not executing its lease procurements consistent with standards for internal control and thereby not making cost-effective decisions. Although CFTC officials told us that the agency relies on a commercial real estate broker for all phases of the office space acquisition process—including (1) conducting market surveys, advertising CFTC’s requirements, and drafting solicitations for offer; (2) analyzing offers received; and (3) reviewing lease documents—this reliance did not prevent the agency from entering into lease agreements before it had the funding necessary to staff the space. Federal internal control standards also state that significant decisions need to be clearly documented and readily available for examination. CFTC could only provide us with partial documentation and analysis of how it made decisions to enter into new or expanded leases. CFTC officials told us that they could not locate additional documentation because the employees who had responsibility for leasing had left the agency. Without this documentation, future decision makers may lack the institutional knowledge they need to make informed decisions. Utilizing leading government guidance could have helped CFTC to make reasonable decisions to obtain space that enables the Commission to accomplish its mission in an efficient and cost-effective manner—in keeping with its Statement of General Principles. 
In its Fiscal Year 2014 Agency Financial Report, CFTC says it plans to review and revise its space-related policies and procedures in keeping with OMB’s National Strategy for efficient use of space and real property. As of February 2016, CFTC officials told us that these policies and procedures are under review, but could not provide any other details or a timeline for completion. Further, when the current leases expire between April 2021 and September 2025, it will have been approximately 10 years since the agency last undertook lease procurement. Without comprehensive policies and institutional knowledge, the agency may be at risk of continuing to make decisions that do not make the best use of limited government resources. As noted above, based on an executive branch memo and initiatives, a GSA study, and our own research, we have identified several options that CFTC may pursue now and in the future to increase space utilization and improve the cost-effectiveness of its leasing arrangements: (1) relocating offices to less costly locations, (2) reducing office space required through increased telework, and (3) consolidating two regional offices—Kansas City and Chicago. CFTC officials told us that these options may not be achievable before their current leases expire. However, they have not fully examined the current feasibility of these options or their potential impact on reducing leased space and increasing cost-effectiveness in the future. Looking ahead, CFTC’s current leases are set to expire from fiscal year 2021 through 2025, and CFTC officials said that a reasonable practice is to begin planning for leasing activities 2 years prior to lease expiration. In the case of high-value leases—those with an annual rent above $2.85 million— GSA’s Leasing Desk Guide suggests the lease acquisition process begin 3 to 5 years prior to lease expiration. 
In keeping with these time frames, CFTC would begin planning for new leases in the next few years; however, CFTC does not have a timeline for doing so. CFTC’s offices in Washington, D.C., Kansas City, Chicago, and New York City are located in privately owned buildings in close proximity to the financial markets they oversee. According to CFTC, these locations support the agency’s oversight role, as, for example, the Dodd-Frank Act requires CFTC to perform annual examinations of two important derivatives clearing organizations––organizations that process the financial transactions involved in futures trading. The examination of these organizations requires meetings with officials and routine on-site examinations of their operations. However, there are federal buildings in Chicago and New York City also conveniently located within walking distance of the current locations of CFTC’s Chicago and New York City offices. According to CFTC officials, they did not consider leasing space in the federal buildings in these locations during the time they entered into new or expanded leases. Without doing this analysis, CFTC officials could not know whether the federal buildings may have had available space at a lower rent per square foot at the time they entered into lease agreements. As a result, they may not have acquired space in a cost-effective manner, per their Statement of General Principles. CFTC’s Washington, D.C., headquarters is located in the Central Business District submarket, which has one of the highest average rental rates in the region. By comparison, some other federal agencies have located their headquarters outside of downtown Washington, D.C. For example, the Farm Credit Administration, an independent regulatory agency that examines the banks, associations, and related entities of the Farm Credit System, located its headquarters in suburban northern Virginia. Further, the U.S. 
Department of Commerce’s Economics & Statistics Administration announced, in January 2016, that it plans to move its Bureau of Economic Analysis—approximately 590 employees—from private leased space in downtown Washington, D.C., to federally owned space in suburban Maryland. According to the U.S. Department of Commerce, the new location is expected to save taxpayers $66 million over 10 years. The 2010 presidential memorandum directs executive branch agencies to dispose of unneeded federal real estate, including a specific directive to “take immediate steps to make better use of remaining property assets as measured by utilization and occupancy rates.” Additionally, a fiscal year 2016 House appropriations bill committee report directs CFTC “to find ways to decrease space and renegotiate leasing agreements.” CFTC has conducted some analysis of optimizing space and potential lease-cost reductions for its current locations. CFTC officials said that under current lease agreements, the agency has limited options for negotiating changes in the lease terms. For example, the leases lack provisions that would allow CFTC to terminate leases prior to the agreed-upon term in such a way that CFTC would not still be responsible for the remaining rent payments. However, CFTC has not completed an analysis of the potential costs and benefits of relocating offices. Without this type of analysis, CFTC cannot make fully informed decisions about the cost-effectiveness of relocating its offices in the near term nor fully assess alternatives available to improve its space utilization. According to a 2011 GSA study, federal agencies and private sector organizations have been forced to continuously evaluate their current workspace utilization. 
The Telework Enhancement Act of 2010 requires the head of each executive agency to establish and implement a telework policy for eligible employees and requires the Office of Personnel Management to assist agencies in establishing appropriate qualitative and quantitative measures and teleworking goals. GSA’s study states that federal agencies’ expanded use of telework could reduce their real estate footprint and real estate costs. With wireless communication tools, such as smart phones and wireless networking, available, federal agencies and private organizations have turned to alternative work environments with the potential to reduce workspace costs and optimize physical workspace. OMB’s National Strategy notes that employee telework, among other things, has resulted in a need for less space. For example, we found in 2013 that some agencies, such as GSA and the U.S. Department of Agriculture’s Forest Service, have adopted “office hoteling arrangements,” a practice of providing office space to employees on an as-needed basis. This reduces the amount of physical space an agency needs to purchase or rent. Specifically, GSA implemented a hoteling program for all employees that allowed it to eliminate the need for additional leased space at four locations in the Washington, D.C., area, resulting in projected savings of approximately $25 million in annual lease payments and about a 38 percent reduction in needed office space. Further, the U.S. Forest Service uses hoteling, among other alternative workplace arrangements, to save an estimated $5 million in annual rent. Currently, 77 percent of CFTC employees have agreements for either recurring or episodic telework (see table 4). According to GSA’s study, the average workspace typically costs between $10,000 and $15,000 annually per person. Eliminating 100 workspaces, for example, could conceivably save an organization over $1 million a year. 
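The GSA study's savings claim is straightforward to reproduce: multiplying the per-workspace cost range by the number of workspaces eliminated gives the annual savings. A quick sketch:

```python
# GSA study estimate: an average workspace costs $10,000-$15,000 per person annually.
cost_low, cost_high = 10_000, 15_000
workspaces_eliminated = 100

savings_low = workspaces_eliminated * cost_low
savings_high = workspaces_eliminated * cost_high
print(f"Annual savings: ${savings_low:,} to ${savings_high:,}")
# $1,000,000 to $1,500,000 per year, i.e., over $1 million
```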
CFTC officials told us that the agency does need an on-site presence in certain cases, such as for oversight and enforcement activities. However, because commodity futures trading is now wholly electronic, officials said that increased teleworking could be a possible alternative to reduce CFTC’s rental space costs in future leases. CFTC officials said that their current policy allows for recurring telework 1 to 2 days every 2 weeks but that they have not assessed the option of increasing telework and reducing leased space as current leases expire and are renewed. However, CFTC officials said they have efforts under way to consider what policy makes sense for their operations. OMB’s National Strategy states that a key step in improving real property management is to reduce the size of the inventory by prioritizing actions to consolidate, co-locate, and dispose of properties. As discussed, the Kansas City regional office currently has 31 staff working in office space that accommodates 72. In addition, the Kansas City Board of Trade merged with the Chicago Mercantile Exchange in 2012. According to CFTC officials, the Kansas City Board of Trade traded futures and options for only one product (hard-red winter wheat) during all or substantially all of the period from fiscal years 2008 through 2012. The majority of Kansas City CFTC staff are involved with enforcement, swap dealer and intermediary oversight, and market oversight––similar to the staff in the Chicago office. Further, the Chicago Regional Office also has underutilized office space and, according to our analysis, could possibly accommodate staff from the Kansas City office. We found that CFTC’s space could be better utilized in both of these regional offices. As noted above, for the Kansas City regional office, CFTC officials have said that they have been unable to return their unused space. According to CFTC officials, they have not assessed the option of possibly consolidating these two regional offices. 
As a result, CFTC may continue to pay for vacant space through the duration of the Kansas City lease until 2021. While not an exhaustive list, these options—relocation, telework, and consolidation—are in keeping with OMB’s National Strategy to realize the greatest efficiency, reduce portfolio costs, and conserve resources for service and mission delivery. CFTC began planning to substantially expand leased space in anticipation of proposed requirements prior to the enactment of the Dodd-Frank Act. The agency renewed leases and expanded space in its four office locations before fully assessing the risk of not receiving sufficient funding to hire staff to use the space. By not considering this risk, CFTC has taken on the obligation to potentially pay as much as $74 million for unused space over the term of the current leases—a situation that could span more than a decade, given the agency’s lease obligations. We found that CFTC did not have comprehensive leasing policies or procedures in place, but followed some leading government guidance when procuring additional space. This lack of comprehensive policies and procedures presents challenges in making sound management decisions to obtain space in an efficient and cost-effective manner. OMB’s National Strategy states that a key step in improving real property management is to reduce the size of the inventory. Potentially cost-effective options include relocating offices to less costly locations, enhancing teleworking, and consolidating two regional offices—Kansas City and Chicago. Exploring these possibilities and establishing a timeline for completion could result in CFTC’s using its available funds in a more cost-effective manner. 
To help ensure that the CFTC makes cost-effective leasing decisions, and considers options for reducing future lease costs, we recommend that the Chairman of the CFTC take the following two actions prior to entering into any new or expanded lease agreements: Ensure that as CFTC revises its leasing policies and procedures, it includes comprehensive details on lease procurement that are consistent with leading government guidance and standards to assure cost-effective decisions. Establish a timeline for evaluating and documenting options to potentially improve space utilization and reduce leasing costs including, but not restricted to, (1) moving offices to less costly locations, (2) implementing enhanced telework, and (3) consolidating the Kansas City and Chicago regional offices. We provided a draft of this report to CFTC for review and comment. CFTC provided written comments, which are summarized below and reprinted in appendix IV of this report. CFTC also provided technical comments, which we incorporated as appropriate. CFTC concurred with our first recommendation that prior to entering into any new or expanded lease agreements, as CFTC revises its leasing policies and procedures, it should include comprehensive details on lease procurement that are consistent with leading government guidance and standards to assure cost-effective decisions. CFTC stated that it intends to review its procedures to address the recommendation and to ensure that the agency makes cost-effective decisions. In addition, CFTC noted that its staff will engage the General Services Administration (GSA) regarding how the two agencies can work together to better leverage GSA's leasing expertise in addressing current leasing issues and assessing future space requirements. We are encouraged by these plans, as they have the potential to help CFTC make sound leasing decisions to obtain space in an efficient and cost-effective manner. 
CFTC generally concurred with the second recommendation, which states that prior to entering into any new or expanded lease agreements, CFTC establish a timeline for evaluating and documenting options to potentially improve space utilization and reduce leasing costs including, but not restricted to, (1) moving offices to less costly locations, (2) implementing enhanced telework, and (3) consolidating the Kansas City and Chicago regional offices. Specifically, CFTC stated that it will develop a timeline and plans for evaluating, initiating, implementing, and documenting space-related actions, especially as the various lease expiration dates approach. CFTC further stated that it will continue to look for actions it can take to make the most efficient use of space. However, CFTC noted that it does not believe it can reduce leasing costs in the near term without incurring significant expense and likely increasing the agency's overall space-related expenses. According to agency officials, CFTC’s leases generally lack provisions that would allow CFTC to terminate leases prior to the agreed-upon term. CFTC also did not specifically agree or disagree to consider the three specific potential options we suggested for consideration. We continue to believe that CFTC should consider these options to make the most efficient use of space prior to entering into any new or expanded lease agreements. The options we suggested are in keeping with OMB’s National Strategy and other agencies’ actions to realize the greatest efficiency, reduce portfolio costs, and conserve resources for service and mission delivery. We will send copies of this report to the appropriate congressional committees and the Commissioner of the Commodity Futures Trading Commission. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report examines (1) the extent to which CFTC made cost-effective decisions and followed leading government guidance in planning for additional space for fiscal years 2008 through 2015; and (2) what potential options exist to improve the cost-effectiveness of CFTC’s leasing. To identify leading government practices and guidance on leasing, we reviewed GAO reports on real property and held discussions with GSA officials. To identify requirements applicable to CFTC leasing, we reviewed federal laws and regulations. To address the extent to which CFTC followed leading government practices and guidance for leasing additional space, we reviewed and analyzed CFTC’s strategic plans, lease procurement policies, and space-planning documents covering fiscal years 2008 through 2015. We also reviewed and analyzed the extent to which CFTC’s leasing practices aligned with the Office of Management and Budget’s (OMB) National Strategy for the Efficient Use of Property (National Strategy) and its Reduce the Footprint policy, as well as GSA’s leasing practices and guidance on lease procurement and pricing, and we evaluated the extent to which CFTC’s leasing processes were consistent with the Standards for Internal Control in the Federal Government. In addition, we reviewed and analyzed relevant CFTC Office of Inspector General (OIG) reports on space utilization among three of the four CFTC offices. We also obtained and analyzed CFTC data on lease payments, rentable square feet (rsf), lease expansions, and CFTC’s staffing history. To assess the reliability of CFTC data, we determined which CFTC data were derived from computerized data systems, interviewed cognizant CFTC officials about these systems, and reviewed system documentation. 
We determined that these data were sufficiently reliable for the purposes of our report. To determine costs per rsf, we divided the lease costs for each CFTC office by its total rentable square footage for fiscal years 2008 through 2015. To determine the impact of CFTC’s excess space on its utilization, we converted the leased space from rsf to usable square feet (usf). We calculated the average rsf per staff member (including CFTC employees and on-site contractors) for fiscal years 2008 through 2010—the period before CFTC expanded existing or entered into new leases—and then determined the conversion factor to align this average with the 300 usf per staff member cited in CFTC’s 2009 Program of Requirements. Using this factor (21.17 percent), we calculated the average usf per staff member for fiscal years 2011 through 2015—the period after CFTC expanded existing or added new leases. To determine how CFTC lease costs compare to the average cost per rsf of other federal agencies leasing space in commercial buildings in the four markets where CFTC offices are located, we analyzed data from GSA’s lease inventory for fiscal years 2013 through 2015—the years for which these data are available. We combined the monthly GSA lease inventory reports into fiscal years and then sorted the leases by state and county to match those where CFTC maintains offices. Next, to better approximate CFTC leases, we sorted the data to include only those offices that were 100 percent office space and “fully serviced,” before dividing lease costs by rsf to determine the cost per square foot for each lease. We then sorted the leases by size to approximate the size range of CFTC leases and by location to include only the cities in which CFTC maintains offices. To illustrate how CFTC lease costs may compare to the private sector, we analyzed data from the Building Owners and Managers Association (BOMA). 
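The per-square-foot and utilization arithmetic described above can be sketched as follows. The office figures are hypothetical, not actual CFTC data, and the sketch assumes the 21.17 percent conversion factor represents the share of rentable space that is not usable; the report states only that the factor aligns pre-expansion rsf per staff member with 300 usf.

```python
# Sketch of the report's space-cost and utilization calculations.
# All office figures below are hypothetical, not actual CFTC data.

RSF_TO_USF_FACTOR = 0.2117  # conversion factor cited in the methodology

def cost_per_rsf(annual_lease_cost: float, rentable_sq_ft: float) -> float:
    """Lease cost for an office divided by its total rentable square footage."""
    return annual_lease_cost / rentable_sq_ft

def avg_usf_per_staff(rentable_sq_ft: float, staff: int) -> float:
    """Convert rsf to usf, then average over staff (employees plus
    on-site contractors), mirroring the report's utilization measure."""
    usable_sq_ft = rentable_sq_ft * (1 - RSF_TO_USF_FACTOR)
    return usable_sq_ft / staff

# Hypothetical office: 100,000 rsf leased for $4.5 million a year, 200 staff.
print(cost_per_rsf(4_500_000, 100_000))   # dollars per rsf
print(avg_usf_per_staff(100_000, 200))    # usf per staff member
```

A usf-per-staff figure well above an agency's planning benchmark (here, 300 usf) is the kind of result GAO used to flag underutilized space.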
Specifically, using BOMA’s Experience Exchange Report (EER) survey data for the four markets, we sorted the data to include privately owned buildings within the city limits where CFTC maintains offices and chose the BOMA average cost per rsf for the Office Rent Income category. We confirmed with BOMA officials that “Office Rent Income” from the building owners’ perspective was the equivalent of cost per rsf from the tenant perspective. BOMA’s EER survey data have not yet been compiled for fiscal year 2015. To assess the reliability of these data, we interviewed GSA and BOMA officials about how they collect and maintain the data, as well as the completeness of the data, and we determined that the data were sufficiently reliable for the purposes of our report. However, BOMA does not collect EER survey data in a way that allows for an assessment of survey coverage; that is, there is no information available to measure the percentage of buildings in any given market that are included in the data, nor is there any information available to measure the extent to which particular types of buildings may be under- or over-represented. Therefore, the measures of lease cost per square foot resulting from BOMA’s EER survey data are not generalizable to other buildings in those markets for which no BOMA survey data were reported. However, when reporting measures of cost per square foot from the BOMA EER survey data, we include the number of buildings with reportable data from which the measure was derived. We attempted to use Federal Real Property Profile (FRPP) data to determine per-square-foot lease costs between fiscal years 2008 and 2014, but based on our analysis of the data and meetings with GSA, we determined that the data were unsuitable for that purpose. To identify potential options that CFTC could consider to improve the cost-effectiveness of future lease procurement, we reviewed and analyzed CFTC’s legal authority to lease properties. 
We also obtained and analyzed CFTC leases and conducted site visits at each of the four offices (Washington, D.C., headquarters; Kansas City, MO; Chicago, IL; and New York, NY). We interviewed CFTC officials at CFTC Headquarters and all of the regional offices about their business processes, staffing, and space procurement planning and management procedures. Additionally, we interviewed CFTC Office of Inspector General (OIG) officials about their findings and ongoing reviews on CFTC space utilization. Furthermore, we interviewed GSA officials to understand their perspectives on lease procurement, including procurements by agencies with independent leasing authority. Using our analysis of CFTC leases, space procurement planning documents, and policies and procedures; our interviews with agency officials; and our review of a current presidential memorandum, OMB real property management initiatives, and GSA leasing guidance, we identified several potential options CFTC may consider to improve the cost-effectiveness of its lease portfolio. We conducted this performance audit from June 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. A. General Principles: The general principles governing Commodity Futures Trading Commission (“CFTC” or “Commission”) acquisition of office space include: Maximizing competition to the extent practicable; Avoiding conflicts of interest; Adhering to the requirements of Procurement Integrity; and Reasoned decision-making to obtain space that enables the Commission to accomplish its mission in an efficient and cost-effective manner. 
Although neither the Federal Acquisition Regulation (FAR) nor the General Services Acquisition Manual (GSAM) specifically applies to the acquisition of office space by CFTC, the principles cited above are embodied in those documents. Accordingly, CFTC has chosen to comply with aspects of their requirements that facilitate these ends. This is discussed further below. B. Applicability of Regulations and Policies: The FAR does not apply to the acquisition of leased office space. Specifically, the scope of the FAR’s coverage is defined in Section 1.104 as follows: “The FAR applies to all acquisitions as defined in Part 2 of the FAR, except where expressly excluded.” According to Part 2.101(b), the term “acquisition” is defined as the “acquiring by contract with appropriated funds of supplies or services (including construction) by and for the use of the Federal Government through purchase or lease, whether the supplies or services are already in existence or must be created, developed, demonstrated, and evaluated.” The term “supplies” is defined as “all property except land or interest in land.” Because a lease of real property, including a lease of office space, is an interest in land, it is not a “supply” and the FAR does not apply. The GSAM is also inapplicable to the lease of office space by CFTC. The GSAM applies to the acquisition of leased office space by GSA and any agencies delegated independent leasing authority by GSA and so required by GSA to use the GSAM. CFTC’s independent leasing authority was mandated by its authorizing legislation and not by GSA. Accordingly, the GSAM is not required for use by CFTC in its acquisition of office space for lease. In the absence of explicit regulatory direction, CFTC has chosen to comply with aspects of these documents that facilitate the principles cited above. 
The GSAM is used specifically for its guidance as to the considerations and findings necessary to support a lease procurement by other than full and open competition. Additional guidance and processes that may be applicable to lease acquisition are contained in CFTC’s acquisition policy. The acquisition process for all lease awards begins with requirements definition and completion of market research. Market research is used to determine whether CFTC is best served by an open market competitive acquisition or a follow-on award to the incumbent lessor. I. Steps in a competitive acquisition of leased office space are as follows: Develop Program of Requirements; Conduct market survey; Define delineated area; Formulate an acquisition strategy; Advertise requirement; Review expressions of interest; Tour properties; Develop a solicitation list; Draft and issue a Solicitation for Offers; Draft a Technical Evaluation Plan; Designate a Technical Evaluation Committee (TEC); Evaluate initial offers; Complete initial TEC report; Complete price analysis; Complete Phase II Determination (assumes negotiation; otherwise, the Contracting Officer will draft a Source Selection Statement at this time); Conduct negotiations; Solicit and evaluate revised offers; Complete Final TEC Report; Contracting Officer completes Source Selection Statement; Memorialize terms of agreement between the parties in a lease document. 2. Steps in award of a lease by other than full and open competition are as follows: Develop Program of Requirements; Conduct market survey; Complete Justification for Other Than Full and Open Competition; Conduct negotiations; Memorialize terms of agreement between the parties in a lease document. The functional objective of the acquisition process described herein is to acquire office space in a building that efficiently supports CFTC’s mission; provides a high quality work environment; and offers a satisfactory breadth and variety of amenities. 
This outcome must be met in a manner that maximizes value to the Commission, considering price and technical factors. It must be provided at a price that is fair and reasonable. II. Construction of Space: CFTC’s office space is constructed in accordance with the terms of its office space lease agreements. Construction contracts and trade subcontracts, as appropriate, are awarded based on a competitive process that results in fair and reasonable pricing. CFTC’s Contracting Officer is privy to bid information and, in consultation with CFTC’s architect, project manager, and other knowledgeable Commission personnel, approves project pricing as well as any required contract change orders. III. Administration of Leases: CFTC’s Contracting Officer is responsible for analyzing rent-related charges and authorizing payment as appropriate. The Contracting Officer is also responsible for addressing with the landlord any issues pertaining to lease compliance. The Office of Management Operations is responsible for day-to-day facility operational matters and consults with the Contracting Officer on lease-related issues as appropriate. In addition to the contact named above, Amelia Bates Shachoy (Assistant Director), Lindsay Madison Bach, Dwayne Curry, Lawrance Evans, Terence Lam, Hannah Laufe, Sara Ann Moessbauer, Minette Richardson, Amelia Michelle Weathers, and Crystal Wesco made key contributions to this report. | The CFTC regulates certain financial markets, and the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) expanded its regulatory responsibilities. Prior to enactment of the Dodd-Frank Act, in anticipation of these increased responsibilities, the agency began planning for more space to accommodate additional staff in each of four office locations. GAO was asked to review CFTC's staffing, leasing practices, and costs. 
This report examines: (1) the extent to which CFTC made cost-effective decisions and used leading government guidance in planning for additional space in fiscal years 2008 through 2015 and (2) potential options to improve the cost-effectiveness of CFTC's future leasing. GAO (1) reviewed federal laws, regulations, and guidance that apply to real property leasing, as well as CFTC's space-planning documents and leases for fiscal years 2008 through 2015; (2) analyzed data and conducted interviews with key officials from CFTC and GSA; and (3) visited all four CFTC offices. The Commodity Futures Trading Commission (CFTC) did not make cost-effective decisions consistent with leading government guidance for lease procurement and internal controls when planning for additional space in fiscal years 2008–2015. CFTC began planning for expansion in the fiscal year 2009 time frame—more than a year before the enactment of the Dodd-Frank Act in July 2010. CFTC renewed leases and expanded space in its Washington, D.C., headquarters and three regional offices in anticipation of receiving funding to hire additional staff but did not receive the amounts requested. As a result, CFTC has lease obligations, some extending through 2025, for space that is currently unused. Overall, the total occupancy level for all four offices combined was about 78 percent as of the end of fiscal year 2015, and each office has different occupancy levels, as shown in the figure below. CFTC has independent authority to lease real property, including office space. The two documents CFTC uses to guide the lease procurement process provide some high-level guidance on this process, but the documents do not establish specific policies and procedures to help ensure cost-effective decisions. By comparison, leading government guidance from the General Services Administration (GSA) includes comprehensive details on lease procurement. 
The lack of this type of detail may have contributed to CFTC's making decisions that were not cost-effective. GAO identified several potential options that CFTC may pursue now and in the future to increase space utilization and improve the cost-effectiveness of its leasing arrangements: (1) relocating offices to less costly locations, (2) reducing office space requirements through enhanced telework, and (3) consolidating two regional offices—Kansas City and Chicago. CFTC officials told GAO that these options may not be feasible; however, the officials have not fully assessed these options or their potential for improving cost-effectiveness and do not have a timeline for doing so. To help ensure cost-effective leasing decisions, GAO recommends that CFTC (1) ensure that its revised leasing policies and procedures incorporate leading government guidance and (2) establish a timeline for evaluating and documenting options to potentially improve space utilization and reduce leasing costs. CFTC generally concurred with GAO's recommendations but noted that it would not be able to take actions to reduce lease costs in the near term. |
The Missile Defense Agency’s mission is to develop an integrated and layered BMDS to defend the United States, its deployed forces, allies, and friends. The BMDS is expected to be capable of engaging all ranges of enemy ballistic missiles in all phases of flight. This is a challenging expectation, requiring a complex combination of defensive components—space-based sensors, surveillance and tracking radars, advanced interceptors, and a battle management, command, control, and communications component—that work together as an integrated system. A typical scenario to engage an intercontinental ballistic missile (ICBM) would unfold as follows: Infrared sensors aboard early-warning satellites detect the hot plume of a missile launch and alert the command authority of a possible attack. Upon receiving the alert, land- or sea-based radars are directed to track the various objects released from the missile and, if so designed, to identify the warhead from among spent rocket motors, decoys, and debris. When the trajectory of the missile’s warhead has been adequately established, an interceptor—consisting of a kill vehicle mounted atop a booster—is launched to engage the threat. The interceptor boosts itself toward a predicted intercept point and releases the kill vehicle. The kill vehicle uses its onboard sensors and divert thrusters to detect, identify, and steer itself into the warhead. With a combined closing speed on the order of 10 kilometers per second (22,000 miles per hour), the warhead is destroyed above the atmosphere through a “hit to kill” collision with the kill vehicle. To develop a system capable of carrying out such an engagement, MDA, until December 2007, executed an acquisition strategy in which the development of missile defense capabilities was organized in 2-year increments known as blocks. Each block was intended to provide the BMDS with capabilities that enhanced the development and overall performance of the system. 
The first 2-year block—Block 2004—fielded a limited initial capability that included early versions of the GMD, Aegis BMD, Patriot Advanced Capability-3, and C2BMC elements. The agency’s second 2-year block—Block 2006—culminated on December 31, 2007, and fielded additional BMDS assets. Block 2006 also continued the evolution of Block 2004 by providing improved GMD interceptors, enhanced Aegis BMD missiles, upgraded Aegis BMD ships, a Forward-Based X-Band- Transportable radar, and enhancements to C2BMC software. On December 7, 2007, MDA’s Director approved a new block construct that will be the basis for all future development and fielding. Table 1 provides a brief description of all elements currently being developed by MDA. MDA made progress in developing and fielding the BMDS during 2007. Additional assets were fielded and/or upgraded, several tests met planned objectives, and other development activities were conducted. On the other hand, fewer assets were fielded than originally planned, the cost of the block increased, some flight tests were deferred, and the performance of fielded assets could not be fully evaluated. During Block 2006, MDA increased its inventory of BMDS assets while enhancing the system’s performance. The agency fielded 14 additional Ground-based interceptors, 12 Aegis BMD missiles designed to engage more advanced threats, 4 new Aegis BMD destroyers, 1 new Aegis BMD cruiser, as well as 8 C2BMC Web browsers and 1 C2BMC suite. In addition, MDA upgraded half of its Aegis BMD ship fleet, successfully conducted four Aegis BMD and two GMD intercept tests, and completed a number of ground tests to demonstrate the capability of BMDS components. 
Considering assets fielded during Blocks 2004 and 2006, MDA, by December 31, 2007, had cumulatively fielded a total of 24 Ground-based interceptors, 2 upgraded early-warning radars, an upgraded Cobra Dane surveillance radar, 1 Sea-based X-band radar, 2 Forward-Based X-Band Transportable radars, 21 Aegis BMD missiles, 14 Aegis BMD destroyers, and 3 Aegis BMD cruisers. In addition, MDA had fielded 6 C2BMC suites; 46 warfighter enterprise workstations with situational awareness; BMDS planner and sensor management capabilities; 31 C2BMC Web browsers, 13 with laptop planners; and redundant communications node equipment to connect BMDS elements worldwide. In March 2005, MDA submitted to Congress the number of assets it planned to field during Block 2006. However, increasing costs, technical challenges, and schedule delays prompted the agency to reduce the quantity of planned assets. Consequently, in March 2006, shortly after submitting its fiscal year 2007 budget, MDA notified Congress that it was revising its Block 2006 Fielded Configuration Baseline. Although MDA did not meet its original block fielding goals, it was able in nearly all instances to meet or exceed its revised goals. Of the four elements delivering assets during Block 2006, one—Sensors—was able to meet its original goal. However, two elements—GMD and C2BMC—were able to exceed their revised fielding goals. Table 2 depicts the goals and the number of assets fielded. Although GMD did not meet its original goal of fielding up to 15 interceptors and partially upgrading the Thule early warning radar, the element was able to surpass its revised goal of fielding 12 interceptors. By December 31, 2007, the GMD element fielded 14 interceptors—2 more than planned. To achieve its revised goal, the element’s prime contractor added a manufacturing shift during 2007 and extended the number of hours that certain shifts’ personnel worked. 
These actions allowed the contractor to more than double its interceptor emplacement rate. Last year, we reported that MDA delayed the partial upgrade of the Thule early-warning radar—one of GMD’s original goals—until a full upgrade could be accomplished. According to DOD, the full upgrade of Thule is the most economical option and it meets DOD’s desire to retain a single configuration of upgraded early warning radars. The Thule early warning radar upgrade is being accomplished by two separate contract awards. Raytheon was awarded a contract in April 2006 to develop and install prime mission equipment, while Boeing was expected to receive a contract in January 2008 to integrate the equipment into the BMDS ground communication network. In March 2005, MDA included three C2BMC suites as part of its fielding goal for Block 2006. These suites were to be fielded at U.S. European Command, U.S. Central Command, and another location that was to be identified later. Faced with a $30 million reduction in C2BMC’s fiscal year 2006 budget, MDA in March 2006 revised this goal to replace the 3 suites with 3 less expensive Web browsers. However, by the end of Block 2006, MDA found an innovative way to increase combatant commands’ situational awareness and planning capability. In 2005, the C2BMC program conducted a network load analysis and concluded that situational awareness and planning capability—equivalent to that provided by a suite—could be gained by combining Web browsers and planners. To prove that this approach would work, MDA fielded 4 Web browsers and one planner at the U.S. European Command. MDA learned that this combination of hardware, fielded in the quantities needed to meet a command’s needs and connected to an existing server, provided the situational awareness and planning capability of a suite at less cost. MDA extended this approach by fielding one Web browser and one planner at four other locations—U.S. Forces Japan; U.S. Forces Korea; the Commander of U.S. 
Strategic Command; and the Commander of the Space and Missile Defense Command. In addition, MDA fielded one suite at U.S. Pacific Command. The Aegis BMD element was able to meet its revised block goals for only one of its two components. The program upgraded all planned ships, but fielded three fewer Aegis BMD Standard Missile-3s (SM-3) than planned. The program did not meet its revised missile goal because three U.S. missiles were delayed into 2008 to accommodate an unanticipated requirement to deliver three missiles to Japan. Figure 1 below depicts the location of current BMDS assets. MDA’s Block 2006 program of work culminated with higher than anticipated costs. In March 2007, we reported that MDA’s cost goal for Block 2006 increased by approximately $1 billion because of greater than expected GMD operations and sustainment costs and technical problems. During fiscal year 2007, some prime contractors performing work for the BMDS overran their budgeted costs. To stay within its revised budget, MDA was forced to reduce the amount of work it expected to accomplish during the block. The full cost of the block cannot be determined because of the deferral of work from one block to another. In addition, some MDA prime contractors too often employ a planning methodology that has the potential to obscure the time and money that will be needed to produce the outcomes intended. If the work does not yield the intended results, MDA could incur additional future costs. While MDA struggled to contain costs during Block 2006, the agency awarded two contractors a large percentage of available fee for performance in cost and/or program management, although the contractor-reported data showed declining cost and schedule performance. Both award fee plans for these contractors direct that cost and schedule performance be considered as factors in making the evaluation. 
While these factors are important, MDA’s award fee plans provide for the consideration of many other factors in making award fee determinations. To determine if contractors are executing the work planned within the funds and time budgeted, each BMDS program office requires its prime contractor to provide monthly Earned Value Management reports detailing cost and schedule performance. If more work was completed than scheduled and the cost of the work performed was less than budgeted, the contractor reports a positive schedule and cost variance. However, if the contractor was unable to complete all of the work scheduled and needed more funds to complete the work than budgeted, the contractor reports a negative schedule and cost variance. Of course, the results can be mixed. That is, the contractor may have completed more work than scheduled but at a cost that exceeded the budget. As shown in table 3 below, the contractors for the nine BMDS elements collectively overran their fiscal year 2007 budgets by approximately $166 million. We estimate that at completion, the cumulative overrun in the contracts could be between about $1.3 billion and $1.9 billion. Our predictions of final contract costs were developed using formulas accepted within the cost community and were based on the assumption that the contractor will continue to perform in the future as it has in the past. It should also be noted that some contracts include more than Block 2006 work. For example, the STSS contract includes work being accomplished in anticipation of future blocks. Our analysis is presented in table 3 below. Appendix II provides further details on the cost and schedule performance of the contractors outlined in the table. Technical problems and software issues caused several BMDS elements to overrun their fiscal year 2007 budgeted costs. 
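The variance arithmetic described above, and one of the standard estimate-at-completion projections of the kind GAO's analysis relies on, can be sketched as follows. The dollar figures are hypothetical, and the report does not say which specific completion formulas GAO used, only that they are accepted within the cost community.

```python
# Earned value variance arithmetic (standard EVM definitions; the
# contract figures below are hypothetical, not actual BMDS data).

def schedule_and_cost_variance(bcws: float, bcwp: float, acwp: float):
    """bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed (actual cost)"""
    sv = bcwp - bcws  # negative: less work completed than scheduled
    cv = bcwp - acwp  # negative: work cost more than budgeted
    return sv, cv

def estimate_at_completion(bac: float, bcwp: float, acwp: float) -> float:
    """One widely used projection: scale the budget at completion (BAC)
    by the cost performance index (CPI = BCWP / ACWP), assuming the
    contractor continues to perform as it has in the past."""
    cpi = bcwp / acwp
    return bac / cpi

# A contractor earned $90M of value against $100M scheduled, spending $110M:
sv, cv = schedule_and_cost_variance(100.0, 90.0, 110.0)  # sv = -10, cv = -20
eac = estimate_at_completion(500.0, 90.0, 110.0)  # projects ~$611M against a $500M budget
```

Negative variances of this kind, extrapolated to contract completion, are how an observed $166 million annual overrun can imply a cumulative overrun in the billions.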
In addition, 4 of the 10 contracts we reviewed contained some kind of replanning activity during fiscal year 2007, and the ABL contract was partially rebaselined. Contractors may replan when they conclude that the current plan for completing the effort remaining on the contract is unrealistic. A replan can include reallocating the remaining budget over the rest of the work, realigning the schedule within the contractually defined milestones, and setting either cost or schedule variances to zero or setting both to zero. A rebaseline is similar, but it may also add additional time and/or funding for the remaining work. The ABL contractor was overrunning both its fiscal year 2007 budget and schedule early in the year. Although by year’s end the contractor appeared to have recovered, it would have continued to overrun both its budget and its schedule had most of the contract not been rebaselined. The contractor realized cost and schedule growth as it worked to solve software integration problems in the Beam Control/Fire Control component and dealt with a low-power laser needed for flight tests that was not putting enough energy on the target. After encountering these problems, the ABL contractor did not have sufficient schedule or budget to complete the remaining contract work. Therefore, in May 2007, the program allowed the contractor to rebaseline all of the remaining work devoted to developing, integrating, flight testing, and delivering the ABL prototype. The rebaselining effort added about $253 million to the contract and extended the contract’s period of performance by more than a year. The THAAD prime contractor’s cost overrun of $91.1 million was primarily caused by technical problems related to the element’s missile, launcher, radar, and test components. 
Missile component cost overruns were caused by higher than anticipated costs in hardware fabrication, assembly, and support touch labor for structures, propulsion, and other subassembly components. Additionally, design issues with the launcher’s missile round pallet and the electronics assembly that controls the launcher caused the contractor to experience higher than anticipated labor and material costs. The radar component ended the fiscal year with a negative cost variance as more staff was required than planned to resolve hardware design issues in the radar’s prime power unit. The contractor also experienced negative cost variances with the system test component because the Launch and Test Support Equipment required additional set-up time at the flight test range. The STSS contractor’s $67.7 million fiscal year 2007 cost variance is primarily attributed to problems that occurred during thermal vacuum testing of the first satellite. Since the satellites are legacy hardware built under a former program, there are no spares available for testing. As a result, the contractor needed to handle the parts carefully to avoid damage to the hardware, increasing the time devoted to the test. Further test delays occurred when a number of interface issues surfaced during testing and when the cause of component problems could not be easily traced to their source. The program office believes that the cost variance would have been less if design engineers had been available during testing. Because engineers were not present to quickly identify the cause of component problems, a time-consuming analysis of each problem was needed. In March 2007, we reported that a full accounting of Block 2006 costs was not possible because MDA has the flexibility to redefine block outcomes. 
That is, MDA can delay the delivery of assets or other work activities from block to block and count the work as a cost of the block during which the work is performed, even though the work does not benefit that block. For example, MDA deferred some Block 2004 work until Block 2006 so that it could use the funds appropriated for that work to cover unexpected cost increases caused by technical problems recognized during development, testing, and production. With the deferral of the work, its cost was no longer counted as a Block 2004 cost, but as a Block 2006 cost. As a result, Block 2004’s cost was understated and Block 2006’s cost is overstated. Because MDA did not track the cost of the deferred work, the agency could not make an adjustment that would have matched the cost with the correct block. The cost of Block 2006 was further blurred as MDA found it necessary to defer some Block 2006 work until a future block. For example, when the STSS contractor overran its fiscal year 2007 budget because of testing problems, the program did not have sufficient funds to launch the demonstration satellites in 2007 as planned. The work is now scheduled for 2008. The consequence of deferring Block 2004 work to Block 2006 and Block 2006 work to 2008 is that the full cost of Block 2006 cannot be determined. Some MDA prime contractors too often employ a planning methodology that has the potential to obscure the time and money that will be needed to produce the outcomes intended. Contractors typically divide the total work of a contract into small efforts in order to define them more clearly and to ensure proper oversight. Work may be planned in categories including (1) level of effort (LOE)—work that consists of tasks of a general or supportive nature and does not produce a definite end product—or (2) discrete work—work that has a definable end product or event. 
Level of effort work assumes that if the staff assigned to the effort spend the planned length of time, they will attain the outcome expected. According to earned value experts and the National Defense Industrial Association, while it is appropriate to plan such tasks as supervision or contract administration as LOE, it is not appropriate to plan tasks that are intended to result in a product, such as a study or a software build, as LOE because contractors do not report schedule variances for LOE work. Therefore, when contractors incorrectly plan discrete work as LOE, reports that are meant to allow the government to assess contractor cost and schedule performance may be positive, but the government may not have full insight into the contractor’s progress. The greater the percentage of LOE, the weaker the link between inputs (time and money) and outcomes (end products), which is the essence of earned value analysis. Essentially, depending on the magnitude of LOE, schedule variances at the bottom line can be understated. The significant amount of BMDS work being tracked by LOE may have limited our assessment of the contractors’ performance. That is, the contractor’s performance may appear to be more positive than it would be if work had been correctly planned. In such cases, the government may have to expend additional time and money to achieve the outcomes desired. MDA Earned Value Management officials agreed that some BMDS prime contractors incorrectly planned discrete work as LOE, but the agency is taking steps to remedy this situation so that it can better monitor the contractors’ performance. While it is not possible to state with certainty how much work a contractor should plan as LOE, experts within the government cost community, such as Defense Contract Management Agency officials, agree that LOE levels over 20 percent warrant investigation. According to MDA, many of its prime contractors plan a much larger percentage than 20 percent of their work as LOE. 
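The understatement effect described above can be illustrated with a small example; the figures are hypothetical. Because LOE tasks earn value simply with the passage of time, their schedule variance is zero by construction, so a high LOE share dilutes the bottom-line variance that discrete work would otherwise show.

```python
# Illustration of how level-of-effort (LOE) planning can mask schedule
# slippage (hypothetical figures, not actual contract data). LOE work
# earns exactly its planned value as time passes, so only discrete work
# can report a slip.

def bottom_line_sv_pct(discrete_planned: float, discrete_earned: float,
                       loe_planned: float) -> float:
    """Bottom-line schedule variance as a percent of total planned value."""
    total_planned = discrete_planned + loe_planned
    total_earned = discrete_earned + loe_planned  # LOE earned == LOE planned
    return (total_earned - total_planned) / total_planned * 100

# Discrete work is 20 percent behind plan ($80M earned of $100M planned):
print(bottom_line_sv_pct(100.0, 80.0, 0.0))    # no LOE: a -20 percent slip
print(bottom_line_sv_pct(100.0, 80.0, 300.0))  # 75 percent LOE: the same slip reads as -5 percent
```

The same discrete slippage shrinks on the bottom line as the LOE share grows, which is why levels above roughly 20 percent draw scrutiny from the cost community.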
Table 4 presents the percentage of work in each BMDS prime contract that is categorized as LOE. The Aegis BMD SM-3, MKV, ABL, and C2BMC contractors planned more than half of certain work as LOE. In several instances, MDA Earned Value Management officials and program office reviewers agreed that some of the LOE work could be redefined into discrete work packages. For example, from January through December 2007, the C2BMC contractor planned 73 percent of its work as LOE. This included activities such as software development and integration and test activities that result in two definable products—software packages and tests. At the direction of the C2BMC Program Office, the C2BMC contractor redefined some contract work, including software development and integration and test activities, as discrete, reducing the amount of LOE on the contract to 52 percent. The Aegis BMD element also reported a high percentage of LOE for its Standard Missile-3 contract, which is surprising given that its products—individual missiles—are quite discrete. In August 2007, the element reported that the contractor had planned 73 percent of the contract work as LOE. The portion of the work that contained this amount of LOE was completed in March 2007 with an underrun of $7.2 million. Although the contractor reported an underrun for this work upon its completion, the high percentage of LOE may have, over the contract period, distorted the contractor’s actual cost and schedule performance. Notably, the amount of LOE for the SM-3 work that is currently ongoing is considerably lower. Program officials told us that prior to the commencement of this segment of work, the MDA Earned Value Management Group and program officials recommended that the program minimize the amount of LOE on its contracts. Currently, only 18 percent of the SM-3 contract is considered LOE.
MDA uses award fees to encourage its contractors to perform in an innovative, efficient, and effective way in areas considered important to the development of the BMDS. Because award fees are intended to motivate contractor performance for work that is neither feasible nor effective to measure objectively, award fee criteria and evaluations tend to be subjective. Each element’s contract has an award fee plan that identifies the performance areas to be evaluated and the methodology by which those areas will be assessed. An award fee evaluation board—made up of MDA personnel, program officials, and officials from key organizations knowledgeable about the award fee evaluation areas—judges the contractor’s performance against specified criteria in the award fee plan. The board then recommends to a fee-determining official the amount of fee to be paid. MDA’s Director is the fee-determining official for all BMDS prime contracts that we assessed. During fiscal year 2007, MDA awarded approximately 95 percent, or $606 million, of available award fee to its prime contractors. While the cost, schedule, and technical performance of several contractors appeared to be aligned with their award fee, two contractors were rated as performing very well in the cost and/or program management elements and received commensurate fees even though earned value management data showed that their cost and schedule performance was declining. On the other hand, MDA did not award any fee to the THAAD contractor for its management of contract cost during a time when earned value data showed steadily increasing costs. Although DOD guidance discourages the use of earned value performance metrics in award fee criteria, MDA includes this as a factor in several of its award fee plans. The agency considers many factors in rating contractors’ performance and making award fee determinations, including consideration of earned value data that shows cost, schedule, and technical trends.
In addition, MDA has begun to revise its award fee policy to align agency practices more closely with DOD’s current policy that better links performance with award fees. The ABL and Aegis BMD weapon system contractors received a large percentage of the 2007 award fee available to them for the cost and/or program management element. MDA rated the ABL contractor’s performance in cost and program management elements as “very good,” awarding the contractor 88 percent of the fee available in these performance areas. According to the award fee plan, one of several factors that is considered in rating the contractor’s performance as very good is whether earned value data indicates that there are few unfavorable cost, schedule, and/or technical variances or trends. During the February 2006 to January 2007 award fee period, earned value data shows that the contractor overran its budget by more than $57 million and did not complete $11 million of planned work. Similarly, the Aegis BMD weapon system contractor was evaluated on how effectively it managed its contract’s cost. The award fee plan for this contractor also directs that earned value be one of the factors considered in making such an evaluation. During the fee period that ran from October 2006 through March 2007, MDA rated the contractor’s cost management performance as outstanding and awarded 100 percent of the available fee. Earned value data during this time period indicates that the contractor overran its budget by more than $6 million. MDA did not provide us more detailed information on the other factors that may have influenced the amount of fee awarded to the ABL and Aegis BMD weapon system contractors. MDA recognizes that there is not always a good link between the agency’s intentions for award fees and the amount of fee being earned by its contractors.
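The mismatch between fee ratings and earned value trends can be illustrated with a simple screening check. The sketch below is not MDA's evaluation methodology; the 80 percent flagging threshold and the THAAD cost variance figure are hypothetical, while the ABL and Aegis BMD figures come from the discussion above.

```python
# Illustrative screen (hypothetical threshold, not MDA policy): flag contracts
# where a high share of available award fee was earned despite a cost overrun.

def flag_misalignment(fee_earned_pct, cost_variance_musd, threshold_pct=80):
    """A negative cost variance means an overrun; flag high fee + overrun."""
    return fee_earned_pct >= threshold_pct and cost_variance_musd < 0

contracts = {
    "ABL":   {"fee_earned_pct": 88,  "cost_variance_musd": -57.0},
    "Aegis": {"fee_earned_pct": 100, "cost_variance_musd": -6.0},
    "THAAD": {"fee_earned_pct": 0,   "cost_variance_musd": -20.0},  # CV hypothetical
}

flagged = [name for name, c in contracts.items()
           if flag_misalignment(c["fee_earned_pct"], c["cost_variance_musd"])]
print(flagged)  # ABL and Aegis earn high fees despite overruns; THAAD does not
```

Such a screen only flags candidates for closer review; as the report notes, the board weighs many factors beyond earned value data.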
In an effort to rectify this problem, the agency released a revised award fee policy in February 2007 to ensure its compliance with recent DOD policies that are intended to address award fee issues throughout the Department. Specifically, MDA’s policy directs that every contract’s award fee plan include:
- criteria for each element of the award fee that are specific enough to enable the agency to evaluate contractor performance and to determine how much fee the contractor can earn for that element (the criteria are to clearly define the performance that the government expects from the contractor for the applicable award fee period, and the criteria for any one element must be distinguishable from those for other elements of the award fee);
- an emphasis on rewarding results rather than effort or activity; and
- an incentive to meet or exceed agency requirements.
Additionally, MDA’s policy calls for using the Award Fee Advisory Board to not only make award fee recommendations to the fee-determining official, but also to report biannually to MDA’s Director as to whether award fee recommendations are consistent with DOD’s Contractor Performance Assessment Report—a report that provides a record, both positive and negative, on a given contract for a specific period of time. Appendix II of this report provides additional information on BMDS prime contracts and award fees. During 2007, several BMDS programs experienced setbacks in their test schedules. The Aegis BMD, THAAD, ABL, STSS, and C2BMC elements experienced test delays, but all were able to achieve their primary test objectives. GMD, on the other hand, experienced a schedule delay caused by an in-flight target anomaly that prevented full accomplishment of one major 2007 test objective. The remaining three elements—MKV, KEI, and Sensors—were able to execute all scheduled activities as planned.
The Aegis BMD, THAAD, C2BMC, ABL, and STSS elements continued to achieve important test objectives in 2007, although some tests were delayed. Aegis BMD proved its capability against more advanced threats, while THAAD proved that it could intercept both inside and outside of the atmosphere. C2BMC completed a number of software and system-level tests. The ABL and STSS programs saw delays in important ground tests, but ABL was able to begin flight testing its beam control/fire control component using a low-power laser in 2007 and STSS completed thermal vacuum testing of both satellites by the end of the year. However, the delays in the ABL and STSS programs may hold up their incorporation into the BMDS during future blocks. Although the Aegis BMD program encountered some test delays, it was able to achieve all fiscal year 2007 test objectives. In December 2006, the program stopped a test after a crew member changed the ship’s doctrine parameters just prior to target launch, preventing the ship’s fire control system from conducting the planned engagement. During this test event, the weapon system failed to recognize the test target as a threat, which prevented the SM-3 missile from launching. Also, according to program officials, the system did not provide a warning message which contributed to the mission being aborted prematurely and prevented the Aegis BMD program from meeting its test objectives. However, 4 months later, the same flight test event was successfully completed and all test objectives were met. During that event, the program was able to demonstrate that the Aegis BMD could simultaneously track and intercept a ballistic missile and an anti-ship cruise missile. In June 2007, the program successfully completed its first flight test utilizing an Aegis BMD destroyer to intercept a separating target, and in November, the program conducted its first test that engaged two ballistic missile targets simultaneously. 
During the last test, Aegis missiles onboard an Aegis BMD cruiser successfully intercepted two short-range non-separating targets and achieved all primary test objectives outlined for this event. The THAAD program expected to complete four flight tests prior to the end of fiscal year 2007 but was only able to complete three. Two tests successfully resulted in intercepts of short-range ballistic missiles at different levels of the atmosphere. The third test successfully demonstrated component capability in a high-pressure environment and was the lowest altitude interceptor verification test to date. However, the fourth test was delayed, initially because late modifications to the target hardware configuration limited target availability. Additionally, during pre-flight testing, the contractor found debris in the interceptor, which caused the interceptor to be returned to the factory for problem investigation. Although the problem was corrected and the interceptor was returned to the test range in only 7 days, the test was rescheduled because the test range was not available before the end of fiscal year 2007. During fiscal year 2007, the C2BMC program completed BMDS-level ground and flight tests, successfully achieving its test objectives of verifying the capabilities and readiness of a new software configuration. The software is designed to provide the BMDS with improved defense planning capability, including better accuracy and speed; a new operational network; and additional user displays. Because of the integral nature of the C2BMC product, problems encountered in some elements’ test schedules have a cascading effect on C2BMC’s test schedule. Even though this limited C2BMC testing, a review of the integrated and distributed ground test data resulted in the decision to field the software in December 2007. ABL achieved most of its test objectives during fiscal year 2007, but experienced delays during Block 2006 that deferred future BMDS program decisions.
The program experienced a number of technical problems during fiscal year 2006 that pushed some planned activities into fiscal year 2007. One such activity was the execution of the program’s first of four key knowledge points—a ground test to demonstrate ABL’s ability to acquire and track a target while performing atmospheric compensation. The test was conducted in December 2006, 3 ½ months later than planned. At the culmination of the test, program officials noted two problems. First, the system’s beam control/fire control software was not integrated as anticipated. Second, the energy that the low-power laser placed on the target during the test was not optimal. According to program officials, both of these issues were resolved before the system began flight testing the full beam control/fire control component in February 2007. However, the delays caused the program to further postpone a key lethality demonstration—a demonstration in which the ABL will attempt to shoot down a short-range ballistic missile—until the last quarter of fiscal year 2009. This demonstration is important to the program because it is the point at which MDA will decide the program’s future. Although the ABL program experienced some setbacks with its first key knowledge point, it was able to meet all objectives for each subsequent knowledge point. In addition to the first knowledge point, the program planned to demonstrate three additional knowledge points during fiscal year 2007. The second knowledge point was contingent upon completion of the first. To demonstrate the achievement of the two knowledge points, the contractor performed a flight test that showed the low-power laser was integrated and the beam control/fire control functioned sufficiently to perform target tracking and atmospheric compensation against an airborne target board.
The third knowledge point was completed three months ahead of the planned 2007 schedule and demonstrated that ABL’s optical subsystem was adequate to support its high-power laser system. The fourth knowledge point—the completion of a series of flight tests to demonstrate the performance of the low-power laser system in flight—was completed in August 2007. Delays in the STSS test program, along with funding shortages, postponed the planned 2007 launch of the program’s demonstration satellites. The STSS program is integrating two demonstration satellites with sensor payloads from legacy hardware developed under a former program. The use of legacy hardware has complicated the test program because spares needed for testing are not available. In order to preserve the condition of the legacy components, the program must exercise caution in handling the components to prevent damage, which has caused delays in testing. Additionally, a thermal vacuum test on the first space vehicle, to assess the ability of the satellite to operate in the cold vacuum of space, took twice as long as scheduled, due to a number of interface issues. Although the program was able to complete the integration and test of both demonstration satellites in 2007—major objectives for the program—funds were not available to launch the satellites as planned. Program officials believe that the satellites could be launched as early as April 2008 and as late as July 2008, 1 year later than originally scheduled. According to the program office, there is no margin in the 2008 budget, so any unexpected issues could put the 2008 launch date at risk. The delays in launching the STSS demonstration satellites do not impact MDA’s Block 2006 fielding plans as the satellites are intended to demonstrate a surveillance and tracking capability and do not provide any operational capability during the block.
However, the delay in launching the demonstration satellites is causing a delay in MDA’s ability to initiate development of an operational constellation, which may delay a BMDS global midcourse tracking capability. Despite delays in hardware and software testing and integration, other parts of the STSS program have proceeded according to schedule. Lessons learned from the thermal vacuum test for the first satellite’s sensor payload facilitated the completion of thermal vacuum testing of the second satellite’s payload in November 2007. Additionally, command and control capabilities of the ground segment were demonstrated and the second part of the acceptance test of STSS ground components was completed in September 2007. A target anomaly prevented the GMD element from achieving all 2007 objectives. The GMD program planned to conduct three flight tests—two intercept attempts and one radar characterization test—but was only able to conduct the radar test and one intercept test. The radar characterization test was conducted in March 2007. The target was launched from Vandenberg Air Force Base and was successfully tracked by the SBX radar and the radar of two Aegis BMD ships. During the test, officials indicated the SBX exhibited some anomalous behavior, yet was able to collect target tracking data and successfully transmit the information to the C2BMC element and the GMD fire control system at DOD’s Missile Defense Integration and Operations Center. No live interceptor was launched. However, an intercept solution was generated and simulated interceptor missiles were “launched” from Fort Greely, Alaska. To address anomalous behavior, MDA adjusted software and performance parameters of the SBX radar. In May 2007, the program attempted an intercept test, but a key component of the target malfunctioned. For that reason, the weapon system did not release the Ground-based interceptor and program officials declared the flight test a “no test” event.
To date, program officials have not determined the root cause of the malfunction. In September 2007, the program successfully conducted a re-test and achieved an intercept of the target using target tracking data provided by the Beale upgraded early warning radar. MDA test officials told us that aging target inventory could have contributed to the target anomaly. The officials explained that some targets in MDA’s inventory are more than 40 years old and their reliability is relatively low. Target officials told us that they are taking preventive actions to avoid similar anomalies in the future. The time needed to complete the first 2007 intercept delayed GMD’s second planned intercept attempt until at least the second quarter of fiscal year 2008. The delayed test was to have determined whether the SBX radar could provide data in “real time” that could be used by the GMD fire control component to develop a weapon task plan. Although the weapon task plan was not developed in real time during 2007, GMD was able to demonstrate that the SBX radar could plan an engagement when the target was live but the interceptor was simulated. During 2007, the KEI program redefined its development efforts and focused on near-term objectives. Also, the MKV program redefined its strategy to acquire multiple kill capability. Once redefined, these programs conducted all planned activities as scheduled and each was able to meet all planned objectives. In addition, the Sensors program successfully completed all planned tests. In June 2007, MDA directed the KEI program to focus on two near-term objectives—the development of its booster and its 2008 booster flight test. Some work, such as development of the fire control and communications and mobile launcher, was deferred into the future. 
During fiscal year 2007, the KEI program conducted all planned test activities, including booster static fire tests that demonstrated the rocket motor’s performance in induced environments and wind tunnel tests that gathered data to validate aerodynamic models for the booster flight controls. MKV officials redefined their acquisition strategy by employing a parallel path to develop multiple kill vehicles for the GMD and KEI interceptors and the Aegis BMD SM-3 missile. MDA initiated the MKV program in 2004 with Lockheed Martin. In 2007, the MKV program added Raytheon as a second payload provider. According to program officials, the two payload providers may use different technologies and design approaches, but both adhere to the agency’s goal of delivering common, modular MKV payloads for integration with all BMDS midcourse interceptors. In fiscal year 2007, Lockheed Martin successfully conducted static fire tests of its Divert Attitude Control System as planned. Additionally, Raytheon, funded with excess KEI funds made available when that program was replanned, began concept development. Raytheon did not have any major test activities scheduled for the fiscal year. During 2007, the Sensors program focused on testing FBX-T radars that were permanently emplaced and newly produced. After the first FBX-T was moved from its temporary location in Japan to its permanent location in Shariki, Japan, various ground tests and simulations were conducted to ensure its interoperability with the BMDS. The program also delivered a second FBX-T to Vandenberg Air Force Base, where its tracking capability is being tested against targets of opportunity. According to program officials, a decision has not been made as to where the second FBX-T radar will be permanently located. As we reported in March 2007, MDA altered its original Block 2006 performance goals commensurate with the agency’s reductions in the delivery of fielded assets. 
However, insufficient data exists to fully assess whether MDA achieved its revised performance goals. The performance of some fielded assets is also questionable because parts have not yet been replaced that were identified by auditors in MDA’s Office of Quality, Safety, and Mission Assurance as less reliable or inappropriate for use in space. In addition, tests of the GMD element have not included target suite dynamic features and intercept geometries representative of the operational environment in which GMD will perform its mission and BMDS tests only allow a partial assessment of the system’s effectiveness, suitability, and survivability. MDA uses a combination of simulations and flight tests to determine whether performance goals are met. Models and simulations are needed to predict performance because the cost of tests prevents the agency from conducting sufficient testing to compute statistical probabilities of performance. The models and simulations that project BMDS capability against intercontinental ballistic missiles present several problems. First, the models and simulations that predict performance of the GMD element have not been accredited by an independent agency. According to the Office of the Director, Operational Test and Evaluation, without accredited models GMD’s performance cannot be predicted with respect to (1) variations in threat parameters that lie within the bounds of intelligence estimates, (2) stressing ground-based interceptor fly-outs and exoatmospheric kill vehicle engagements, and (3) variations in natural environments that lie within meteorological norms. Second, too few flight tests have been completed to ensure the accuracy of the models’ and simulations’ predictions. Since 2002, MDA has only completed two end-to-end tests of engagement sequences that the GMD element might carry out.
While these tests provide some evidence that the element can work as intended, MDA must test other engagement sequences, which would include other GMD assets that have not yet participated in an end-to-end flight test. For example, MDA has not yet used the Sea-based X-band radar as the primary sensor in an end-to-end test. Additionally, officials in the Office of the Director, Operational Test and Evaluation told us that MDA needs more flight tests to have a high level of confidence that GMD can repeatedly intercept incoming ICBMs. Further testing is also needed to demonstrate that Aegis BMD can provide real-time, long-range surveillance and tracking data for the GMD element. In March 2006, we reported that the cancellation of a GMD flight test prevented MDA from exercising Aegis BMD’s long-range surveillance and tracking capability in a manner consistent with an actual defensive mission. Program officials informed us that the Aegis BMD is capable of performing this function and has demonstrated its ability to surveil and track ICBMs in several exercises. However, MDA has not yet shown that Aegis BMD can communicate this data to GMD during a live intercept engagement and that GMD can use the data to prepare a weapon task plan for actual—rather than simulated—interceptors. Officials in the Office of the Director, Operational Test and Evaluation told us that having Aegis BMD perform long-range surveillance and tracking during a live engagement would provide the data needed to more accurately gauge performance. Similarly, MDA has not yet proved that the FBX-T radar can provide real-time, long-range surveillance and tracking data for the GMD element. On several occasions, MDA has shown that the FBX-T can acquire and track targets of opportunity, but the radar’s data has not yet been used to develop a weapon system task plan for a GMD intercept engagement.
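The observation that two end-to-end tests cannot support high confidence is a straightforward statistical point. The sketch below is an illustration of that point, not the test community's actual analysis; it uses the exact one-sided lower confidence bound for n successes in n independent trials.

```python
# Back-of-the-envelope sketch: why a small number of successful end-to-end
# tests cannot establish a high intercept probability. For n successes in n
# trials, P(all succeed | p) = p**n, so the exact one-sided lower confidence
# bound at confidence 1 - alpha is p_lower = alpha ** (1/n).

def lower_bound_all_successes(n_tests, confidence=0.95):
    """Lower confidence bound on success probability after n-for-n successes."""
    alpha = 1.0 - confidence
    return alpha ** (1.0 / n_tests)

for n in (2, 10, 30):
    print(n, round(lower_bound_all_successes(n), 3))
```

With only two consecutive successes, one can assert at 95 percent confidence only that the intercept probability exceeds roughly 0.22; on the order of 30 consecutive successes would be needed to support a lower bound near 0.90, which is consistent with test officials' call for more flight tests.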
Because the radar’s permanent location in Japan does not allow MDA to conduct tests in which the FBX-T is GMD’s primary fire control radar, the Director, Operational Test and Evaluation, in 2006 recommended that, prior to emplacing a second FBX-T at its permanent location, MDA test the radar’s capability to act as GMD’s primary sensor in an intercept test. Confidence in the performance of the BMDS is also reduced because of unresolved GMD technical and quality issues. The GMD element has experienced the same anomaly during each of its flight tests since 2001. This anomaly has not yet prevented the program from achieving any of its primary test objectives, but to date neither its source nor solution has been clearly identified or defined. Program officials plan to continue their assessment of test data to identify the anomaly’s root cause and have implemented design changes to mitigate the effects and reduce risks associated with the anomaly. The reliability of emplaced GMD interceptors raises further questions about the performance of the BMDS. Quality issues discovered by auditors in MDA’s Office of Quality, Safety, and Mission Assurance nearly 3 years ago have not yet been rectified in all fielded interceptors. According to the auditors, inadequate mission assurance and quality control procedures may have allowed less reliable parts or parts inappropriate for use in space to be incorporated into the manufacturing process, thereby limiting the reliability and performance of some fielded assets. The program has strengthened its quality control processes and is taking several steps to mitigate similar risks in the future. These steps include component analysis of failed items, implementing corrective action with vendors, and analyzing system operational data to determine which parts are affecting weapon system availability.
MDA has begun to replace the questionable parts in the manufacturing process and to purchase the parts that it plans to replace in fielded interceptors. However, it will not complete the retrofit effort until 2012. Additionally, test officials told us that although the end-to-end GMD test conducted during 2007 demonstrated that for a single engagement sequence military operators could successfully engage a target, the target represented a relatively unsophisticated threat because it lacked specific target suite dynamic features and intercept geometry. Other aspects of the test were more realistic—such as closing velocity and fly-out range—but these were relatively unchallenging. While the test parameters may be acceptable in a developmental test, they are not fully representative of an operational environment and do not provide high confidence that GMD will perform well operationally. Finally, because BMDS assets are being fielded based on developmental tests, which are not always representative of the operational environment, operational test officials have limited test data to determine whether all BMDS elements/components being fielded are effective and suitable for and survivable on the battlefield. MDA has added operational test objectives to its developmental test program, but many of the objectives are aimed at proving that military personnel can operate the equipment. In addition, limited flight test data is available for characterizing the BMDS’ capability against intercontinental ballistic missiles. Up until 2007, the overall lack of data limited the Office of the Director of Operational Test and Evaluation, in annual assessments, to commenting on the operational realism of tests and recommending other tests needed to characterize system effectiveness and suitability. 
In 2007, tests provided sufficient information to partially quantify the effectiveness and suitability of the BMDS' midcourse capability (Aegis BMD and GMD) and to fully characterize a limited portion of the BMDS' terminal capability (PAC-3). However, according to the Office of the Director of Operational Test and Evaluation, further testing that incorporates realistic operational objectives and verification, validation, and accreditation of models and simulations will be needed before the performance, suitability, and survivability of the BMDS can be fully characterized. Since its initiation in 2002, MDA has been given a significant amount of flexibility in executing the development of the BMDS. While the flexibility has enabled MDA to be agile in decision making and to field an initial capability relatively quickly, it has diluted transparency into MDA’s acquisition processes, making it difficult to conduct oversight and hold the agency accountable for its planned outcomes and costs. As we reported in 2007, MDA operates with considerable autonomy to change goals and plans, which makes it difficult to reconcile outcomes with original expectations and to determine the actual cost of each block and of individual operational assets. In the past year, MDA has begun implementing two initiatives—a new block construct and a new executive board—to improve transparency, accountability, and oversight. These initiatives represent improvements over current practices, although they provide for less oversight than statutes provide for other major defense acquisition programs. In addition, Congress has directed that MDA’s budget materials, after 2009, request funds using the appropriation categories of research, development, test, and evaluation; procurement; operations and maintenance; and military construction, which should promote accountability for and transparency of the BMDS. In 2007, MDA redefined its block construct to better communicate its plans and goals to Congress.
The agency’s new construct is based on fielding capabilities that address particular threats as opposed to the biennial time periods that were the agency’s past approach to development and fielding. MDA’s new block construct makes many positive changes. These include establishing unit cost for selected block assets, including in a block only those elements or components that will be fielded during the block, and abandoning the practice of deferring work from block to block. Table 5 illustrates MDA’s new block construct for fielding the BMDS. MDA’s new block construct provides a means for comparing the expected and actual unit cost of assets included in a block. As we noted in our fiscal year 2006 report, MDA’s past block structure did not estimate unit costs for assets considered part of a given block or categorize block costs in a manner that allowed calculations of expected or actual unit costs. For example, the expected cost of Block 2006 GMD interceptors emplaced for operational use was not separated from other GMD costs. Even if MDA had categorized the interceptors’ cost, it would have been difficult to determine the exact cost of these interceptors because MDA acquires and assembles components into interceptors over several blocks and it has been difficult to track the cost of components to a specific group of interceptors. Under the new block construct, MDA expects to develop unit costs for selected block assets—such as THAAD interceptors—and request an independent verification of that unit cost from DOD’s Cost Analysis Improvement Group. MDA will also track the actual unit cost of the assets and report significant cost growth to Congress. However, MDA has not yet determined for which assets a unit cost will be developed and how much a unit cost must increase before that increase is reported to Congress. The new construct also makes it clearer as to which assets should be included in a block. 
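The unit-cost tracking that the new construct enables can be sketched as follows. The quantities, dollar amounts, and the 25 percent reporting threshold are all hypothetical, since MDA has not yet determined which assets will have unit costs or how much growth triggers a report to Congress.

```python
# Minimal sketch (hypothetical figures and threshold, not MDA policy) of the
# unit-cost comparison the new block construct is intended to enable.

def unit_cost(total_cost_musd, quantity):
    """Unit cost in $M: total block cost for the asset divided by quantity."""
    return total_cost_musd / quantity

def growth_requires_report(baseline_unit_cost, actual_unit_cost, threshold=0.25):
    """Report to Congress when unit-cost growth exceeds the threshold."""
    growth = (actual_unit_cost - baseline_unit_cost) / baseline_unit_cost
    return growth > threshold

# Hypothetical interceptor lot: 48 units baselined at $480M, delivered at $624M.
baseline = unit_cost(total_cost_musd=480.0, quantity=48)   # $10M per unit
actual = unit_cost(total_cost_musd=624.0, quantity=48)     # $13M per unit
print(baseline, actual, growth_requires_report(baseline, actual))
```

This kind of comparison was impossible under the prior block structure, where interceptor costs were not separated from other element costs.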
Under the agency’s prior block construct, assets included in a given block were sometimes not planned for delivery until a later block. For example, as we reported in March 2007, MDA included costs for ABL and STSS as part of its Block 2006 cost goal although those elements did not field or plan to field assets during Block 2006. Agency officials told us those elements were included in the block because they believed the elements could offer some emergency capability during the block timeframe. Finally, the new block construct should improve the transparency of each block’s actual cost. Under its prior construct, MDA deferred work from one block to another; but it did not track the cost of the deferred work so that it could be attributed to the block that it benefited. For example, MDA deferred some work needed to characterize and verify the Block 2004 capability until Block 2006 and counted the cost of those activities as a cost of Block 2006. By doing so, it understated the cost of Block 2004 and overstated the cost of Block 2006. Because MDA did not track the cost of the deferred work, the agency was unable to adjust the cost of either block to accurately capture the cost of each. MDA officials told us that under its new block construct, MDA will no longer transfer work, along with its cost, to a future block. Rather, a block of work will not be considered complete until all work that benefits a block has been completed and its cost has been properly attributed to that block. Although improvements are inherent in MDA’s new block construct, the new construct will not dispel all transparency and accountability concerns. MDA has not yet estimated the full cost of a block. Also, MDA has not addressed whether it will transfer assets produced during a block to a military service for production and operation at the block’s completion, or whether MDA will continue its practice of concurrently developing and fielding BMDS elements and components. 
According to its fiscal year 2009 budget submission, MDA does not plan to initially develop a full cost estimate for any BMDS block. Instead, when a firm commitment can be made to Congress for a block of capability, MDA will develop a budget baseline for the block. This budget will include anticipated funding for each block activity that is planned for the 6 years included in DOD’s Future Years Defense Plan. MDA officials told us that if the budget for a baselined block changes, MDA plans to report and explain those variations to Congress. At some future date, MDA does expect to develop a full cost estimate for each committed block and is in discussions with DOD’s Cost Analysis Improvement Group on having the group verify each estimate; but documents do not yet include a timeline for estimating block cost or having that estimate verified. For accountability, other DOD programs are required to provide the full cost of developing and producing their weapon system before system development and demonstration can begin. Until the cost of a block of BMDS capability is fully known, it will be difficult for decision makers to compare the value of investing in a block of BMDS capability to the value of investing in other DOD programs or to determine whether the block of capability that is being initiated will be affordable over the long term. The new block construct does not address whether the assets included in a block will be transferred at the block’s completion to a military service for production and operation. Officials representing multiple DOD organizations recognize that the transfer criteria established in 2002 are neither complete nor clear given the BMDS’s complexity. Without clear transfer criteria, MDA has transferred the management of only one element—the Patriot Advanced Capability-3—to the military for production and operation. 
Joint Staff officials told us that for all other elements, MDA and the military services have been negotiating the transition of responsibilities for the sustainment of fielded elements—a task that has proven arduous and time consuming. Although MDA documents show that under its new block construct the agency should be ready at the end of each block to deliver BMDS components that are fully mission-capable, MDA officials could not tell us when MDA’s Director will recommend that management of components, including production responsibilities, be transferred to the military. MDA officials maintain that even though a particular configuration of a weapon could be fully mission-capable, that configuration may never be produced because it could be replaced by a new configuration. Yet, by the block’s end, a transfer plan for the fully mission-capable configuration will have been drafted, developmental ground and flight tests will be complete, elements and components will be certified for operations, and doctrine, organization, training, materiel, leadership, personnel, and facilities are expected to be in place. Another issue not addressed under MDA’s new block construct is whether the concurrent development and fielding of BMDS elements and/or components will continue. Fully developing a component or element and demonstrating its capability prior to production increases the likelihood that the product will perform as designed and can be produced at the cost estimated. To field an initial capability quickly, MDA accepted the risk of concurrent development and fielding during Block 2004. For example, by the end of Block 2004, the agency realized that the performance of some Ground-based interceptors could be degraded because the interceptors included inappropriate or potentially unreliable parts. MDA has begun the process of retrofitting these interceptors, but work will not be completed until 2012.
Meanwhile there is a risk that some interceptors might not perform as designed. MDA also continued to accept this risk during Block 2006 as it fielded assets before they were fully tested. MDA has not addressed whether it will accept similar performance risks under its new block construct or whether it will fully develop and demonstrate all elements/components prior to fielding. In March 2007, the Deputy Secretary of Defense established a Missile Defense Executive Board (MDEB) to recommend and oversee implementation of strategic policies and plans, program priorities, and investment options for protecting the United States and its allies from missile attacks. The MDEB was also to replace existing groups and structures, such as the Missile Defense Support Group (MDSG). However, while it has some oversight responsibilities, the MDEB was not established to provide full oversight of the BMDS program and it would likely be unable to carry out this mission even if tasked to do so. The MDEB will not receive some information that the Defense Acquisition Board relies upon to make program recommendations, and in other cases, MDA does not plan to seek the MDEB’s approval before deciding on a course of action. In addition, there are parts of the BMDS program for which there will be no baseline against which progress can be measured, which makes oversight difficult. According to its charter, the MDEB is vested with more responsibility than its predecessor, the MDSG. When the MDSG was chartered in 2002, it was to provide constructive advice to MDA’s Director. However, the Director was not required to follow the advice of the group. According to a DOD official, although the MDSG met many times initially, it did not meet after June 2005. This led, in 2007, to the formation of the MDEB. This board’s mission is to review and make recommendations on MDA’s comprehensive acquisition strategy to the Deputy Secretary of Defense. 
It is also to provide the Under Secretary of Defense, Acquisition, Technology and Logistics, with a recommended strategic program plan and a feasible funding strategy based on “business case” analysis that considers the best approach to fielding integrated missile defense capabilities in support of joint MDA and warfighter objectives. The MDEB will be assisted by four standing committees. These committees, which are chaired by senior-level officials from the Office of the Secretary of Defense and the Joint Staff, could play an important oversight role as they are expected to make recommendations to the MDEB, which in turn will recommend courses of action to the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD AT&L) and the Director, MDA, as appropriate. The following table identifies the chair of each standing committee as well as key committee functions. The MDEB will not have access to all information normally available to oversight bodies. For other major defense acquisition programs, the Defense Acquisition Board must approve the program’s progress through the acquisition cycle. Further, before a program can enter the System Development and Demonstration phase of the cycle, statute requires that certain information be developed. This information is then provided to the Defense Acquisition Board. However, in 2002, the Secretary of Defense allowed MDA to defer application of the defense acquisition system requirements that, among other things, require programs to follow a defined acquisition cycle and obtain approval before advancing from one phase of the cycle to another. Because MDA does not follow this cycle, it does not enter System Development and Demonstration and it does not trigger the statutes requiring the development of information that the Defense Acquisition Board uses to inform its decisions.
For example, most major defense acquisition programs are required by statute to obtain an independent verification of program cost prior to beginning system development and demonstration, and/or production and deployment. Independent life-cycle cost estimates provide confidence that a program is executable within its estimated cost, given other DOD-wide budget demands. Although MDA plans to develop unit costs for selected block assets and request that DOD’s Cost Analysis Improvement Group verify the unit costs, the agency does not initially plan to develop a block cost estimate and, therefore, cannot seek an independent verification of that cost. In addition, even when MDA estimates block costs, the agency will not be required to obtain an independent verification of that cost, because, as noted earlier, the BMDS program operates outside of DOD’s acquisition cycle. Although not required, MDA officials told us that they have initiated discussions with the Cost Analysis Improvement Group on independent verifications of block cost estimates. Statute also requires an independent verification of a system’s suitability for and effectiveness on the battlefield before a program can proceed beyond low-rate initial production. After the test is completed, the Director for Operational Test and Evaluation assesses whether the test was adequate to support an evaluation of the system’s suitability and effectiveness for the battlefield, whether the test showed the system to be acceptable, and whether any limitations in suitability and effectiveness were noted. However, a comparable assessment of the BMDS assets being produced for fielding will not be available to the MDEB.
As noted earlier, the limited amount of testing completed, which has been primarily developmental in nature, and the lack of verified, validated, and accredited models and simulations prevent the Director of Operational Test and Evaluation from fully assessing the effectiveness, suitability, and survivability of the BMDS in annual assessments. MDA will also make some decisions without approval from the MDEB or any higher level DOD official. Although the charter of the MDEB includes the mission to make recommendations to MDA and the Under Secretary of Defense for AT&L on investment options, program priorities, and MDA’s strategy for developing and fielding an operational missile defense capability, the MDEB will not have the opportunity to review and recommend changes to BMDS blocks. According to a briefing on the business rules and processes for MDA’s new block structure, the decision to initiate a new block of BMDS capability will be made by MDA’s Director. Also, cost, schedule, and performance parameters will be established by MDA when technologies that the block depends upon are mature, a credible cost estimate can be developed, funding is available, and the threat is both imminent and severe. The Director will inform the MDEB as well as Congress when a new block is initiated, but he will not seek the approval of either. Finally, there will be parts of the BMDS program that will be difficult for the MDEB to oversee because of the nature of the work being performed. MDA plans to place any program that is developing technology in a category known as Capability Development. These programs, such as ABL, KEI, and MKV, will not have a firm cost, schedule, or performance baseline. This is generally true for technology development programs in DOD because they are in a period of discovery, which makes schedule and cost difficult to estimate.
On the other hand, the scale of the technology development in BMDS is unusually large, ranging from $2 billion to about $5 billion a year—eventually comprising nearly half of MDA’s budget by fiscal year 2012. The MDEB will have access to the budgets planned for these programs over the next 5 or 6 years, each program’s focus, and whether the technology is meeting short-term key events or knowledge points. But without some kind of baseline for matching progress with cost, the MDEB will not know how much more time or money will be needed to complete technology maturation. MDA’s experience with the ABL program provides a good example of the difficulty in estimating the cost and schedule of technology development. In 1996, the ABL program believed that all ABL technology could be demonstrated by 2001 at a cost of about $1 billion. However, MDA now projects that this technology will not be demonstrated until 2009 and its cost has grown to over $5 billion. While the uncertainties of technology development must be recognized, some organizations suggest ways to establish a baseline appropriate for such efforts. For example, the Air Force Research Laboratory suggested a methodology to estimate a technology’s cost once analytical and laboratory studies physically validate analytical predictions of separate elements of the technology. In an effort to further improve oversight, the Joint Requirements Oversight Council proposed a plan to transition the BMDS into standard DOD processes. In August 2007, the Vice Chairman of the Joint Chiefs of Staff and Joint Requirements Oversight Council Chairman requested that the Deputy Secretary of Defense approve a proposal to return MDA to the Joint Capabilities Integration and Development System process and direct the Joint Requirements Oversight Council to validate BMDS capabilities. The Vice Chairman believed that the council should exercise oversight of MDA in order to improve Department-wide capability integration.
More specifically, he noted the following: In 2002, the Secretary of Defense exempted the BMDS program from the traditional requirements generation process to expedite fielding the system as soon as practicable. Now that an initial capability for homeland defense has been deployed, there is no longer the same need for the flexibility provided by the requirements exemption. The current process, with MDA exempted, does not allow the Joint Requirements Oversight Council to provide appropriate military advice or to validate missile defense capabilities. Without this change, there is increasing potential that MDA-fielded systems will not be synchronized with other air and missile defense capabilities being developed. The current process also hinders the military departments’ ability to plan and program resources for fielding and sustainment of MDA-developed systems. In responding to the proposal, the Acting Under Secretary of Defense for AT&L recommended that the Deputy Secretary of Defense delay his approval of the Joint Staff’s proposal until the MDEB could review the proposal and provide a recommendation. However, he agreed that more Joint Requirements Oversight Council involvement was necessary for the BMDS, although he was not sure that returning BMDS to standard DOD processes was the appropriate solution to the agency’s oversight issues. Instead, he noted that the Deputy Secretary of Defense recently established the MDEB to recommend and oversee the implementation of strategic policies and plans, program priorities, and investment options for the BMDS. He stated that since the MDEB is tasked with determining the best means of managing the BMDS throughout its life cycle, it should consider the Joint Staff’s proposal. In an effort to improve the transparency of MDA’s acquisition processes, Congress has directed that MDA’s budget materials delineate between funds needed for research, development, test, and evaluation; procurement; operations and maintenance; and military construction.
Using procurement funds will mean that MDA generally will be required to adhere to congressional policy that assets be fully funded in the year of their purchase, rather than incrementally funded over several years. The Congressional Research Service reported in 2006 that “incremental funding fell out of favor because opponents believed it could make the total procurement costs of weapons and equipment more difficult for Congress to understand and track, create a potential for DOD to start procurement of an item without necessarily stating its total cost to Congress, permit one Congress to ‘tie the hands’ of future Congresses, and increase weapon procurement costs by exposing weapons under construction to uneconomic start-up and stop costs.” Our analysis of MDA-developed costs, which are presented in table 7, also shows that incremental funding is usually more expensive than full funding, in part, because inflation decreases the buying power of the dollar each year. The National Defense Authorization Act for Fiscal Year 2008 directed MDA to submit a plan to transition from using research and development funds exclusively to using procurement, operations and maintenance, military construction, and research and development funds by March 1, 2008. However, it allowed MDA to continue to use research and development funds in fiscal year 2009 to incrementally fund previously approved missile defense assets. The act also directed that beginning in fiscal year 2009, the MDA budget request include, in addition to RDT&E funds, military construction funds and procurement funds for some long lead items such as those required for the third and fourth THAAD fire units and Aegis BMD SM-3 Block 1A missiles.
MDA did not request long lead funding for either THAAD or SM-3 missiles in its fiscal year 2009 budget because MDA has slipped the schedule for procuring fire units 3 and 4 by one year, and the National Defense Authorization Act for Fiscal Year 2008 was not signed in time to allow MDA to adjust its budget request for SM-3 missiles. Congress also provided MDA with the authority to use procurement funds for fiscal years 2009 and 2010 to field its BMDS capabilities on an incremental funding basis, without any requirement for full funding. Congress has granted similar authority to other DOD programs. In the conference report accompanying the Fiscal Year 2008 National Defense Authorization Act, the conferees indicated that if MDA wishes to use incremental funding after fiscal year 2010, DOD must request additional authority for a specific program or capability. Conferees cautioned DOD that additional authority will be considered on a limited case-by-case basis and that future missile defense programs will be funded in a manner more consistent with other DOD acquisition programs. Since 2002, MDA has been granted the flexibility to incrementally fund the fielding of its operational assets with research and development funds. In some cases, the agency spreads the cost of assets across 5 to 7 budget years. After reviewing the agency’s incremental funding plan for future procurements of THAAD fire units and Aegis BMD missiles, we analyzed the effect of fully funding these assets using present value techniques and found that the agency could save about $125 million by fully funding their purchase and purchasing them in an economical manner. Our analysis is provided in table 7. In addition, more detailed analysis is available in appendix III. According to our analysis, fully funding the THAAD and Aegis BMD assets will, in all instances, save MDA money. 
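The present-value comparison underlying this analysis can be illustrated with a short sketch. The payment streams and the 3 percent discount rate below are notional assumptions, not the MDA-developed figures GAO used in table 7; they show only why an incrementally funded purchase tends to cost more in present-value terms than the same asset fully funded up front.

```python
# Illustrative present-value comparison of incremental vs. full funding.
# All dollar figures and the discount rate are notional.

def present_value(payments, discount_rate):
    """Discount a stream of year-by-year payments back to year 0."""
    return sum(p / (1 + discount_rate) ** year
               for year, p in enumerate(payments))

discount_rate = 0.03  # assumed real discount rate

# Incremental plan: the same asset paid for over five budget years
# ($ millions, notional).
incremental = [100.0, 120.0, 120.0, 120.0, 100.0]

# Full funding: the entire amount appropriated in year 0. Because the
# dollars are committed up front, later-year inflation does not erode
# their buying power, so the year-0 price is lower than the nominal
# sum of the incremental payments.
full_upfront = [520.0]  # notional year-0 price

pv_incremental = present_value(incremental, discount_rate)
pv_full = present_value(full_upfront, discount_rate)
print(f"PV incremental: {pv_incremental:.1f}")
print(f"PV full:        {pv_full:.1f}")
print(f"PV savings:     {pv_incremental - pv_full:.1f}")
```

Under these made-up inputs the fully funded purchase is cheaper in present-value terms, which is the direction of the effect GAO's analysis found for THAAD and Aegis BMD.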
For example, full funding would save the THAAD program approximately $104 million and the Aegis BMD program nearly $22 million. In addition, by providing funds upfront, the contractors should be able to arrange production in the most efficient manner. By the end of Block 2006, MDA posted a number of accomplishments for the BMDS, including fielding more assets, conducting several successful tests, and progressing with developmental efforts. As a result, fielded capability has increased. On the other hand, some problems continue that make it difficult to assess how well the BMDS is progressing relative to the funds it has received and the goals it has set for those funds. First, under the proposed block construct, MDA plans to develop a firm baseline for each block and have it independently reviewed. However, MDA has not yet developed estimates for full block costs, so the initial baseline incorporates the budget for each block only through DOD’s Future Years Defense Plan. Second, while MDA expects to estimate unit costs and track increases, it is unclear what criteria will be used for reporting variances to Congress. Third, while MDA has persuaded some contractors to lower the portion of work planned as level of effort, a substantial amount of work remains so planned. Fourth, while it may not be reasonable to expect the same level of accountability for technology development efforts as for the development and production of systems, the high level of investment MDA plans to make in technology development—up to half of its budget—warrants some mechanism for reconciling the cost of these efforts with their progress. Finally, MDA fields assets before development testing is complete and without conducting operational testing. We have previously recommended that MDA return to its original non-concurrent, knowledge-based approach to developing, testing, and fielding assets.
Short of that, the developmental testing that is done provides the primary basis for the Director of Operational Test and Evaluation to assess whether a block of BMDS capability is suitable and effective for the battlefield. So far, BMDS testing has not yielded sufficient data to make a full assessment. To build on efforts to improve the transparency, accountability, and oversight of the missile defense program, we recommend that the Secretary of Defense direct: MDA to develop a full cost for each block and request an independent verification of that cost; MDA to clarify the criteria that it will use for reporting unit cost variances to Congress; MDA to examine a contractor’s planning efforts when 20 percent or more of a contract’s work is proposed as level of effort; MDA to investigate ways of developing a baseline or some other standard against which the progress of technology programs may be assessed; and MDA and the Director of Operational Test and Evaluation to agree on criteria and incorporate corresponding scope into developmental tests that will allow a determination of whether a block of BMDS capability is suitable and effective for fielding. DOD provided written comments on a draft of this report. These comments are reprinted in appendix I. DOD also provided technical comments, which we incorporated as appropriate. DOD concurred with three of our five recommendations—developing a full cost estimate for each block and requesting an independent verification of that cost, clarifying criteria for reporting unit cost variances to Congress, and examining contractors’ planning efforts when 20 percent or more of a contract’s work is proposed as level of effort. The Department indicated that MDA has already taken steps to develop new cost models aligned with its new block structure and met with DOD’s Cost Analysis Improvement Group to initiate the planning process for the independent verifications of MDA’s cost estimates.
The cost estimates will extend until block completion and will not be limited by a 6-year Future Years Defense Plan window. MDA is also working to establish criteria for reporting unit cost variances and to incorporate them into an MDA directive. Finally, MDA has made a review of prime contractors’ work planning efforts part of the Integrated Baseline Review process and the Defense Contract Management Agency has agreed to continuously validate the appropriateness of each contractor’s planning methodology as part of its ongoing contract surveillance. DOD partially concurred with our recommendation that MDA investigate ways of developing a baseline or some other standard against which the progress of technology programs may be assessed. DOD observed that MDA uses knowledge points, technology readiness levels, and engineering and manufacturing readiness levels in assessing the progress of its technology programs and that it will continue to investigate other methods of making such assessments. While we recognize their value, these methods typically assess progress in the short term and do not provide an estimate of the remaining cost and time needed to complete a technology program. Because MDA must balance its efforts to improve the existing BMDS while developing new capability, DOD and MDA need to ensure that only the most beneficial technology programs in terms of performance, cost, and schedule are pursued. This will require an understanding of not only the benefit to be derived from the technology, but also an understanding of the cost and time needed to bring the technology to fruition. DOD also partially concurred with our last recommendation that MDA and the Director of Operational Test and Evaluation (DOT&E) agree on criteria and additional scope for developmental tests that will allow a full determination of the effectiveness and suitability of a BMDS block for fielding. 
DOD noted that it is MDA’s mission to work with the warfighter, rather than DOT&E, to determine that the BMDS is ready for fielding, but that MDA will work closely with DOT&E to strengthen the testing of BMDS suitability and effectiveness. We agree that DOT&E is not responsible for fielding decisions, but its mission is to ensure that weapon systems are realistically and adequately tested and that accurate evaluations of operational effectiveness, suitability, and survivability are available for production decisions. MDA improved the operational realism of testing in 2007 and for the first time DOT&E considered tests at least partially adequate to make an assessment of the BMDS. However, a full assessment is not yet possible and we continue to recommend that MDA and DOT&E take steps to make as full a BMDS evaluation as possible. In doing so, MDA and DOT&E can work cooperatively to reduce the number of unknowns that will confront the warfighter when the system is required operationally and improve the likelihood that the BMDS will perform as needed in the field. We are sending copies of this report to the Secretary of Defense and to the Director, MDA. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors are listed in appendix V. The Missile Defense Agency (MDA) employs prime contractors and support contractors to accomplish different tasks that are needed to develop and field the ballistic missile defense system. Prime contractors receive the bulk of funds MDA requests each year and work to provide the hardware and software for elements of the Ballistic Missile Defense System (BMDS).
Support contractors provide a wide variety of useful services, such as special knowledge and skills not available in the government and the capability to provide temporary or intermittent services. MDA has prime contracts with four defense companies—Boeing, Raytheon, Lockheed Martin, and Northrop Grumman—to develop elements of the BMDS. All current contracts and agreements are cost-reimbursement types that provide for payment of reasonable, allowable, and allocable incurred costs to the extent provided in the contract or agreement. The contracts also provide a fee for the contractor performing the work, but the amount earned depends on many variables, including the type of cost contract, contractor performance, technical risk, and complexity of the requirement. All of the cost reimbursement contracts used for the BMDS elements include cost plus award fee aspects. Cost plus award fee contracts provide for a fee consisting of a base fee—fixed at the inception of the contract, and possibly zero—and an award amount based upon a subjective evaluation by the government, meant to encourage exceptional performance. It should be noted that some award fee arrangements include objective criteria such as key performance events. The Multiple Kill Vehicle (MKV) contract and Command, Control, Battle Management and Communications (C2BMC) Other Transaction Agreement differ somewhat from the other elements’ contracts. The MKV prime contractor was awarded an indefinite delivery/indefinite quantity cost reimbursement contract. This type of contract allows MDA to order services as they are needed through a series of task orders. Without having to specify a firm quantity of services (other than a minimum or maximum quantity), the government has greater flexibility to align the tasks with available funding. The C2BMC element operates under an Other Transaction Agreement with cost reimbursement aspects.
These types of agreements are not always subject to procurement laws and regulations meant to safeguard the government. MDA chose the Other Transaction Agreement to facilitate a collaborative relationship between industry, government, federally funded research and development centers, and university research centers. DOD requires that all contractors awarded cost reimbursement contracts or other agreements of $20 million or greater implement an Earned Value Management System (EVMS) to integrate the planning of work scope, schedule, and resources, and to provide insight into their cost and schedule performance. To implement this system, contractors examine the totality of the work directed by the contract and break it into executable work packages. Each work package is assigned a schedule and a budget that is expected to enable the work’s completion. On a monthly basis, the contractor examines initiated work packages to determine whether the work scheduled for the month was performed on time and within budget. If more work was completed than scheduled and the cost of the work performed was less than budgeted, the contractor reports a positive schedule and cost variance. However, if the contractor was unable to complete all of the work scheduled and needed more funds to complete the work than budgeted, the contractor reports a negative schedule and cost variance. Of course, the results can be mixed. That is, the contractor may have completed more work than scheduled but at a cost that exceeded the budget. The contractor details its performance to MDA each month in Contract Performance Reports. These reports also identify the reasons that negative or positive variances are occurring. Used properly, the earned value concept allows program managers to identify problems early so that steps can be taken before the problems increase the contract’s overall cost and/or schedule. 
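The monthly variance calculation described above uses the standard earned value quantities: BCWS (budgeted cost of work scheduled), BCWP (budgeted cost of work performed, or earned value), and ACWP (actual cost of work performed). A minimal sketch, with notional figures:

```python
# Standard earned value variance formulas. Positive variances are
# favorable; negative variances signal schedule slips or cost overruns.

def schedule_variance(bcws: float, bcwp: float) -> float:
    """SV = BCWP - BCWS; positive when more work was performed than scheduled."""
    return bcwp - bcws

def cost_variance(bcwp: float, acwp: float) -> float:
    """CV = BCWP - ACWP; positive when the work performed cost less than budgeted."""
    return bcwp - acwp

# Mixed result, as the text notes can occur: the contractor completed
# more work than scheduled (SV = +5) but overran its budget (CV = -10).
# Figures are notional ($ millions).
bcws, bcwp, acwp = 100.0, 105.0, 115.0
print(schedule_variance(bcws, bcwp))  # 5.0
print(cost_variance(bcwp, acwp))      # -10.0
```

These are the figures a Contract Performance Report rolls up each month, letting program managers spot negative trends before they compound.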
In the course of subdividing the total work of the contract into smaller efforts, contractors plan work according to its type. Included in these classifications are discrete work—work that is expected to produce a product, such as a study, lines of software code, or a test—and work considered to be level of effort (LOE). LOE is work that does not result in a product, but is of a general or supportive nature. Supervision and contract administration are examples of work that do not produce definable end products and are appropriately planned as LOE. Several contracts for BMDS systems have relatively high proportions of work planned as LOE. When work is incorrectly planned as LOE, the contractor’s performance becomes less transparent because earned value does not recognize schedule variances for such work. Rather, it is assumed that the time budgeted for an LOE effort will produce the intended result. Although an LOE work package will report cost variances, those variances will only be measured against how much the program intended to spend at certain time intervals. If LOE were to be used on activities that could otherwise be measured discretely, the project performance data could be favorably distorted and contractors and program managers might not be able to discern the value gained for the time spent on the task. Specifically, the program’s Contract Performance Reports would not indicate whether or not the work performed produced the product expected. By losing early insight into performance, the program could potentially need to spend more time and money to complete the task. Since earned value management is less suited for work that is not intended to produce a specific product, or work that is termed LOE, the Standard for Earned Value Management Systems Intent Guide instructs that although some amount of LOE activity may be necessary, it must be held to the lowest practical level. 
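The way LOE planning suppresses schedule variances can be seen in a small sketch. The figures are hypothetical; the convention shown, crediting an LOE package with earned value equal to its plan, is standard EVM practice.

```python
# Why level-of-effort (LOE) planning hides schedule slips: under standard
# EVM conventions, an LOE work package earns value equal to its plan,
# regardless of what was actually produced. Figures are hypothetical.

def earned_value(planned_to_date, measured_progress, is_loe):
    """Earned value credited to a work package at a status date."""
    return planned_to_date if is_loe else measured_progress

plan = 5_000      # $ of work scheduled to date
measured = 3_000  # $ of work a discrete measurement would credit

discrete_sv = earned_value(plan, measured, is_loe=False) - plan  # -2,000
loe_sv = earned_value(plan, measured, is_loe=True) - plan        # always 0
```

The $2,000 slip visible under discrete planning vanishes when the same work is booked as LOE, which is why reviewers flag contracts where LOE exceeds roughly 20 percent of total work.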
In addition, earned value experts such as Defense Contract Management Agency officials agree that if a contractor plans more than 20 percent of the total contract work as LOE, the work plan should be examined to determine if work is being properly planned. Although the amount of LOE should be minimized, some BMDS prime contracts have a relatively high percentage of LOE. As figure 2 illustrates, the MKV contractor planned much of the work for task orders open during fiscal year 2007 as LOE. Contractors for Aegis BMD SM-3 and C2BMC also planned a high percentage of their work as LOE. Both MDA’s Earned Value Management Group and program office reviewers encouraged the SM-3 and C2BMC contractors to reduce their LOE percentages. By the end of the fiscal year, the SM-3 and C2BMC contractors had reduced the amount of work planned as LOE. In December 2006, the Aegis BMD SM-3 contractor completed work to develop and produce initial Block 1A missiles with 73 percent of this work categorized as LOE—well above the 15 percent that the Aegis BMD SM-3 program reports as its industry standard. Although we have reported that the contractor completed this segment of work under its budgeted cost but slightly behind schedule, it is difficult to assess whether this represents the contractor’s actual performance. The high percentage of LOE associated with this work may have limited our assessment and distorted whether the work completed was in all respects the work planned. Subsequently, the contractor initiated procurement of long lead materials to produce an additional 20 Block 1A missiles before work packages were developed. Once work packages were developed, only 18 percent of the work was planned as LOE. The C2BMC program was able to reduce the percentage of work planned as LOE, but the program continues to encourage further reductions. During fiscal year 2007, the C2BMC contractor replanned its work and reduced the amount of work planned as LOE from 73 to 52 percent.
This change was implemented after two closely related reviews suggested the percentage of LOE work was too high. Both the program office and its contractor acknowledge the high level of LOE and have made plans to limit it in future work. As noted in figure 2, the MKV contractor considered all work being completed under two task orders—Task Orders 4 and 5—as LOE. The primary objective of Task Order 4 is to update the program plan and complete the systems engineering effort necessary to integrate the MKV warhead into the BMDS to the extent required for the systems requirements review. Both the system concept review, completed in July 2006, and the system requirements review, scheduled for December 2008, are major milestones. However, the contractor did not plan these milestone reviews as products. According to program officials, Task Order 4 will be reevaluated in February 2008 to reduce the amount of LOE and recognize more work as discrete. The MKV program also planned 100 percent of Task Order 5 work as LOE. Under this task order, the contractor was to design a prototype propulsion system, assemble and integrate the hardware for the prototype, and perform a static hot fire test of the integrated system. This effort culminates in hardware—a tangible end product—that is expected to exhibit certain performance characteristics during the static hot fire test. The contractor could have categorized this task order, at least in some part, as discrete work since the work was expected to deliver a defined product with a schedule that could slip or vary. Because the contractor categorized all of this task order as LOE, the program lost its ability to gauge performance and to make adjustments that might prevent contract cost growth. We analyzed Fiscal Year 2007 Contract Performance Reports for MDA’s 10 prime contracts and determined that collectively the contractors overran budgeted costs by nearly $170 million but were ahead of schedule by nearly $200 million. 
However, the percentage of work planned as LOE should be scrutinized before accepting this as the contractors’ actual performance because a high percentage of LOE, as noted above, can potentially distort the contractors’ cost and schedule performance. The cumulative performance of one contractor is also distorted because it rebaselined part of its work. Rebaselining is an accepted EVM procedure that allows a contractor to reorganize all or part of its remaining contract work, add additional time or budget for the remaining effort, and, under some circumstances, set affected cost and/or schedule variances to zero. When variances are set to zero, the cumulative performance of the contractor appears more positive than it is. Four of the 10 contracts we reviewed also contained some kind of replanning activity during fiscal year 2007. Contractors may replan when they conclude that the current plan for completing the effort remaining on the contract is unrealistic. A replan can consist of any of the following: reallocating the budget for the remaining effort within the existing constraints of the contract, realigning the schedule within the contractually defined milestones, and setting cost and/or schedule variances to zero. During the course of replanning a contract, the contractor must provide traceability to previous baselines as well as ensure that available funding is not exceeded. The Aegis BMD program awarded two prime contracts for its major components, the Aegis BMD Weapon System and the Standard Missile-3. During the fiscal year, the contractors completed all work at less cost than budgeted. Both contractors ended the year with positive cumulative cost variances, but negative cumulative schedule variances. 
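The effect of setting variances to zero at a rebaseline can be illustrated with a simple sum over hypothetical monthly cost variances.

```python
# How zeroing variances at a rebaseline flatters cumulative performance.
# Monthly cost variances in $M are hypothetical.

monthly_cv = [-5, -8, -12, 3, 1]  # months 1-5; rebaseline after month 3

true_cumulative = sum(monthly_cv)          # -21: the full contract history
reported_cumulative = sum(monthly_cv[3:])  # 4: only post-rebaseline months
```

After the reset, the contract appears to be running $4 million under budget even though $21 million has been overrun since inception, which is why rebaselined data must be read against prior baselines.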
Based on our analysis, we project that if the contractors continue to perform at the same level, the weapon system contractor could underrun its budget by between $8.8 million and $17.7 million, while the SM-3 contractor could complete its work on 20 Block 1A missiles for $7.4 million to $11.1 million less than budgeted. The weapon system contractor’s fiscal year 2007 cost performance resulted in a positive cost variance of $7.7 million. The positive variance was realized as two software packages required less effort than anticipated and were completed earlier than expected. Combined with its performance from earlier periods, the contractor finished the year with a cumulative positive cost variance of $7 million. This upward trend is depicted in figure 3. The contractor produced a $3.8 million unfavorable schedule variance in fiscal year 2007. The contractor reported that the unfavorable cumulative variance was caused in part by a delay in receiving component materials for the radar’s processor. During fiscal year 2007, the Aegis SM-3 contractor closed out work related to missile development and initial production of Block 1A missiles and began new work in February 2007 to manufacture an additional 20 Block 1A missiles. In performing the new work, the contractor underran its cost budget by $6.2 million, but failed to complete $4.0 million of planned work. The Aegis BMD SM-3 contractor’s cumulative cost and schedule variances are highlighted in figure 4. The positive cost variance can be attributed to several factors including cost efficiencies realized from streamlining system engineering resources and lower than planned hardware costs. Our analysis predicts that if the SM-3 contractor continues to perform as it did through September 2007, it will underrun its budgeted costs for the 20 Block 1A missiles by between $7.4 million and $11.1 million. 
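Projections like the underrun ranges above are typically derived by extrapolating the contractor's demonstrated efficiency. The sketch below uses hypothetical inputs; the two efficiency indices shown are common EVM practice, not necessarily GAO's exact methodology.

```python
# Standard estimate-at-completion (EAC) extrapolation. All inputs are
# hypothetical; the CPI and CPI*SPI indices are common EVM practice.

def eac(bac, bcwp, acwp, index):
    """Actual costs to date plus remaining budgeted work,
    deflated by an efficiency index."""
    return acwp + (bac - bcwp) / index

bac = 1_000.0   # budget at completion, $M
bcwp = 400.0    # cumulative earned value, $M
acwp = 380.0    # cumulative actual cost, $M
bcws = 420.0    # cumulative planned value, $M

cpi = bcwp / acwp   # cost efficiency to date (~1.05, an underrun)
spi = bcwp / bcws   # schedule efficiency to date (~0.95, a slip)

low = eac(bac, bcwp, acwp, cpi)         # optimistic projection
high = eac(bac, bcwp, acwp, cpi * spi)  # pessimistic projection
```

Projecting with CPI alone yields one end of the range and the composite CPI*SPI index the other, which is how a single contract produces a band such as $8.8 million to $17.7 million.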
The contractor’s negative cumulative schedule variance of $4 million for the 20 missiles was primarily caused by delayed qualification testing and integration of hardware components. In May 2007, MDA allowed ABL’s contractor to rebaseline one part of its contract after the work associated with a key knowledge point could not be completed on schedule. Because the contractor did not achieve this knowledge point as planned, the program was forced to postpone its lethality demonstration until August 2009. Technical issues including weapon system integration, beam control/fire control software modifications, and flight testing discoveries, all contributed to the delay in completing the knowledge point for the program. To provide funds and time to support the delay in the lethality demonstration, the program extended the contract’s period of performance by approximately 1 year and increased the contract’s ceiling cost by $253 million. Once the new baseline was incorporated, the contractor was able to complete fiscal year 2007 with positive cost and schedule variances of $3.7 million and $24.2 million, respectively. Figure 5 depicts the contractor’s cumulative cost and schedule performance. As shown in figure 5 above, the ABL contractor was not able to overcome the negative cost and schedule variances of prior years and ended the fiscal year with an unfavorable cumulative cost variance of $74.2 million and an unfavorable cumulative schedule variance of $25.8 million. We estimate that, at completion, the contract could overrun its budget by between $95.4 million and $202.5 million. During fiscal year 2006, the C2BMC contractor did not report earned value because it was working on a replan of its Block 2006 increment of work (known as Part 4). Following the definitization of the Part 4 replan in November 2006, the C2BMC contractor resumed full EVM reporting with the first submittal covering February 2007 data. 
As part of the replan, the contractor adjusted a portion of its Part 4 work and set cost and schedule variances to zero in an effort to establish a baseline commensurate with the contractor’s replanning efforts. However, even with the adjustment, the C2BMC program ended fiscal year 2007 with negative fiscal year cost and schedule variances of $11.1 million and $1.5 million, respectively. Figure 6 shows the contractor’s cumulative performance in fiscal year 2007. The unfavorable fiscal year cost variance was largely due to adding staff to support a software release, while the unfavorable fiscal year schedule variance was attributable to delays in hardware delivery, initiation of a new training system, and completion of training material for the new system. Added to prior year negative variances, the C2BMC contractor reported cumulative negative cost and schedule variances of $14.5 million and $3.5 million, respectively. The contractor completed Part 4 work in December 2007 and reported an overrun of $9.9 million. The GMD prime contractor’s cost performance improved significantly in fiscal year 2007. The contractor experienced a budget overrun of $22.1 million for the fiscal year following budget overruns in both fiscal years 2005 and 2006 that exceeded $300 million. Program officials attribute this turnaround in performance to several factors, including rigorous management of the contract’s estimate at completion, quality initiatives, and joint efforts by the contractor and program office to define scope, schedule, and price of change orders. The cumulative cost variance at the end of fiscal year 2007 was over $1 billion. We estimate that at completion the contract, with a target price of $15.54 billion, could exceed its budgeted cost by between $1.06 billion and $1.4 billion.
The contractor was able to complete $84.9 million more work than scheduled for fiscal year 2007, but could not overcome poor performance in earlier years and ended the year with a negative cumulative schedule variance of $52.9 million. Figure 7 illustrates both cost and schedule trends in GMD fiscal year 2007 performance. The unfavorable fiscal year cost variance is primarily attributable to the EKV. During fiscal year 2007, the EKV contractor experienced negative cost variances as it incurred additional labor costs to recover delivery schedules and encountered manufacturing schedule delays, hardware manufacturing problems, and embedded software development and system integration problems. With 18 percent of the EKV work remaining, the negative trends on this component could continue. As we reported last year, the contractor was in the process of developing a new contract baseline to incorporate the updated scope, schedule, and budget that the contractor was working toward. In September 2006, phase one of the new baseline, covering fiscal year 2006-2007 efforts, had been implemented and validated through the Integrated Baseline Review of the prime contractor and its major subcontractors. Phase two of the review was completed in December 2006. Subsequent to the reviews, fiscal year 2007 ground and flight tests were replanned to reflect a contract change that added additional risk mitigation effort to one planned flight test and added a radar characterization system test. The KEI contractor replanned its work in April 2007 when MDA directed the program to focus, in the near term, on two main objectives: booster vehicle development and the 2008 booster flight developmental test. Prior to the replan, the KEI program was developing a land-mobile capability with fire control and communications and mobile launcher components.
Although the contractor’s primary objectives are now focused on the booster segment of work, it is still performing some activities related to the fire control and communications component. During fiscal year 2007, the contractor incurred a positive cost variance of $2.1 million and a negative schedule variance of $7.5 million. Combined with variances from earlier fiscal years, the cumulative cost variance is a positive $5.7 million and the cumulative schedule variance is a negative $12.8 million. Figure 8 illustrates KEI’s cumulative performance over the course of the fiscal year. KEI’s fiscal year favorable cost variance primarily results from completing work on the fire control and communications component, as well as systems engineering and integration, with fewer staff than planned. We were unable to estimate whether the total contract is likely to be completed within budgeted cost since the contract is only 10 percent complete and trends cannot be developed until at least 15 percent of the contract is completed. Work related to the interceptor’s booster and systems engineering and integration contributed to KEI’s negative fiscal year schedule variance of $7.5 million. The contractor reports that the booster work was understaffed, which caused delays in finalizing designs that, in turn, delayed procurement of subcomponents and materials and delayed analysis and tests. While the reduction in staff for systems engineering and integration work reduced costs for the contractor, it also delayed completion of the weapon system’s scheduled engineering, flight, and performance analysis products. We could evaluate only two of five MKV task orders open during fiscal year 2007 because the contractor did not report sufficient earned value data to make an assessment of the other three meaningful. MDA awarded the MKV contract in January 2004 and has since initiated eight task orders through its indefinite delivery/indefinite quantity contract.
During fiscal year 2007, the program worked on five of these task orders—Task Orders 4, 5, 6, 7, and 8. We evaluated the contractor’s cost and schedule performance for Task Orders 5 and 6 only. Of the three task orders that we did not evaluate, the contractor began reporting full earned value on two so late in the fiscal year that little data was available for analysis. In the third case, the contractor’s reports did not include all data needed to make a cost and schedule assessment. In June 2006, MDA issued Task Order 5 which directed the design, assembly, and integration of the hardware for a prototype propulsion system, and a static hot fire test of the integrated prototype. Because the contractor planned all activities for this task order as level of effort, the contractor reported zero schedule variance. Contract Performance Reports show that in preparation for the hot fire test in August 2007, the program discovered anomalies indicative of propellant contamination in the prototype’s propulsion system. These anomalies led to multiple unplanned propellant tank anomaly investigations, which contributed to the unfavorable $2.3 million cost variance for the fiscal year. Additionally, during the hot fire test, one of the thrusters in the propulsion system’s divert and attitude control component experienced anomalies due to foreign object contamination. This anomaly led to unplanned investigations which also contributed to increased costs. Figure 9 below depicts the unfavorable cumulative cost variance of $2.7 million and cumulative schedule variance of zero reported for Task Order 5. Based on our analysis, we predict the contractor will overrun its contract costs by between $2.6 million and $2.9 million. MKV’s objective for Task Order 6 is to manufacture a prototype seeker capable of acquiring, tracking, and discriminating objects in space. The program plans to demonstrate the prototype seeker, which is a component of a carrier vehicle, through testing in 2009. 
In contrast to Task Order 5, the contractor correctly planned the bulk of Task Order 6 as discrete work and has been reporting the work’s cost and schedule status since March 2007. During this time, the contractor has completed 37 percent of the work directed by the task order at $0.3 million less than budgeted. The contractor was also able to complete $0.9 million more work than planned. See figure 10 for an illustration of cumulative cost and schedule variances for this task order. The program attributes its favorable fiscal year cost and schedule variances for Task Order 6 to the early progress made on interface requirements, hardware procurements, component drawings, and the prototype seeker’s architecture. Because detailed designs for the seeker are derived from models, the program is anticipating that some rework will be needed as the designs are developed, processed, and released. Although program officials are expecting some degradation in cumulative cost and schedule variances to occur, the program does not expect an overrun of the contract’s budgeted cost at completion. Based on the contractor’s performance to date, we predict that, at contract completion, the contractor will underrun costs by between $0.8 million and $2.5 million. The Sensors contractor’s performance during fiscal year 2007 resulted in a positive cost variance of $3.9 million and an unfavorable schedule variance of $8.8 million. Added to variances from prior years, the contractor is reporting cumulative positive cost and schedule variances of $24.1 million and $17.8 million, respectively. The contractor’s performance in 2007 suggests that at completion the contract will cost from $22.0 million to $46.8 million less than budgeted. The variances, depicted below in figure 11, represent the Sensors contractor’s cumulative cost and schedule performance over fiscal year 2007.
The contractor has reported favorable schedule and cost variances since the contract’s inception because the program was able to leverage the hardware design of the THAAD radar to reduce development timelines and it implemented manufacturing efficiencies to reduce manufacturing costs. However, during fiscal year 2007, the contractor experienced a negative schedule variance as it struggled to upgrade software expected to provide an increased capability for the FBX-T radar. After replanning a portion of its work in October 2006, the STSS contractor in fiscal year 2007 experienced an unfavorable cost variance of $67.7 million and a favorable schedule variance of $84.7 million. Combined with performance from earlier periods, the contractor is reporting cumulative negative cost and schedule variances of $231.4 million and $19.7 million, respectively. Figure 12 shows both cost and schedule trends during fiscal year 2007. During the fiscal year, the contractor was able to accomplish a significant amount of work ahead of schedule after a replan added additional time for planned work efforts. However, the contractor was unable to overcome the negative schedule variances incurred in prior years. Delays in hardware and software testing as well as integration issues contributed to fiscal year 2007’s negative cost variance. We did not estimate the cost of the STSS contract at completion. The contract includes not only the effort to develop and launch two demonstration satellites (the Block 2006 capability) but also effort that will benefit future blocks. Block 2006 work is about 86 percent complete, while work on future blocks is about 16 percent complete. The THAAD contractor overran its fiscal year 2007 budgeted costs by $91.1 million but accomplished $19.0 million more work than scheduled. Cumulatively, the contractor ended the year with an unfavorable cost variance of $195.2 million and a negative schedule variance of $9.1 million, as shown by figure 13. 
The THAAD prime contractor’s cost overrun of $91.1 million was primarily caused by technical problems related to the element’s missile, launcher, radar, and test components. Missile component cost overruns were caused by higher than anticipated costs in hardware fabrication, assembly, and support touch labor as well as subcontractor material costs for structures, propulsion, and other sub-assembly components. Additionally, design issues with the launcher’s missile round pallet and the electronics assembly that controls the launcher caused the contractor to experience higher than anticipated labor and material costs. More staff than planned was required to resolve hardware design issues in the radar’s prime power unit, causing the radar component to end the fiscal year with a negative cost variance. The contractor also experienced negative cost variances with the system test component because the Launch and Test Support Equipment required additional set-up time at the flight test range. THAAD’s prime contractor fared better in performing scheduled work. It was able to reduce its negative cumulative schedule variance over the course of the fiscal year because subcontracted missile items were delivered early and three flight tests were removed from the test program to accommodate target availability and budget constraints, allowing staff more time to work on current efforts. The contractor projects an overrun of $174 million at contract completion, while we estimate that the overrun could range from $227.2 million to $325.8 million. To achieve its projection, the contractor needs to complete $1.04 worth of work for every dollar spent. In contrast, during fiscal year 2007, the contractor achieved an average of $0.82 worth of work for each dollar spent. Therefore, it seems unlikely that the contractor will be able to achieve its estimate at completion. 
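The "$1.04 worth of work per dollar" comparison above is the standard to-complete performance index (TCPI) test. In the sketch below, the input figures are chosen only to reproduce the cited indices; they are not actual THAAD contract values.

```python
# To-complete performance index (TCPI) versus demonstrated CPI. Figures
# are illustrative, chosen to reproduce the indices cited in the report,
# not actual THAAD contract data.

def tcpi(bac, bcwp, acwp, eac):
    """Efficiency required on all remaining work to finish at a target EAC."""
    return (bac - bcwp) / (eac - acwp)

def cpi(bcwp, acwp):
    """Efficiency actually demonstrated to date."""
    return bcwp / acwp

required = tcpi(bac=1_000, bcwp=480, acwp=500, eac=1_000)  # 1.04
achieved = cpi(bcwp=82, acwp=100)                          # 0.82
```

When the efficiency required to meet the contractor's own estimate (1.04) far exceeds what it has demonstrated (0.82), the estimate at completion is unlikely to hold, which is the basis of the report's conclusion.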
Like other DOD programs, MDA has not always effectively used award fees to encourage contractors toward exceptional performance, but it is making efforts to revise its award fee policy to do so. Over the course of fiscal year 2007, the agency sometimes rolled over large percentages of award fee—in most cases for work that was moved to later periods but also for one contractor that exhibited poor performance. In addition, some award fee plans allow fee to be awarded to contractors for merely meeting the requirements of their contract. For two contractors, MDA awarded fee amounts that were linked to very good or outstanding work in the cost and/or program management performance elements. During their award fee periods, the contractors’ earned value data showed declines in cost and/or schedule variances, although there are several other factors considered when rating contract performance. However, in June 2007, MDA issued a revised draft of its award fee guide in an effort to more closely link the amount of award fees earned with the level of contractor performance. In an effort to encourage its defense contractors to perform in an innovative, efficient, and effective way in areas considered important to the development of the BMDS, MDA offers its contractors the opportunity to collectively earn billions of dollars through monetary incentives known as award fees. Award fees are intended to motivate exceptional performance in subjective areas such as technical ingenuity, cost, and schedule. Award fees are appropriate when contracting and program officials cannot devise predetermined objective targets applicable to cost, technical performance, or schedule. Currently, all 10 of the contracts we assessed for BMDS elements use award fees in some manner to incentivize their contractors’ performance. Each element’s contract has an award fee plan that identifies the performance areas to be evaluated and the methodology by which those areas will be assessed.
At the end of each period, the award fee evaluation board, made up of MDA personnel, program officials, and officials from key organizations knowledgeable about the award fee evaluation areas, begins its process. The board judges the contractor’s performance and recommends to a fee determining official the amount of fee to be paid. For all BMDS prime contracts we assessed, the fee determining official is the MDA Director. Table 1 provides a summary of the award fee process. GAO has found in the past that DOD has not always structured and implemented award fees in a way that effectively motivates contractors to improve performance and achieve acquisition outcomes. Specifically, GAO cited four issues with DOD’s award fee processes. GAO reported that in many evaluation periods when rollover—the process of moving unearned available award fee from one evaluation period to the next—was allowed, the contractor had the chance to earn almost the entire unearned fee, even in instances when the program was experiencing problems. Additionally, DOD guidance and federal acquisition regulations state that award fees should be used to motivate excellent contractor performance in key areas. However, GAO found that most DOD award fee contracts were paying a significant portion of the available fee from one evaluation period to the next for what award fee plans describe as “acceptable, average, expected, good, or satisfactory” performance. Furthermore, DOD paid billions of dollars in award fees to programs whose costs continued to grow and schedules increased by many months or years without delivering promised capabilities to the warfighter. GAO also found that some award fee criteria for DOD programs were focused on broad areas— such as how well the contractor was managing the program—instead of criteria directly linked with acquisition outcomes—such as meeting cost and schedule goals, and delivering desired capabilities. 
All of these DOD practices contribute to the difficulty in linking elements of contractor performance considered in award fee criteria to overall acquisition outcomes and may lessen the motivation for the contractor to strive for excellent performance. We assessed all award fee plans for the BMDS elements and fiscal year 2007 award fee letters for 9 of the 10 contractors. Our review revealed that during 2007 MDA experienced some of the same award fee problems that were prevalent in other DOD programs. MDA did not roll fee forward often, but when it did the contractor was, in one case, able to earn 100 percent of that fee. Also, MDA allowed another contractor to earn the unearned portion of fiscal year 2007 award fee in the same period through a separate pool composed of the unearned fee but tied to other performance areas. In two other instances, MDA awarded fee amounts that were linked to very good or outstanding work in the cost and/or program management performance element. However, during the award fee periods, earned value data indicates that these two contractors’ cost and/or schedule performance continued to decline. Although DOD guidance discourages use of earned value performance metrics in award fee criteria, MDA includes this as a factor in several of its award fee plans. MDA considers many factors in rating contractors’ performance and making award fee determinations, including considerations of earned value data that shows cost, schedule, and technical trends. Table 9 provides the award fee MDA made available to its contractors, as well as the fee earned during fiscal year 2007. MDA is awarding some BMDS contractors a large percentage of the fees rolled over from a prior period. The agency’s award fee plans allow the fee determining official, at his discretion, to roll over all fee that is not awarded during one period to a future period.
For example, in accordance with MDA’s award fee policy, the fee determining official may consider award fee rollover when a slipped schedule moves an award fee event to another period, when the fee determining official desires to add greater incentive to an upcoming period, or when the contractor improves performance to such an extent that it makes up for previous shortfalls. During fiscal year 2007, MDA rolled fee forward for 3 of the 8 contractors for which award fee letters were available. Table 10 presents a synopsis of this data. As noted in table 10, MDA rolled over a large percentage of the fee that was not earned by the THAAD contractor during fiscal year 2007. During its last award fee period in fiscal year 2007, the THAAD contractor did not earn any of the fee associated with cost management. The award fee letter cited unfavorable cost variances and a growing variance projected at completion of the contract as the reasons for not awarding any of the fee for cost management. However, the fee determining official decided to roll 100 percent of that portion of the unearned fee to a rollover pool tied to minimizing cost overruns. Fee will be awarded from this pool at the end of the contract. By rolling the fee forward, MDA provided the contractor an additional opportunity to earn fee from prior periods. Rolling over fee in this instance may have failed to motivate the contractor to meet or exceed expectations. The award fee plan for the GMD contract allowed the contractor not only to roll over fee but also to earn all unearned fee in the same period. During the fiscal year, the GMD contractor earned 97.7 percent of the $330 million in award fees tied to performance areas outlined in the award fee plan. However, the award fee plan made provisions for the contractor to earn the unearned $7.5 million by creating a separate pool funded solely from this unearned portion and awarding the fee for performance in other areas.
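The GMD fee figures above can be cross-checked with simple fee arithmetic; the percentage is as reported, and the small residual difference reflects rounding of the reported 97.7 percent.

```python
# Cross-checking the GMD award fee figures: 97.7 percent of the $330
# million available fee was earned, leaving the unearned remainder that
# funded the separate same-period pool. Figures in $M, as reported.

available = 330.0
earned = available * 0.977      # roughly 322.4
unearned = available - earned   # roughly 7.6, consistent (after
                                # rounding) with the ~$7.5M pool cited
```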
In this instance, the contractor did not have to wait to earn rolled over fees in later award fee periods—it was able to receive the unearned portion in the same period despite not meeting all of the criteria for its original objectives. GMD officials told us that this fee incentivized the contractor to achieve added objectives. In contrast, the fee determining official handled rollover of fee on the ABL contract in accordance with DOD’s new policy. According to ABL’s award fee plan, MDA was to base its 2007 award fee decision primarily on the outcome of three knowledge points. During this period, the contractor completed two of the knowledge points, but could not complete a third. To encourage the contractor to complete the remaining knowledge point in a timely manner, the fee determining official rolled over only 35 percent of the fee available for the event. All of the award fee plans we assessed allowed MDA to award fees for satisfactory ratings—that is, work considered to meet most of the requirements of the contract. Some award fee plans even allow fee for marginal performance, or performance considered to meet only some of the requirements of the contract. When fee is paid for performance at the minimum standards or requirements of the contract, the intent of award fees, which is to motivate excellence above and beyond basic contract requirements, is lost. While the definitions of satisfactory or marginal differed from element to element, the award fee plans allotted more than 50 percent of available award fee to contractors performing at these levels. According to the award fee plans, MDA allows between 51 and 65 percent of available fee for work rated as marginal for the C2BMC and KEI contractors and no less than 66 percent of available fee for satisfactory performance by the ABL contractor.
MDA’s practice of allowing more than 50 percent of available fee for satisfactory or, even, marginal performance illustrates why DOD in April 2007 directed that no more than 50 percent of available fee be given for satisfactory performance on all contract solicitations commencing after August 1, 2007. Earned value is one of several factors that according to the award fee plans for the ABL and Aegis BMD Weapon System contractors will be considered in rating the contractors’ cost and program management performance. During a good part of fiscal year 2007, earned value data for both contractors showed that they were overrunning their fiscal year cost budgets. In addition, the ABL contractor was not completing all scheduled work. Even considering these variances, MDA presented the contractors with a significant portion of the award fee specifically tied to cost and/or program management. In contrast, the THAAD contractor also experienced downward trends in its cost variance during its last award fee period in fiscal year 2007, but was not paid any of the award fee tied to cost management. The ABL and Aegis BMD Weapon System contractors received a large percentage of the 2007 award fee available to them for the cost and/or program management element. According to ABL’s award fee plan, one of several factors that is considered in rating the contractor’s performance as “very good” is whether earned value data indicates that there are few unfavorable cost, schedule, and/or technical variances or trends. During the award fee period that ran from February 2006 to January 2007, MDA rated the contractor’s cost and program management performance as very good and awarded 88 percent of the fee available for these areas of performance. Yet, earned value data indicates that the contractor overran its budget by more than $57 million and did not complete $11 million of planned work. 
Similarly, the Aegis BMD Weapon System contractor was to be rated in one element of its award fee pool as to how effectively it managed its contract’s cost. Like ABL’s award fee plan, the weapon system contractor’s award fee plan directs that earned value data be one of the factors considered in evaluating cost management. During the fee period that ran from October 2006 through March 2007, MDA rated the contractor’s performance in this area as outstanding and awarded the contractor 100 percent of the fee tied to cost management. Earned value data for this time period indicates that the contractor overran its budget by more than $6 million. MDA did not provide us with more detailed information as to other factors that may have influenced its decision as to the amount of fee awarded to the ABL and Aegis BMD contractors. In another instance, MDA more closely linked earned award fee to contractor performance. The THAAD contractor continued to overrun its 2007 cost budget, and was not awarded any fee tied to the cost management element during its last award fee period in fiscal year 2007. The award fee decision letter cites several examples of the contractor’s poor cost performance, including cost overruns and an increased projected cost variance at contract completion. These and other cost management issues led the fee determining official to withhold the $9.8 million to be awarded on the basis of cost management. MDA has made efforts to comply with DOD policy regarding some of GAO’s recommendations and responded to the DOD-issued guidance by releasing its own revised award fee policy in February 2007. According to the policy, every contract’s award fee plan is directed to include a focus on developing specific award fee criteria for each element, an emphasis on rewarding results rather than effort or activity, and an incentive to meet or exceed MDA requirements.
Additionally, the directive calls for using the Award Fee Advisory Board, established to make award fee recommendations to the fee determining official, to biannually review and report to the Director on the consistency between MDA’s award fees and DOD’s Contractor Performance Assessment Report—which provides a record, both positive and negative, on a given contract for a specific period of time. MDA’s directive also requires program managers to implement MDA’s new award fee policy at the earliest logical point, which is normally the beginning of the next award fee period. MDA is currently constructing a revised draft of its award fee guide that addresses the rollover and rating scale issues from DOD’s March 2006 and April 2007 memorandums. In the latest draft, MDA limits rollover to exceptional cases and adopts the Under Secretary’s limitation of making only a portion of award fee available for rollover. MDA’s latest draft of the guide also makes use of the latest ratings scale, referencing the Under Secretary’s April 2007 direction, and applies the usage of the new scales to contract solicitations beginning after July 31, 2007. MDA sometimes finds that events such as funding changes, technology advances, and concurrent development and deployment of the BMDS arise that make changes to the contract’s provisions or terms necessary. MDA describes contract changes that are within the scope of the contract but whose final price, or cost and fee, the agency and its contractor have not agreed upon as unpriced changes. MDA has followed the FAR in determining how quickly the agency should reach agreement on such unpriced changes’ price, or cost and fee. According to the FAR, an agreement should be reached before work begins if it can be done without adversely affecting the interest of the government. 
If a significant cost increase could result from the unpriced change, and time does not permit negotiation of a price, the FAR requires the negotiation of a maximum price unless it is impractical to do so. In 2007, MDA began applying tighter limits on definitization of price. MDA also issues unpriced task orders. MDA uses this term to describe task orders issued under established contract ordering provisions, such as an indefinite delivery/indefinite quantity contract, for which a definitive order price has not yet been agreed upon. MDA has followed the FAR requirements that task orders placed under an indefinite delivery/indefinite quantity contract must contain, at least, an estimated cost or fee. During Block 2006—January 1, 2006 through December 31, 2007—MDA authorized 137 unpriced changes and task orders with a value of more than $6 billion. Consistent with the FAR requirements noted above, of the total 137 unpriced changes and unpriced task orders, 61 percent—totaling $5.9 billion—were not priced for more than 180 days. Agreement on the price of several was not reached for more than a year, and agreement on the price of one was not reached for more than two and a half years. Table 11 below shows the value of unpriced changes and task orders issued on behalf of each BMDS element and the number of days after the contractor was authorized to proceed with the work before MDA and its contractor agreed to a price, or cost and fee, for the work. Realizing that unpriced changes and unpriced task orders may greatly reduce the government’s negotiation leverage and typically result in higher cost and fee for the overall effort, MDA, in February 2007, issued new contract guidance that required tighter limits on the timeframes for reaching agreement on price, or cost and fee. The agency now applies some of the Defense Federal Acquisition Regulation Supplement guidelines established for undefinitized contract actions to unpriced changes and unpriced task orders.
Undefinitized contract actions are different from MDA’s unpriced changes or unpriced task orders in that they are contract actions on which performance is begun before agreement on all contract terms, including price, or cost and fee, is reached. A contract modification or change will not be considered an undefinitized contract action if it is within the scope and under the terms of the contract. MDA has elected to follow some of the stricter undefinitized contract action guidelines because the agency believes the guidelines will lead to better cost results. Similar to the undefinitized contract action guidelines, the agency’s new guidelines require that MDA’s unpriced changes and unpriced task orders be definitized within 180 days, that the contractor be given a dollar value that it cannot exceed until price agreement is reached, and that approval for the unpriced change or task order be obtained in advance. MDA’s new policy also, to the maximum extent practicable, limits the amount of funds that a contractor may be given approval to spend on the work before agreement is reached on price to less than 50 percent of the work’s expected price. MDA officials maintain that support contracts provide necessary personnel and are instrumental in developing the BMDS quickly. The agency contracts with 45 different companies that provide the majority of the personnel who perform a variety of tasks. Table 12 illustrates the broad categories of job functions that MDA support contractors carry out. Last year we reported that MDA had 8,186 approved personnel positions. This number has not changed appreciably in the last year. According to MDA’s manpower database, about 8,748 personnel positions—not counting prime contractors—currently support the missile defense program. 
These positions are filled by government civilian and military employees, contract support employees, employees of federally funded research and development centers (FFRDC), researchers in university and affiliated research centers, as well as a small number of executives on loan from other organizations. MDA funds around 95 percent of the total 8,748 positions through its research and development appropriation. Of this 95 percent, 2,450 positions, or about 29 percent, are set aside for government civilian personnel. Another 60 percent, or 5,005 positions, are allotted for support contractors. The remaining 11 percent are positions either being filled, or expected to be filled, by employees of FFRDCs and university and affiliated research centers that are on contract or under other types of agreements to perform missile defense tasks. MDA officials noted that nearly 500 of the 8,748 personnel positions available were currently vacant. Table 13 shows the staffing levels within the BMDS elements. Support contractors in MDA program and functional offices may perform tasks that closely support those tasks described in the FAR as inherently governmental. According to the FAR, tasks such as determining agency policy and approving requirements for prime contracts should only be performed by government personnel. Contract personnel who, for example, develop statements of work, support acquisition planning, or assist in budget preparation are carrying out tasks that may closely support tasks meeting this definition. Having support contractors perform these tasks creates a risk that the contractors could influence the government’s control over and accountability for decisions.
MDA officials told us that when support contractors perform tasks that closely support those reserved for government employees, the agency mitigates its risk by having knowledgeable government personnel provide regular oversight or final approval of the work to ensure that the data being generated is reasonable. In the tables below we provide more information comparing the cost of purchasing THAAD and Aegis BMD assets incrementally versus fully funding the assets. Table 14 presents MDA’s incremental funding plans for THAAD fire units 3 and 4, 48 Aegis BMD (SM-3) missiles to be produced during Blocks 2012 and 2014, and 19 shipsets intended to improve the performance of Aegis BMD ships. Tables 15 through 17 present our analysis of the cost of purchasing these same assets with procurement funds and following Congress’ full-funding policy. To examine the progress MDA made in fiscal year 2007 toward its Block 2006 goals, we examined the accomplishments of nine BMDS elements. The elements included in our review collectively accounted for 77 percent of MDA’s fiscal year 2007 research and development budget request. We evaluated each element’s progress in fiscal year 2007 toward Block 2006 schedule, testing, performance, and cost goals. In assessing each element we examined Program Execution Reviews, test plans and reports, production plans, Contract Performance Reports, and MDA briefing charts. We developed data collection instruments that were completed by MDA and each element program office. The instruments gathered detailed information on completed program activities including tests, prime contracts, and estimates of element performance. To understand performance issues, we talked with officials from MDA’s Deputy for Engineering and Program Director for Targets and Countermeasures, each element program office, as well as the office of DOD’s Director, Operational Test and Evaluation.
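The incremental-versus-full-funding comparison in tables 14 through 17 rests on restating each year's planned payment in base-year dollars. A minimal sketch of that arithmetic is below; the payment stream, fully funded price, and 5 percent discount rate are all hypothetical, since the report does not publish the rate or asset prices at this level of detail.

```python
# Present value comparison of incremental versus full funding.
# All dollar figures and the discount rate are hypothetical.

def present_value(payments, discount_rate):
    """Restate a stream of planned payments into base-year dollars.
    payments[i] is the amount planned for base year + i."""
    return sum(p / (1 + discount_rate) ** i for i, p in enumerate(payments))

# Hypothetical incremental plan: $100 million per year for 3 years,
# versus a hypothetical fully funded price paid entirely in the base year.
incremental_pv = present_value([100.0, 100.0, 100.0], 0.05)
fully_funded_price = 290.0

# Whichever figure is lower is the cheaper approach in base-year terms.
difference = fully_funded_price - incremental_pv
```

With these assumed numbers the incremental plan restates to roughly $285.9 million in base-year dollars; the report's actual finding depends on MDA's real funding plans and the discount rate applied.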
To assess each element’s progress toward its cost goals, we reviewed Contract Performance Reports and, when available, the Defense Contract Management Agency’s analyses of these reports. We applied established earned value management techniques to data captured in Contract Performance Reports to determine trends and used established earned value management formulas to project the likely costs of prime contracts at completion. We also interviewed MDA officials within the Deputy for Acquisition Management office to gather detailed information regarding BMDS prime contracts. We reviewed 10 prime contracts for the 9 BMDS elements and also examined fiscal year 2007 award fee plans, award fee letters, and gathered data on the number of and policy for unpriced changes and unpriced task orders. We became familiar with sections of the Federal Acquisition Regulation and Defense Federal Acquisition Regulation Supplement dealing with contract type, contract award fees, and undefinitized contract actions. To develop data on support contractors, we held discussions with officials in MDA’s Office of Business Operations. We also collected data from MDA’s Pride database on the numbers and types of employees supporting MDA operations. In assessing MDA’s accountability, transparency, and oversight, we interviewed officials from the Office of the Under Secretary of Defense’s Office for Acquisition, Technology, and Logistics and Joint Staff officials. We also examined a Congressional Research Service report, U.S. Code, DOD acquisition system policy, the MDEB Charter, and various MDA documents related to the agency’s new block structure. 
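The trend analysis and completion-cost projections described above rest on a handful of standard earned value relationships. A minimal sketch with hypothetical contract figures follows; the report does not state which formula variants GAO applied, so these are the textbook EVM definitions.

```python
# Standard earned value management (EVM) formulas; figures are hypothetical.

def evm_metrics(bcws, bcwp, acwp, bac):
    """bcws: budgeted cost of work scheduled
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed
    bac:  budget at completion (total contract budget)"""
    cost_variance = bcwp - acwp       # negative means a cost overrun
    schedule_variance = bcwp - bcws   # negative means behind schedule
    cpi = bcwp / acwp                 # cost performance index
    eac = bac / cpi                   # projected cost at contract completion
    return cost_variance, schedule_variance, cpi, eac

# Hypothetical contract figures, in millions of dollars:
cv, sv, cpi, eac = evm_metrics(bcws=120.0, bcwp=110.0, acwp=125.0, bac=500.0)
# cv = -15.0 and sv = -10.0 signal a cost overrun and schedule slippage;
# an eac near 568.2 projects the 500.0 budget to be exceeded at completion.
```

Unfavorable (negative) cost and schedule variances of this kind are what the report cites when contrasting award fees paid with the contractors' earned value trends.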
In determining whether MDA would save money if it fully funded THAAD and Aegis BMD assets rather than funding them incrementally, we used present value techniques to restate dollars that MDA planned to expend over a number of years to the equivalent number of dollars that would be needed if MDA fully funded the assets in the fiscal year that incremental funding was to begin. We also considered whether MDA would need to acquire long lead items for the assets and stated those dollars in the base year that their purchase would be required. We then compared the total cost of incrementally funding the assets, as shown in MDA’s funding plans, to the fully funded cost that our methodology produced. To ensure that MDA-generated data used in our assessment are reliable, we evaluated the agency’s management control processes. We discussed these processes with MDA senior management. In addition, we confirmed the accuracy of MDA-generated data with multiple sources within MDA and, when possible, with independent experts. To assess the validity and reliability of prime contractors’ earned value management systems and reports, we interviewed officials and analyzed audit reports prepared by the Defense Contract Audit Agency. Finally, we assessed MDA’s internal accounting and administrative management controls by reviewing MDA’s Federal Manager’s Financial Integrity Report for Fiscal Years 2003, 2004, 2005, 2006, and 2007. Our work was performed primarily at MDA headquarters in Arlington, Virginia. At this location, we met with officials from the Aegis Ballistic Missile Defense Program Office; Airborne Laser Program Office; Command, Control, Battle Management, and Communications Program Office; BMDS Targets Office, and MDA’s Agency Operations Office. We also met with DOD’s Office of the Director, Operational Test and Evaluation and the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics in Washington, DC. 
In addition, in Huntsville, Alabama, we met with officials from the Ground-based Midcourse Defense Program Office, the Terminal High Altitude Area Defense Project Office, the Kinetic Energy Interceptors Program Office, the Multiple Kill Vehicle Program Office, and BMDS Tests Office. We also met with Space Tracking and Surveillance System officials in Los Angeles, California. We conducted this performance audit from May 2007 to March 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Barbara Haynes, Assistant Director; LaTonya Miller; Sigrid McGinty; Michele R. Williamson; Michael Hesse; Steven Stern; Meredith Allen Kimmett; Kenneth E. Patton; and Alyssa Weir made key contributions to this report.

By law, GAO annually assesses the Missile Defense Agency's (MDA) progress in developing and fielding a Ballistic Missile Defense System (BMDS). Funded at $8 billion to nearly $10 billion per year, it is the largest research and development program in the Department of Defense (DOD). The program has been managed in 2-year increments, known as blocks. Block 2006, the second BMDS block, was completed in December 2007. GAO assessed MDA's progress in (1) meeting Block 2006 goals for fielding assets, completing work within estimated cost, conducting tests, and demonstrating the performance of the overall system in the field, and (2) making managerial improvements to transparency, accountability, and oversight. In conducting the assessment, GAO reviewed the assets fielded; contractor cost, schedule, and performance; and tests completed during 2007.
GAO also reviewed pertinent sections of the U.S. Code, acquisition policy, and the charter of a new missile defense board. MDA made progress in developing and fielding the BMDS during Block 2006 but fell short of meeting its original goals. Specifically, it fielded additional assets such as land-based interceptors and sea-based missiles and upgraded other assets, including Aegis BMD-equipped ships. It also met most test objectives, with a number of successful tests conducted. As a result, fielded capability has increased. On the other hand, it is difficult to assess how well BMDS is progressing relative to the funds it has received because fewer assets were fielded than originally planned, the cost of the block increased by at least $1 billion, some flight tests were deferred, and the performance of the fielded system remains unverified. In particular, GAO could not determine the full cost of Block 2006 because MDA continued to defer budgeted work into the future, where it is no longer counted as a Block 2006 cost. Also making cost difficult to assess is a work planning method--referred to as level of effort--used by contractors that does not link time and money with what is produced. When not appropriately used, level-of-effort planning can obscure work accomplished, portending additional cost in the future. MDA is working to minimize the use of this planning method--a needed step as contractors overran their fiscal year 2007 budgets. Performance of the fielded system is as yet not verifiable because too few tests have been conducted to validate the models and simulations that predict BMDS performance. Moreover, the tests that are done do not provide enough information for DOD's independent test organization to fully assess the BMDS' suitability and effectiveness. GAO has previously reported that MDA has been given unprecedented funding and decision-making flexibility. 
While this flexibility has expedited BMDS fielding, it has also made MDA less accountable and transparent in its decisions than other major programs, making oversight more challenging. MDA, with direction from Congress, has taken several steps to address these concerns. MDA implemented a new way of defining blocks--its construct for developing and fielding BMDS increments--that should make costs more transparent. For example, under the newly-defined blocks, MDA will no longer defer work from one block to another. Accountability should also be improved as MDA will, for the first time, estimate unit costs for selected assets and report variances from those estimates. DOD also chartered a new board with more BMDS oversight responsibility than its predecessor, although it does not have approval authority for some key decisions made by MDA. Finally, MDA will begin buying certain assets with procurement funds like other programs. This will benefit transparency and accountability, because procurement funding generally requires that assets be fully paid for in the year they are bought. Previously, MDA, with Congressional authorization, was able to pay for assets incrementally over several years. Additional steps could be taken to further improve oversight. For example, MDA has not yet estimated the total cost of a block, and therefore, cannot have its costs independently verified--actions required of other programs to inform decisions about affordability and investment choices. However, MDA does plan to estimate block costs and have them verified at some future date. |
The Forest Service and Interior collectively manage about 700 million acres of federal land, much of which is considered to be at high risk of fire. Federal researchers estimate that from 90 million to 200 million acres of federal lands in the contiguous United States are at an elevated risk of fire because of abnormally dense accumulations of vegetation, and that these conditions also exist on many nonfederal lands. Addressing this fire risk has become a priority for the federal government, which in recent years has significantly increased funding for fuels reduction. Fuels reduction is generally done through prescribed burning, in which fires are deliberately lit in order to burn excess vegetation, and mechanical treatments, in which mechanical equipment is used to cut vegetation. Although prescribed burning is generally less expensive on a per-acre basis than mechanical treatment, prescribed fire may not always be the most appropriate method for accomplishing land management objectives—and in many locations it is not an option, because of concerns about smoke pollution, for example, or because vegetation is so dense that agency officials fear a prescribed fire could escape and burn out of control. In such situations, mechanical treatments are required, generating large amounts of wood—particularly small-diameter trees, limbs, brush, and other material that serve as fuel for wildland fires. Woody biomass can be used in many ways. Small logs can be peeled and used as fence posts, or can be joined together with specialized hardware to construct pole-frame buildings. Trees also can be milled into structural lumber or made into other wood products, such as furniture, flooring, and paneling. 
Woody biomass also can be chipped for use in paper pulp production and for other uses—for example, a New Mexico company combines juniper chips with plastic to create a composite material used to make road signs—and can be converted into other products such as ethanol and adhesives. Finally, woody biomass can be chipped or ground for energy production in power plants and other applications. Citing biomass’s potential to serve as a source of electricity, fuel, chemicals, and other materials, the President and the Congress have encouraged federal activities regarding biomass utilization—but until recently, woody biomass received relatively little emphasis. Major congressional direction includes the Biomass Research and Development Act of 2000, the Farm Security and Rural Investment Act of 2002, the Healthy Forests Restoration Act of 2003, and the American Jobs Creation Act of 2004. Utilization of woody biomass also is emphasized in the federal government’s National Fire Plan, a strategy for planning and implementing agency activities related to wildland fire management. For example, a National Fire Plan strategy document cites biomass utilization as one of its guiding principles, recommending that the agencies “employ all appropriate means to stimulate industries that will utilize small-diameter woody material resulting from hazardous fuel reduction activities.” Federal agencies also are carrying out research concerning the utilization of small-diameter wood products as part of the Healthy Forests Initiative, the administration’s initiative for wildland fire prevention. Most of the federal government’s woody biomass utilization efforts are being undertaken by USDA, DOE, and Interior. While some activities are performed jointly, each department also conducts its own activities, which generally involve grants for small-scale woody biomass projects; research on woody biomass uses; and education, outreach, and technical assistance aimed at woody biomass users. 
USDA, DOE, and Interior have undertaken a number of joint efforts related to woody biomass. In June 2003, the three departments signed a memorandum of understanding on woody biomass utilization, and the departments sponsored a 3-day conference on woody biomass in January 2004. The departments also have established an interagency Woody Biomass Utilization Group, which meets quarterly to discuss relevant developments and to coordinate departmental efforts. Another interdepartmental collaboration effort is the Joint Biomass Research and Development Initiative, a grant program conducted by USDA and DOE and authorized under the Biomass Research and Development Act of 2000. The program provides funds for research on biobased products. DOE also has collaborated with both USDA and BLM on assessment of biomass availability, while USDA and Interior have entered into a cooperative agreement with the National Association of Conservation Districts to promote woody biomass utilization. USDA, DOE, and Interior also participate in joint activities at the field level. For example, DOE’s National Renewable Energy Laboratory (NREL) and the Forest Service have collaborated in developing and demonstrating small power generators that use woody biomass for fuel. The Forest Service also collaborates with Interior in funding and awarding grants under the Fuels Utilization and Marketing program, which targets woody biomass utilization efforts in the Pacific Northwest. The agencies also collaborate with state and local governments to promote the use of woody biomass—for example, the Forest Service, NREL, and BLM entered into a memorandum of understanding with Jefferson County, Colorado, to study the feasibility of developing an electricity-generating facility that would use woody biomass. Most of USDA’s woody biomass utilization activities are undertaken by the Forest Service and involve grants, research and development, and education, outreach, and technical assistance. 
The Forest Service provides grants through its Economic Action Programs, created to help rural communities and businesses dependent on natural resources become sustainable and self-sufficient. The Forest Service also has created a grant program in response to a provision in the Consolidated Appropriations Act for Fiscal Year 2005, which authorized up to $5 million for grants to create incentives for increased use of biomass from national forest lands. Two other USDA agencies—the Cooperative State Research, Education and Extension Service (CSREES) and USDA Rural Development—maintain programs that could include woody biomass utilization activities. CSREES oversees the Biobased Products and Bioenergy Production Research grant program and the McIntire-Stennis grant program, which provides grants to states for research into forestry issues under the McIntire-Stennis Act of 1962. Within USDA Rural Development, the Rural Business-Cooperative Service oversees a grant program emphasizing renewable energy systems and energy efficiency among rural small businesses, farmers, and ranchers, and the Rural Utilities Service maintains a loan program for renewable energy projects. Forest Service researchers are conducting research into a variety of woody biomass issues. Researchers have conducted assessments of the woody biomass potentially available through land management projects and have developed models of the costs and revenues associated with thinning projects. Researchers also are studying the economics of woody biomass use in other ways; one researcher, for example, is beginning an assessment of the economic, environmental, and energy-related impacts of using woody biomass for power generation.
The Forest Service also conducts extensive research, primarily at its Forest Products Laboratory, into uses for woody biomass, including wood-plastic composites and water filtration systems that use woody biomass fibers, as well as less expensive ways of converting woody biomass to liquid fuels. In addition, the Forest Service conducts extensive education, outreach, and technical assistance activities. Much of this activity is conducted by the Technology Marketing Unit (TMU) at the Forest Products Laboratory, which provides woody biomass users with technical assistance and expertise in wood products utilization and marketing. Forest Service field office staff also provide education, outreach, and technical assistance, and each Forest Service region has an Economic Action Program coordinator who has involvement in woody biomass issues. For example, one such coordinator organized a “Sawmill Improvement Short Course” designed to provide information to small-sawmill owners regarding how to better handle and use small-diameter material. The Forest Service also has partnerships with state and regional entities that provide a link between scientific and institutional knowledge and local users. Most of DOE’s woody biomass activities are overseen by its Office of the Biomass Program and focus primarily on research and development, although the department does have some grant and technical assistance activities. DOE’s research and development activities generally address the conversion of biomass, including woody biomass, to liquid fuels, power, chemicals, or heat. Much of this work is carried out by NREL, where DOE recently opened the Biomass Surface Characterization Laboratory. DOE also supports research into woody biomass through partnerships with industry and academia. Program management activities for these partnerships are conducted by DOE headquarters, with project management provided by DOE field offices. 
In addition to its research activities, DOE provides information and guidance to industry, stakeholder groups, and users through presentations, lectures, and DOE’s Web site, according to DOE officials. DOE also provides outreach and technical assistance through its State and Regional Partnership, Federal Energy Management Program (FEMP), and Tribal Energy Program. FEMP provides assistance to federal agencies seeking to implement renewable energy and energy efficiency projects, while the Tribal Energy Program provides technical assistance to tribes, including strategic planning and energy options analysis. DOE’s grant programs include (1) the National Biomass State and Regional Partnership, which provides grants to states for biomass-related activities through five regional partners; and (2) the State Energy Program, which provides grants to states to design and carry out their own renewable energy and energy efficiency programs. In addition, DOE’s Tribal Energy Program provides funds to promote energy sufficiency, economic development, and employment on tribal lands through renewable energy and energy efficiency technologies. Interior’s activities include providing education and outreach and conducting grant programs, but they do not include research into woody biomass utilization issues. Four Interior agencies—BLM, the Bureau of Indian Affairs (BIA), Fish and Wildlife Service (FWS), and National Park Service (NPS)—conduct activities related to woody biomass. These agencies conduct education, outreach, and technical assistance, but not to the same degree as the Forest Service. For example, BIA provides technical assistance to tribes seeking to implement renewable energy projects, and while FWS and NPS conduct relatively few woody biomass utilization activities, in some cases the agencies will work to find a woody biomass user nearby if a market exists for the material. 
Interior plans to expand its outreach efforts by using the National Association of Conservation Districts, with which it signed a cooperative agreement, to conduct outreach activities related to woody biomass. And while Interior’s grant programs generally do not target woody biomass, BIA has provided some grants to Indian tribes, including a 2004 grant to the Confederated Tribes of the Warm Springs Reservation in Oregon to conduct a feasibility study for updating and expanding a woody biomass-fueled power plant. Several other federal agencies are engaged in limited woody biomass activities through their advisory or research activities. The Environmental Protection Agency provides technical assistance, through its Combined Heat and Power Partnership, to power plants that generate combined heat and power from various sources, including woody biomass. Three other agencies—the National Science Foundation, Office of Science and Technology Policy, and Office of the Federal Environmental Executive—also are involved in woody biomass activities through their membership on the Biomass Research and Development Board, which is responsible for coordinating federal activities for the purpose of promoting the use of biobased industrial products. Two groups serve as formal vehicles for coordinating federal agency activities related to woody biomass utilization. One, the Woody Biomass Utilization Group, is a multiagency group that meets quarterly on woody biomass utilization issues and is open to all national, regional, and field-level staff across numerous agencies. The other, the Biomass Research and Development Board, is responsible for coordinating federal activities to promote the use of biobased industrial products. The board consists of representatives from USDA, DOE, and Interior, as well as EPA, the National Science Foundation, Office of the Federal Environmental Executive, and Office of Science and Technology Policy. 
When discussing coordination among agencies, however, agency officials more frequently cited using informal mechanisms for coordination—through telephone discussions, e-mails, participation in conferences, and other means— rather than the formal groups described above. Several officials told us that informal communication among networks of individuals was essential to coordination among agencies. Officials also described other forms of coordination, including joint review teams for interagency grant programs and multiagency working groups examining woody biomass at the regional or state level. The Forest Service—the USDA agency with the most woody biomass activities—developed a woody biomass policy in January 2005, and, in March 2005, in response to a recommendation in our draft report, the agency assigned responsibility for overseeing and coordinating its woody biomass activities to an official within the Forest Service’s Forest Management branch. In addition, the agency has created the Biomass Utilization Steering Committee, consisting of the staff directors of various Forest Service branches, to provide direction and support for agency biomass utilization. DOE coordinates its woody biomass utilization activities through its Office of Energy Efficiency and Renewable Energy. Within this office, the Office of the Biomass Program directs biomass research at DOE national laboratories and contract research organizations, while the Federal Energy Management Program and the Tribal Energy Program conduct a small number of other woody biomass activities. Interior has appointed a single official to oversee its woody biomass activities and is operating under a woody biomass policy adopting the principles of the June 2003 memorandum of understanding among USDA, DOE, and Interior. 
Interior also has appointed a Renewable Energy Ombudsman to coordinate all of the department’s renewable energy activities, including those related to woody biomass, and has worked with its land management agencies to develop woody biomass policies allowing service and timber contractors to remove woody biomass where ecologically appropriate. Similarly, BLM has appointed a single official to oversee woody biomass efforts and has developed a woody biomass utilization strategy to guide its activities that contains overall goals related to increasing the utilization of biomass from treatments on BLM lands. Agency officials cited two principal obstacles to increasing the use of woody biomass: the difficulty in using woody biomass cost-effectively and the lack of a reliable supply of the material. Agency activities are generally targeted toward the obstacles identified by agency officials, but some officials told us that their agencies are limited in their ability to fully address these obstacles and that additional steps beyond the agencies’ authority to implement are needed. However, not all agree that such steps are appropriate. The obstacle most commonly cited by officials we spoke with is the difficulty of using woody biomass cost-effectively. Officials told us the products that can be created from woody biomass—whether wood products, liquid fuels, or energy—often do not generate sufficient income to overcome the costs of acquiring and processing the raw material. One factor contributing to the difficulty in using woody biomass cost-effectively is the cost incurred in harvesting and transporting woody biomass. Numerous officials told us that even if cost-effective means of using woody biomass were found, the lack of a reliable supply of woody biomass from federal lands presents an obstacle because business owners or investors will not establish businesses without assurances of a dependable supply of material. 
Officials identified several factors contributing to the lack of a reliable supply, including the lack of widely available long-term contracts for forest products, environmental groups’ opposition to federal projects, and the shortage of agency staff to conduct activities. A few officials cited internal barriers that hamper agency effectiveness in promoting woody biomass utilization, including limited agency expertise related to woody biomass and limited agency commitment to the issue. A variety of other obstacles were noted as well, including the lack of a local infrastructure for handling woody biomass, consisting of loggers, mills, and equipment capable of treating small-diameter material. Agency activities related to woody biomass were generally aimed at overcoming the obstacles agency officials identified, including many aimed at overcoming economic obstacles. For example, Forest Service staff have worked with potential users of woody biomass to develop products whose value is sufficient to overcome the costs of harvesting and transporting the material; Economic Action Program coordinators have worked with potential woody biomass users to overcome economic obstacles; and Forest Products Laboratory researchers are working with NREL to make wood-to-ethanol conversion more cost-effective. Despite ongoing agency activities, however, numerous officials believe that additional steps beyond the agencies’ authority are needed to fully address obstacles to woody biomass utilization. Among these steps are subsidies and tax credits, which officials told us are necessary to develop a market for woody biomass but which are beyond the agencies’ authority. According to several officials, the obstacles to using woody biomass cost-effectively are simply too great to overcome by using the tools—grants, outreach and education, and so forth—currently at the agencies’ disposal. 
One official stated that “in many areas, the economic return from smaller-diameter trees is less than production costs. Without some form of market intervention, such as tax incentives or other forms of subsidy, there is little short-term opportunity to increase utilization of such material.” Some officials stated that subsidies have the potential to create an important benefit—reduced fire risk through hazardous fuels reduction—if they promote additional thinning activities by stimulating the woody biomass market. Rather than incentives or subsidies, some officials noted the potential for increased use of woody biomass through state requirements—known as renewable portfolio standards—that utilities procure or generate a portion of their electricity by using renewable resources, which could include woody biomass. But not all officials believe these additional steps are efficient or appropriate. One official told us that, although he supports these activities, tax incentives and subsidies would create enormous administrative and monitoring requirements. Another official stated that although increased subsidies could address obstacles to woody biomass utilization, he does not believe they should be implemented, preferring instead to allow research and development efforts and market forces to establish the extent of woody biomass utilization. Further, not all agree that the market for woody biomass should be expanded. One agency official told us he is concerned that developing a market for woody biomass could result in overuse of mechanical treatment (rather than prescribed burning) as the market begins to drive the preferred treatment, and representatives of one national environmental group told us that relying on woody biomass as a renewable energy source will lead to overthinning, as demand exceeds the supply that is generated through responsible thinning. 
The amount of woody biomass resulting from increased thinning activities could be substantial, adding importance to the search for ways to use the material cost-effectively rather than simply disposing of it. However, the use of woody biomass will become commonplace only when doing so becomes economically advantageous for users—whether small forest businesses or large utilities. Federal agencies are targeting their activities toward overcoming economic and other obstacles, but some agency officials believe that these efforts alone will not be sufficient to stimulate a market that can accommodate the vast quantities of material expected—and that additional action may be necessary at the federal and state levels. Nevertheless, we believe the agencies will continue to play an important role in stimulating woody biomass use. The Forest Service took a significant step recently by designating an agency lead for woody biomass activities, responding to a need we had identified in our draft report and enhancing the agency’s ability to ensure that its multiple activities contribute to its overall objectives. Given the magnitude of the woody biomass issue and the finite nature of agency budgets, it is essential that federal agencies appropriately coordinate their woody biomass activities—both within and across agencies—to maximize their potential for addressing the issue. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or at [email protected]. David P. Bixler, James Espinoza, Steve Gaty, Richard Johnson, and Judy Pagano made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In an effort to reduce the risk of wildland fires, many federal land managers--including the Forest Service and the Bureau of Land Management--are placing greater emphasis on thinning forests and rangelands to help reduce the buildup of potentially hazardous fuels. These thinning efforts generate considerable quantities of woody material, including many smaller trees, limbs, and brush--referred to as woody biomass--that currently have little or no commercial value. GAO was asked to determine (1) which federal agencies are involved in efforts to promote the use of woody biomass, and the actions they are undertaking; (2) how these agencies coordinate their activities; and (3) what the agencies see as obstacles to increasing the use of woody biomass, and the extent to which they are addressing the obstacles. This testimony is based on GAO's report Natural Resources: Federal Agencies Are Engaged in Various Efforts to Promote the Utilization of Woody Biomass, but Significant Obstacles to Its Use Remain (GAO- 05-373), being released today. Most woody biomass utilization activities are implemented by the Departments of Agriculture (USDA), Energy (DOE), and the Interior and include awarding grants to businesses, schools, Indian tribes, and others; conducting research; and providing education. Most of USDA's woody biomass utilization activities are undertaken by the Forest Service and include grants for woody biomass utilization, research into the use of woody biomass in wood products, and education on potential uses for woody biomass. DOE's woody biomass activities focus on research into using the material for renewable energy, while Interior's efforts consist primarily of education and outreach. Other agencies also provide technical assistance or fund research activities. 
Federal agencies coordinate their woody biomass activities through formal and informal mechanisms. Although the agencies have established two interagency groups to coordinate their activities, most officials we spoke with emphasized informal communication--through e-mails, participation in conferences, and other means--as the primary vehicle for interagency coordination. Internally, DOE coordinates its woody biomass activities through its Office of Energy Efficiency and Renewable Energy, while Interior and the Forest Service--the USDA agency with the most woody biomass activities--have appointed officials to oversee, and have issued guidance on, their woody biomass activities. The obstacles to using woody biomass cited most often by agency officials were the difficulty of using woody biomass cost-effectively and the lack of a reliable supply of the material; agency activities generally are targeted toward addressing these obstacles. Some officials told us their agencies are limited in their ability to address these obstacles and that incentives--such as subsidies and tax credits--beyond the agencies' authority are needed. However, others disagreed with this approach for a variety of reasons, including the concern that expanding the market for woody biomass could lead to adverse ecological consequences if the demand for woody biomass leads to excessive thinning. |
The Navy’s fleet includes aircraft carriers, cruisers, destroyers, frigates, littoral combat ships, submarines, amphibious warfare, mine warfare, combat logistics, and fleet support ships. Our review focused on surface combatant and amphibious warfare ships, which constitute slightly less than half of the total fleet. Table 1 shows the classes of surface ships we reviewed along with their numbers, expected service lives, and current average ages. Figure 1 shows the administrative chain of command for Navy surface ships. The U.S. Pacific Fleet and U.S. Fleet Forces Command organize, man, train, maintain, and equip Navy forces, develop and submit budgets, and develop required and sustainable levels of fleet readiness, with U.S. Fleet Forces Command serving as the lead for fleet training requirements and policies to generate combat-ready Navy forces. The Navy’s surface type commanders—Commander, Naval Surface Force, U.S. Pacific Fleet, and Commander, Naval Surface Force, Atlantic—have specific responsibilities for the maintenance, training, and readiness of their assigned surface ships. To meet the increased demands for forces following the events of September 2001, the Navy established a force generation model—the Fleet Response Plan—and in August 2006 the Navy issued a Fleet Response Plan instruction. The plan seeks to build readiness so the Navy can surge a greater number of ships on short notice while continuing to meet its forward-presence requirements. As depicted in Table 2, there are four phases in the Fleet Response Plan 27-month cycle that applies to surface combatant and amphibious warfare ships. The four Fleet Response Plan phases are (1) basic, or unit-level training; (2) integrated training; (3) sustainment (which includes deployment); and (4) maintenance. In September 2009, the Commanders of U.S. Pacific Fleet and U.S. Fleet Forces directed Vice Admiral Balisle, USN-Ret., to convene and lead a Fleet Review Panel to assess surface force readiness. 
The Panel issued its report in February 2010. It stated that Navy decisions made to increase efficiencies throughout the fleet had adversely affected surface ship current readiness and life cycle material readiness. Reducing preventative maintenance requirements and the simultaneous cuts to shore infrastructure were two examples of the detrimental efficiencies cited in the report. The report also stated that if the surface force stayed on the present course, surface ships would not reach their expected service lives. For instance, it projected that destroyers would achieve 25-27 years of service life instead of the 35-40 years expected. The report concluded that each decision to improve efficiency may well have been an appropriate attempt to meet Navy priorities at the time, but there was limited evidence to identify any changes that were made with surface force readiness as the top priority—efficiency was sought over effectiveness. The Fleet Review Panel made several maintenance, crewing, and training recommendations that it stated should be addressed not in isolation but as a circle of readiness. According to the report, it will take a multi-faceted, systematic solution to stop the decline in readiness and begin recovery. We have previously reported on the Navy’s initiatives to achieve greater efficiencies and reduce costs. In June 2010, we issued a report regarding the training and crew sizes of cruisers and destroyers. In it we found that changes in training and reductions in crew sizes had contributed to declining material conditions on cruisers and destroyers. We recommended that the Navy reevaluate its ship workload requirements and develop additional metrics to measure the effectiveness of Navy training. DOD agreed with these recommendations. Also, in July 2011 we reported on the training and manning information presented in the Navy’s February 2011 report to Congress regarding ship readiness. 
The Navy’s report included information on ships’ ability to perform required maintenance tasks, pass inspection, and any projected effects on the lifespan of individual ships. We concluded that the Navy’s report did not provide discussion of data limitations or caveats to any of the information it presented, including its conclusions and recommendations. However, we found that the Navy did outline specific actions that it was taking or planned to take to address the declines in readiness due to manning and crew changes. In January 2011, the commanders of U.S. Fleet Forces Command and U.S. Pacific Fleet jointly instructed their type commanders to develop a pilot program to “establish a sequenced, integrated, and building block approach” to achieve required readiness levels. This pilot program began in March 2011, and in March 2012, near the end of the pilot, the Navy issued its Surface Force Readiness Manual, which details a new strategy for optimizing surface force readiness throughout the Fleet Response Plan. The strategy calls for integrating and synchronizing maintenance, training, and resources among multiple organizations such as Afloat Training Groups and Regional Maintenance Centers. For the period from 2008 to 2012, available data show variations in material readiness between different types of ships—such as material readiness differences between amphibious warfare ships and surface combatants—but data limitations prevent us from making any conclusions concerning improvements or declines in the overall readiness of the surface combatant and amphibious warfare fleet during the period. Through a variety of means and systems, the Navy collects, analyzes, and tracks data that show the material condition of its surface ships—in terms of both their current and life cycle readiness. 
Three of the data sources the Navy uses to provide information on the material condition of ships are casualty reports; Defense Readiness Reporting System – Navy (DRRS-N) reports; and Board of Inspection and Survey (INSURV) material inspection reports. None of these individual data sources are designed to provide a complete picture of the overall material condition of the surface force. However, the data sources can be viewed as complementary and, when taken together, provide data on both the current and life cycle material readiness of the surface force. For example, some casualty report data must be updated every 72 hours and provides information on individual pieces of equipment that are currently degraded or out of commission. DRRS-N data is normally reported monthly and focuses on current readiness by presenting information on broader capability and resource areas, such as ship command, control, and communications, rather than individual equipment. INSURV data is collected less frequently—ships undergo INSURV inspections about once every 5 years—but the data is extensive, and includes inspection results for structural components, individual pieces of equipment, and broad systems, as well as assessments of a ship’s warfighting capabilities. The INSURV data is used to make lifecycle decisions on whether to retain or decommission Navy ships. Casualty reports, DRRS-N data, and INSURV reports are all classified when they identify warfighting capabilities of individual ships. However, when casualty reports and INSURV information is consolidated and summarized above the individual ship level it is unclassified. Even summary DRRS-N data is classified, and therefore actual DRRS-N data is not included in this unclassified report. Table 3 provides additional details on each of the data sources. INSURV and casualty report data from January 2008 through March 2012 consistently show differences in material readiness between different types of ships. 
As illustrated in Table 4, there are differences between frigates, destroyers, cruisers, and amphibious warfare ships in their overall INSURV ratings—which reflect ship abilities to carry out their primary missions; their INSURV Equipment Operational Capability scores—which reflect the material condition of 19 different functional areas; and their average numbers of casualty reports—which reflect material deficiencies in mission essential equipment. The differences in average Equipment Operational Capability scores and in average numbers of casualty reports were statistically significant. See additional details regarding the statistical significance of average Equipment Operational Capability scores and the average number of casualty reports in Appendix I. The data in Table 4 show that, for the time period covered, the material condition of amphibious ships is generally lower than that of frigates and destroyers. For example, a lower percentage of amphibious warfare ships received overall “satisfactory” ratings in INSURV inspections than destroyers and frigates; likewise, amphibious ships had lower average INSURV Equipment Operational Capability scores than those two types of ships. Amphibious warfare ships also have on average more casualty reports per ship than destroyers and frigates. According to Navy officials, some of these differences may result from differences in the size, complexity, and age of the various types of ships. Likewise, cruisers have a lower material condition than that of destroyers. The data show that 22 percent of cruisers were rated “unsatisfactory” compared to 3 percent of destroyers, and the average cruiser Equipment Operational Capability score of 0.786 was also lower than the destroyer score of 0.829. Finally, the average of 18 casualty reports per cruiser was about 24 percent higher than the 14.5 casualty reports per destroyer. 
DRRS-N data also show that there are readiness differences between the Navy’s different types of ships but the precise differences are classified and therefore are not included in this report. Material readiness data show some clear differences between types of ships as shown in table 4. However, when we considered the surface combatant and amphibious warfare ships in aggregate, we were unable to make any conclusions concerning trends in the overall readiness of these ships. One readiness measure—casualty reports—indicates that the material readiness of these ships has declined but other readiness measures show both upward and downward movement. Because of the relatively small number of INSURV inspections conducted each year, it is not possible to draw a conclusion about trends in the material readiness of surface combatant and amphibious warfare ships from January 2008 to March 2012 based on INSURV data. Casualty report data from January 2008 to March 2012 show that there is a significant upward trend in the average daily number of casualty reports per ship for both surface combatants and amphibious warfare ships, which would indicate declining material readiness. Specifically, the average daily numbers of casualty reports per ship have been increasing at an estimated rate of about 2 and 3 per year, respectively. Furthermore, for both ship types, there is not a statistically significant difference in the trend when comparing the periods before February 2010—when the Fleet Review Panel’s findings were published—and after February 2010. According to Navy officials, increases in casualty reports could be reflective of the greater numbers of material inspections and evaluations than in the past, which is likely to identify more material deficiencies and generate more casualty reports. Figure 2 shows the increases in casualty reports over time. 
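As a rough illustration of the kind of trend estimate described above, the sketch below fits an ordinary least-squares line to monthly averages of daily casualty reports per ship. The figures are invented for illustration and are not the Navy's actual data; GAO's analysis may well have used a different statistical method.

```python
# Hypothetical sketch: estimating the annual trend in average daily
# casualty reports per ship with ordinary least squares. The data
# below are invented illustrative values, not the Navy's figures.

def ols_slope(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Quarterly observation points, as months since January 2008, paired
# with invented average daily casualty reports per ship that rise by
# about 2 reports per year (the low end of the reported range).
months = list(range(0, 52, 3))
avg_reports = [14.0 + (2.0 / 12.0) * m for m in months]

slope_per_month, intercept = ols_slope(months, avg_reports)
annual_rate = slope_per_month * 12  # convert monthly slope to per-year rate
print(round(annual_rate, 1))
```

On real data the points would scatter around the fitted line, and a significance test on the slope would be needed before calling the trend statistically significant, as GAO does.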
Table 5 shows the summary data for all the INSURV inspections of surface combatant and amphibious warfare ships that were conducted from January 2008 through March 2012. Throughout the period, the data fluctuate in both an upward and downward direction. For example, the proportion of surface combatant and amphibious warfare ships rated ‘satisfactory’ fell 11 percentage points, from 83 percent in 2008 to 72 percent in 2010, and then increased to 77 percent in 2011. Average Equipment Operational Capability scores also fluctuated throughout the period—increasing in 2011 and declining in 2009, 2010, and 2012. As previously noted, because of the relatively small number of INSURV inspections conducted each year, it is not possible to draw a conclusion about trends in the material readiness of surface combatant and amphibious warfare ships between 2008 and 2012 based on INSURV data. The casualty report and INSURV data that we analyzed are consistent with the findings of the Navy’s Fleet Review Panel, which found that the material readiness of the Navy’s ships had been declining prior to 2010. Our analysis showed a statistically significant increase in casualty reports between 2008 and 2010, which would indicate a declining material condition. Although the statistical significance of the INSURV data from 2008 to 2010 could not be confirmed due to the small number of ships that were inspected during this time period, that data showed declines in both the percentage of satisfactory inspections and average Equipment Operational Capability scores. The Navy has taken steps intended to improve the readiness of its surface combatant and amphibious warfare ships. However, it faces risks to achieving full implementation of its recent strategy and has not assessed these risks or developed alternative implementation approaches to mitigate risks. 
The Navy has taken several steps to help remedy problems it has identified in regard to maintaining the readiness of its surface combatant and amphibious warfare ships. In the past, material assessments, maintenance, and training were carried out separately by numerous organizations, such as the Regional Maintenance Centers and Afloat Training Groups. According to the Navy, this sometimes resulted in overlapping responsibilities and duplicative efforts. Further, the Navy has deferred maintenance due to high operational requirements. The Navy recognizes that deferring maintenance can affect readiness and increase the costs of later repairs. For example, maintenance officials told us that Navy studies have found that deferring maintenance on ballast tanks to the next major maintenance period will increase costs by approximately 2.6 times, and a systematic deferral of maintenance may cause a situation where it becomes cost prohibitive to keep a ship in service. This can lead to early retirements prior to ships reaching their expected service lives. In the past few years the Navy has taken a more systematic and integrated approach to address its maintenance requirements and mitigate maintenance problems. For example, in November 2010 it established the Surface Maintenance Engineering Planning Program, which provides life cycle management of maintenance requirements, including deferrals, for surface ships and monitors life cycle repair work. Also, in December 2010 the Navy established Navy Regional Maintenance Center headquarters, and began increasing the personnel levels at its intermediate maintenance facilities in June 2011. More recently, in March 2012, the Navy set forth a new strategy in its Surface Force Readiness Manual. 
This strategy is designed to integrate material assessments, evaluations, and inspections with maintenance actions and training and ensure that surface ships are (1) ready to perform their current mission requirements and (2) able to reach their expected service lives. The manual addresses the need for the organizations involved in supporting ship readiness to take an integrated, systematic approach to eliminate redundancy, build training proficiency to deploy at peak readiness, and reduce costs associated with late-identified work. According to the Surface Force Readiness Manual, readiness is based upon a foundation of solid material condition that supports effective training. In line with this integrated maintenance and training approach, the new strategy tailors the 27-month Fleet Response Plan by adding a fifth phase, the shakedown phase, that is not included in the Fleet Response Plan. This phase allows time between the end of the maintenance phase and the beginning of the basic phase to conduct a material assessment of the ship to determine if equipment conditions are able to support training. In addition, the new strategy shifts the cycle’s starting point from the basic phase to the sustainment phase to support the deliberate planning required to satisfactorily execute the maintenance phase and integrate maintenance and training for effective readiness. Under the new strategy, multiple assessments, which previously certified ship readiness all throughout the Fleet Response Plan cycle, will now be consolidated into seven readiness evaluations at designated points within the cycle. Because each evaluation may have several components, one organization will be designated as the lead and will be responsible for coordinating the evaluation with the ship and other assessment teams, thereby minimizing duplication and gaining efficiencies through synchronization. 
Figure 3 shows the readiness evaluations that occur within each phase of the strategy’s notional 27-month cycle. As previously noted, development of the Navy’s new strategy began with a pilot program. The pilot was conducted on ships from both the East and West coasts beginning in March 2011. Initial implementation of the new strategy began in March 2012 and is currently staggered, with ships’ schedules being modified to support the strategy’s integration of training, manning, and maintenance efforts. Ships that were not involved in the pilot program will begin implementing the strategy when they complete the maintenance phase of the Fleet Response Plan cycle. The Navy plans to fully implement the new strategy in fiscal year 2015 (i.e., to have all surface ships operating under the strategy and the resources needed to conduct the strategy’s required tasks in place). While the Surface Force Readiness Manual states that providing a standard, predictable path to readiness is one of the tenets of the Navy’s new strategy, it also acknowledges that circumstances may arise that will require a deviation from the notional 27-month cycle. Certain factors could affect the Navy’s ability to fully implement its strategy, but the Navy has not assessed the risks to implementation or developed alternatives. As we have previously reported, risk assessment can provide a foundation for effective program management. Risk management is a strategic process to help program managers make decisions about assessing risk, allocating finite resources, and taking actions under conditions of uncertainty. To carry out a comprehensive risk assessment, program managers need to identify program risks from both external and internal sources, estimate the significance of these risks, and decide what steps should be taken to best manage them. Although such an assessment would not assure that program risks are completely eliminated, it would provide reasonable assurance that such risks are being minimized. 
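The assessment steps described above (identify risks, estimate their significance, decide how to manage them) can be sketched as a minimal risk register. Everything here is hypothetical: the risk names echo those discussed in this report, and the likelihood and impact scores are invented for illustration.

```python
# Illustrative risk register (hypothetical entries and scores), showing
# the basic mechanics of a risk assessment: score each identified risk
# by likelihood x impact, then rank so mitigation effort goes to the
# most significant risks first.
def risk_score(likelihood: int, impact: int) -> int:
    """Estimate significance as likelihood x impact, each rated 1 (low) to 5 (high)."""
    return likelihood * impact

risks = [
    {"risk": "high operational tempo", "likelihood": 4, "impact": 5},
    {"risk": "staffing shortfalls",    "likelihood": 4, "impact": 4},
    {"risk": "budget reductions",      "likelihood": 3, "impact": 4},
]

# Rank by estimated significance, most significant first.
ranked = sorted(
    risks,
    key=lambda r: risk_score(r["likelihood"], r["impact"]),
    reverse=True,
)
```

A real assessment would of course weigh qualitative evidence and mitigation options, not just a two-factor score; the point of the sketch is only the identify-estimate-prioritize sequence.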
As the Navy implements its new surface force readiness strategy, one risk we identified involves the tempo of operations. While the strategy acknowledges circumstances may arise that require a deviation from the 27-month Fleet Response Plan cycle, it also states that predictability is necessary in order to synchronize the maintenance, training, and operational requirements. However, the tempo of operations is currently higher than planned for in the Fleet Response Plan. According to Navy officials, this makes execution of the strategy challenging. High operational tempos pose challenges because they could delay the entry of some ships into the strategy as well as the movement of ships through the strategy. For example, some ships that have been operating at increased tempos, such as the Navy’s ballistic missile defense cruisers and destroyers, have not followed the Navy’s planned 27-month cycle. Navy officials told us that requirements for ballistic missile defense ships are very high, leading to quick turnarounds between deployments. They said that, in some cases, ships may not have time for the maintenance or full basic and integrated/advanced training phases. The manual notes that ships without an extended maintenance period between deployments will remain in the sustainment phase. According to Navy guidance, the maintenance phase is critical to the success of the Fleet Response Plan since this is the optimal period in which lifecycle maintenance activities—major shipyard or depot-level repairs, upgrades, and modernization installations—occur. Thus, ships with a high operational tempo that do not enter the maintenance phase as planned will have lifecycle maintenance activities deferred, which could lead to increased future costs. Further, ships that do not enter the maintenance phase may be delayed entering into the strategy. This delay would be another risk to the implementation of the Navy’s new readiness strategy and to ships’ lifecycle readiness. 
In addition, the Navy’s plan to decrease the number of surface combatant and amphibious warfare ships through early retirements is likely to increase operational tempos even further for many ships that remain in the fleet. DOD’s fiscal year 2013 budget request proposes the early retirement of seven Aegis cruisers and two amphibious ships in fiscal years 2013 and 2014. When fewer ships are available to meet a given requirement, ships must deploy more frequently. Table 6 shows the ships that the Navy plans to retire early, their ages at retirement, and their homeports. Also, recent changes in national priorities, which call for an increased focus on the Asia-Pacific region that places a renewed emphasis on air and naval forces, make it unlikely that operational tempos will decline. At the same time, DOD will still maintain its defense commitments to Europe and other allies and partners. In addition to the risks posed by high operational tempos, several supporting organizations currently have staffing levels that are below the levels needed to fulfill their roles in the new integrated readiness strategy. For example, Navy Afloat Training Group officials have identified the staffing levels required to fully support the strategy, and reported that they need an additional 680 personnel to fully execute the new strategy. As of August 2012, the Navy plans to reflect its funding needs for 410 of the 680 personnel in its fiscal year 2014 budget request and for the remaining 270 in subsequent requests. Under the new strategy, the Afloat Training Groups provide subject matter experts to conduct both material assessments and individual and team training. Previously, the Afloat Training Groups used a “Train the Trainer” methodology, which did not require as many trainers because ships’ crews included their own system experts to train the crew, and the Afloat Training Groups trained only the ships’ trainers. 
Afloat Training Group Pacific officials told us that there are times when the training events that can be offered—to ships currently under the strategy and/or ships that have not yet implemented the strategy—are limited because of their staffing level gaps. Current staffing allows the groups to execute all portions of the basic phase in select mission areas only. Other mission areas are expected to gain full training capability as staffing improves over the next several years. Until then, the Afloat Training Group officials plan to schedule training events within the limited capability mission areas based on a prioritized hierarchy. Further, Surface Maintenance Engineering Planning Program officials told us they are also short of staff. They said they need 241 staff to perform their requirements, but currently have 183 staff. They stated that while current budget plans include funding to reach the 241-staff level in 2013, funding will fall below that requirement in 2014. As with the Afloat Training Groups and Surface Maintenance Engineering Planning Program, officials at the Navy Regional Maintenance Center headquarters told us they currently lack the staff needed to fully execute the ship readiness assessments called for in the new strategy. Ship readiness assessments evaluate both long-term lifecycle maintenance requirements (e.g. preservation to prevent structural corrosion) and maintenance to support current mission requirements (e.g. preventative and corrective maintenance for the Aegis Weapons System). According to the officials, ship readiness assessments allow them to deliberately plan the work to be done during major maintenance periods and prioritize their maintenance funds. The goal is for ships to receive all the prescribed ship readiness assessments in fiscal year 2013. 
However, Navy officials stated that they are evaluating the impact of recent readiness assessment revisions on changes in the Regional Maintenance Center’s funding and personnel requirements. The Navy has not undertaken a comprehensive assessment of the impact of high operational tempos, staffing shortages, or any other risks it may face in implementing its new readiness strategy, nor has it developed alternatives to mitigate any of these risks. The Navy does recognize in its strategy that circumstances may arise that require ships to deviate from the 27-month Fleet Response Plan cycle and has considered the adjustments to training that would need to take place in such a case. However, the strategy does not discuss, nor identify plans to mitigate, maintenance challenges that could arise from delays in full implementation. We believe the risks we identified may delay full implementation, which could lead to continued deferrals of lifecycle maintenance, increasing costs and impacting the Navy’s ability to achieve expected service lives for its ships. Today’s fleet of surface combatant and amphibious warfare ships provides core capabilities that enable the Navy to fulfill its missions. In order to keep this fleet materially and operationally ready to meet current missions and sustain the force for future requirements, the Navy must maximize the effective use of its resources and ensure that its ships achieve their expected service lives. Full implementation of its new strategy, however, may be delayed if the Navy does not account for the risks it faces and devise plans to mitigate those risks. Navy organizations have taken individual steps to increase their staffing levels, but the Navy has yet to consider alternatives if the integration of assessment, maintenance, and training under the strategy is delayed. 
Without an understanding of risks to full implementation and plans to mitigate them, the Navy is likely to continue to face the challenges it has encountered in the past, including the increased costs that arise from deferring maintenance and the early retirement of ships. This could impact the Navy’s ability to meet its long-term commitments. Further, ongoing maintenance deferrals—and early retirements that increase the pace of operations for the remaining surface force—could potentially impact the Navy’s ability to meet current missions. To enhance the Navy’s ability to implement its strategy to improve surface force material readiness, we recommend that the Secretary of Defense direct the Secretary of the Navy to take the following two actions: Develop a comprehensive assessment of the risks the Navy faces in implementing its Surface Force Readiness Manual strategy, and alternatives to mitigate risks. Specifically, a comprehensive risk assessment should include an assessment of risks such as high operational tempos and availability of personnel. Use the results of this assessment to make any necessary adjustments to its implementation plan. In written comments on a draft of this report, DOD partially concurred with our recommendations. Overall, DOD stated it agrees that risk assessment is an important component of program management, but does not agree that a comprehensive assessment of the risks associated with implementation of the Navy’s Surface Force Readiness strategy is either necessary or desirable. It also stated that existing assessment processes are sufficient to enable adjustments to implementation of the strategy. DOD also noted several specific points. For example, according to DOD, a number of factors impact surface ship readiness and some of those factors, such as budgetary decisions, emergent operational requirements, and unexpected major ship repair events are outside of the Navy’s direct control. 
DOD further stated that the strategy, and the organizations that support the strategy, determine and prioritize the full readiness requirement through reviews of ship material condition and assess the risk of any gaps between requirements and execution, as real world events unfold. DOD also noted that the Surface Ship Readiness strategy has a direct input into the annual Planning, Programming, Budgeting, and Execution (PPBE) process. It stated that its position is that execution of the strategy and PPBE process adequately identify and mitigate risks. DOD further believes that a separate one-time comprehensive assessment of risks, over and above established tracking mechanisms, is an unnecessary strain on scarce resources. Moreover, DOD stated that the Navy now has the technical resources available, using a disciplined process, to inform risk-based decisions that optimize the balance between current operational readiness and future readiness tied to expected service life through the standup of its Surface Maintenance Engineering Planning Program and Commander Navy Regional Maintenance Centers. Specifically, DOD noted documenting and managing the maintenance requirement is now a fully integrated process. According to DOD, the Navy’s Surface Type Commanders identify and adjudicate risks to service life and this approach is consistent with fundamental process discipline and risk management executed by the submarine and carrier enterprises. Finally, according to DOD, the Navy is continually assessing progress in achieving the strategy and has the requisite tools in place to identify changes in force readiness levels that may result from resource constraints, and will adjust the process as necessary to ensure readiness stays on track. 
As described in our report, we recognize that the Navy has taken a more systematic and integrated approach to address its maintenance requirements and mitigate problems, and specifically cite the Surface Force Readiness strategy and actions such as standing up the Surface Maintenance Engineering Planning Program and the Commander Navy Regional Maintenance Centers. We also recognize that the Navy conducts various assessments of ship readiness and considers resource needs associated with implementing the strategy as part of the budget process. However, we do not agree that any of the current assessments or analyses offer the type of risk assessment that our report recommends. For example, the PPBE process does not address the specific risk that high operational tempos pose to implementation of the strategy nor does it present alternatives for mitigating this risk. Also, despite the ongoing efforts by Surface Maintenance Engineering Planning Program and Commander Navy Regional Maintenance Centers officials to document and manage the maintenance requirement of the surface force in an integrated process, both organizations are currently understaffed. The challenges identified in our report, including high operational tempos and current organizational staffing levels, have hindered the Navy’s ability to achieve the desired predictability in ships’ operations and maintenance schedules, as called for in its strategy. Given factors such as the Navy’s plan to decrease the number of ships as well as changes in national priorities that place a renewed emphasis on naval forces in the Asia-Pacific region, these challenges we identified are unlikely to diminish in the near future, and there could be additional risks to the strategy’s implementation. 
Without an understanding of the full range of risks to implementing its strategy and plans to mitigate them, the Navy is likely to continue to face the challenges it has encountered in the past, including increased costs that arise from deferring maintenance and the early retirement of ships. Therefore, we continue to believe that a comprehensive risk assessment is needed. We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To assess how the Navy evaluates the material readiness of its surface combatant and amphibious warfare ships and the extent to which data indicate trends or patterns in the material readiness of these ships, we interviewed officials from the Commander Naval Surface Force, U.S. Pacific Fleet, and the Commander Naval Surface Force, U.S. Atlantic Fleet, and visited a number of ships, including the USS Leyte Gulf (CG 55), USS Arleigh Burke (DDG 51), USS San Antonio (LPD 17), and USS Higgins (DDG 76). We obtained and analyzed Navy policies and procedures for determining surface force readiness, as well as various studies and reports on the Navy’s material readiness process. We obtained and analyzed material readiness data from the Navy’s Board of Inspection and Survey (INSURV) as well as the United States Fleet Forces Command (USFF). We also met with Navy officials from the Board of Inspection and Survey and the United States Fleet Forces Command to complement our data analysis, and observed the INSURV material inspection of the USS Cole (DDG 67). 
We limited our data analysis to the period from January 2008 to March 2012 in order to cover a period of approximately two years prior to, and two years following, publication of the Fleet Review Panel of Surface Force Readiness report. Specifically, we analyzed data for the Navy’s guided-missile cruisers (CG 47 class), guided-missile destroyers (DDG 51 class), frigates (FFG 7 class), amphibious assault ships (LHA 1 and LHD 1 classes), amphibious transport dock ships (LPD 4 and LPD 17 classes), and dock landing ships (LSD 41 and LSD 49 classes). We analyzed data from three of the primary data sources the Navy uses to provide information on the material condition of ships: casualty reports; Board of Inspection and Survey (INSURV) material inspection reports; and the Defense Readiness Reporting System – Navy (DRRS-N) reports. None of these individual data sources are designed to provide a complete picture of the overall material condition of the surface force. From the Board of Inspection and Survey we met with INSURV officials and observed an INSURV inspection onboard the USS Cole (DDG 67) conducted on December 12, 2011 and December 14, 2011. We obtained all INSURV initial material inspection reports dating from 2008 through 2012 for cruisers, destroyers, frigates, and amphibious warfare ships. We then extracted relevant data from those reports, including INSURV’s overall assessment of the material condition of these surface ships (satisfactory, degraded, unsatisfactory), Equipment Operational Capability scores for the different functional areas of ships systems (on a 0.00 to 1.00 scale), and dates when these ships were inspected. Although INSURV provides an overall assessment, we included Equipment Operational Capability scores to provide additional insight into the material condition of a ship’s systems. Overall assessments focus on a ship’s material readiness to perform primary missions. 
As such, while multiple individual systems may be in an unsatisfactory condition (Equipment Operational Capability scores below 0.80 are considered “degraded,” while those below 0.60 are considered “unsatisfactory”), the ship may receive an overall rating of “satisfactory” due to its material readiness to meet its primary missions. Figure 4 below shows the process for determining INSURV ratings, with that segment for determining Equipment Operational Capability scores highlighted. We analyzed both INSURV overall ratings and Equipment Operational Capability scores to identify differences in material readiness between types of ships. To determine if there were statistically significant differences in the Equipment Operational Capability scores among four types of ships (cruisers, destroyers, frigates, and amphibious ships), we took the average of the various Equipment Operational Capability scores for each ship and conducted a one-way analysis of variance (ANOVA). In addition, we conducted post-hoc multiple comparison means tests to determine which ship types, if any, differed. Based on the results of this analysis, we concluded that there were statistically significant differences in the average Equipment Operational Capability score between the four ship types (p-value < 0.0001). Specifically, the average for amphibious ships was significantly lower, at the 95 percent confidence level, than the average scores for cruisers, destroyers, and frigates and the average for cruisers was significantly lower than the average for destroyers. In presenting our results, we standardized relevant data where necessary in order to present a consistent picture. For example, in 2010, the Board of Inspection and Survey moved from rating those ships with the worst material condition as “unfit for sustained combat operations” to rating them as “unsatisfactory.” We have treated both these ratings as “unsatisfactory” in this report. 
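The scoring thresholds and the one-way ANOVA described above can be sketched with SciPy. The Equipment Operational Capability scores below are fabricated for illustration, not actual INSURV data; only the thresholds (below 0.80 "degraded," below 0.60 "unsatisfactory") and the overall analysis design come from the text.

```python
# Sketch of the analysis described above, using fabricated per-ship
# average Equipment Operational Capability (EOC) scores by ship type.
from scipy.stats import f_oneway

def eoc_rating(score: float) -> str:
    """Apply the thresholds in the text: <0.60 unsatisfactory,
    <0.80 degraded, otherwise satisfactory."""
    if score < 0.60:
        return "unsatisfactory"
    if score < 0.80:
        return "degraded"
    return "satisfactory"

# Hypothetical average EOC scores (0.00-1.00 scale) for a few ships
# of each type, with amphibious ships deliberately lower to mirror
# the pattern GAO reports.
cruisers   = [0.82, 0.79, 0.85, 0.80, 0.78]
destroyers = [0.88, 0.90, 0.86, 0.91, 0.87]
frigates   = [0.84, 0.83, 0.86, 0.81, 0.85]
amphibs    = [0.62, 0.58, 0.65, 0.60, 0.63]

# One-way ANOVA: do mean EOC scores differ across the four ship types?
f_stat, p_value = f_oneway(cruisers, destroyers, frigates, amphibs)
```

A very small p-value (GAO found p < 0.0001 on the real data) would then motivate the post-hoc multiple comparison means tests, which identify the specific pairs of ship types whose averages differ.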
We obtained casualty report data for the same set of ships from the United States Fleet Forces Command office responsible for the Navy’s Maintenance Figure of Merit program. Casualty report data provided average daily numbers of casualty reports per ship for cruisers, destroyers, frigates, and amphibious warfare ships. We then used these daily averages to identify differences between ship types and to calculate and analyze changes in these daily averages from month to month and quarter to quarter. We assessed the reliability of casualty report data presented in this report. Specifically, the Navy provided information based on data reliability assessment questions we provided, which included information on an overview of the data, data collection processes and procedures, data quality controls, and overall perceptions of data quality. We received documentation about how the systems are structured and written procedures in place to ensure that the appropriate material readiness information is collected and properly categorized. Additionally, we interviewed Navy officials to obtain further clarification on data reliability and to discuss how the data were collected and reported into the system. After assessing the data, we determined that the data were sufficiently reliable for the purposes of assessing the material condition of Navy surface combatant and amphibious warfare ships, and we discuss our findings in the report. To determine if there were statistically significant differences in the daily averages among the four types of ships (cruisers, destroyers, frigates, and amphibious warfare ships), we conducted a one-way analysis of variance (ANOVA), followed by post-hoc multiple comparison means tests to determine which ship types, if any, differed. 
Based on the results of this analysis we concluded that there were statistically significant differences in the daily averages between the four ship types (p-value < 0.0001), and specifically, the daily average for amphibious warfare ships was significantly higher, at the 95 percent confidence level, than the daily average for cruisers, destroyers, and frigates. Next we analyzed the changes in the daily averages to determine if there was an increasing, decreasing, or stationary trend from month to month. We did this separately for surface combatant ships (cruisers, destroyers, and frigates) and amphibious warfare ships. To estimate the trends, we conducted a time-series regression analysis to account for the correlation in the average daily scores from month to month. We then tested the estimated trends for significant changes after February 2010, when the Fleet Review Panel’s findings were published, using the Chow test for structural changes in the estimated parameters. We fit a time-series regression model with autoregressive errors (AR lag of 1) to monthly data for both surface combatants and amphibious ships to account for the autocorrelation between monthly observations. The total R-squared, a measure that reflects how well the model predicts the data, was 0.9641 for the surface combatant ships model and 0.9086 for the amphibious warfare ships model, which indicates that both models fit the data well. A summary of the model parameters is given in the table below. We observed statistically significant positive trends in the daily average for both models. Specifically, the estimated trend for the daily average number of casualty reports per ship increased at a rate of about 2 per year (0.1770 * 12 months) for surface combatant ships and about 3 per year (0.2438 * 12 months) for amphibious warfare ships. In addition, neither of the tests for significant structural changes in the model parameters after February 2010 was significant at the 95 percent confidence level. 
Based on this, we concluded that there is not enough evidence to suggest there were significant changes in the estimated trends after February 2010 for either ship type. We analyzed data from the Defense Readiness Reporting System-Navy (DRRS-N), which contains data that is normally reported monthly and focuses on current readiness by presenting information on broader capability and resource areas. We obtained classified DRRS-N readiness data for all surface combatant and amphibious warfare ships from January 2008 through March 2012. DRRS-N data showed upward and downward movements between 2008 and 2012, but we did not evaluate the statistical significance of these movements. To determine the extent to which the Navy has taken steps intended to improve the readiness of its surface combatant and amphibious warfare ships including efforts to implement its recent strategy, we reviewed relevant Navy instructions on Navy material readiness, including the strategy—the Surface Force Readiness Manual—to identify the policies and procedures required by the Navy to ensure its surface ships are ready to perform their current mission requirements and reach their expected service lives. We also reviewed prior GAO work on risk management and collected and analyzed data on the resources needed to implement the strategy, and interviewed relevant officials. To gain a better understanding of how the Navy’s independent maintenance, training, and manning initiatives will be integrated into the new strategy, we collected data on the staffing resources needed to implement the strategy and met with officials from the Commander Navy Regional Maintenance Center, the Surface Maintenance Engineering Planning Program, and the Afloat Training Group Pacific. We focused primarily on the Navy’s maintenance initiatives because we have previously reported on its training and manning initiatives. 
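The trend-and-break analysis described above can be sketched in simplified form. The series below is fabricated; GAO's model also included AR(1) errors, which this ordinary-least-squares version omits, so it illustrates only the trend fit and the Chow test mechanics.

```python
# Simplified sketch of the trend analysis and Chow test described
# above, using fabricated monthly data (no AR(1) error term).
import numpy as np
from scipy.stats import f as f_dist

def ols_rss(t, y):
    """Fit y = a + b*t by least squares; return (slope b, residual sum of squares)."""
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta[1], float(resid @ resid)

# Fabricated series: 48 months, steady trend of 0.2 per month, a small
# deterministic wiggle, and deliberately no structural break.
t = np.arange(48, dtype=float)
y = 1.0 + 0.2 * t + 0.01 * (-1.0) ** np.arange(48)

slope, rss_pooled = ols_rss(t, y)

# Chow test for a structural change at month 24: compare the pooled fit
# against separate fits before and after the candidate break point
# (k = 2 estimated parameters per fit: intercept and slope).
k, split = 2, 24
_, rss1 = ols_rss(t[:split], y[:split])
_, rss2 = ols_rss(t[split:], y[split:])
F = ((rss_pooled - (rss1 + rss2)) / k) / ((rss1 + rss2) / (len(t) - 2 * k))
p_value = f_dist.sf(F, k, len(t) - 2 * k)
```

Because this fabricated series has no break, F comes out small and p_value large, mirroring GAO's finding that the estimated trends showed no significant structural change after February 2010.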
In addition, we met with personnel on board four Navy ships to obtain their views on the impact of the Navy’s maintenance initiatives, such as readiness assessments and material inspections, on the readiness of these ships. Specifically, we visited the USS Leyte Gulf (CG 55), USS Arleigh Burke (DDG 51), USS San Antonio (LPD 17), and USS Higgins (DDG 76). We also discussed initial implementation of the new strategy with personnel on board the USS Higgins. We also met with officials from the Commander Naval Surface Force, U.S. Pacific Fleet who are responsible for administering the strategy for surface ships on the West coast and in Hawaii and Japan to discuss timeframes for transitioning ships into the strategy, challenges implementing the strategy, and plans to address any risks that may occur during the strategy’s implementation. Additionally, we obtained written responses to our questions from these officials and from officials at the Commander Naval Surface Force, U.S. Atlantic Fleet who administer the strategy for surface ships on the East coast. Finally, we reviewed prior GAO work on risk assessment as well as Navy testimony on the readiness of its ships and aircraft and Department of Defense strategic guidance on the key military missions the department will prepare for and budget priorities for fiscal years 2013-2017. We conducted this performance audit from July 2011 to September 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, key contributors to this report were Michael Ferren (Assistant Director), Jim Ashley, Mary Jo Lacasse, David Rodriguez, Michael Silver, Amie Steele, Nicole Volchko, Erik Wilkins-McKee, Nicole Willems, and Ed Yuen. | In 2010, the Navy concluded that decisions it made to increase efficiencies of its surface force had adversely affected ship readiness and service life. To improve ship readiness the Navy developed a new strategy, which includes several initiatives. House Report 112-78, accompanying a proposed bill for the Fiscal Year 2012 National Defense Authorization Act (H.R.1540), directed GAO to review the recent Navy initiatives. GAO assessed 1) how the Navy evaluates the material readiness of its surface combatant and amphibious warfare ships and the extent to which data indicate trends or patterns in the material readiness of these ships, and 2) the extent to which the Navy has taken steps to improve the readiness of its surface combatant and amphibious warfare ships, including implementing its new readiness strategy. GAO analyzed Navy policies, material and readiness data from January 2008 (two years prior to the release of the Navy’s 2010 report on the degradation of surface force readiness) through March 2012 (two years after the release of the report), and interviewed headquarters and operational officials and ship crews. Recent data show variations in the material readiness of different types of ships, but do not reveal any clear trends of improvement or decline for the period from 2008 to 2012. The Navy uses a variety of means to collect, analyze, and track the material readiness of its surface combatant and amphibious warfare ships. Three data sources the Navy uses to provide information on the material readiness of ships are: casualty reports, which reflect equipment malfunctions; Defense Readiness Reporting System-Navy (DRRS-N) reports; and Board of Inspection and Survey (INSURV) material inspection reports. 
These data sources can be viewed as complementary, together providing data on both the current and life cycle material readiness of the surface force. INSURV and casualty report data show that the material readiness of amphibious warfare ships is lower than that of frigates and destroyers. However, there is no clear upward or downward trend in material readiness across the Navy’s surface combatant and amphibious warfare fleet as a whole. From 2010 to March 2012, INSURV data indicated a slight improvement in the material readiness of the surface combatant and amphibious warfare fleet, but over that period casualty reports from the ships increased, which would indicate a decline in material readiness. DRRS-N data also show differences in material readiness between ship types, but the precise differences are classified and therefore are not included in this report. The Navy has taken steps to improve the readiness of its surface combatant and amphibious warfare ships, including a new strategy to better integrate maintenance actions, training, and manning, but it faces risks to fully implementing its strategy and has not assessed these risks or developed alternatives to mitigate them. In March 2012, near the end of a year-long pilot, the Navy issued its Surface Force Readiness Manual, which calls for integrating and synchronizing maintenance, training, and manning among multiple organizations. The Navy expects this strategy to provide a standard, predictable path for ships to achieve and sustain surface force readiness, but certain factors, such as high operational tempos and supporting organizations’ staffing levels, could delay the entry of some ships into the strategy and the execution of the strategy. For example, one supporting organization reported needing an additional 680 personnel to fully execute the strategy. 
As of August 2012, the Navy plans to reflect its funding needs for 410 personnel in its fiscal year 2014 budget request and the remaining 270 in subsequent requests. Also, due to high operational tempos, the phased implementation of some ships into the strategy may be delayed. Furthermore, ships that do not execute the strategy's maintenance periods as planned will have lifecycle maintenance actions deferred. GAO has previously reported that risk assessment can inform effective program management by helping managers make decisions about the allocation of finite resources and alternative courses of action. However, the Navy has not undertaken a comprehensive assessment of risks to the implementation of its strategy, nor has it developed alternatives to mitigate its risks. GAO believes operational tempo, supporting organizations' staffing levels, and other risks may hinder the Navy's full implementation of its surface force readiness strategy. If not addressed, this could lead to deferrals of lifecycle maintenance, which have in the past contributed to increased maintenance costs, reduced readiness, and shorter service lives for some ships. GAO recommends that the Navy conduct a comprehensive assessment of the risks the new strategy faces and develop alternatives to mitigate these risks. DOD partially concurred, but felt that current assessments sufficiently identify risks. GAO continues to believe that a comprehensive assessment that takes into account the full range of risk to the overall strategy is needed. |
Generally, an appellate court is a court of law that has the authority to review a lower court’s decision. Proceedings in appellate courts are different from those in trial courts. For example, unlike trial courts, which determine the factual issues in a case, in most situations, appellate courts determine only whether the lower courts correctly applied the law. There are no juries or witnesses in appellate courts. Rather, parties file written briefs and often present oral arguments to a panel of judges focusing on the questions of law in a case. Each appellate court has its own policies on video and audio coverage of oral arguments for public dissemination, and the development of such policies is determined by the relevant policy-making entity or by each court. For instance, in March 1996, following a federal judiciary pilot program on cameras in the courtroom, the Judicial Conference—the policy-making body of the federal judiciary—authorized each circuit court of appeals to decide for itself whether to allow video and audio broadcasting of appellate proceedings. The U.S. Supreme Court is the highest appellate court in the country and has the power of judicial review, which is the ability to declare legislative and executive acts unconstitutional. The Court is part of the federal court system, which also includes U.S. courts of appeals and U.S. district courts, among others. The Court has original jurisdiction—the authority to hear a case for the first and only time—over certain cases, and appellate jurisdiction—the authority to review a lower court’s decision—on most other cases that involve a question of constitutional or federal law. Most of the cases the U.S. Supreme Court hears are appeals from lower courts. The U.S. Supreme Court has discretion over which appeals it hears, and parties file petitions for writs of certiorari to ask the Court to hear cases.
According to the Court’s website, the Court grants review and hears oral arguments in about 80 cases from the approximately 7,000 to 8,000 petitions it receives each Court term. The Court only grants a petition for a writ of certiorari for compelling reasons. U.S. Supreme Court rules state that such reasons may include, among other things, that a U.S. court of appeals has entered a decision in conflict with the decision of another U.S. court of appeals on the same important matter; that a state court of last resort has decided an important federal question in a way that conflicts with the decision of another state court of last resort or U.S. court of appeals; or that a state court or U.S. court of appeals has decided an important question of federal law in a way that conflicts with relevant decisions of the U.S. Supreme Court. If a petition is granted, the case will be scheduled for oral argument. Oral arguments occur when the Court is in session on Mondays, Tuesdays, and Wednesdays, with up to two arguments scheduled per day, and generally last an hour for each case. The Court’s term begins the first Monday in October and continues until the first Monday in October the following year. Figure 2 provides additional information about how cases progress in the U.S. Supreme Court. The federal courts have jurisdiction in cases in which the United States is a party, cases involving the U.S. Constitution or federal laws, certain disputes between citizens of different states, or actions against foreign governments, among other matters. Sitting below the U.S. Supreme Court are 13 U.S. courts of appeals, which are lower appellate courts. These U.S. courts of appeals hear challenges to decisions by U.S. district courts located within their circuits, as well as appeals of certain federal administrative agencies’ decisions. Figure 3 shows the geographical boundaries of the circuits. Cases in the U.S.
courts of appeals can be decided based on written briefs alone, but many cases are selected for oral argument. Appeals are generally decided by panels of three judges, but some cases can be heard before more than three judges, or en banc. Oral arguments before U.S. courts of appeals usually last about 30 minutes per case. Most decisions of the U.S. courts of appeals are final, but parties may petition the U.S. Supreme Court to review the case. Each state and the District of Columbia generally have one court of last resort and states may also have intermediate appellate courts. State courts of last resort are generally the final arbiters of state laws and constitutions, although their decisions can be appealed under certain circumstances. State court systems vary from state to state. State courts generally have broad jurisdiction and can hear cases not under the exclusive jurisdiction of federal courts. However, they may not hear cases against the United States and those involving certain specific federal laws. Cases in state courts of last resort that interpret federal law or the U.S. Constitution may be appealed to the U.S. Supreme Court. The courts of last resort in the selected countries included in our review— Australia, Canada, and the United Kingdom—are the final courts of appeals in their respective countries, and each court’s decisions are generally binding to all lower courts in that country. Table 1 has additional information on the courts of last resort in the selected countries, as well as the U.S. Supreme Court. The U.S. Supreme Court does not provide or allow video or live-audio coverage of oral arguments, but provides taped audio coverage of arguments. Specifically, beginning in the October 2010 term, the Court has posted audio recordings of all oral arguments on its website at the end of each argument week. Prior to the 2010 term, the recordings from one term of Court were not available until the beginning of the next term. 
The Court also provides transcripts of oral arguments on its website the same day arguments are heard and its decisions—the Court’s most important work, according to the Court’s Public Information Officer (PIO)—within minutes of their release. Further, starting with the presidential election cases in 2000, the Court began granting requests for access to audio recordings of oral arguments on the same day arguments are heard in selected cases. According to the PIO, media organizations submit written requests for such access to the Court’s Public Information Office, which forwards them to the Chief Justice for consideration. If a request is granted, the office issues a press release to inform the public in advance that the Court will provide expedited audio recordings for a given case. The Court’s Marshal, Public Information Office, and Office of Information Technology jointly make the arrangements for release of the recordings, which require advanced preparations for the significant increase in website traffic that may result. The PIO stated that the Court has made audio recordings of oral arguments available on the same day as the argument in rare cases, generally in response to extraordinarily high interest among the public and the media. From the 2000 through 2014 terms, the Court received media requests for access to same-day audio recordings of oral arguments in 58 cases. At its discretion, the Court granted these requests in 26 cases and declined them in 32 cases. Figure 4 shows U.S. Supreme Court decisions on media requests for access to same-day audio of oral arguments. As figure 4 illustrates, since October 2010, when the Court began its current practice of posting audio recordings of oral arguments at the end of each argument week, the Court has received media requests for same-day access to recordings in fewer cases—8 cases from the 2010 through 2014 terms, compared to 37 cases from the 2005 through 2009 terms.
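As a quick arithmetic check, the request-and-decision figures above can be tallied in a short script. The counts are those stated in the text; the script itself is only an illustrative sketch, not part of any process used by the Court or GAO.

```python
# Same-day-audio request counts reported in the text (2000 through 2014 terms).
requests_total = 58   # media requests for same-day audio of oral arguments
granted = 26
declined = 32

# The granted and declined cases should account for every request.
assert granted + declined == requests_total

print(f"Granted {granted} of {requests_total} requests ({granted / requests_total:.0%})")

# Requests fell after the Court began posting audio weekly in October 2010.
requests_2005_2009 = 37
requests_2010_2014 = 8
print(f"Requests fell by {requests_2005_2009 - requests_2010_2014} cases between the two periods")
```

Run as-is, the tally confirms that the granted and declined cases sum to the 58 total and shows the roughly 45 percent grant rate implied by the figures.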
According to the General Counsel of C-SPAN, which has requested access to video or same-day audio of oral arguments in almost all of the 58 cases in which same-day audio access was requested, the network has made requests more sparingly since the Court began posting recordings of oral arguments at the end of each argument week and has limited requests to very prominent high-profile cases. See appendix III for a list of cases in which media organizations requested access to same-day audio of oral arguments and whether the Court granted or declined requests. Two of the 13 U.S. courts of appeals—the U.S. Courts of Appeals for the Second and the Ninth Circuits—allow media video coverage of oral arguments. In addition, 10 of the 13 U.S. courts of appeals regularly post audio recordings of oral arguments on their websites. Officials from 9 of these 10 courts stated that their court generally posts audio recordings on the same day arguments are heard. Table 2 summarizes the video and audio coverage and oral argument recording policies and practices in the U.S. courts of appeals for the 13 circuits. The policies and practices of these U.S. courts of appeals differ because, as discussed earlier in the report, each court has discretion to determine whether to allow video and audio coverage of appellate proceedings and how to do so. Among the courts we visited—the U.S. Courts of Appeals for the Second, Ninth, and D.C. Circuits—the policies and practices ranged from allowing media video and audio coverage of oral arguments conducted in open court upon request and streaming live video of arguments using the court’s own equipment (in the Ninth Circuit) to providing audio recordings of oral arguments on the court’s website (in the D.C. Circuit). The information below illustrates the range of policies and practices in these courts. U.S. Court of Appeals for the Second Circuit.
The court’s guidelines allow media video and audio coverage of oral arguments conducted in open court, except for criminal matters. The guidelines state that the panel of judges assigned to hear oral argument has sole discretion to prohibit coverage of any proceeding, and will normally exercise this authority upon the request of any member of the panel. In practice, according to the court’s Clerk, the media is required to submit a request for video or audio coverage and the panel of judges must affirmatively grant permission to allow coverage. From its 2010 through 2014 terms, the court received requests for video coverage of oral arguments in 15 cases. Of these cases, 6 were granted and 9 were denied based on judicial discretion. The court does not post oral argument recordings on its website but provides CDs of audio recordings upon request for a $30 fee. According to the Clerk, the court’s video and audio policies require minimal resources to implement and there have been no implementation challenges. U.S. Court of Appeals for the Ninth Circuit. The court’s guidelines allow media video and audio coverage of oral arguments conducted in open court. According to court officials, such coverage is allowed for both criminal and civil cases. The guidelines require media organizations to submit a request for coverage and state that the panel of judges assigned to hear oral argument has sole discretion to grant or prohibit video or audio coverage of any proceeding. Court officials stated that the court requires a majority of the judges on the panel to grant or deny coverage. From January 1, 2010, through August 30, 2015, the court received requests for video coverage of oral arguments in 92 cases and granted them in 66 cases. The court also posts archived video recordings of arguments on its website and on YouTube.com, and in January 2015, began streaming live video of all oral arguments using its own equipment. 
According to the 2014 Ninth Circuit Annual Report and court officials, there were some initial technical challenges with providing live coverage, such as assembling and installing the video production systems and finding a reliable and cost-effective means to stream the arguments, but officials stated that implementation has generally been smooth. In addition, the officials said that live streaming oral arguments has decreased the number of media requests for video coverage. This has reduced the time and resources that the clerk’s office expends processing these requests, including reviewing the request forms and contacting the judges on the panel to decide upon requests. Figure 5 shows images of a courtroom camera in the U.S. Court of Appeals for the Ninth Circuit’s San Francisco courthouse, laptop controlling cameras, and oral argument video produced by the court. U.S. Court of Appeals for the D.C. Circuit. The court does not allow media video or audio coverage of oral arguments, but, beginning in September 2013, has provided audio recordings of arguments on its website using the court’s own equipment. Arguments are to be posted by 2 p.m. the same day they are heard by the court. Court officials stated that providing such audio coverage requires minimal resources and there have been no implementation challenges. See appendix IV for additional details on the video and audio policies and procedures, coverage requests and online views, and policy implementation for the U.S. Courts of Appeals for the Second, Ninth, and D.C. Circuits. Courts of last resort in 49 states have written policies that allow media video and audio coverage of oral arguments and almost all of these courts have video or audio of oral arguments available online. The D.C. Court of Appeals—the District of Columbia’s court of last resort—has no written policies on media video or audio coverage of oral arguments, and according to the court’s Clerk, does not allow such media coverage. 
However, the court itself streams live audio of all oral arguments on its website and, according to the Clerk, streams live video of some arguments. Although the written policies of the courts in 49 states allow media video and audio coverage, the features of these policies vary. For instance, some state policies prohibit coverage of oral arguments in certain types of cases, such as juvenile proceedings, or unless parties affirmatively consent to it, which may limit coverage, while policies in other states require that judges make an on-the-record finding in order to prohibit coverage, which indicates that there is a strong presumption that coverage is allowed. Table 3 summarizes media video and audio policy features and availability of oral arguments online of state and D.C. courts of last resort. See appendix V for information on the courts of last resort in each of the 50 states and the District of Columbia. While courts of last resort in 49 states have written policies that allow media video and audio coverage of oral arguments, the procedures they follow to do so vary. For instance, the Supreme Court of California, which we visited, both allows media organizations to use their own cameras in the court and provides a live video feed that media organizations can access upon request, while the Florida Supreme Court, which we also visited, partners with a local public broadcasting station to provide video coverage. The information that follows further illustrates variations in the policies and procedures of these two courts. Supreme Court of California. The court’s rules allow video and audio coverage of oral arguments by the media upon request and list 18 factors for judges to consider when deciding whether to grant or prohibit coverage, such as the importance of promoting public access to the judicial system and the privacy rights of all participants in the proceeding. The court permanently began allowing coverage in 1984. 
From 2010 through 2014, the court received requests for media video coverage in 17 cases and granted all of them. According to court officials, if the media has missed the deadline to request coverage, or based on other extenuating circumstances, the court also has the discretion to provide access to its live closed-circuit video feed of oral arguments, which the court records using its own equipment. Media organizations must obtain permission from the court’s Public Information Office to access this feed through a mult box—a box that allows multiple individuals to directly connect to a video and audio source—in the press rooms of each of the court’s locations. The officials noted that they prefer that media organizations use the court’s feed, rather than bring in their own cameras, to reduce the likelihood of any distractions or other effects on proceedings, but may still receive requests for media coverage of high-profile cases. In addition, the court periodically posts archived audio recordings and a small number of video recordings of oral arguments for selected high-profile cases on its website. According to court officials, the court also conducts annual special oral argument sessions for students, usually in October, where live video of arguments is broadcast on The California Channel, a public broadcasting station, and streamed on the channel’s website. Archived recordings of some of these arguments are also available on The California Channel’s website and other hosting sites, such as the CaliforniaCourts channel on YouTube.com. Court officials stated that there have been no challenges with implementing the court’s policies. In March 2016, the court announced that it plans to begin live streaming video of oral arguments on its website in May. Figure 6 shows pictures of the Supreme Court of California’s cameras in its San Francisco courtroom and mult box in the press room. Florida Supreme Court.
The court’s rules allow media video and audio coverage of oral arguments, and Florida case law establishes a presumption that coverage is allowed and requires judges to make an on-the-record finding to prohibit coverage. Coverage was permanently allowed in 1979 and, according to the court’s Public Information Officer, the court has never prohibited coverage of oral argument in a case. The court does not allow freestanding video cameras in the courtroom during oral arguments, but partners with WFSU-Television (WFSU-TV)—a public broadcasting station—to record, broadcast, live stream online, and archive video of arguments. The court and WFSU-TV have an annual interagency agreement that details the services WFSU-TV is to provide, the court’s responsibilities, and the monthly payment the court is to make to WFSU-TV for its services. WFSU-TV staff operate the courtroom cameras and produce the videos of oral arguments. The agreement states that WFSU-TV is to be responsible for the purchase and maintenance of all equipment necessary, including the courtroom cameras, to fulfill the terms of the agreement. In addition, The Florida Channel, which is produced and operated by WFSU-TV, televises live and tape-delayed video of oral arguments. Per the agreement, The Florida Channel is required to show all broadcasts of oral arguments in their entirety and is not permitted to show only partial segments of arguments. Arguments that are broadcast on The Florida Channel are also transmitted to all interested parties via a satellite feed, which media and other organizations can access without going to the court. Further, live and archived video of all oral arguments are also available on the Florida Supreme Court Gavel to Gavel website, which is maintained by WFSU-TV. WFSU-TV officials stated that archived video is generally posted within 48 hours of arguments.
According to the Public Information Officer, the close partnership between the court and WFSU-TV is key to providing access to video coverage of oral arguments. He stated that the partnership allows the court to leverage WFSU-TV staff, technical expertise, and production capabilities. For example, the court would not be able to devote the same number of staff to broadcasting oral arguments as WFSU does. In addition, WFSU has more advanced technology than the court would have been able to purchase. Figure 7 shows pictures of the Florida Supreme Court’s cameras, the court’s video and audio control room, and WFSU-TV’s production room. See appendix VI for additional details about the Supreme Court of California’s and Florida Supreme Court’s video and audio policies and procedures, coverage requests and online views, and policy implementation. The courts of last resort in the three countries included in our review— Australia, Canada, and the United Kingdom—have policies that provide video coverage of oral arguments by the court itself and do not allow media organizations to record oral arguments using their own equipment. These courts have varying procedures for providing coverage and mechanisms to help control who can use the footage and how the footage can be used. For example: High Court of Australia. Beginning in October 2013, the court has posted on its website video recordings of oral arguments heard before the full court—at least five of the court’s seven justices—in its Canberra courthouse. The court’s 2014-2015 Annual Report states that recordings are generally available at the end of each sitting day. According to the court’s Senior Executive Deputy Registrar, recordings may be posted on the next business day following arguments for some cases because they require editing to remove sensitive information, such as the names of victims in sexual assault cases. He said that this is one benefit of providing recordings of oral arguments instead of live coverage. 
He also stated that the court already had the technical capacity in place to record and post video of oral arguments and the costs are minimal for the court to provide such coverage. In addition, he noted that having the court use its own equipment and maintain control of the video recording process helped justices acclimate to the court’s providing video coverage and alleviate concerns about cameras being a distraction. The terms of use for the video recordings state that viewers may not modify, reproduce, publish, broadcast, or use the video of proceedings in any other way without prior written approval of the court. However, schools and universities may use video of proceedings in a classroom setting for educational purposes without prior approval. The Senior Executive Deputy Registrar stated that the court receives about 10 to 15 requests to use video recordings in a given calendar year and has approved all of them. Supreme Court of Canada. The court records video of oral arguments using its own equipment and, since February 2009, has streamed live video of arguments on its website. According to court officials, the court has never prohibited video coverage of a public proceeding. The court also has an agreement with the Canadian Public Affairs Channel (CPAC) which allows CPAC to televise and live stream arguments. The agreement states that CPAC will broadcast arguments in their entirety, but may use clips, sound bites, or excerpts for its programming, provided that they are balanced and fair to the parties and all concerned in the appeal. In addition, CPAC is authorized and has agreed to make broadcast feeds of oral arguments available to other broadcast members of the Canadian Parliamentary Press Gallery at a central node for news and public affairs broadcasts only. The court and CPAC also provide archived video recordings of oral arguments on their websites. 
Parties and individuals who are not members of the news media must submit a request to the court to obtain permission to use oral argument recordings. Requests are made using an electronic form on the court’s website, which requires information such as a description of the video or webcast requested, how it will be used, and the medium in which it will be used (e.g., Internet, video, film, DVD). If approval is granted, the requester is required to sign an agreement detailing the terms of use. Agreements may include provisions to, for example, use footage in a context that presents the case and the positions of the litigants in a fair and balanced way and does not harm the reputation of the court or of the counsel or justices appearing in the footage. The U.K. Supreme Court. The court records video of oral arguments using its own equipment and, in October 2014, began streaming live video of oral arguments on its website. In addition, Sky News, a U.K. broadcasting organization, has streamed the court’s live video of oral arguments on its website since May 2011. According to the court’s Head of Communications, media organizations can access the court’s video feed in a nearby broadcast studio. He stated that the court’s recording of its own video allows it to control what is filmed and interrupt or terminate coverage if necessary. The court also began making archived video recordings of oral arguments available on its website in May 2015. According to the court’s press release, footage is uploaded the next working day after an argument is heard and is available until about a year after the date of the argument. The court has established rules for how videos of oral arguments can be used by broadcasters. For example, the rules only allow use in news, current affairs, and educational programs and prohibit use in light entertainment, satirical, and other types of programs. 
In addition, the rules state that any stills produced from the video must be used in a way that has regard to the dignity of the court and its functions as a working body. According to the Head of Communications, all of the U.K.’s main media broadcasting organizations have agreed to these rules. He stated that the court enforces its policy to the best of its ability with limited resources and that, to his knowledge, there have not been any violations of the rules. See appendix VII for additional details about the video policies and procedures, online video views, and policy implementation of these foreign courts of last resort. The judges and attorneys we interviewed in selected appellate courts who have experience with video and audio coverage of oral arguments cited several benefits of such coverage, including greater public access to the courts and educating the public on the judicial system, among others. Administrative officials in selected courts also provided additional examples of these benefits. Public access. Fifteen of the 16 judges and all nine attorneys stated that they believed that coverage has enhanced or could potentially enhance the public’s access to the courts, particularly as the public relies more heavily on television as a principal source of information. For instance, one attorney and one judge said that, in high-profile cases or those of interest to the public, the public could be more informed about both the process and the issues in the case through video coverage. Further, that attorney noted that providing greater access to the court through video or audio coverage is valuable because, as more information about court proceedings is available to the public, more people will understand the courts and the judicial system.
In addition, two judges with whom we spoke said that a benefit of same-day audio coverage is that attorneys or other interested persons do not have to physically go to the court to hear an oral argument, but instead could access same-day audio recordings of the argument on the court’s website. Two attorneys who practice in the same court voiced some of the same benefits, stating that they believed that same-day audio coverage has provided more access to the court, information about what happens in the court, and is useful for persons who are not able to attend the oral argument. Moreover, another attorney with whom we spoke stated that he believed video or audio coverage of oral arguments increases the information available to the public and the media, which could also result in a more complete and neutral representation of oral arguments by the media. Administrative officials in selected courts also described instances in which they believed that video or audio coverage of arguments in their courts had enhanced public access. For example, according to a U.K. Supreme Court official, one of the main reasons the court provides video coverage of its proceedings is to ensure that the country’s citizens are able to watch the proceedings in their highest court and hear important points related to principles in the development of common law. In addition, an official in the High Court of Australia said that providing video recordings of proceedings allows more of the public to view proceedings because Australia is a large country and most of its population does not reside in Canberra, where the court is located. Moreover, according to officials from the U.S. Court of Appeals for the Ninth Circuit, live streaming oral arguments has increased public access to the court, particularly in cases of high public interest. 
For example, in November 2014, the court heard arguments in a case regarding an incident in which a high school student aimed a laser pointer at an incoming passenger jet as it approached an airport near his home. Officials stated that the courtroom was not able to accommodate the large number of students from his high school who were interested in viewing oral arguments, but students were able to watch the live-streamed video of arguments at the school. Education. Fourteen of the 16 judges and seven of the nine attorneys with whom we spoke cited public education on the judiciary as a benefit or potential benefit of video or audio coverage of oral arguments. For instance, one judge said that because the work of the courts can easily be misunderstood and is not in the headlines as much as the work of other branches of the government, video coverage is useful for providing the public a window into how the courts think about the issues in a case. Moreover, one attorney with whom we spoke stated that video coverage of oral arguments is a useful learning tool because she can review her arguments to identify areas for improvement. Additionally, this same attorney said that video coverage is useful for junior attorneys to watch so they can learn about how to conduct oral arguments and the legal issues of the case. However, another attorney stated that coverage of oral arguments may be more misleading than illuminating because, for those watching to have an accurate view of the arguments, they would need to understand the entire case and the judicial process. Additionally, ten of the 16 judges stated that they believed that coverage of oral arguments, which is only part of the decision making process, may not be helpful for understanding the case in its entirety. 
For instance, one judge stated that the public might attain a general understanding of the issues in a case and what was of concern to the court, but may not have all the information needed to fully understand a case after viewing arguments. Another judge noted that the written briefs that parties submit to the court are critical to understanding the case, and that oral arguments frequently address narrow aspects of the case that the judges are concerned about. Court administrative officials with whom we spoke also provided examples of instances in which video or audio coverage of oral arguments in their courts has provided educational benefits. For example, an official from the Florida Supreme Court stated that high-profile, controversial cases can be misunderstood by the public and broadcasting oral arguments in their entirety can help dispel misconceptions about the case and how the court operates. For instance, this official stated that broadcasting oral arguments in the 2000 presidential election cases that were before the Florida Supreme Court helped educate the public about the judicial system and noted that it was beneficial for the public to be able to see the arguments and draw their own conclusions. In addition, officials from the U.S. Court of Appeals for the Ninth Circuit stated that law schools have used live-streamed and archived oral arguments as a learning tool for their students. A U.K. Supreme Court official also stated that video coverage helps educate attorneys, law students, and others in the legal profession who can watch the justices in action and see how attorneys conduct arguments. Public confidence in the courts. Seven of the 16 judges and seven of the nine attorneys with whom we spoke believed that coverage has enhanced or could potentially enhance confidence in the courts. 
For instance, 1 judge stated that if a good judicial system is in place, video or audio coverage of arguments, which demonstrates how the system works, would increase the public’s understanding of and confidence in the courts. Further, one attorney said that providing coverage of oral arguments would show the public the work the courts conduct, as well as the quality and quantity of the work the court puts into each case. However, 7 judges with whom we spoke believed that coverage may not enhance confidence. For instance, 4 judges noted that it would be hard to identify the effect of coverage on the public’s confidence in the courts. In particular, 1 judge stated that it would depend greatly on what a person already thinks of the court before seeing any video or audio coverage of oral arguments, while 2 judges said that it would be hard to determine the specific impact of coverage on public confidence. Judicial accountability. Eight of the 16 judges and five of the nine attorneys with whom we spoke stated that coverage has increased or could potentially increase judicial accountability, although 7 judges felt that it did not affect accountability. For example, 1 judge stated that video coverage is a form of accountability in that it demonstrates how judges reason and think through cases, and helps explain the judicial process and justify the court’s results. Moreover, 1 judge said that he believed that video or audio coverage would increase the accountability of any public official whose work was covered, including judges, although he noted that the public does not and should not have access to the judges’ deliberative process. However, 4 U.S. courts of appeals judges explained that judicial accountability is already very high, and if judges make mistakes, they are documented in publicly issued opinions; therefore, they did not believe that coverage would increase judicial accountability. 
The judges and attorneys we interviewed in the selected appellate courts raised some concerns with video or audio coverage of oral arguments, including how the media might use such coverage, among others. Effect on court participants. Almost all judges and attorneys we interviewed stated that they did not believe video or audio coverage had affected, or would affect, the behavior of court participants. For example, at least 12 of the 16 judges and eight of the nine attorneys we interviewed said that they personally were not affected by video or audio coverage and had not observed judges or attorneys appearing to grandstand, talking in sound bites, or being more attentive or courteous to others; judges altering their methods of questioning; effects on court decorum; or other changes in behavior. Three attorneys we interviewed explained that coverage did not affect their behavior because, during oral arguments, they are so focused on the arguments themselves that it is not possible to think about anything else, including video coverage. Two judges stated that judges or attorneys could grandstand or be more courteous, in part because judges' questioning might otherwise be misinterpreted as badgering attorneys. However, one judge noted that he was not sure whether this behavior would be caused by audio coverage. Privacy and security. Fifteen of the 16 judges and all nine attorneys with whom we spoke did not have concerns with the effect of coverage on their own privacy and security, while 1 judge we interviewed expressed concerns. Specifically, this judge recounted having personally experienced security concerns in a particular case in which a video clip of questioning by the judge was posted and disseminated on social media. The judge received threats as a result of the video coverage. 
In addition, 6 judges and four attorneys said that there was the potential for coverage to affect the privacy and security of court participants even though they had not experienced issues themselves. Media use of coverage. Some judges and most of the attorneys with whom we spoke also raised some concerns with how the media might use coverage of oral arguments. For instance, 5 of the 16 judges and eight of the nine attorneys we interviewed stated that they believed that video or audio coverage might potentially result in portions of the proceeding being distorted by the media. However, 11 of the 16 judges we interviewed stated that they did not believe that coverage might result in such distortions. In addition, three attorneys noted that distortions happen even without audio or video coverage. For instance, one attorney stated that the media regularly distort proceedings, including some of her own oral arguments, regardless of video or audio coverage. Another attorney stated that, in her experience, proceedings have been inaccurately covered by the media. For instance, the reporters listening to an argument are often not lawyers and may not understand the oral argument; as a result, members of the media may focus on a segment of the case that they think is interesting and use that segment in their reporting of the argument even if they have taken that segment out of context. She noted that same-day audio coverage may help prevent distortion because it allows reporters to review and confirm what actually occurred before reporting on the case and allows the public to independently listen to the entire oral argument. Four judges and four attorneys with whom we spoke stated that they believe that coverage by the court itself—such as the court recording oral arguments using its own equipment—versus coverage by the media, might help or could potentially help mitigate these concerns, including potential distortion of proceedings by the media. 
For example, one attorney stated that if the court produces the coverage, then the court can control it and release it as the court sees fit. He also noted that while the media generally have an incentive to promote coverage to gain viewers, the court does not have such an incentive. Further, 1 judge stated that when the court shows coverage of the entire oral argument, people have the opportunity to make their own judgments about what they see, while the media may insert their own views. Moreover, 12 of the 16 judges and five of the seven attorneys with whom we spoke stated that the media showing edited segments of oral arguments is not sufficient to provide a complete or accurate understanding of court proceedings. For example, one attorney stated that the public may misunderstand the proceeding if the media show edited segments and do not provide proper context for the case. In addition, 1 judge stated that a local channel broadcasts the court's oral arguments in their entirety, which is preferable to the media showing edited snippets. He noted that television networks rarely have the air time to broadcast an entire oral argument and will not do so unless it is for a landmark case. A representative from one media organization explained that some media outlets selectively cover oral arguments, which may appear to some as distortions, because they must report on the case in a short news segment during their broadcast. He stated that, while his organization generally broadcasts oral arguments in their entirety, other media outlets have such time constraints and cannot do so. A small number of studies have also addressed the effects of video coverage on appellate courts. See appendix II for information about these studies. 
In our interviews with selected appellate judges and attorneys who have had experience with video or audio coverage, we asked for their perspectives on the extent to which the benefits, concerns, or potential effects of coverage discussed previously might also apply to video or live-audio coverage of the U.S. Supreme Court's oral arguments, if such coverage were to be allowed. Twelve of the 16 judges and eight of the nine attorneys we interviewed in selected appellate courts said that they believed that the potential benefits, concerns, and effects might apply to the U.S. Supreme Court. For example, 3 of the 16 judges and two of the nine attorneys we interviewed identified benefits to the public of video or same-day audio coverage, such as providing the public with access to the Court's proceedings, or enhancing the public's perception or understanding of the Court's proceedings. One judge noted, however, that video coverage of oral arguments could distort the public's perception of what the Court does, as the coverage would likely focus on a small number of high-profile cases and little attention would be given to the other cases the Court hears. In addition, 7 of the judges and two of the attorneys noted that, given the greater media or public interest in the U.S. Supreme Court and the higher profile or sensitivity of the cases it hears, the potential concerns and effects may be magnified. For instance, 3 judges and three attorneys felt that coverage could affect the behavior of court participants, such as justices adjusting their lines of questioning or attorneys grandstanding. Further, 5 judges and one attorney said that privacy and security concerns associated with coverage of oral arguments at the U.S. Supreme Court would be greater than at the appellate level because of the increased interest and profile of the Court and its cases. One judge explained that, because U.S. 
Supreme Court cases are so often high-profile, the Justices could face threats against them after every argument, compared to a small number of such instances at the appellate court level. We also requested perspectives on the potential benefits of and concerns with video coverage of oral arguments from the U.S. Supreme Court's Public Information Officer, as well as four attorneys who have argued before the Court. Three of these four attorneys believed that the Court should allow additional access to coverage of its oral arguments. Of these three attorneys, two stated that the Court should allow video coverage and one stated that allowing more same-day audio access to oral arguments would be a good starting point. The fourth attorney stated that he would support additional access to coverage if the Court allowed it but that the Justices are in the best position to determine whether such access should be allowed. All four attorneys stated that there would be potential benefits to allowing video coverage of U.S. Supreme Court proceedings. Specifically, all of them stated that they believed allowing video coverage of U.S. Supreme Court oral arguments would enhance access to the Court. For example, one attorney said that the courtroom has a limited number of seats available and it can be costly to travel to the Court. He noted that allowing video coverage would be more equitable because all members of the public could view coverage on television or the Internet. Another attorney stated that there is a substantial difference between access to transcripts of oral arguments, which the Court provides, and viewing oral arguments. He noted that individuals are more likely to be interested in viewing arguments. In addition, all four attorneys stated that they believed allowing video coverage would help educate either the public about the judicial system or law students and professionals on how to conduct arguments. 
For instance, one attorney stated that video coverage would allow individuals to see the judicial branch operating at the highest level and help increase understanding of the U.S. Supreme Court’s work. Further, three of the four attorneys believed that allowing video coverage would increase the public’s confidence in the Court. For example, one attorney stated that such coverage would allow the public to see the rigor and seriousness with which the Court conducts its business. Another attorney noted that it could help alleviate the potential public perception that the Court is a partisan institution. The U.S. Supreme Court’s Public Information Officer (PIO) and the attorneys with whom we spoke also raised potential concerns with video coverage of oral arguments. According to the PIO, individual Justices have commented on the need to ensure the fairness and efficiency of its decision-making process. They have noted that televising U.S. Supreme Court proceedings could adversely affect the dynamics of the oral arguments, diminishing the frankness and extemporaneity of the exchanges, and reducing their usefulness for both the counsel and Justices. Three of the four attorneys also shared concerns related to potential changes in the behavior of court participants, such as changes to how attorneys prepare for oral arguments and the manner in which oral arguments are conducted. For example, one attorney stated that allowing video or live-audio coverage of the Court’s oral arguments might change the tenor of the argument, which currently focuses on the genuine search for truth and is a very effective process. He noted that attorneys may feel the need to choose their words more carefully during arguments because of the potential for negative public reactions. 
In addition, three attorneys stated that allowing video coverage might potentially result in inexperienced attorneys playing to the public or grandstanding, but noted that such behavior would be detrimental to their case. The U.S. Supreme Court’s PIO and the attorneys we interviewed further noted concerns related to the information and perceptions the public could potentially get from viewing oral arguments. Specifically, the PIO stated that Justices have observed that oral argument is a small part of the advocacy process. According to the PIO, because oral argument merely supplements the extensive and often technical written submissions, it is generally indispensable to read the written briefs in order to understand the oral arguments. She also noted that the Justices have emphasized that the Court’s written decisions stand as the Court’s most important and enduring work—work that should not be overshadowed by one piece of the decision-making process. All four attorneys with whom we spoke stated that they believed viewing oral arguments would not provide a complete understanding of a case, but could still provide useful information. These four attorneys also stated that there is the potential for the media to distort video coverage of oral arguments to varying degrees. For example, one attorney stated that there is the potential for statements to be taken out of context or misrepresented but the benefits of coverage outweigh the risks, and another attorney stated that such distortion could be a significant problem. This attorney noted that the Court providing coverage of arguments in their entirety, as opposed to edited coverage by the media, could help alleviate this concern. Finally, the PIO stated that, above all, the Justices are trustees of an institution that has functioned well and earned the public’s confidence. 
The Justices have expressed caution about introducing changes that could diminish the public's respect for and create misconceptions about the Court. The PIO stated that the Court is proceeding carefully in evaluating whether it should make changes to its current practice of not providing video camera coverage of its proceedings. We provided a draft of this report to the U.S. Supreme Court, the Administrative Office of the U.S. Courts, the Federal Judicial Center, and the Department of Defense for review and comment. They had no written comments on the draft report. The Administrative Office of the U.S. Courts provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the U.S. Supreme Court; the Administrative Office of the U.S. Courts; the Federal Judicial Center; the state and foreign courts of last resort in which we conducted interviews—the Supreme Court of California, the Florida Supreme Court, the High Court of Australia, the Supreme Court of Canada, and the U.K. Supreme Court; the Department of Defense; appropriate congressional committees and members; and other interested parties. In addition, this report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions, please contact Diana Maurer at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VIII. We addressed the following questions as part of this review: 1. What is the U.S. Supreme Court's policy regarding access to video and audio of oral arguments and what are the policies of other selected appellate courts? 2. 
What do selected stakeholders report are the benefits of and concerns with allowing video and audio coverage of oral arguments in appellate courts, including the U.S. Supreme Court? To address the first question, we analyzed information on the U.S. Supreme Court’s policy regarding access to video and audio of oral arguments that we obtained from Court documents, the Court’s website, and its Public Information Officer, including the process by which the Court decides whether to grant media requests to release audio recordings of oral arguments on the same day of the arguments. We also analyzed data from the Public Information Officer on the cases for which the Court received requests for same-day audio recordings of oral arguments and whether the requests were granted or declined from the Court’s 2000 term—the term in which same-day audio was first made available—through the Court’s 2014 term. We assessed the reliability of these data and determined them to be reliable for the purposes of this report. This assessment included comparing these data with other available sources and obtaining information from the Court about the accuracy and completeness of these data. We also analyzed information about policies regarding access to video and audio of oral arguments from selected appellate courts. We focused on appellate courts because these courts conduct oral arguments and, as such, their proceedings and participants are most similar to those of the U.S. Supreme Court. The selected appellate courts included the U.S. courts of appeals for the 13 federal circuits and courts of last resort—the highest appellate courts in a given jurisdiction—in the 50 states and the District of Columbia. These courts were chosen because their decisions may be directly appealed to the U.S. Supreme Court under certain circumstances and/or because they are generally the highest court in their respective jurisdictions. 
We also included foreign courts of last resort because they are the highest appellate courts in their respective countries. We selected the High Court of Australia, Supreme Court of Canada, and United Kingdom Supreme Court because their countries have common law legal systems in which judicial decisions establish legal precedents of law that are unwritten in statutes or codes, as does the United States; populations of over 20 million; and English as an official language and the language predominantly spoken. We identified and compiled rules, court and administrative orders, guidelines, and other documentation of video and audio policies of the courts we selected by searching court websites and lexis.com and reviewing literature that discussed the video and audio policies of the courts. We also contacted administrative officials in these courts to confirm that the written policies we identified were complete and current and to obtain information on and documentation of policies not available online. We compiled the information on state courts of last resort from January through May 2015, and confirmed that our analysis of the policies was accurate, complete, and current as of January 2016 for 42 states and the District of Columbia and June through August 2015 for the 9 remaining states. In addition, we conducted interviews in person and on the phone or had written correspondence with court administrative officials in 8 selected U.S. courts of appeals, state courts of last resort, and foreign courts of last resort to obtain information on the implementation of video and audio policies in the courts, such as resource requirements or challenges. We selected U.S. courts of appeals to reflect a range of video and audio policies. As such, we visited the U.S. Courts of Appeals for the Second and the Ninth Circuits because they are the two U.S. courts of appeals that currently allow media video coverage of oral arguments, and the U.S. Court of Appeals for the D.C. 
Circuit because it is one of the U.S. courts of appeals that does not allow video coverage of oral arguments. We selected state courts of last resort based on their (1) range of video and audio policies, including limitations on coverage such as whether certain types of cases are excluded from coverage and whether consent of parties is required; (2) having relatively high caseloads to increase the likelihood of cases with media interest and coverage; (3) extent of experience in allowing video or audio coverage; and (4) proximity to selected U.S. courts of appeals. Using these criteria, we visited state courts of last resort in Florida and California. We also contacted courts of last resort in 3 other states that require the consent of parties before coverage is allowed or do not allow video coverage of oral arguments to arrange interviews, but officials in these states either did not respond to our requests or declined to meet with us. In addition, we conducted interviews with or received written responses from court administrative officials in our three selected foreign courts of last resort—the High Court of Australia, Supreme Court of Canada, and United Kingdom Supreme Court. The information collected from the interviews with officials in these selected appellate courts cannot be generalized to all administrative officials or appellate courts. However, the site visits and interviews provided us with valuable information about court officials' experiences with and perspectives on a variety of policies regarding access to video and audio of oral arguments. In addition, where available, we obtained data from these courts on the number of cases for which media video or audio coverage has been requested or granted and the number of views that video or audio recordings of oral arguments posted online have received. We assessed the reliability of these data and determined them to be reliable for the purposes of this report. 
This assessment included obtaining and reviewing information from court administrative officials on how the data are collected and maintained. To address the second question, we conducted semi-structured interviews with 16 judges and nine attorneys who have had experience with video or audio coverage. Specifically, we interviewed 14 judges and nine attorneys who practice in the selected federal circuit courts of appeals and state courts of last resort described above—the U.S. Courts of Appeals for the Second, Ninth, and D.C. Circuits, and courts of last resort in Florida and California—to discuss their experiences with video and/or audio coverage of oral arguments, and their perspectives on the benefits of and concerns with allowing such coverage in appellate courts, including the U.S. Supreme Court. We also interviewed 2 justices of the United Kingdom Supreme Court. We selected the judges and attorneys based on recommendations from the courts. The information obtained from these interviews cannot be generalized to all appellate courts, judges, or attorneys; however, the interviews provided us with insights regarding the benefits of and concerns with video and audio coverage of oral arguments in these courts. In addition, we obtained written responses from the U.S. Supreme Court regarding the Justices' perspectives on video coverage of the Court's oral arguments. We also conducted semi-structured interviews with four attorneys who have argued before the U.S. Supreme Court to obtain their perspectives on allowing video and audio coverage of oral arguments at the Court. We selected these attorneys because they had argued nine or more cases before the Court from the 2012 through 2014 terms and based on their availability. Their perspectives cannot be generalized to all attorneys who have argued before the U.S. Supreme Court, but provide insights regarding allowing video and audio coverage of the Court's oral arguments. 
Finally, we contacted or interviewed representatives from selected legal associations and media organizations to obtain their perspectives on the potential benefits of and concerns with allowing video and audio coverage of oral arguments in appellate courts, including the U.S. Supreme Court. We selected these organizations based on our review of relevant literature, their work in this area, and recommendations from others. The organizations included the American Bar Association, the Federal Bar Association, C-SPAN, the Coalition for Court Transparency, and the Radio Television Digital News Association. Their perspectives cannot be generalized, but provided insights into potential benefits of and concerns with such coverage. We conducted this performance audit from January 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Through searches of databases, Internet websites, and other sources available as of April 2015, we identified almost 400 documents that addressed video and audio coverage of court proceedings. We reviewed these documents and identified 53 studies. Of these 53 studies, 2 included findings on the effects of video coverage in appellate courts, while most of the remaining studies focused on trial courts. According to one of two researchers we interviewed who have conducted work in this area, more studies have been conducted in trial courts, rather than appellate courts, because of a greater interest in the potential effects on victims, witnesses, and jurors—stakeholders who are not involved in appellate court proceedings. 
Both of these researchers also stated that, in general, there is insufficient empirical or experimental research on the effects of coverage in courts. They said that conducting a rigorous study on the effects of coverage in court proceedings requires funding and time from both researchers and stakeholders involved, such as judges, attorneys, and court personnel. Neither of the two studies we identified used an experimental or quasi-experimental methodology. Instead, they reported findings on the perceived effects of video coverage in appellate courts and relied on data from surveys and interviews with stakeholders, which provided useful information on stakeholder experiences with video coverage. Table 4 describes these studies.

[Flattened table: cases for which the media requested same-day release of audio recordings of U.S. Supreme Court oral arguments]
Bush v. Palm Beach County Canvassing Board
McConnell v. Federal Election Commission
Rasul v. Bush; and Al Odah v. U.S. (Consolidated)
Cheney v. USDC District of Columbia
McCreary County v. ACLU of Ky.
Ayotte v. Planned Parenthood of Northern New England
Rumsfeld v. Forum for Academic and Institutional Rights
League of United Latin American Citizens v. Perry; Travis County, Tex. v. Perry; Jackson v. Perry; and GI Forum of Texas v. Perry (Consolidated)
Philip Morris USA v. Williams
Gonzales v. Planned Parenthood Federation of America
Parents Involved in Community Schools v. Seattle School District No. 1
Meredith v. Jefferson County Board of Education
Davenport v. Washington Education Association; and Washington v. Washington Education Association (Consolidated)
FEC v. Wisconsin Right to Life; and McCain v. Wisconsin Right to Life (Consolidated)
Declined:
Boumediene v. Bush; and Al Odah v. United States (Consolidated)
Crawford v. Marion County Election Board; and Indiana Democratic Party v. Rokita (Consolidated)
United States v. Ressam
Altria Group, Inc. v. Good
Winter v. Natural Resources Defense Council, Inc.
FCC v. Fox Television Stations, Inc.
Pleasant Grove City, Utah v. Summum
Philip Morris USA, Inc. v. Williams
Caperton v. A.T. Massey Coal Co., Inc.
Northwest Austin Municipal Utility District No. 1 v. Holder
Holder v. Humanitarian Law Project; and Humanitarian Law Project v. Holder (Consolidated)
National Federation of Independent Business v. Florida; and Florida v. Department of Health and Human Services (Consolidated)
Burwell v. Hobby Lobby Stores, Inc.; and Conestoga Wood Specialties Corp. v. Burwell (Consolidated)
Obergefell v. Hodges; Tanco v. Haslam; DeBoer v. Snyder; and Bourke v. Beshear (Consolidated)

The three U.S. courts of appeals that we visited—the U.S. Courts of Appeals for the Second, Ninth, and D.C. Circuits—have varying policies and procedures for video and audio coverage of oral arguments, with different levels of usage and resource requirements. Tables 5, 6, and 7 summarize the policies and procedures, coverage requests and online views, and policy implementation for these courts.

[Notes accompanying the state court policy tables:] Courts of last resort in 42 states and the District of Columbia confirmed their policies as of January 2016, and those in the 9 remaining states confirmed theirs as of June through August 2015. The states with asterisks were confirmed as of January 2016. This category does not include policies that prohibit media video or audio coverage of oral arguments that are not conducted in open court, such as arguments for cases that are closed to the public, sealed, or confidential under law. The D.C. Court of Appeals has no written policies on media video or audio coverage of oral arguments, and according to the court's Clerk, does not allow such media coverage. The Florida Supreme Court partners with WFSU-Television, a public broadcasting station, to provide video coverage of oral arguments. 
Freestanding video cameras are not permitted in the courtroom during arguments. Although oral arguments in the Supreme Court of Illinois are not generally streamed live, video of arguments for four cases in the court was streamed live on another website in November 2015. Louisiana Canon 3 states that a judge should prohibit broadcasting, televising, recording, or taking photographs in the courtroom and areas immediately adjacent thereto at least during sessions of court or recesses between sessions except as provided by guidelines on media coverage. The guidelines for extended media coverage in appellate courts require media organizations to notify the court clerk of their intention to provide such coverage at least 20 days in advance of the proceedings and allow the chief justice to prohibit or limit coverage of Louisiana Supreme Court proceedings, among other provisions. According to the Deputy Judicial Administrator of the Louisiana Supreme Court, there is a presumption that coverage is not allowed, although exceptions may be made for cases with high public interest. Mississippi Rules for Electronic and Photographic Coverage of Judicial Proceedings prohibit media coverage of certain matters, such as those involving divorce, neglect of minors, domestic abuse, and trade secrets, but the presiding justice can allow coverage by order. Missouri Court Operating Rules prohibit media video and audio coverage of juvenile, adoption, domestic relations, and child custody hearings. The Communications Counsel of the Supreme Court of Missouri stated that this limitation does not apply to the supreme court, although the court reserves the right to make a case-by-case determination about whether such coverage would be allowed.
Supreme Court Guidelines for Still and Television Camera and Audio Coverage of Proceedings in the Courts of New Jersey state that coverage is prohibited in certain proceedings, such as juvenile proceedings and those involving trade secrets, child abuse or neglect, and charges of sexual contact when the victim is alive. The Director of Communications and Community Relations for the New Jersey Courts stated that this limitation does not apply to the Supreme Court of New Jersey. North Carolina court rules state that media video and audio coverage is prohibited in certain judicial proceedings, such as juvenile and child custody proceedings and proceedings involving trade secrets. The Clerk of the Supreme Court of North Carolina stated that this limitation does not apply to the supreme court. Oklahoma’s two courts of last resort—the Supreme Court, which determines all issues of a civil nature, and the Court of Criminal Appeals, which decides all criminal matters—do not have written policies on media video and audio coverage. The office of the Chief Justice of the Oklahoma Supreme Court noted that the supreme court has left it up to each presiding judge to determine whether to allow coverage. According to the Chief Justice, the supreme court has allowed video coverage of oral arguments on a few occasions and is in the process of developing a written policy for such coverage. According to the Supreme Court of Pennsylvania’s Court Crier, the Pennsylvania Cable Network is the only media organization that can record proceedings. The courts of last resort in the two states we visited—California and Florida—have varying policies and procedures on video and audio coverage of oral arguments, with different levels of usage and resource requirements. Tables 8 and 9 summarize the policies and procedures, coverage requests and online views, and policy implementation for these courts. The courts of last resort in Australia, Canada, and the United Kingdom (U.K.) 
have policies that provide video coverage of oral arguments by the court itself, with varying procedures for doing so and mechanisms to help control who can use the footage and how the footage can be used. Tables 10, 11, and 12 summarize the policies and procedures, online views and usage requests, and policy implementation for these courts. In addition to the contact named above, Jill Verret (Assistant Director), Tom Jessor (Assistant Director), David Alexander (Assistant Director), Claudine Brenner, Colleen Candrl, Dominick Dale, Farrah Graham, Nina Gurak, Yvette Gutierrez, Eric Hauswirth, Tracey King, Jan Montgomery, Alice Paszel, Janet Temko-Blinder, and Johanna Wong made significant contributions to this report.

The U.S. Supreme Court—the highest appellate court in the country—hears high-interest cases potentially affecting millions. The Court generally hears oral arguments for these cases, which are open to the public. Seating in the Court is limited and media organizations, as well as members of Congress, have requested video coverage of oral arguments. GAO was asked to review video and audio coverage of proceedings in the U.S. Supreme Court and other appellate courts. This report addresses (1) the U.S. Supreme Court's policy regarding video and audio coverage of oral arguments and the policies of other selected appellate courts and (2) perspectives of selected stakeholders on the benefits of and concerns with allowing such coverage. GAO analyzed policies on video and audio coverage of oral arguments in the U.S. Supreme Court and other selected appellate courts—13 U.S. courts of appeals and the highest appellate courts in the 50 states and the District of Columbia and three foreign countries—chosen because of comparability to the U.S. Supreme Court.
GAO obtained information from administrative officials in 8 courts, selected based on video and audio policies, and perspectives on the benefits of and concerns with coverage from (1) 16 judges in 6 of these courts and 9 attorneys in 5 of these courts and (2) the Public Information Officer (PIO) of the U.S. Supreme Court and 4 attorneys who have argued before the Court. Results are not generalizable but provided insights on video and audio coverage of oral arguments. GAO also reviewed studies on this issue. The U.S. Supreme Court (the Court) posts audio recordings of oral arguments on its website at the end of each argument week, but does not provide video coverage of these arguments. In addition, starting in 2000, the Court began granting requests for access to audio recordings of oral arguments on the same day arguments are heard in selected cases. As of October 4, 2015, the Court had received media requests for access to same-day audio recordings in 58 cases and had granted them in 26 cases. Other selected appellate courts have varying policies on video and audio coverage of oral arguments. For example, two of the 13 U.S. courts of appeals allow media video coverage of oral arguments. Also, 9 of these 13 courts generally post audio recordings of arguments on their websites the same day arguments are heard. The highest appellate courts in 49 states have written policies that allow media video and audio coverage of oral arguments and almost all of these courts have video or audio of oral arguments available online. The highest appellate courts in Australia, Canada, and the United Kingdom have policies that provide video coverage of oral arguments by the court itself. Stakeholders in selected courts stated that the benefits of video or audio coverage of oral arguments in their courts include educating the public on the judicial system, among others, but also expressed concerns with regard to how the media might use such coverage.
For example, fourteen of the 16 judges and seven of the nine attorneys GAO interviewed in the selected appellate courts cited public education on the judiciary as a benefit or potential benefit of video or audio coverage of arguments. One judge noted that video coverage is useful for providing a window into how the courts think about the issues in a case. Five judges and eight attorneys stated that coverage might potentially result in portions of the arguments being distorted by the media. However, four judges and four attorneys said that the court providing coverage itself might help mitigate these concerns. For example, one attorney stated that this allows the court to control and release the coverage as it sees fit. With regard to the U.S. Supreme Court allowing video coverage of oral arguments, the four attorneys GAO interviewed who have argued before the Court also cited similar educational benefits and concerns regarding the media potentially distorting coverage. Further, three of the four attorneys and the Court's Public Information Officer (PIO) raised concerns that coverage may potentially affect court participants' behavior. The PIO stated that individual Justices have commented that televising proceedings could adversely affect the dynamics of the oral arguments, among other concerns, and have expressed caution about introducing changes that could create misconceptions about the Court.
In 2008, the most recent year for which data were available, more than 153 million cattle, sheep, hogs, and other animals ultimately destined to provide meat for human consumption were slaughtered at about 800 slaughter plants throughout the United States that engage in interstate commerce. Under federal law, meat-processing facilities that engage in interstate commerce must have federal inspectors on site. FSIS classifies plants according to size and the number of employees. Specifically, large plants have 500 or more employees; small plants have from 10 to 499 employees; and very small plants have fewer than 10 employees, or annual sales of less than $2.5 million. Under HMSA, FSIS inspectors are to ensure that animals are humanely treated from the moment they arrive at a plant until they are slaughtered. FSIS deploys these inspectors from 15 district offices nationwide. Figure 1 shows the states and territories in each FSIS district. After livestock arrive at a slaughter plant, plant employees monitor their movements as they are unloaded from trucks to holding pens and eventually led into the stunning chute. Plant employees typically restrain an animal in the chute and stun it by using one of several devices—carbon dioxide gas, an electrical current, a captive bolt gun, or a gunshot—that, as required by HMSA regulations, must be rapid and effective in rendering the animal insensible. (See fig. 2.) Under HMSA, animals must be rendered insensible—that is, unable to feel pain—on the first stun before being shackled, hoisted on the bleed rail, thrown, cast, or cut. According to the expert we consulted, animals on the bleed rail that exhibit any of the following signs are considered sensible and would therefore need to be restunned: lifting the head straight up and keeping it up (righting reflex) or vocalizing. Figure 2 shows stunning methods consistent with HMSA.
Once the animals are considered stunned, they are shackled and hoisted onto a processing line, where their throats are cut, and they are fully bled before processing continues. HMSA exempts only ritual slaughter, such as kosher and halal slaughter, from the requirement that animals be rendered insensible on the first blow. See appendix II for a more detailed description of the movement of livestock through the plant. FSIS has issued a variety of regulations and directives instructing FSIS inspectors on how to enforce HMSA. Overall, the regulations emphasize the minimization of "excitement and discomfort" to the animals and require that they be effectively stunned before being slaughtered. In 2003, FSIS guidance on humane handling enforcement stated that inspectors were to determine whether a humane handling incident has resulted in, or will immediately lead to, an injured animal or inhumane treatment. The guidance also specified the types of actions inspectors should take when these situations occur. Also in 2003, FSIS began providing "humane interactive knowledge exchange" scenarios as an educational tool to enhance inspectors' understanding of appropriate enforcement actions. These eight written scenarios, available on FSIS's Web site, provide examples of inhumane incidents and suggest enforcement actions. In 2005, the agency issued additional guidance specifying egregious humane handling situations. This guidance defines egregious as any act that is cruel to animals or a condition that is ignored and leads to the harming of animals.
The guidance provided the following examples of egregious acts: making cuts on or skinning conscious animals; excessively beating or prodding ambulatory or nonambulatory disabled animals; driving animals off semitrailers over a drop-off without providing adequate unloading facilities so that animals fall to the ground; running equipment over animals; stunning animals and then allowing them to regain consciousness; leaving disabled livestock exposed to adverse climate conditions while awaiting disposition; or otherwise intentionally causing unnecessary pain and suffering to animals. If inspectors determine that an egregious humane handling incident has occurred, they may suspend inspection at the plant immediately, effectively shutting down the plant's entire operation, and determine corrective actions with plant management and the district office. In 2008, after the reported inhumane handling incident at the Westland/Hallmark plant in California, FSIS expanded its guidance to include two more examples of egregious actions for which inspectors may suspend a plant: (1) multiple failed stuns, especially in the absence of corrective actions, and (2) dismemberment of live animals. According to FSIS guidance, when FSIS inspectors observe a violation of HMSA or its implementing regulations and determine that animals are being injured or treated inhumanely, they are to take both of the following enforcement actions, which may restrict a facility's ability to operate: Issue a noncompliance report. This report documents the humane handling violation and the actions needed to correct the deficiency in cases where the animal may be injured or harmed. Inspectors are also directed to notify plant management when issuing a noncompliance report. Issue a regulatory control action. Inspectors place a regulatory control action or a reject tag on a piece of equipment or an area of the plant that was involved in harming or inhumanely treating an animal.
This tag is used to alert plant management to the need to quickly respond to violations that they can readily address. The tag prohibits the use of a particular piece of equipment or area of the facility until the equipment is made acceptable to the inspector. When inspectors determine that an egregious humane handling incident has occurred, in addition to issuing a noncompliance report and regulatory control action, FSIS may also take the following actions: Suspend plant operations. An on-site FSIS supervisor—known as an inspector-in-charge—can initiate an action to suspend plant operations when an inspector observes egregious abuse to the animals. The inspector must document the facts that serve as the basis of the suspension action in a written memorandum of interview and promptly provide that information electronically to district officials. Ultimately, district officials assess the facts supporting the suspension, take any final action, and notify officials in headquarters. Withdraw the plant’s grant of inspection. If the plant fails to respond to FSIS’s concerns about repeated and/or serious violations, the district offices may decide to withdraw all inspectors. Without FSIS inspectors on site, the plant’s products cannot enter interstate or foreign commerce. The FSIS Administrator may file a complaint to withdraw the plant’s grant of inspection and if the grant of inspection is withdrawn, the plant must then reapply for and be awarded a grant of inspection before it may resume operations. FSIS employs inspectors at plants and in FSIS districts to help enforce HMSA and its food safety inspections. In the plant, FSIS employs inspectors-in-charge, online and offline inspectors, and relief inspectors. Inspectors-in-charge are the chief inspectors in the plant and may or may not be veterinarians. 
These inspectors are responsible for reporting humane handling activities for each shift, as well as carrying out food safety responsibilities, and making enforcement decisions in consultation with district officials when necessary. Online inspectors are typically assigned specific duties on the slaughter line, such as inspecting carcasses and animal heads; however, they may also perform some humane handling inspection duties. Offline inspectors conduct a variety of inspection activities throughout the plant and may also perform some humane handling inspection activities. FSIS also employs permanent relief inspectors, who step in for plant inspectors who are absent for a period of time, and may also observe humane handling. The plant inspectors and the inspectors-in-charge are supervised by frontline supervisors, who oversee multiple plants. Each plant has at least one FSIS veterinarian who is responsible for examining livestock prior to slaughter and performing humane handling activities. Some plants may require two veterinarians, depending on the volume of animals slaughtered at the plant and the number of operating shifts. Figure 3 provides an overview of FSIS personnel involved in the enforcement of HMSA. Although FSIS does not require inspectors to observe the entire handling and slaughter process during a shift, it requires inspectors-in-charge to record the amount of time that the FSIS inspectors collectively devoted to observing humane handling during one shift. The inspectors-in-charge enter this information into a data tracking system known as the Humane Activities Tracking System. At the district level, the district veterinary medical specialist (DVMS) in each of FSIS's 15 districts serves as the liaison between the district office and headquarters on all humane handling matters. These employees are directed to visit each plant within their district over a 12- to 18-month period and review the humane handling practices at each plant.
DVMSs may also coordinate the verification of humane handling activities and educate plant inspectors on relevant humane handling information contained in directives, notices, and other communications from headquarters through the district office to inspectors in the field. Industry groups and animal welfare organizations have recently recommended actions to improve HMSA enforcement. In 2008 testimony as an expert witness, Dr. Grandin proposed that FSIS guidance on humane handling be clearer—especially in determining when humane handling incidents at slaughter plants should be considered egregious violations of HMSA. She has also suggested that FSIS adopt a numerical scoring system—which has been adopted by the American Meat Institute—to determine how well animals were being stunned and handled at the plants. The system has different standards for different species of animal and can be adjusted to fit plants that slaughter fewer animals. Overall, the system seeks to reduce the subjective nature of inspections by using objective measures to help slaughter plants improve their humane handling performance. In addition, the Humane Society of the United States has proposed a variety of reforms to strengthen HMSA enforcement, including requiring FSIS inspectors to observe the entire humane handling and slaughter process during a shift. According to our survey results and analysis of FSIS data, inspectors have not taken consistent actions to enforce HMSA once they have identified a violation. These inconsistencies may be due, in part, to weaknesses in FSIS's guidance and training for key inspection staff. While FSIS expects its inspectors to use their professional judgment based on the guidance in deciding enforcement actions, industry and others are using other tools to assist their efforts to improve humane handling performance.
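A numerical scoring system of the kind described above amounts to a set of objective, per-measure percentage checks. The sketch below illustrates the idea; the measure names and percentage limits are assumptions made for illustration, not the American Meat Institute's actual audit criteria.

```python
# Illustrative sketch of a numerical humane-handling audit score.
# The measures and percentage limits below are assumed for illustration;
# they are NOT the American Meat Institute's actual criteria.

def within_limit(failures: int, total: int, max_fail_pct: float) -> bool:
    """Pass if the failure rate stays at or below the allowed percentage."""
    return 100.0 * failures / total <= max_fail_pct

def score_plant(counts: dict) -> dict:
    """Return a pass/fail result for each audited measure."""
    total = counts["animals_observed"]
    limits = {                       # assumed limits, in percent
        "second_stun_needed": 5.0,
        "vocalizations": 3.0,
        "falls": 1.0,
        "electric_prod_uses": 25.0,
    }
    return {measure: within_limit(counts[measure], total, limit)
            for measure, limit in limits.items()}

results = score_plant({
    "animals_observed": 100,
    "second_stun_needed": 4,
    "vocalizations": 2,
    "falls": 0,
    "electric_prod_uses": 30,
})
print(results)  # every measure passes except electric prod use (30% > 25%)
```

Because each measure is a simple count over the animals observed, two auditors watching the same shift should reach the same pass/fail result, which is the reduction in subjectivity the scoring system aims for; the limits can also be tuned per species or for low-volume plants, as the text notes.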
Furthermore, although FSIS has taken steps to correct data weaknesses in HMSA reporting that we noted in 2004, it has not used these data to analyze HMSA enforcement across districts and plants to identify inconsistent enforcement. For these reasons, FSIS cannot ensure that it is preventing the abuse of livestock at slaughter plants or that it is meeting its responsibility to fully enforce HMSA. According to FSIS officials, inspectors are to use their judgment in deciding whether to suspend a plant’s operations or take the less stringent enforcement action (that is, issue a noncompliance report and a regulatory control action) when a humane handling violation occurs. For example, FSIS guidance is unclear on what constitutes excessive electrical prodding, such as the number of times an animal can be prodded before the inspector should consider the prodding to be excessive and therefore egregious. According to FSIS’s guidance, if the inspector determines that the action was egregious, the inspector may also choose to suspend plant operations but is not required to do so. U.S. meat industry representatives have expressed concerns in interviews about the inconsistency of HMSA enforcement across districts. For example, according to American Meat Institute officials, the inconsistency in HMSA enforcement is the single most critical issue for the industry; furthermore, one official noted that a number of the differences in interpretation of HMSA compliance are related to determining whether or not an animal is sensible after stunning. In addition, the expert we consulted testified in April 2008 that FSIS inspectors need better training and clear directives to improve consistency of HMSA enforcement. Our survey results indicate differences in the enforcement actions that inspectors reported they would take when faced with a humane handling violation. In our survey, we asked inspectors their views on electrically prodding over 50 out of 100 animals. 
Figure 4 shows the inspectors' responses to questions concerning electrical prodding. Under FSIS's guidance, inspectors are directed to issue a noncompliance report and take a regulatory control action in cases of excessive electrical prodding, but suspension is not required. However, the expert we consulted told us that she considers these cases to be egregious humane handling violations that should result in suspensions. In addition, according to an FSIS training scenario, electrical prods are never to be used on the anus, eyes, or other sensitive parts of the animal. As figure 4 shows, 49 percent of the inspectors surveyed reported that they would either take a regulatory control action, such as placing a reject tag on a piece of equipment, or suspend a plant's operations for electrical prodding of most animals, and 29 percent reported that they would take none of these actions or did not know what action to take for electrical prodding of most animals. Furthermore, 67 percent of the inspectors surveyed reported that they would either take a regulatory control action or suspend operations for electrical prodding in the rectal area, and 10 percent reported that they would take none of these actions or did not know what action to take for electrical prodding in the rectal area. FSIS regulations prohibit electrical prodding that the inspector considers to be excessive. FSIS guidance also states that excessive beating or prodding of ambulatory or nonambulatory disabled animals is egregious abuse—and may therefore warrant suspension of plant operations. From inspectors' compliance reports, we identified several specific incidents in which inspectors neither took a regulatory control action nor suspended plant operations.
For example: In 2008, in the Denver district, the FSIS inspector reported observing a plant employee excessively using an electrical prod as his primary method to move the cattle—using the prod approximately 55 times to move about 46 head of cattle into the stun box. Cattle vocalized at least 15 times, which the inspector believed indicated a high level of stress. The FSIS inspector stated that this incident constituted excessive use of the electrical prod. As stated in FSIS guidance, excessive use of an electrical prod is an egregious violation that calls for the issuance of both a noncompliance report and a regulatory control action and for which an inspector may suspend plant operations. In this instance, the inspector stated that he had issued a noncompliance report. The inspector did not state that he took a regulatory control action and did not suspend operations at the plant, as the guidance allows. In the opinion of the expert we consulted, this was an egregious instance that should have resulted in a suspension. In 2007, in the Minneapolis district, an FSIS inspector reported observing plant employees using the electrical prods excessively to move hogs into the stunning chute. The animals became excited, jumping on top of one another, and vocalizing excessively. From the noncompliance report, it is unclear what, if any, regulatory actions were taken. According to FSIS regulations, electrical prods are to be used as little as possible in order to minimize excitement and injury; any use of such implements that an inspector considers excessive is prohibited. In 2008, in the Dallas district, the FSIS inspector reported that a plant employee used an electrical prod to repeatedly shock cows in the face and neck in an effort to turn them around in an overcrowded area. The inspector deemed the use of the electrical prod excessive, but the report does not indicate whether any regulatory control action was taken. 
With regard to stunning, our survey results and review of noncompliance records also show inconsistent enforcement actions when humane handling violations occurred. As figure 5 shows, 23 percent of inspectors reported they would suspend operations, while 38 percent would issue a regulatory control action for multiple unsuccessful captive bolt gun stuns. Similarly, 17 percent reported they would suspend operations for multiple misplaced electrical stuns, and 37 percent would issue a regulatory control action. According to FSIS guidance, egregious abuses that could result in a plant suspension include stunning animals and allowing them to regain consciousness, and multiple attempts to stun an animal, especially in the absence of immediate corrective measures. However, it is unclear when a suspension is warranted, even if the acts are deemed to be egregious. FSIS's guidance simply states that an inspector-in-charge may immediately suspend the plant if there is an egregious humane handling violation—however, the guidance contains no clear directive to do so. In the opinion of the expert we consulted, if over 10 percent of the animals require a second shot or if over 5 percent of pigs experience an improperly placed electrical stun, plant operations should be suspended. FSIS agreed that these incidents are troubling, and possibly egregious, but did not comment further. Figure 5 shows our survey results on stunning. We also identified several incidents in FSIS's noncompliance reports in which inspectors did not suspend plant operations or take a regulatory control action. For example, in 2009, in the Raleigh district, a plant employee stunned a bull twice in the head with a captive bolt, but the bull remained sensible. Instead of restunning the animal with the captive bolt gun, the employee then drove a steel instrument used to sharpen knives into the open hole in the bull's head in an attempt to make the animal insensible.
The bull rose to its feet and vocalized in apparent pain until it was eventually rendered insensible with a bullet to the head. FSIS regulations do not recognize this steel instrument as an acceptable stunning method. However, the inspector placed a reject tag on the stun box and cited the incident as egregious in the noncompliance report but did not suspend operations. In the opinion of the expert we consulted, this incident was an example of an egregious HMSA violation that should have resulted in a suspension. In 2008, in the Denver district, the inspector reported that the first attempt to stun a bull with a captive bolt stunner appeared to misfire, resulting in smoke and the smell of powder and no response by the bull. A second stunning attempt appeared to render the bull unconscious in the stun box. However, it was followed by a third stunning attempt while the bull was still in the stun box. The employee then allowed the bull to roll out into the pit for shackling. The bull appeared unconscious but was still breathing rhythmically, indicating that the animal was still sensible. The employee then entered the pit and stunned the bull again and started conversing with another employee. The bull once again started breathing rhythmically while being shackled, a sign that the bull still had not been rendered insensible to pain as the law requires. In response, the DVMS asked the employee to stun the bull again, and this stun rendered the bull unconscious and no longer breathing rhythmically. According to the report, the plant received a noncompliance report, but no regulatory control action was taken, as called for by guidance. In the opinion of our expert consultant, a regulatory control action should have been taken in this case because of multiple stuns that left the animal breathing rhythmically. We also identified several other types of humane handling violations for which inspectors took inconsistent enforcement actions.
For example, according to FSIS’s regulations, animals are not to be moved from one area to another faster than a normal walking speed, with minimum excitement and discomfort. A faster speed could result in animals being driven over each other. Furthermore, animals in a holding pen are to have access to water and, if held longer than 24 hours, access to food. According to the expert we consulted, deliberately driving animals over the top of other animals and failing to provide water for animals held over a weekend are egregious humane handling violations and, in her opinion, these actions should result in plant suspensions. However, as figure 6 shows, although most inspectors would take an enforcement action, including a regulatory control action, for these violations, only 40 percent of inspectors surveyed would suspend plant operations for driving animals over each other, and 55 percent would suspend plant operations for failing to provide water over a weekend. The lack of consistency in enforcement actions is highlighted by inspectors' responses to our question about when they would suspend plant operations. According to our survey results, less than one-third of the inspectors-in-charge in the very small and small plants reported that they would be likely to suspend plant operations for multiple incorrect placements of electrical stunners and electrical prodding of most animals. Inspectors-in-charge at large plants had more stringent views on enforcement actions than those at very small plants. For example, inspectors-in-charge at large plants more frequently reported suspensions as the enforcement action that should be taken compared with inspectors-in-charge at very small plants. Figure 7 illustrates three humane handling scenarios in which significant differences were observed between large and very small plants.
For example, inspectors-in-charge at large plants were more likely than those at very small plants to report that they would suspend plant operations for multiple incorrect electrical stuns, driving animals over the top of others, and electrically prodding most animals. We found similar indications of inconsistent enforcement across districts. According to our analysis of FSIS data, from calendar years 2005 through 2007, 10 of the 15 FSIS districts—responsible for overseeing 44 percent of all animals slaughtered nationwide—suspended 35 plants for HMSA violations. The remaining 5 districts—responsible for overseeing 56 percent of all livestock slaughtered nationwide—did not suspend any plants. For example, the Des Moines and the Chicago districts, which oversee the first and second highest volumes of livestock slaughtered nationwide, respectively, were among the 5 districts that had never issued a suspension until February 2008, according to our analysis. Before 2008, these five districts issued noncompliance reports, sometimes with regulatory control actions, such as a reject tag on a piece of equipment, rather than suspending an entire plant's operations. For example, in 2007, in the Lawrence district, a hog was observed walking around the stunning chute grunting and bleeding from the mouth and forehead. The animal had been stunned improperly, and plant personnel stated that both stun guns were not working and were being repaired. Because the plant did not have an operable stun device, the animal suffered for at least 10 minutes while the plant repaired the gun. The FSIS inspector applied a reject tag to the stunning box; stunning operations in the area were halted until the plant had taken corrective actions, but the record did not state the amount of time that stunning was stopped. According to FSIS's guidance, however, stunning animals and then allowing them to regain consciousness is considered egregious. Suspensions increased overall following the February 2008 Westland/Hallmark incident in California.
Across all districts, more than three-quarters of all suspensions for calendar years 2007 and 2008 were for stun-related violations. Of the suspensions issued in calendar years 2005 and 2006 by the 10 districts that suspended operations, over 40 percent were for stunning violations. (See app. III for detailed information on the number of HMSA enforcement actions over the period we reviewed.) Furthermore, following that incident, FSIS directed inspectors to increase the amount of time they devoted to humane handling by 50 to 100 percent for March through May 2008. FSIS found that, when the amount of time spent on humane handling increased, the number of noncompliance reports increased as well. The Westland/Hallmark incident highlighted the problems that can occur when inspection staff inconsistently apply their discretion in determining which enforcement actions to take for humane handling violations. According to the USDA Inspector General’s 2008 report that followed the Westland/Hallmark incident, between December 2004 and February 2008, FSIS inspectors did not write any noncompliance reports or suspend operations for humane handling violations at the Westland/Hallmark plant. Nevertheless, FSIS personnel acknowledged that at least two incidents of humane handling violations had occurred at the plant during this period, both of which involved active abuse of animals. Instead of taking an enforcement action, the inspectors verbally instructed plant personnel to discontinue the action or practice in question. The report also stated that Westland/Hallmark had an unusual lack of noncompliance reports and that inspectors did not believe they should write a noncompliance report if an observed violation was immediately resolved. Finally, our analysis of FSIS enforcement data for calendar years 2005 through August 2009 shows that suspensions were not consistently used to enforce HMSA.
Figure 8 shows the total number of suspensions over the period and reveals that suspensions spiked from a low of 9 in calendar year 2005 to a high of 98 in 2008—a nearly 11-fold increase—and, as of August 2009, FSIS had suspended operations at 50 plants. Based on our review of the suspension records, it appears that this spike followed the February 2008 Westland/Hallmark incident. Also, more than three-quarters of these suspensions resulted from failure to render at least one animal insensible on the first stun. From calendar year 2005 through 2008, the number of noncompliance reports issued for humane handling decreased overall, while the number of animals slaughtered increased from about 128 million in 2004 to about 153 million in 2008. While we cannot determine from FSIS data and inspection reports the extent to which HMSA violations were overlooked, we attempted to determine whether a much higher rate of enforcement actions was taken on the days that DVMSs conducted their audits for humane handling. However, according to FSIS officials, the records of DVMS audit visits are incomplete, and we were therefore unable to conduct a complete analysis. As a result, we could not fully determine how often DVMSs conducted humane handling audit visits or whether there was a higher rate of enforcement actions on the days of those audits. Furthermore, our survey found that 85 to 95 percent of inspectors-in-charge who had taken some type of enforcement action reported that their immediate supervisor, the DVMS, and other district management personnel were moderately or very supportive of their actions. We found that incomplete guidance and inadequate training may contribute to the inconsistent enforcement of HMSA. Specifically, according to our survey results, inspectors at the plants we surveyed would like more guidance and training in seven key areas, as figure 9 shows.
Furthermore, an estimated 457 inspectors-in-charge, or those at more than half the plants surveyed, reported that additional FSIS guidance or training is needed on whether a specific incident of electrical prodding requires an enforcement action. In addition, of the 80 inspectors who provided detailed responses to our survey, 15 noted the need for additional guidance, including clarification on what actions constitute egregious actions. Similarly, 25 of the 80 inspectors who provided written comments identified a need for additional training in several key areas. With respect to guidance, in 2004, we had recommended that FSIS establish additional clear, specific, and consistent criteria for district offices to use when considering whether to take enforcement actions because of repeat violations. FSIS agreed with this recommendation and delegated to the districts the responsibility for determining how many repeat violations should result in a suspension. To date, FSIS has not issued additional guidance, and incidents such as those at the Bushway Packing plant in Vermont suggest that this delegation was not successful. Operations at this Vermont plant were suspended three times—in May, June, and July 2009—for egregious humane handling violations. Two of the suspensions were for dragging nonambulatory conscious veal calves that were about 1 week old. According to a document describing the third incident, an employee threw a calf from the second tier of a truck to the first so that the calf landed on its head and side. Moreover, FSIS has not issued any guidance to the district offices on how many suspensions should result in a request for a withdrawal of a grant of inspection. If specific guidance had been available on when to request a withdrawal of a grant of inspection, the district office might have decided to request such a withdrawal before the October 2009 incident.
If FSIS ultimately withdrew the grant, the plant would have to reapply for, and be awarded, a grant of inspection before it could resume operations. Regarding training, FSIS relies primarily on “on-the-job” training by DVMSs—who are directed to visit each plant within their district over a 12- to 18-month period. In addition, supervisory veterinarians and inspectors-in-charge provide on-the-job training. FSIS officials we spoke with said that the on-the-job training needs to be integrated into a formal training program and that efforts are under way to do so. FSIS also provides some humane handling training electronically. For example, in February 2009, all inspectors assigned to slaughter plants were required to complete a mandatory 1-hour basic humane handling course online, which the agency can track centrally. FSIS officials also stated that, since 2005, incoming inspectors have been required to complete some humane handling training during orientation. According to FSIS officials we spoke with, the agency has asked the districts to begin entering data on the completion of other humane handling courses so that this information can also be tracked centrally. Our survey results suggest, however, that even inspectors-in-charge who had to complete the mandatory humane handling training in February 2009 may not have been sufficiently trained. For example, an estimated 449, or 57 percent, of the inspectors-in-charge at the plants we surveyed from May through July 2009 reported incorrect answers on at least one of six possible signs of sensibility. Specifically, an estimated 133, or 18 percent, of the inspectors-in-charge failed to identify rhythmic breathing as a sign of sensibility. In addition, in 2004, we had reported that inspectors did not have the knowledge they needed to take enforcement actions when appropriate.
At that time, most of the deputy district managers and about one-half of the DVMSs noted that an overall lack of knowledge among inspectors about how they should respond to an observed noncompliance had been a problem in enforcing HMSA. Several outside observers have also commented on the need for better FSIS training. In November 2008, USDA’s Office of Inspector General found that FSIS does not have a formal, structured developmental program and system in place to ensure that all of its inspection and supervisory staff receive both formal and on-the-job training to demonstrate that they possess the competencies essential for FSIS’s mission-critical functions. The Inspector General recommended a structured training and development program that includes continuing education to provide the organizational control needed to demonstrate the competency of the inspection workforce. The Inspector General also stated that the workforce needs to be certified annually. In 2009, the National Academies’ Institute of Medicine recommended testing and improved training, with special emphasis on the quality and consistency of noncompliance reports for food safety issues. The institute noted that the decision to issue a noncompliance report is subjective and that inspectors’ experience levels and training differ. Supervisory review by inspectors-in-charge may likewise be variable or subject to bias and, therefore, unreliable. Also in 2009, representatives of the three major industry associations—the American Meat Institute, the American Association of Meat Processors, and the National Meat Association—told us that more training on humane handling is needed for FSIS inspectors. Specifically, the American Meat Institute identified insensibility as a critical issue in enforcement and noted that additional training on the signs of insensibility, such as blinking and the righting reflex, would be helpful.
In 2009, the Humane Society of the United States recommended that FSIS inspectors receive adequate in-person, on-the-ground training so they can properly assess the conditions and treatment of animals. According to FSIS officials, the agency launched a voluntary HMSA training program for plant employees at small slaughter plants in 2009. These plants represent the highest humane handling risk, according to FSIS officials, because plant management may not have sufficient resources to fully train plant employees on HMSA practices. In recent years, the meat industry has adopted numerical scoring and video surveillance to improve plants’ humane handling performance overall. According to FSIS officials, the agency does not require the use of such objective measures or scoring to aid judgment for enforcement purposes because situations are highly variable, and inspectors and higher-level officials are to use their judgment in conjunction with FSIS guidance. However, in December 2009, FSIS provided DVMSs with guidance on what it characterized as an objective system to facilitate determinations of the problems that plants in their districts need to address. Several of the DVMSs we interviewed acknowledged that they have been using a form of numerical scoring on their own to assist their efforts in evaluating HMSA enforcement at the plants. The numerical scoring system was developed in 1996 by Dr. Grandin to determine how well animals were being stunned and handled at the plants. The system has different standards for different species of animal and can be adjusted for plants that slaughter fewer animals. This system seeks to reduce the subjective nature of inspections and uses the scoring to help identify areas in need of improvement. For example, in a large plant, if more than 5 out of 100 animals were not rendered insensible on the first stun, the plant would fail the evaluation.
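To make the pass/fail logic of such a scoring system concrete, the following is a minimal sketch, not FSIS policy or Dr. Grandin’s actual audit form. Only the large-plant first-stun threshold (no more than 5 failures per 100 animals) comes from the example above; the other criterion names and thresholds are assumptions for illustration.

```python
# Illustrative sketch of a numerical-scoring audit check. Only the
# first-stun threshold (at most 5 failures per 100 animals at a large
# plant) comes from the text; the other criteria and thresholds are
# assumptions for illustration.

def criterion_passes(failures, animals_observed, max_fail_pct):
    """A criterion passes if the observed failure rate stays at or
    below the allowed percentage."""
    return 100.0 * failures / animals_observed <= max_fail_pct

def audit(observations, thresholds):
    """Score each criterion; the plant fails if any criterion fails."""
    results = {
        name: criterion_passes(failures, animals, thresholds[name])
        for name, (failures, animals) in observations.items()
    }
    return results, all(results.values())

# Hypothetical large-plant audit of 100 animals.
thresholds = {"first_stun": 5, "slips_falls": 3, "electric_prod": 25}
observations = {
    "first_stun": (6, 100),    # 6 animals not insensible on the first stun
    "slips_falls": (2, 100),
    "electric_prod": (20, 100),
}
results, plant_passes = audit(observations, thresholds)
print(results, plant_passes)  # first_stun exceeds 5%, so the plant fails
```

In this hypothetical audit, the plant fails because the first-stun criterion exceeds its threshold even though the other criteria pass, which illustrates how such a system points plant management to the specific area needing improvement.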
Other standards include the percentage rates for slips and falls and the number of animals moved with an electric prod. Once a plant is aware of its weaknesses, it can consider options to improve its humane handling performance, such as repairing equipment and floors to provide better footing for the animals and targeting employee training in those specific areas. The numerical scoring system has been adopted by industry and animal welfare organizations, as well as one federal agency. At the federal level, according to agency officials, USDA’s Agricultural Marketing Service uses this system to rate slaughter plants to determine whether to approve or deny them as suppliers of meat to the National School Lunch Program. In addition, the American Meat Institute and independent audit firms employed by restaurant chains, such as Burger King and McDonald’s, have adopted this numerical scoring system to evaluate humane handling at their associated slaughter plants. According to industry experts, a publicized humane handling incident at their plants could damage their business interests. Recently, the Canadian Food Inspection Agency proposed adoption of numerical scoring for federally inspected plants in Canada. FSIS officials have stated that while the numerical scoring system may be useful in helping plants assess their humane handling performance, it should not be used to assess compliance with HMSA. Because the numerical scoring system allows for a certain percentage of stunning failures, using it would be inconsistent with the HMSA requirement that all animals be rendered insensible on the first blow. However, as we noted earlier, this requirement has not been met consistently by slaughter plants because of human error, equipment failures, and animal movement, leaving FSIS to exercise its discretion in determining which violations require enforcement action. Video surveillance is another tool increasingly used by slaughter plants.
Specifically, slaughter plants can hire specialized video technology companies to record plant operations and audit plant performance through remote video surveillance, using the American Meat Institute numerical scoring system to assess humane handling performance at the plant. These video technology companies can also provide slaughter plant management with continuous feedback and customized progress reports documenting humane handling performance at their plants. According to the testimony of one video surveillance company, this technology helps plant management provide positive reinforcement to workers who are performing well and helps identify workers who may need further training. In November 2008, the Office of the Inspector General recommended that FSIS determine whether FSIS-controlled, in-plant video monitoring would be beneficial in preventing and detecting animal abuses. However, FSIS officials responded that FSIS-controlled video cameras would not provide the definitive data needed to support enforcement of humane handling requirements, as compared with direct, ongoing, and random verification of humane handling practices at the plants. According to the Humane Society of the United States, while video surveillance might serve as a supplemental tool, it does not negate the need for real-time inspectors’ observations. According to our survey results, between 52 and 66 percent of inspectors-in-charge at large plants reported that video surveillance would be moderately or very useful in each of the five plant areas. Figure 10 illustrates our survey results on the usefulness of video surveillance for all plants. FSIS officials recently told us that they are exploring potential uses of video surveillance, but the agency had not released any official policy change as of November 2009.
In addition, the 96 inspectors who provided written comments on the usefulness of video surveillance in our survey most frequently reported that it would facilitate more inspections in different plant locations and provide a true picture of animal handling when plant staff do not know that the inspector is watching. Because video surveillance can provide continuous footage of ongoing activities in the plant, it may provide evidence regarding alleged violations when inspectors do not directly observe humane handling. For example, according to 39 percent of inspectors-in-charge at large plants, plant staff improved their handling behavior upon the inspectors’ arrival. Furthermore, 25 percent of inspectors-in-charge at the large plants in our survey reported that plant staff often, or always, alert each other about inspectors’ movements between areas—by radio or whistle, for example. Although FSIS collects humane handling data, we found that it is not fully analyzing and using these data to help ensure more consistent HMSA enforcement. For example, when we compared the amount of time devoted to humane handling activities at plants of similar size and species to determine whether there were any inconsistencies among districts, we found substantial differences at large plants that slaughter market swine. Specifically, among the six slaughter plants that kill between 700,000 and 900,000 market swine, the average time a plant devoted to humane handling ranged from 1.8 to 9.7 hours per shift in 2008. For the nine plants that slaughter between 2 million and 3 million market swine, the average time ranged from 2.7 to 5.2 hours per shift in 2008. In January 2004, we also reported that FSIS was not adequately analyzing the narrative found in noncompliance reports.
As of November 2009, FSIS headquarters officials told us that they had not begun an effort to analyze the narratives in noncompliance reports. Instead, they told us, they rely on district officials to monitor whether plant inspectors have taken consistent enforcement action for each incident. Headquarters officials also stated that they only review the percentage of humane handling activities that are recorded as noncompliant in an FSIS database, known as the Performance-Based Inspection System. However, without analyzing the narrative, FSIS cannot readily provide the reasons for the noncompliance reports—for example, whether these reports were issued for one or two failed stuns, which is not uncommon, rather than three or four failed stuns, which might be considered an egregious violation. Thus, FSIS cannot easily analyze noncompliance reports across the districts to identify trends or patterns in plant violations or potential enforcement inconsistencies across districts. Also in 2004, we reported that FSIS was not tracking humane handling activities. In response to the tracking issue, FSIS created the Humane Activities Tracking System, a database that inspectors use to record the amount of time they devote to humane handling activities in each plant. Inspectors are directed to record the total amount of time devoted to humane handling activities for each plant shift in 15-minute increments. According to our survey results, inspectors have differing views on the accuracy of the amount of time recorded in the tracking system. Specifically, 19 percent reported that the time recorded in this system was slightly or not at all accurate. However, 45 percent of the inspectors reported that the time was very accurate, and 36 percent reported that the time was moderately accurate. 
Furthermore, of the 93 inspectors who provided written responses detailing their views of the reasons for the tracking database’s inaccuracies, 56 pointed out that breaking out activities into 15-minute increments limited their ability to record the time they actually spent, and 29 stated that humane handling activities are concurrent with other inspection activities. In addition, 14 responses noted that supervisors or district offices had placed either a minimum or a maximum on the amount of time that could be charged to humane handling. Also, several of the DVMSs we interviewed reported that the Humane Activities Tracking System does not readily produce the types of reports that are needed to oversee and manage humane handling activities in their districts. For example, they reported that the system lacked the capability to readily produce comparative analyses of similar plants to help identify trends or anomalies across districts. FSIS began analyzing data across districts from the Humane Activities Tracking System in 2008—4 years after it developed the system. Also in 2008, FSIS established the Data Analysis Integration Group in headquarters, with staff in the regional field offices to support district offices’ data needs. In 2009, the group began reporting quarterly on HMSA enforcement, including the amount of time inspectors have devoted to HMSA, the number of plants suspended, and the number of noncompliance reports issued, although FSIS has not analyzed the narrative in the noncompliance reports. FSIS cannot fully identify trends in its inspection resources—specifically, funding and staffing—for HMSA enforcement, in part because it cannot track humane handling inspection funds separately from the inspection funds spent on other food safety activities. Furthermore, FSIS does not have a current workforce planning strategy to guide its efforts to allocate staff to inspection activities, including humane handling.
According to FSIS officials, funds for humane handling come primarily from two sources: (1) FSIS’s general inspection account and (2) the account used to support the Humane Activities Tracking System. The general inspection account supports all FSIS inspection activities, both food safety and other activities, including humane handling enforcement. Because the same inspectors may carry out these tasks concurrently, FSIS cannot track humane handling funds separately, according to FSIS officials. According to FSIS officials, for the most part, inspectors are to devote 80 percent of their time to food safety inspection activities and 20 percent of their time to humane handling inspection and other activities. However, our analysis of resources shows that this is not the case. While FSIS does not track humane handling inspection activities separately, FSIS’s budget office estimates the funds needed to carry out these activities. Using FSIS’s budget estimate for HMSA enforcement for fiscal years 2005 through 2008, we estimated the percentage of FSIS’s total annual appropriation for its federal food safety inspection account that would have gone to HMSA enforcement. As table 1 shows, we estimated that the percentage of funds dedicated to HMSA enforcement has been under 1 percent of FSIS’s total annual inspection appropriation, although it rose slightly in 2008, the year in which suspensions spiked following the Westland/Hallmark incident in California. In contrast to FSIS’s inability to track humane handling in its general inspection fund, FSIS officials noted, the DVMSs—whose primary responsibility is humane handling activities—have a special activity code that enables FSIS to track their portion of expenses, including salaries and travel; however, these expenses represent only a small portion of the total amount FSIS spends on humane handling inspection activities.
Although FSIS does not track funds spent on humane handling inspection activities separately from other inspection activities, it does track the funds specifically dedicated to supporting the Humane Activities Tracking System. For fiscal years 2005 through 2009, Congress designated a total of nearly $13 million specifically for the Humane Activities Tracking System, and FSIS has spent roughly that amount on the system, according to our review of FSIS budget data. For fiscal years 2005 and 2006, FSIS was required to spend the funding designated for the Humane Activities Tracking System within 2 years of the appropriation. Beginning with fiscal year 2008, however, Congress folded the funding for the Humane Activities Tracking System into a larger FSIS information technology initiative, and the funding is available to FSIS until it is expended. As of November 2009, FSIS had not completed integrating the Humane Activities Tracking System into the information technology initiative, and FSIS officials could not provide an estimate of when the agency expected to do so. Although FSIS cannot directly account for the funding designated for humane handling activities, Congress in recent years has required FSIS to devote a minimum number of full-time equivalent (FTE) staff to humane handling. Accordingly, FSIS estimates the total number of FTEs devoted to humane handling and reports this information to Congress every year. FSIS develops this estimate using Humane Activities Tracking System data on time spent on humane handling inspection activities and average inspector and veterinarian salaries. Table 2 shows that FSIS has reported exceeding Congress’s minimum FTE requirements for humane handling enforcement, according to FSIS’s calculation.
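As a rough illustration of how an FTE estimate of this kind might be derived from tracked hours, consider the following sketch. The 2,080-hour work year and all sample figures are assumptions for illustration; the source describes FSIS’s inputs (tracked time and average salaries) but not its exact formula.

```python
# Hedged sketch: deriving an FTE estimate from tracked humane handling
# hours. The 2,080-hour work year and the sample figures are assumptions,
# not FSIS's actual inputs or methodology.

HOURS_PER_FTE = 2080  # one full-time work year (assumption)

def estimate_ftes(tracked_hours):
    """Convert total tracked humane handling hours into FTEs."""
    return tracked_hours / HOURS_PER_FTE

def estimate_spending(tracked_hours, avg_annual_salary):
    """Approximate the salary cost of the humane handling effort."""
    return estimate_ftes(tracked_hours) * avg_annual_salary

ftes = estimate_ftes(312_000)               # 312,000 tracked hours (hypothetical)
cost = estimate_spending(312_000, 70_000)   # $70,000 average salary (hypothetical)
print(ftes, cost)  # 150.0 FTEs, $10,500,000
```

A calculation along these lines would let the agency report FTEs against a congressional minimum even though the underlying funds are not tracked separately, which is consistent with the reporting arrangement described above.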
For fiscal year 2010, FSIS officials told us, they planned to use $2 million of their inspection funds to enhance oversight of humane handling enforcement by hiring 24 inspectors, including both public health veterinarians and other inspectors. FSIS officials planned to place these additional inspectors strategically at locations where they are most needed to support humane handling enforcement, in addition to their other food safety responsibilities. FSIS officials stated that the agency determined staffing needs on the basis of such factors as the highest number of animals condemned on postmortem inspection, the number of animals inspected and passed for human consumption, and the amount of time spent conducting humane handling inspection activities. In addition, FSIS officials stated that the agency intends to establish a headquarters-based humane handling coordinator position. This coordinator will be primarily responsible for consistently overseeing humane handling activities. While FSIS has increased its hiring, it has not done so in the context of an updated strategic workforce plan. Such a plan would help FSIS align its workforce with its mission and ensure that the agency has the right people in the right place performing the right work to achieve the agency’s goals. In February 2009, we reported that the FSIS veterinarian workforce had decreased by nearly 10 percent since fiscal year 2003 and that the agency had not been fully staffed over the past decade. We reported that, as of fiscal year 2008, FSIS had a 15 percent shortage of veterinarians and that the majority of these veterinarians work at slaughter plants. The FSIS 2007 strategic workforce plan—the most recently available—identifies specific actions to help the agency address some of the gaps in recruiting and retaining these mission-critical occupations over time. However, it does not address specific workforce needs for HMSA enforcement activities.
FSIS officials stated that workforce planning occurs at the district level and is determined using regulations that govern the number of inspectors required at each slaughter plant. According to district officials, they have discretion in deciding where to deploy relief inspectors. Therefore, they can deploy these inspectors at plants that they believe may require more HMSA oversight. However, more than one-third of the inspectors who provided written comments in our survey noted the need for additional staff or the lack of time to perform humane handling activities. Furthermore, inspectors at 80 percent of large plants stated that covering for others’ responsibilities because of leave or vacancies has reduced the time spent on humane handling activities in those plants. While FSIS officials may need flexibility at the district level to allocate inspection resources, without an updated strategic workforce plan, the agency cannot effectively determine inspection needs across districts and adjust the inspection workforce to reflect changes in the industry and in FSIS resources. Although the strategic workforce plan indicates that the agency performs this assessment annually, FSIS officials acknowledged that the agency has not updated its strategic workforce plan since 2007. We recommended in January 2004 that FSIS periodically reassess whether the level of inspection resources is sufficient to effectively enforce HMSA. As of November 2009, FSIS officials told us that they were in the process of developing a workforce strategy but could not provide an estimated completion date. Our body of work on results-oriented management calls for organizations to identify clearly defined goals that are aligned with available resources, develop time frames for achieving these goals, and develop performance metrics for measuring progress in meeting them. We have recommended that all agencies adopt strategies that include these key elements.
By implementing results-oriented management principles, agencies demonstrate their efforts to resolve long-standing management problems that undermine program efficiency and effectiveness, provide greater accountability for results, and enhance congressional decision making by providing more objective information on program performance. Although FSIS has strategic, operational, and performance plans for its inspection activities, these plans do not specifically address HMSA enforcement. That is, they do not clearly outline the agency’s goals for enforcing HMSA, identify expected resource needs, specify time frames, or lay out performance metrics. Specifically, FSIS Strategic Plan FY 2008 through FY 2013 provides an overview of the agency’s major strategic goals and the means to achieve those goals. However, this plan does not clearly articulate or list goals related to HMSA enforcement. Instead, the plan generally addresses agency goals, such as improving data collection and analysis, maintaining information technology infrastructure to support agency programs, and enhancing inspection and enforcement systems overall to protect public health. FSIS Office of Field Operations officials agreed that the plan does not specifically address humane handling, but they explained that the operational plans and policy performance plans contain the details concerning humane handling performance. However, as we indicate below, we did not find that these two plans provide a comprehensive strategy for HMSA enforcement. The Office of Field Operations’ Operational Plan identifies specific FSIS projects or initiatives and aligns them with the appropriate strategic goal identified in the FSIS Strategic Plan for FY 2008 through FY 2013. It also specifies the estimated dates for completion and recent information on the status of each project or initiative.
According to our analysis of the July 2009 version of the operational plan—the most recent version available—humane handling activities fall under FSIS’s first strategic goal: enhance inspection and enforcement systems and operations to protect public health. While the plan identifies tasks related to humane handling inspection activities, it does not identify any humane handling program goals linked to these tasks or explain how these tasks are to be completed. For example, one of the plan’s listed tasks is conducting humane handling information outreach, but the plan neither indicates how this task aligns with HMSA enforcement-related goals nor specifies the resources needed. The plan also does not set priorities for proposed activities or identify milestones that could be used to measure progress or make improvements. Additionally, the document does not match activities with the resources needed to accomplish them. According to FSIS officials, the Office of Field Operations’ operational plan is an evolving document that is continually updated throughout the course of the year. The Office of Policy and Program Development Strategic Plan Fiscal Years 2008-2013 identifies policy goals that support the overall FSIS Strategic Plan. However, this plan also does not clearly articulate or list goals related to HMSA enforcement. Furthermore, FSIS does not have a set of performance measures for assessing the overall performance of humane handling enforcement across the districts. For example, FSIS is unable to determine whether the districts have improved their ability to enforce humane handling or may be weak in their enforcement. Although FSIS officials stated that the agency collects information such as the number of noncompliance reports, the number of egregious humane handling violations, and the number of humane handling activities performed on a routine basis by the DVMSs, there is no indication of how these activities demonstrate improved enforcement of HMSA.
Collecting and analyzing this type of information could be useful in identifying gaps or anomalies in performance and then developing a strategy to address them. It is difficult to know whether the reported incidents of egregious animal handling at the slaughter plants in California and Vermont are isolated cases or indicative of a more widespread problem. Either way, it is evident from our survey results and our analysis of HMSA enforcement data that inspectors did not consistently identify and take enforcement action for humane handling violations for the period we reviewed. Furthermore, our survey results suggest that inspectors are not consistently applying their discretion as to which actions to take when egregious humane handling incidents occur, or when they are repeated, in part because the guidance is unclear. That is, the guidance states that inspectors-in-charge “may” suspend plant operations. Consequently, plants cited for the same type of humane handling incident may be subject to different enforcement actions. In January 2004, we recommended that FSIS establish additional clear, specific, and consistent criteria for enforcement actions to take when faced with repeat violations. FSIS responded by delegating this responsibility to the districts. However, incidents such as those at the Vermont plant suggest that this delegation has not been effective. While FSIS has stated that inspectors require discretion in enforcement, that discretion needs to be informed by an agency policy that ensures a consistent level of enforcement within plants and across districts. Without consistent enforcement actions, FSIS does not clearly signal its commitment to fully enforce HMSA. In addition, to improve plants’ humane handling performance, the Agricultural Marketing Service, DVMSs, and others have adopted objective industry tools, such as numerical scoring, to help identify weaknesses. 
However, inspectors-in-charge, who are responsible for assessing daily HMSA performance at the plants, are not directed to use such scoring tools. Effective oversight of HMSA enforcement also requires FSIS to use available data to effectively manage the program, including allocating resources. FSIS has only recently begun to do so. Until 2009, FSIS did not routinely track and evaluate HMSA enforcement data—by geographic location, species, plant size, and history of compliance across districts. Although these analyses will be useful, FSIS has yet to analyze the narratives of humane handling incidents found in noncompliance reports, which would also help the agency identify weaknesses and trends in enforcement and develop appropriate strategies. Furthermore, we reiterate our January 2004 recommendation, which FSIS has not yet acted on, to periodically reassess whether its estimates still accurately reflect the resources necessary to effectively enforce the act. Finally, because FSIS does not have a comprehensive strategy for enforcing HMSA that aligns the agency’s available resources with its mission and goals, and that identifies time frames for achieving these goals and performance metrics for meeting its goals, it is not well positioned to improve its ability to enforce HMSA. We are making the following four recommendations to the Secretary of Agriculture to strengthen the agency’s oversight of humane handling and slaughter methods at federally inspected facilities.
To ensure that FSIS strengthens its enforcement of the Humane Methods of Slaughter Act of 1978, as amended, we recommend that the Secretary of Agriculture direct the Administrator of FSIS to take the following three actions: establish clear and specific criteria for when inspectors-in-charge should suspend plant operations for an egregious HMSA violation and when they should take enforcement actions because of repeat violations; identify some type of objective tool, such as a numerical scoring mechanism, and instruct all inspectors-in-charge at plants to use this measure to assist them in evaluating the plants’ HMSA performance and determining what, if any, enforcement actions are warranted; and strengthen the analysis of humane handling data by analyzing the narrative in noncompliance reports to identify areas that need improvement. To ensure that FSIS can demonstrate how efficiently and effectively it is enforcing HMSA, we recommend that the Secretary of Agriculture direct the Administrator of FSIS to develop an integrated strategy that clearly defines goals, identifies resources needed, and establishes time frames and performance metrics specifically for enforcing HMSA. We provided USDA with a draft of this report for review and comment. USDA did not state whether it agreed or disagreed with our findings and recommendations. However, it stated that it plans to use both our findings and recommendations to help improve efforts to ensure that establishments comply with HMSA and humane handling regulations. USDA also recognized the need to improve the inspectors’ ability to identify trends in humane handling violations and work with academia, industry, and others to identify practices that will achieve more consistent HMSA enforcement. USDA commented that the report contained some misstatements of fact that present a false picture of FSIS’s humane handling verification and enforcement program and policies. 
We believe that we have fairly described FSIS policy and guidance on HMSA enforcement. In response to updated information that FSIS provided, we made appropriate revisions to clarify certain points. For example, we revised our report by deleting the portion of our analysis related to suspension data that occurred on the days that DVMSs conducted humane handling audits because on the basis of new information provided we believe that FSIS records of DVMS audit visits are incomplete. USDA also questioned whether the results of our survey of FSIS inspectors provide evidence of systemic inconsistencies in enforcement. We believe they do, and would encourage USDA to consider the views of inspectors at the plants who are responsible for daily HMSA enforcement. Our survey results are based on strict adherence to GAO standards and methodology to ensure the most accurate results possible. Furthermore, our efforts were fully coordinated with FSIS before we distributed the survey. Specifically, we vetted all of the questions with FSIS management in advance to ensure that these questions elicit responses that would reveal whether or not inspectors-in-charge understand how to fully enforce HMSA. In addition, we conducted numerous pre-tests of the survey with inspectors to ensure that we would receive the most accurate responses possible. We also coordinated with several humane handling experts who serve as FSIS consultants on training and enforcement issues to ensure that our questions would elicit the most accurate responses. USDA also provided technical comments, which we have incorporated into this report as appropriate. USDA’s written comments and our responses are presented in appendix IV. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to appropriate congressional committees; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. This report examines (1) U.S. Department of Agriculture Food Safety and Inspection Service’s (FSIS) efforts to enforce the Humane Methods of Slaughter Act of 1978, as amended (HMSA); (2) the extent to which FSIS tracks recent trends in FSIS inspection resources for enforcing HMSA; and (3) FSIS’s efforts to develop a strategy to guide HMSA enforcement. To evaluate FSIS’s efforts to enforce HMSA, we interviewed officials and collected documents from FSIS’s Office of Field Operations; Office of Policy and Program Development; Office of Program Evaluation, Enforcement and Review; and the 15 district offices. We examined a nonprobability sample of FSIS noncompliance reports to provide illustrative examples of humane handling violations. In doing so, we searched for the words “prod” and “stun” in 533 noncompliance reports for 2007 and 589 noncompliance reports for 2008. Of these 1,122 reports, 272 reports included either the word “stun” or “prod” in reference to a violation. We then selected several of the reports that described violations appearing to be egregious and provided these reports to the expert we consulted for her assessment. This expert determined that the violations described in some of these reports were not sufficiently clear or detailed to determine whether they represented egregious violations, while others were clearly egregious in her judgment. 
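The keyword screen described above can be sketched as a simple text filter. The snippet below is illustrative only: the report narratives are invented for the example, and the actual review screened 1,122 real noncompliance reports from 2007 and 2008.

```python
import re

# Flag reports whose narrative mentions "prod" or "stun" (including
# variants such as "prodding" or "stunned"), case-insensitively.
KEYWORDS = re.compile(r"\b(prod|stun)\w*\b", re.IGNORECASE)

def flag_reports(reports):
    """Return the subset of report texts that mention a keyword."""
    return [r for r in reports if KEYWORDS.search(r)]

# Hypothetical narratives, for illustration only.
sample = [
    "Animal was stunned twice before becoming insensible.",
    "Floor drainage noncompliance noted in holding pen.",
    "Electric prod used repeatedly near the chute.",
]
print(len(flag_reports(sample)))  # 2 of the 3 narratives match
```

A screen like this only narrows the pool; as the text notes, an expert still had to read the flagged narratives to judge whether a described violation was egregious.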
We also reviewed FSIS suspension data, data from the humane handling tracking system and district veterinary medical specialist reports in all 15 of FSIS’s district offices for fiscal years 2005 through 2009. To assess the reliability of these data, we examined them for obvious errors in completeness and accuracy, reviewed existing documentation about the systems that produced the data, and questioned knowledgeable officials about the data and systems. We determined that the data were sufficiently reliable for the purposes of our review, with any limitations noted in the text. We also reviewed the HMSA enforcement reports produced by FSIS’s Office of Data Analysis and Integration Group, as well as meeting minutes from the monthly district veterinary medical conferences. To understand FSIS policy and guidance on humane slaughter enforcement, we reviewed relevant regulations and FSIS instructions. From May 2009 through July 2009, we also surveyed inspectors-in-charge—those responsible for reporting on humane handling enforcement in the plants—from a random sample of inspectors at 257 livestock slaughter plants that were stratified by size—very small, small, and large. We adopted FSIS’s definitions of small, very small, and large plants. We obtained an overall survey response rate of 93 percent. Table 3 shows the population and sample size distribution of slaughter plants by large, small, and very small plant size. Each of the inspectors-in-charge had a nonzero probability of being included, and that probability could be computed for any inspector-in-charge. Each inspector-in-charge was subsequently weighted in the analysis to account statistically for all the members of the population, including those who were not selected. We analyzed all responses, including the written responses that we received from the survey by conducting a content analysis and categorizing the responses accordingly.
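The weighting step described above follows standard stratified-sampling practice: each respondent carries a base weight equal to the population size of his or her stratum divided by the stratum's sample size. The sketch below uses hypothetical stratum counts (the actual population and sample sizes appear in table 3 of the report) to show how a population-level proportion would be estimated.

```python
# Hypothetical stratum counts, for illustration only; table 3 of the
# report gives the actual population and sample sizes by plant size.
strata = {
    #              population N_h  sample n_h  "yes" responses
    "large":      {"N": 150, "n": 30,  "yes": 25},
    "small":      {"N": 300, "n": 80,  "yes": 50},
    "very_small": {"N": 400, "n": 147, "yes": 70},
}

def base_weight(stratum):
    """Each respondent represents N_h / n_h members of the population."""
    return stratum["N"] / stratum["n"]

def weighted_proportion(strata):
    """Estimated share of the whole population answering 'yes'."""
    total_population = sum(s["N"] for s in strata.values())
    weighted_yes = sum(s["yes"] * base_weight(s) for s in strata.values())
    return weighted_yes / total_population

print(round(weighted_proportion(strata), 3))  # prints 0.592 for these counts
```

Because weights differ across strata, this estimate generally differs from the raw sample proportion; that is the point of weighting a stratified sample back to the population.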
The results of our survey are presented in a special publication titled Humane Methods of Slaughter Act: USDA Inspectors’ Views on Enforcement that can be viewed at GAO-10-244SP. We met with key officials from FSIS’s Office of Field Operations who are responsible for implementing HMSA at the headquarters level. To understand district officials’ perspectives on HMSA enforcement, we conducted semistructured interviews with each of FSIS’s 15 district veterinary medical specialists (DVMS), 15 district managers, and 15 resource management analysts. We also performed a content analysis on all semistructured interviews to determine the districts’ perspective on training, guidance, and resources available for humane handling enforcement. To understand the perspective of animal welfare groups and the meat industry, we met with representatives from the Humane Society of the United States, the Animal Welfare Institute, the American Meat Institute, the National Meat Association, and the American Association of Meat Processors. We reviewed these organizations’ proposed reforms for HMSA enforcement. We also attended the 2009 American Meat Institute Humane Handling Conference in Kansas City, Missouri. To gain a better understanding of how the industry evaluates HMSA performance, we attended the Professional Animal Auditor Certification Organization training for meat plants in Denison, Iowa, in November 2008 and visited pork and beef slaughter plants that use a numerical scoring system. We also consulted animal handling expert Dr. Temple Grandin, a world-renowned expert on animal welfare who has served as a consultant to industry and FSIS, written extensively on modern methods of livestock handling, and designed slaughter facilities that have helped improve animal welfare in the United States and in other countries. Dr. Grandin provided her expert opinion on select humane handling incidents that we identified as possible HMSA violations. In addition to Dr.
Grandin, we also spoke with animal welfare and food safety consultants to understand key principles of humane handling techniques and enforcement. We also met with representatives of the U.S. Department of Agriculture’s Agricultural Marketing Service to understand how the agency uses numerical scoring to evaluate humane handling at the plants that provide meat to the National School Lunch Program. In order to understand FSIS training efforts, we attended an FSIS training seminar for small and very small plants held in Dallas, Texas, in February 2009, and met with FSIS officials at the agency’s Center for Learning in Washington, D.C., as well as with FSIS consultants who provide training in HMSA enforcement. To identify the extent to which FSIS tracks recent trends in inspection resources for enforcing HMSA, we reviewed FSIS funding and staffing data for each district. We also conducted semistructured interviews with resource management analysts in each of FSIS’s 15 district offices and interviewed key officials in the Resource Management and Planning Office within the Office of Field Operations. We performed a content analysis on all semistructured interviews to determine each district’s perspective on inspection resources available for humane handling enforcement. In order to understand how FSIS reports its annual full-time equivalent staff for humane handling to Congress, we collected funding and other relevant data and met with key officials in FSIS’s Office of Field Operations and Office of Management and Office of the General Counsel, as well as the U.S. Department of Agriculture’s Office of Budget and Program Analysis. To assess FSIS’s efforts to develop a strategy to enforce HMSA, we reviewed relevant FSIS strategies, including the FSIS Strategic Plan FY 2008 through FY 2013, and the FSIS 2007 Strategic Workforce Plan.
We also reviewed the July 2009 version of the Office of Field Operations’ Operational Plan and the Office of Policy and Program Development Strategic Plan Fiscal Years 2008-2013. Furthermore, we reviewed humane handling performance data from the Office of Policy and Program Development. We met with representatives of the FSIS Office of Management on human capital issues and officials from the Office of Personnel Management in Washington, D.C. To identify the key elements of a strategic plan, we reviewed the Government Performance and Results Act of 1993, as well as past GAO reports. We conducted this performance audit from October 2008 to February 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Figure 11 illustrates the areas in a typical, mid-sized plant from which inspectors can observe HMSA compliance, although inspectors are not always present in all areas. Figure 12 provides an overview of the percentage of plant suspensions for HMSA enforcement that occurred in each district for calendar year 2008. The percentages were determined based on the total number of plants in each district and the number of reported suspensions. As the figure illustrates, the Jackson district had the highest percentage of suspensions. The following are GAO’s comments on the U.S. Department of Agriculture’s letter dated January 22, 2010. 1. Our report acknowledges FSIS’s efforts to increase its humane handling enforcement efforts since the events at Westland/Hallmark.
However, FSIS did not provide source material for some of the data in its comments, making it difficult to determine the completeness and reliability of the data provided. Therefore, we could not include in the report the data that FSIS provides in its comments. 2. We believe our report provides an accurate picture of FSIS’s humane handling enforcement activities. However, we have modified text in response to FSIS’s technical comments as appropriate or have explained why we disagree with FSIS’s comments, as noted below. 3. We revised the report to reflect the agency’s comments by deleting the portion of our analysis in our draft report that related to the suspension data that occurred on the days that DVMSs conducted humane handling audits. The report now states that the records of DVMS audit visits are incomplete and that we were unable to conduct the complete analysis. As a result, we could not fully determine how often DVMSs conducted humane handling audit visits nor whether there is a higher rate of enforcement actions on the days that DVMSs conducted their audits for humane handling. Specifically, our original analysis of the DVMS visits was based on data that FSIS provided to us during the course of our review. Based on the information originally provided to us by FSIS during our audit, these data met all of GAO’s data reliability standards. In January 2010, after receiving a draft copy of this report for comment, FSIS provided us with revised suspension data and informed us that the original data it had provided were incomplete. However, after reviewing the January 2010 data, we believe the revised data contain incomplete information, and we are therefore unable to corroborate the DVMS humane handling audit visit data. 4. We modified the report to clarify that the FSIS Administrator may file a complaint to withdraw a grant of federal inspection. 5.
We modified the report to clarify the difference between a withdrawal of inspectors and a withdrawal of the grant of inspection. We added that only the FSIS Administrator may file a complaint to withdraw a grant of federal inspection. However, the district office can still request such a withdrawal. In 2004, we recommended that FSIS establish additional clear, specific, and consistent criteria for district offices to use when considering whether to take enforcement actions because of repeat violations. We continue to believe that more specific guidance would be valuable to better address situations such as the one at the Bushway Packing plant in Vermont. It is also important to note that inspectors need to be trained to identify what actions may warrant such a request to ensure that FSIS is fully enforcing HMSA. 6. Although we did not state that numerical scoring is not regulatory in nature, we did state that using it to measure compliance would be inconsistent with the HMSA requirement that animals be rendered insensible to pain on the first blow. However, we believe that FSIS, in using its enforcement discretion, should identify some type of objective tool, such as a numerical scoring mechanism, and instruct all inspectors-in-charge at plants to use this measure to assist them in evaluating their plants’ HMSA performance and determining what, if any, enforcement actions are necessary. 7. We acknowledge in the report FSIS’s efforts to strengthen its analysis of humane handling data later this year. Although FSIS officials informed us of plans to implement the Public Health Information System, we found that those plans have experienced delays, and the system has yet to be implemented. For example, the Public Health Information System was originally scheduled to be fully functional in fall 2009; we now understand that the expected date has shifted to the end of 2010.
Without the availability of this system, we analyzed the humane handling data that FSIS made available to us during the course of our review. 8. FSIS questioned whether our survey results provide evidence of systemic inconsistencies in enforcement. Our survey results are based on strict adherence to GAO standards and methodology to ensure the most accurate results possible, as summarized in appendix I of this report. From May 2009 through July 2009, we surveyed inspectors-in-charge—those responsible for reporting on humane handling enforcement in the plants—from a random sample of inspectors at 257 livestock slaughter plants that were stratified by size—very small, small, and large. We obtained an overall survey response rate of 93 percent. 9. Concerning FSIS’s comment on two of our survey questions, our survey results showed that 29 percent of the inspectors reported that they would not take any enforcement action or did not know what enforcement action to take for electrical prodding of most animals. Ten percent of the inspectors reported that they would take no enforcement action or did not know what action to take for electrical prodding in the rectal area. These figures suggest that FSIS may not be fully enforcing HMSA. While FSIS states that HMSA enforcement requires that inspectors make qualitative judgments since each livestock slaughter operation is unique, we found that humane handling experts in academia and industry firmly believe that such judgments need to be based on some type of objective standards, regardless of the size, construction, layout, and staffing at the plants. We appreciate FSIS’s statement that it plans to examine the GAO survey results as it continues to improve its enforcement training and policies and urge FSIS to fully use the information in the survey results to identify practices that may achieve more consistent enforcement of HMSA. 10.
We modified the report to clarify that HMSA exempts ritual slaughter from the requirement we discuss in the sentences immediately preceding the text in that section of the report—that an animal be rendered insensible to pain on the first blow—not to the general HMSA requirements. 11. Our report is correct as stated. FSIS refers to FSIS Directive 6900.2, Rev. 1, section VI (A), but not to section VI (B), which states that if an inspector determines that “a noncompliance with humane slaughter and handling requirements has occurred and animals are being injured or treated inhumanely,” the inspector is to take two specific actions: (1) document the noncompliance on a noncompliance record and (2) take a regulatory control action. FSIS’s misapplication of the directive may further illustrate the lack of clarity in FSIS policy on humane handling enforcement, which may contribute to the lack of a clear understanding at the inspector level. 12. Nearly three-quarters of the inspectors-in-charge responding in our survey reported that they were not veterinarians. While 100 percent of the inspectors-in-charge (IIC) at the large plants that we surveyed were veterinarians, 88 percent of those at very small plants in our representative survey were not veterinarians, and 57 percent of IICs at small plants were not veterinarians. In addition, we modified the text to clarify the responsibility of FSIS veterinarians prior to slaughter. 13. We modified figure 3 to show that the patrol veterinarian role applies only to some small and very small plants. 14.
On page 31 of this report, we state that “FSIS began analyzing data across districts from the Humane Activities Tracking System in 2008—4 years after it developed the system.” We also recognize that the Data Analysis Integration Group began “reporting quarterly on HMSA enforcement, including the amount of time inspectors have devoted to HMSA, the number of plants suspended, and the number of noncompliance reports issued in 2009.” In reviewing these reports, however, we found no analysis indicating that FSIS used these data to evaluate HMSA enforcement across the districts and plants to identify inconsistent enforcement. Also, FSIS officials acknowledged in our final meeting in November 2009 that the agency has never conducted any analysis of the noncompliance reports to determine patterns or trends in HMSA enforcement. Furthermore, although FSIS provided us with the monthly minutes of its DVMS conference calls from March through September 2009, these minutes did not identify any FSIS analysis of HMSA enforcement across the districts and possible inconsistent patterns. FSIS did not grant our request to attend the monthly DVMS conference calls in order to better understand the nature of the DVMS discussion and attempt to determine if such analysis was under way. 15. We modified the text to indicate that there is “no clear directive to do so in guidance.” Although regulations and policy documents describe when suspensions may take place, the agency has offered no clear directive as to when they should take place. 16. We changed the text to state “six possible signs of sensibility” to clarify, as noted in footnote 17 (now footnote 15), that the list of signs included two that, alone, do not generally indicate sensibility. In addition, we re-checked the coding used in our analysis to ensure that the calculations were correct. We found no discrepancies or errors. Therefore, these results clearly demonstrate that inspectors-in-charge may not have been sufficiently trained. 17.
The National Academies’ Institute of Medicine study found weaknesses in the noncompliance reports, and as we stated, the institute recommended testing and improved training with special emphasis on the quality and consistency of noncompliance reports for food safety issues. Because FSIS’s inspection personnel are responsible for completing noncompliance reports for both food safety and humane handling violations, it is evident that improving training on the quality and consistency of those reports would be useful in supporting FSIS humane handling compliance efforts. 18. Our analysis of similar-sized plants with similar slaughter volumes revealed substantial differences in the amount of time devoted to humane handling in different districts. This information might better position FSIS officials to manage resources and/or training to help improve performance. 19. We disagree. We conducted this analysis in an effort to gain some perspective on the percentage of FSIS’s annual inspection appropriation devoted to humane handling and estimated that it has been above 1 percent of FSIS’s total annual inspection appropriation. FSIS officials informed us that 80 percent of their time should be devoted to food safety and 20 percent to humane handling inspection and other activities. Because FSIS cannot track humane handling funds separately, the agency was unable to provide the amount of funds that it devotes to humane handling activities. To provide context for the reader, we estimated the percentage of the total annual inspection appropriations dedicated to HMSA enforcement. We modified the text to expand the definition of FSIS inspection fund to include other activities such as livestock slaughter, poultry slaughter, processing inspection, egg inspection, import inspection, in-commerce compliance, district office activities, and food safety enforcement activities. However, this clarification does not change the calculation. 20. We disagree.
While the OIG report states that “events that occurred at Hallmark were not a systemic failure of the inspection processes/system as designed by FSIS,” it is important to note that its scope was based on observations at 10 cull cow (older and weaker) slaughter facilities. Nevertheless, the OIG report presented 25 recommendations to strengthen FSIS activities, and FSIS accepted all of these recommendations. Specifically, OIG recommended that FSIS needs to “reassess the inhumane handling risks associated with cull slaughter establishments and determine if more frequent or in-depth reviews need to be conducted.” The report also recommended “that a structured training and development program, with a continuing education component, be developed for both its inspection and management resources.” Furthermore, our survey results and analysis of HMSA enforcement data—that inspectors did not consistently identify and take enforcement action for humane handling violations for the period we reviewed—indicate a more widespread problem. Therefore, we continue to believe that it is difficult to know whether these incidents are isolated or not, and the extent of such incidents is difficult to determine because FSIS does not evaluate the narrative in noncompliance reports. In addition to the individual named above, other key contributors to this report were Thomas M. Cook, Assistant Director; Nanette J. Barton; Michele E. Lockhart; Beverly A. Peterson; Carol Herrnstadt Shulman; and Tyra J. Thompson. Important contributions were also made by Kevin S. Bray, Michele C. Fejfar, Justin Fisher, Carol Henn, Kirsten Lauber, and Ying Long. | Concerns about the humane handling and slaughter of livestock have grown; for example, a 2009 video showed employees at a Vermont slaughter plant skinning and decapitating conscious 1-week-old veal calves.
The Humane Methods of Slaughter Act of 1978, as amended (HMSA), prohibits the inhumane treatment of livestock in connection with slaughter and requires that animals be rendered insensible to pain before being slaughtered. The U.S. Department of Agriculture's (USDA) Food Safety and Inspection Service (FSIS) is responsible for enforcing HMSA. GAO was asked to (1) evaluate FSIS's efforts to enforce HMSA, (2) identify the extent to which FSIS tracks recent trends in resources for HMSA enforcement, and (3) evaluate FSIS's efforts to develop a strategy to guide HMSA enforcement. Among other things, GAO received survey responses from inspectors at 235 plants and examined a sample of FSIS noncompliance reports and suspension data for fiscal years 2005 through 2009. GAO's survey results and analysis of FSIS data suggest that inspectors have not taken consistent actions to enforce HMSA. Survey results indicate differences in the enforcement actions that inspectors would take when faced with a humane handling violation, such as when an animal was not rendered insensible through an acceptable stunning procedure (e.g., forcefully striking the animal on the forehead with a bolt gun or properly placing electrical shocks). Specifically, 23 percent of inspectors reported they would suspend operations for multiple unsuccessful stuns with a captive bolt gun whereas 27 percent reported that they would submit a noncompliance report. GAO's review of noncompliance reports also identified incidents in which inspectors did not suspend plant operations or take regulatory actions when they appeared warranted. The lack of consistency in enforcement may be due in part to the lack of clarity in current FSIS guidance and inadequate training. The guidance does not clearly indicate when certain enforcement actions should be taken for an egregious act—one that is cruel to animals or a condition that is ignored and leads to the harming of animals.
A noted humane handling expert has stated that FSIS inspectors need clear directives to improve consistency of HMSA enforcement. According to GAO's survey, FSIS's training may be insufficient. For example, inspectors at half of the plants could not correctly identify basic facts about signs of sensibility. Some private sector companies use additional tools to assess humane handling and improve performance. FSIS cannot fully identify trends in its inspection funding and staffing for HMSA, in part because it cannot track HMSA inspection funds separately from the inspection funds spent on food safety activities. FSIS also does not have a current workforce planning strategy for allocating limited staff to inspection activities, including HMSA enforcement. FSIS has strategic, operational, and performance plans for its inspection activities, but these plans do not clearly outline goals, needed resources, time frames, or performance metrics, and FSIS does not have a comprehensive strategy to guide HMSA enforcement. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Postal Service’s goal is to deliver at least 95 percent of local First-Class Mail overnight and to achieve 100-percent customer satisfaction. Delivery performance is measured in 96 metropolitan areas across the nation and results are published quarterly. This measurement system, known as the External First-Class Measurement System (EXFC), is based on test mailings done by Price Waterhouse. For the Washington, D.C., metropolitan area, EXFC results are available separately for Washington, D.C.; Northern Virginia; and Southern Maryland. Nationwide averages are also available for comparison purposes. Customer satisfaction is measured in 170 metropolitan areas across the nation, and results are also published quarterly. This measurement system, known as the Customer Satisfaction Index (CSI), is administered by the Opinion Research Corporation. Each quarter it mails a questionnaire to thousands of households asking them how they would rate their overall satisfaction with the Postal Service’s mail service. For the Washington, D.C., metropolitan area, CSI results are available separately for Washington, D.C.; Northern Virginia; Southern Maryland; and Suburban Maryland. The processing and distribution facility for Southern Maryland is located in Prince George’s County. The facility for Suburban Maryland is located in Montgomery County. Nationwide averages are also available for comparison purposes. The Postal Service said that its delivery service and customer satisfaction goals—nationwide and locally—are ambitious, and attaining those goals will require a high level of employee commitment. For example, the quarter 4, 1994, EXFC nationwide average was 12 percentage points below the established goal. To gauge employee attitudes and satisfaction levels, the Service has administered a questionnaire to all employees in each of the last 3 years. 
This questionnaire is commonly known as the Employee Opinion Survey (EOS), and survey results are available for the nation, broken down by local postal facility. In conducting our review, we (1) obtained and analyzed numerous Postal Service reports containing data on factors affecting mail processing and delivery; (2) obtained and analyzed numerous types of performance data for both the local Washington, D.C., area and the nation, as well as for other selected locations; (3) interviewed various postal and union officials; (4) observed mail processing operations at local processing and distribution centers and local postal stations; and (5) examined recent reports on mail service issued by the Postal Service’s Inspection Service and the Surveys and Investigations Staff of the House Committee on Appropriations. (Additional background information and more details on our objectives, scope, and methodology are presented in appendix I.) Mail service and customer satisfaction in the Washington, D.C., metropolitan area have consistently been below stated goals; generally below the national average; and, in 1994, substantially below the levels attained in 1993. Specifically, service in the Washington metropolitan area, as measured quarterly by EXFC, has been below the national average in 16 of the 17 quarters since EXFC was first established in 1990. The national average ranged between 79 and 84 percent in that time period but has always been below the 95-percent on-time delivery goal. Figure 1 compares mail delivery service in the Washington metropolitan area, over time, with the national average and delivery service goal. Further analysis of EXFC data showed that delivery scores in the Washington, D.C., metropolitan area have been among the worst in the nation. 
For example, 88 percent of the time, service in Northern Virginia and Southern Maryland was in the bottom 25 percent of all locations where service was measured; 76 percent of the time, service in Washington, D.C., was in the bottom 25 percent. Additionally, delivery service scores in the Washington, D.C., metropolitan area for quarter 4, 1994, were significantly below the scores attained for quarter 4 the previous year. Southern Maryland’s score, for example, dropped 8 percentage points. Residential customer satisfaction in much of the Washington, D.C., metropolitan area, as measured by CSI, has generally been below the national average. (See figure 2.) Since 1991, the Opinion Research Corporation has sent CSI questionnaires to postal customers on a quarterly basis asking them how satisfied they were with mail service. Information collected during these 16 quarters shows that in each quarter between 85 and 89 percent of customers nationwide rated their satisfaction with the Service’s overall performance as excellent, very good, or good. In 12 of 16 quarters, Northern Virginia customers reported being as satisfied, or more satisfied, than the nation as a whole. Customer satisfaction in the other locations that make up the metropolitan area—Southern Maryland; Washington, D.C.; and Suburban Maryland—was lower. For example, Washington, D.C., customers rated the Postal Service lower than the national average in all 16 quarters. Further analysis of CSI scores showed that customer satisfaction was lower in all parts of the Washington, D.C., metropolitan area during quarter 4 of fiscal year 1994 than during comparable periods in 1991, 1992, and 1993. (A detailed discussion of mail service conditions in the Washington, D.C., metropolitan area is presented in appendix II.) 
Mail service in the Washington, D.C., metropolitan area is poor for a number of reasons, including (1) the Postal Service’s inability to effectively deal with the unexpected growth in mail volume, (2) mail handling process problems, and (3) labor-management problems. Over the past few months, the Postal Service has initiated additional actions in each of these areas in an effort to improve mail service. In 1994, the percentage increase in the amount of mail delivered in the Washington, D.C., metropolitan area was twice the national average. According to Postal Service officials, the Postal Service had not anticipated this growth and was unprepared to process and deliver the increased volume of mail. Complicating the situation were several factors that worked against the Postal Service. First, according to Postal officials, local processing and delivery units experienced staffing problems because more craft people than expected accepted a retirement incentive (buyout) of up to 6 months’ salary and left the Service during the 1992 restructuring. Also, staffing ceilings were put into place in anticipation of more automation equipment. These events, according to Postal officials, left the delivery units with too few people to handle the increased volume of mail. Additionally, the processing units were operating with too many unskilled, temporary employees who had been hired to replace more costly career employees who retired in 1992. Training also became an issue when some new supervisors were placed in jobs where they were not familiar with the work of the employees they were supervising. After considerable attention was focused on these problems in the spring of 1994, the Postal Service took steps to hire new, permanent employees and strengthen training for supervisors and craft personnel. 
Second, to focus additional attention on customer service, separate lines of reporting authority were established for mail processing and mail delivery functions under the Executive Vice President/Chief Operating Officer during the 1992 restructuring. This realignment of responsibilities was done as part of the Postmaster General’s broad strategy to make the Postal Service more competitive, accountable, and credible. This action left no single individual with the responsibility and authority to coordinate and integrate the mail processing and delivery functions at the operating levels of the organization. The primary focus of each of the function managers was to fulfill the responsibilities of his or her function. Working with the other function managers became a secondary concern. Consequently, because critical decisions affecting both mail processing and customer services could not be made by one individual at the operating level of the organization, coordination problems developed. In June 1994, the Postmaster General moved responsibility for processing and delivery down to the Area Vice President level, and on January 10, 1995, postal officials announced plans for establishing a position under the Mid-Atlantic Area Vice President that would be responsible for overseeing all processing and delivery functions in the Washington, D.C., metropolitan area and Baltimore area. Time slippages in the automation program were another factor that affected the Postal Service’s ability to handle the increased volume of mail. More mail than planned had to be processed manually or on mechanical letter-sorting machines. The Postal Service had expected that by 1995 almost all letter mail would be barcoded by either the Postal Service or mailers and be processed on automated equipment. However, automation fell behind schedule in 1993-1994. The new projected date for barcoding all letter mail has slipped to the end of 1997. 
(A detailed discussion of the Postal Service’s inability to respond effectively to the unexpected mail volume growth in the Washington, D.C., metropolitan area is presented in appendix III.) Delivery service in the Washington metropolitan area was also adversely influenced by various mail handling process problems, including (1) the unnecessary duplicative handling of much mail addressed to Northern Virginia, (2) overnight service areas that managers believed were geographically too large, (3) mail arriving too late for normal processing, (4) the absence of a control system for routinely pinpointing the specific causes of delays in specific pieces or batches of mail, and (5) failure of employees to follow prescribed processing procedures. The Postal Service has taken action to address, at least in part, each of these problems. Some of the more significant actions taken include (1) reducing the amount of mail handled by more than one processing facility in Northern Virginia, (2) processing more mail at local facilities rather than transporting it to distant processing and distribution centers, (3) working with the large mailers to get them to mail earlier in the day and give advance notice when mailing unusually large volumes, (4) taking the first steps to develop a system that can pinpoint causes of delayed mail, and (5) requiring greater adherence to established operating procedures. Additionally, a number of service improvement teams are continuing to examine mail handling processes in an effort to identify other areas needing improvement. Examples provided by local postal officials that most clearly illustrate problems affecting the local area are discussed below. Duplicative mail handling: Much mail sent to the Northern Virginia area was delayed because it was processed by both the Dulles and Merrifield facilities. Further delays also occurred because of the time lost transporting mail between the two facilities. 
Duplicative mail handling occurred because the Dulles and Merrifield facilities are jointly responsible for certain ZIP Code service areas and most facilities sending mail to Northern Virginia did not separate the mail between the two facilities. There is no easy way to split up the service areas between the two facilities geographically—it would require realigning and changing some ZIP Codes. That option had not been vigorously pursued because of the adverse reaction from customers anticipated by the Service. However, the Postal Service recently began working with major feeders of overnight mail to work out an interim solution—i.e., the feeder facilities are to sort mail more completely before sending it to the Merrifield and Dulles facilities. Additionally, the Postal Service, in commenting on a draft of this report, said that it will be installing a Remote Bar Coding System site at the Dulles processing and distribution center (P&DC) that, along with other processing changes, will virtually eliminate the need for duplicative handling of mail for some Northern Virginia ZIP Codes. Overnight service areas that are too large: Consistent overnight delivery service in some parts of the Washington, D.C., metropolitan area is difficult to achieve because some service areas may be too large for the current collection, transportation, and delivery network. For example, mail from some of the outlying areas in the service area—e.g., Leonardtown and California, Maryland—does not arrive at the Southern Maryland processing facility until 10:00 or 11:00 p.m. This severely compresses the amount of time available for processing the mail and getting it back out to the post offices in time for delivery the next day. To address this problem, the Postal Service plans to process mail from Leonardtown and California, in addition to other Southern Maryland areas, at a closer facility in Waldorf (Charles County), Maryland. 
Additionally, the Postal Service is installing more “local only” collection boxes, which should reduce the amount of mail that has to be transported to distant processing and distribution centers. Mail arriving too late for timely processing: Large quantities of mail are frequently entered into the mail stream significantly past the times established for normal processing. This would not be a problem, however, were it not for the expectation that deliveries would be made the next day. Managers told us they have few options other than to accept late-arriving mail and then rush to meet dispatch times. They said that to do otherwise would upset the delicate balance between providing customer service and meeting established time schedules. To help establish a more orderly workflow, the Postal Service has been actively working with large mailers in the area to get them to mail earlier in the day and also to notify the Postal Service ahead of time when large mailings are expected to arrive. (A detailed discussion of all five mail handling process problems and corrective actions taken is presented in appendix IV.) In addition to academic studies, EOS, EXFC, and CSI survey results indicated that a relationship exists between employee attitudes and service performance. Employee attitudes about postal management in most of the facilities in the Washington, D.C., area, like employee attitudes in many other big cities, were in the bottom 25 percent of units nationwide. Similarly, EXFC and CSI scores for Washington, D.C., and other big cities were also relatively low compared to other areas of the country. Disruptive workforce management problems were more prevalent in the Washington, D.C., metropolitan area than in most other parts of the country. 
Postal Service data showed that employees in the Washington, D.C., metropolitan area experienced greater than average use of sick leave and a higher-than-normal use of work assignments with limited/light duties for employees who, due to physical restrictions, are unable to perform normal duties. Managers told us that excessive use of sick leave and limited/light duty assignments indicate possible abuse and result in lower productivity. Those managers believed, and EOS tended to support the view, that excessive employee absences and unavailability for regular duties were often the result of substance abuse and poor employee attitudes. EOS data suggested that employees in the Washington, D.C., metropolitan area perceived a greater than average level of substance abuse and had more negative attitudes about postal management than employees in most other locations nationwide. Postal management recognizes that improving employee attitudes and attendance is critical to improving delivery performance and customer satisfaction. However, the Postal Service cannot improve employee attitudes and attendance unilaterally. Successful change will require the support and cooperation of employees and their unions. The need for joint cooperation was pointed out in our recent report on Postal Service labor-management relations. The Postmaster General has initiated a number of actions to improve this relationship. For example, he recently invited all the parties representing postal employees to attend a national summit and commit to reaching, within 120 days, a framework agreement for addressing labor-management problems. The rural carriers union and the three management associations accepted the invitation. However, the leaders of the three largest postal unions had not accepted as of December 31, 1994. They said they would wait until the current round of contract negotiations is completed before making a decision on the summit. 
(A detailed discussion of labor-management relations is presented in appendix V.) The Postal Service provided written comments on a draft of this report. It recognized the need to improve service and highlighted its continuing efforts to produce significant improvements in customers’ satisfaction with their mail service. The Postal Service said that it was continuing to move ahead with numerous improvements in the area’s mail processing and distribution centers. For example, it cited the installation of the Remote Bar Coding System site at the Dulles P&DC to help resolve the duplicative handling of some mail addressed to Northern Virginia. It also cited efforts to begin processing more mail at the Waldorf (Charles County), Maryland facility in order to improve service in Southern Maryland. Additionally, the Postal Service said that it was looking into diagnostic technologies as a means of improving its ability to identify underlying causes of delayed mail. The Postal Service said that new supervisors are receiving the training they need, and that the Service is continuing to hire more letter carriers and mail handlers and to place them where they are most needed. The Postal Service further said that through the outstanding work of thousands of dedicated employees, it was turning the corner in providing quality service in the Washington, D.C., metropolitan area. It said that the actions taken are beginning to produce results and cited, as an example, the improved EXFC scores attained during the first quarter of 1995. The Postal Service agreed with our conclusion that improving labor-management relations is a key element in any long-term solution to mail service problems. It said that efforts in this area must include correcting problems that arise from a collective bargaining process that is not working. Further, it said that postal unions and postal management must work together to change this process. 
Where appropriate, the Postal Service’s comments have been incorporated into the text of this report. Its comments, in total, are included as appendix VI. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will distribute copies of the report to the Postmaster General, other House and Senate postal oversight committees, and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix VII. If you have any questions about the report, please call me on (202) 512-8387. In fiscal year 1994, the Postal Service delivered about 177 billion pieces of mail nationwide. About 94 billion, or 53 percent, was First-Class Mail. Revenues for all classes of mail totaled about $50 billion in fiscal year 1994. Revenue from First-Class Mail totaled about $29.4 billion—approximately 59 percent of total revenue. The Postal Service field organization comprises 10 service areas. The Mid-Atlantic Area provides service to the Washington, D.C., metropolitan area and surrounding states. (See figure I.1.) The Mid-Atlantic area is subdivided into nine performance clusters. The Northern Virginia and Capital performance clusters provide mail service for the Washington, D.C., metropolitan area. The Northern Virginia cluster consists of two mail processing and distribution centers (P&DC), one of which is at Merrifield, Virginia, and one at Dulles International Airport; and the Northern Virginia customer service district. The Capital cluster consists of three P&DCs, with one each in Capitol Heights, Maryland (Southern Maryland); Gaithersburg, Maryland (Suburban Maryland); and Brentwood (Washington, D.C.); as well as the Capital customer service district. 
The Mid-Atlantic Area Vice President is responsible for day-to-day management of the Mid-Atlantic Area. Efficient collection, processing, and transportation of mail are critical to timely mail delivery and customer satisfaction. Most processing is done at P&DCs, which (1) distribute most local mail to post offices for delivery and (2) dispatch nonlocal mail to other postal facilities for further sorting and distribution. The types of mail processing operations include (1) high-speed processing on automated equipment, (2) mechanized processing on letter sorting machines, and (3) manual sorting. Automated processing is the most efficient of the three methods, and its use is increasing as more automated equipment is installed. The Postal Service’s goal is to deliver at least 95 percent of its First-Class Mail within the following timeframes: (1) overnight for First-Class Mail originating (being sent) and destinating (being received) within the local delivery area defined by the Postal Service; (2) 2 days (generally) for First-Class Mail traveling outside the local area, but within 600 miles; and (3) 3 days for all other domestic First-Class Mail. Nationwide, during the fourth quarter of fiscal year 1994, the Postal Service delivered about 83 percent of its overnight mail, 74 percent of its 2-day mail, and 79 percent of its 3-day mail within established delivery standards. The Postal Service has for several years sponsored measurement systems—the External First-Class Measurement System (EXFC), the Customer Satisfaction Index (CSI), and the Employee Opinion Survey (EOS)—that have allowed assessments of its delivery performance, as well as of customer and employee satisfaction. The Service uses information from these systems to identify areas needing improvement and also publishes summary data that the Service and public can use to hold management and employees accountable for Postal Service performance. 
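The figures above lend themselves to a quick arithmetic cross-check. The short Python snippet below is an illustrative check, not part of the GAO report itself; it only recomputes figures the report states: the 95-percent goal, the quarter 4, 1994 nationwide on-time scores of 83, 74, and 79 percent, and First-Class revenue of about $29.4 billion out of roughly $50 billion total.

```python
# Illustrative arithmetic check of figures quoted in the report text above.
# (Not part of the GAO report; it only recomputes stated percentages.)

GOAL = 95  # percent: on-time delivery goal for First-Class Mail

# Quarter 4, fiscal year 1994 nationwide on-time delivery, by standard
actual = {"overnight": 83, "2-day": 74, "3-day": 79}

# Shortfall from the goal, in percentage points; the 12-point overnight
# gap matches the figure cited earlier in the report
shortfall = {std: GOAL - pct for std, pct in actual.items()}

# First-Class Mail share of fiscal year 1994 revenue (dollars in billions)
first_class_revenue, total_revenue = 29.4, 50.0
share = first_class_revenue / total_revenue * 100

print(shortfall)     # {'overnight': 12, '2-day': 21, '3-day': 16}
print(round(share))  # 59, i.e., "approximately 59 percent of total revenue"
```

The check confirms the internal consistency of the report's numbers: the overnight shortfall of 12 percentage points agrees with the statement that the quarter 4, 1994 EXFC nationwide average was 12 points below the goal.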
Our objectives were to (1) document the recent history of on-time mail delivery service problems for overnight First-Class Mail in the Washington, D.C., metropolitan area; (2) determine the reasons why mail service was below the desired level; and (3) identify any Postal Service actions to improve service. We did not review the Postal Service’s delivery performance for First-Class Mail outside the local service area or for other mail classes (i.e., Express, second-, third-, and fourth-class). The Washington, D.C., metropolitan area, as used in this report, includes the Northern Virginia and Capital clusters. To accomplish our objectives, we obtained and analyzed numerous Postal Service reports containing data on factors affecting mail processing and delivery. We examined national and local Postal Service workhour reports, financial reports, and “FLASH” reports. FLASH reports provide, among other things, detailed information on overtime, mail volume, the number of addresses where mail can be delivered, sick leave usage, limited duty workhours, and the number of hours spent on training. The reports generally covered 4-week accounting periods for fiscal years 1991 through 1994. They included information for the nation, as well as for the Northern Virginia cluster, the Capital cluster, and the units included in these two clusters. Because of changes in accounting and reporting in fiscal year 1993, we did not use 1993 data below the cluster level. We also obtained and analyzed numerous types of performance data for the local Washington, D.C., area and for the nation, as well as for other judgmentally selected locations. These data included delivery service scores as measured by the Postal Service’s EXFC measurement system, customer satisfaction scores as measured by CSI, and employee opinions as determined by EOS. These data covered fiscal years 1991 through 1994, except for EOS, which was conducted in 1992, 1993, and 1994. 
In 1992, we reported that CSI was a statistically valid survey of residential customer satisfaction with the quality of service provided by the Postal Service. We have not evaluated the validity of the EXFC and EOS surveys. We interviewed (1) the Chief Operating Officer/Executive Vice President of the Postal Service; (2) the Vice President of the Mid-Atlantic Service Area; (3) the customer service managers for the Northern Virginia and Capital clusters; (4) the plant managers at Merrifield, Brentwood, and Capitol Heights; (5) Inspection Service officials responsible for audits of postal operations; and (6) various other program and operations officials at headquarters, the Mid-Atlantic area office, local P&DCs, and local delivery units. We also discussed the causes of mail delivery problems with representatives from the National Association of Letter Carriers and the American Postal Workers Union. Additionally, we observed mail processing operations at local P&DCs and local postal delivery units. We also obtained and analyzed documentation on initiatives to improve service in the Washington, D.C., metropolitan area, although we did not evaluate the effectiveness of those initiatives. We also reviewed recent reports on mail service issued by the Inspection Service and the Surveys and Investigations Staff of the House Committee on Appropriations. We requested comments on a draft of this report from the Postal Service. Written comments were received and are discussed on page 11 and included as appendix VI. We did our work from September 1994 to December 1994 in the Washington, D.C., metropolitan area in accordance with generally accepted government auditing standards. The Postal Service’s goal is to deliver 95 percent of the mail on time as measured by EXFC and to achieve 100-percent customer satisfaction as measured by CSI. To date, however, the Postal Service has fallen considerably short of those goals, both nationally and in the Washington, D.C., metropolitan area. 
EXFC data show that mail delivery service in the Washington, D.C., area has consistently been among the worst in the nation. EXFC is administered under contract by Price Waterhouse and measures delivery time between the scheduled pickup of mail at collection boxes or post offices and the receipt of that mail in the home or business. EXFC test mailings are done in 96 metropolitan areas across the country. Results are published quarterly for overnight First-Class Mail. Within the Washington metropolitan area, EXFC delivery scores are available for Northern Virginia, Southern Maryland, and Washington, D.C. Since EXFC was first established in 1990, delivery scores for overnight First-Class Mail in the Washington, D.C., metropolitan area have, except for the first quarter reported (fourth quarter of 1990), been below the national average, and the national average has always been below the performance goal established by the Postal Service. (See figure 1.) Our further analysis of EXFC scores showed that mail service in the Washington, D.C., metropolitan area was not only below the national average, but also was generally among the worst in the nation. As shown in table II.1, Northern Virginia, Southern Maryland, and Washington, D.C., frequently ranked in the bottom 25 percent of the metropolitan areas where delivery performance was measured. Often, these locations were in the bottom 10 percent. EXFC data also showed that Washington metropolitan area delivery service in fiscal year 1994 was generally below the levels of service provided in fiscal years 1991 through 1993. (See figure II.1.) Northern Virginia was the exception. Delivery service in Northern Virginia was better in fiscal year 1994 than it was in 1991 and 1992, but not as good as it was in fiscal year 1993. EXFC scores can be affected by the performance of neighboring P&DCs. 
For example, mail originating in Southern Maryland and going to the District of Columbia passes through the Southern Maryland P&DC and the Washington, D.C., P&DC (the destinating facility). The time taken is reflected in Washington, D.C.’s EXFC score, even though it may have been delayed because of a problem at the Southern Maryland P&DC. Because of the impact other locations may have on individual EXFC scores, we obtained and compared the test scores for “turnaround” mail in Northern Virginia, Southern Maryland, and Washington, D.C., with the published EXFC scores for each of the three locations where service is measured in the Washington area. Table II.2 shows that delivery scores for turnaround mail were higher than the published EXFC scores, but still below the 95-percent delivery performance standard. Customer satisfaction with mail service, as measured by CSI, varied among residents in Northern Virginia, Suburban Maryland, Southern Maryland, and Washington, D.C. In fiscal year 1991, the Postal Service developed and implemented CSI to track residential customer satisfaction. CSI is administered under contract by Opinion Research Corporation. Each quarter since it was implemented, the contractor has mailed a questionnaire to thousands of households throughout the nation asking them how they would rate their overall satisfaction with the Postal Service’s performance (poor/fair/good/very good/excellent). The Postal Service publicly discloses quarterly overall satisfaction ratings for 170 metropolitan areas, as well as the nationwide average. The Postal Service began reporting quarterly CSI scores in the first quarter of fiscal year 1991 for 40 metropolitan areas. Since then, the survey has been expanded to 170 locations. Results from the first survey showed that, nationally, 87 percent of customers thought the Postal Service’s overall performance was excellent, very good, or good. 
Since then, quarterly scores have ranged between 85 and 89 percent. The CSI score for quarter 4, 1994, was 85 percent. Among the 170 locations surveyed, customer satisfaction scores are reported for four locations in the Washington, D.C., metropolitan area: Northern Virginia, Suburban Maryland, Southern Maryland, and Washington, D.C. Of these locations, as shown in figure 2, residents of Northern Virginia gave the highest satisfaction rating on the overall performance of the Postal Service. In 12 of the 16 quarters since the Postal Service began reporting CSI scores, Northern Virginia’s scores equalled or exceeded the national average. However, in 3 of the last 4 quarters reported, satisfaction decreased, with scores falling 1 to 3 percentage points below the national average. Suburban Maryland’s postal customers were less satisfied. In 9 of the 16 quarters since the Postal Service began reporting CSI scores, Suburban Maryland’s scores fell below the national average. Customer satisfaction in Suburban Maryland decreased in the last 4 quarters—dropping from 90 percent in quarter 4, 1993, to 80 percent in quarter 4, 1994. Southern Maryland postal customers have been less satisfied than Northern Virginia and Suburban Maryland customers. In fact, Southern Maryland’s score fell below the national average in 13 of the 16 quarters since quarter 1, 1991. Of the four local areas with CSI scores comprising the Washington, D.C., metropolitan area, Washington, D.C., itself has been rated lowest on overall performance. In all 16 quarters since the Postal Service began reporting CSI scores, Washington, D.C.’s scores were lower than the national average. In addition, its scores, like most others, began to drop in quarter 4, 1993. Further analysis of CSI data showed that customer satisfaction in Washington, D.C.; Southern Maryland; Suburban Maryland; and Northern Virginia was lower in quarter 4, 1994, than it was in quarter 4 of any of the preceding 3 fiscal years. 
(See figure II.2.) Postal officials cited the unexpected growth in mail volume in 1994 as one of the principal causes of the breakdown of delivery service in the Washington, D.C., metropolitan area. They said the Postal Service was unable to respond to the unanticipated growth in volume because (1) local delivery units had numerous unfilled vacancies and the workforce at the processing and distribution centers comprised many unskilled, temporary employees; and (2) an organizational change had weakened management control over the span of processing and delivery activities. Timely processing and delivery of the mail were further complicated because employee complement ceilings had been put into place in anticipation of automation. However, automation fell behind schedule in 1993 and 1994. Postal officials cited an unanticipated heavy mail volume in 1994 as one of the principal causes for the slip in service performance, both nationally and locally. Nationally, mail volume grew by about 6 billion pieces between 1993 and 1994—a 3.5-percent increase. Mail volume data, in number of pieces, were not available below the national level. At the local delivery unit level, mail volume is measured in feet. This measure, referred to as city delivery volume feet (CDVF), reflects the amount of mail delivered by carriers. The data showed that the rate of increase in the amount of mail delivered by carriers in the Northern Virginia and Capital performance clusters was about twice the rate of increase experienced nationwide. (See table III.1.) Postal Service officials said they had not anticipated that much growth in volume either nationally or locally. Furthermore, they believed that any 1994 increase in volume could be handled without increasing the workforce size because the deployment of additional automated equipment would make processing and delivery more efficient.
In retrospect, however, the Postal Service officials said that staffing was inadequate and that automation was able to handle only about half of the volume increase. According to Postal Service officials, a shortage of trained employees contributed to poor mail service in the Washington, D.C., metropolitan area. The shortage resulted from the loss of skilled employees during the restructuring and buyout, hiring decisions based on an unrealistic automation schedule, and some inadequately trained supervisors. The Postal Service lost many skilled craft employees as a result of the 1992 restructuring and buyout. Nationally, 16,882 clerks, 11,933 city carriers, and 2,346 mail handlers took the buyout—about 5.8 percent of all employees in this group. Additionally, more than 16,000 other employees also left the Service. In the Washington, D.C., area, 1,165 craft employees took the buyout—about 6.6 percent of the craft employees in the local area. Employees in the Washington, D.C., area who took the buyout had an average length of service of about 27 years. In testimony before the Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations, the Postmaster General said that in looking back at the 1992 restructuring, the Postal Service “let a few too many people go, and . . . cut too deeply in some functional areas.” In planning the 1992 restructuring, the Postal Service had intended to eliminate approximately 30,000 overhead positions that were not involved in mail processing or delivery. However, the Postmaster General wanted to avoid a reduction-in-force, so he extended the buyout offer to clerks, carriers, mail handlers, postmasters, and others in order to open up vacancies for employees whose overhead positions were eliminated but who were either not eligible or did not want to retire. Consequently, more than 47,000 employees opted for the special retirement incentives offered in the Fall of 1992. 
This number was greater than the Postal Service had expected. However, officials viewed the loss as an opportunity to hire less costly noncareer employees—who could later be terminated more easily than career employees as more automation was moved into place. As the downsizing/restructuring got under way in the fall of 1992, Members of Congress, mailers, and employee groups expressed considerable concern about a possible adverse impact on mail delivery service. However, when compared to the same periods the previous year, service nationwide and in Southern Maryland remained stable and even showed signs of improvement immediately following the restructuring. EXFC scores for Washington, D.C., and Northern Virginia, on the other hand, fell immediately following the restructuring in comparison to the scores received during the same period the previous year. By quarter 2, 1994, nationwide scores and scores for Washington, D.C.; Southern Maryland; and Northern Virginia were below the scores received for quarter 2, 1993, and in July 1994, the Vice President of the Mid-Atlantic Area said that staffing had become a significant problem in the Washington, D.C., area. He noted that in December 1993, in preparation for additional automated sorting systems, the Postal Service had put in place employee complement ceilings. As a result of this action, he said, delivery units struggled with unfilled vacancies, and the processing and distribution centers had to rely on a workforce with many unskilled, temporary employees. These problems were confirmed by local Washington, D.C., area postal officials. They said that because of the departure of many experienced carriers, clerks, and supervisors during the restructuring, the Postal Service’s ability to quickly and accurately sort and deliver mail in the Washington, D.C., area was adversely affected. 
They also agreed with the Vice President of the Mid-Atlantic area that the shortage of career employees resulting from the employee complement ceilings put in place in late 1993, combined with the large number of unskilled, temporary employees, adversely affected their ability to provide accurate, on-time delivery service. Reacting to the staffing problems, the Vice President for the Mid-Atlantic Area said that the Postal Service was placing emphasis on obtaining adequate numbers of employees and making sure they were in the right places at the right time. As of July 1994, the Postal Service had approximately 18,000 craft employees in the Washington, D.C., metropolitan area. Between that time and October 1994, 130 new staff had been hired in Southern Maryland, including 55 letter carriers, 40 clerks, and 35 mail handlers. In Suburban Maryland, the Postal Service had hired 62 new letter carriers and 34 clerks. In Northern Virginia, 300 new employees had been hired, half of whom were letter carriers. In Washington, D.C., 168 letter carriers, 30 clerks, and 31 mail handlers had been hired. Another staffing issue that arose from the restructuring involved a management decision that placed some employees into supervisory positions when they were not familiar with the work of the employees they were supervising. The Postal Service said it did this to avoid relocating employees outside the Washington, D.C., metropolitan area. However, this action raised additional congressional concerns about the adequacy of training for new supervisors. The Postal Service began making changes to its training program after the restructuring and believes that its ability to train people properly, quickly, and economically is being strengthened. For example, postal officials said that the supervisory training program was being revised and a curriculum based on needs assessment was being developed. 
In commenting on a draft of this report, the Postal Service said that new supervisors are getting the training they need, and that the Service is continuing to hire more letter carriers and mail handlers and to place them where they are most needed. Compounding staffing problems was the delay in expected benefits from automation. The Postal Service had expected that by 1995 virtually all letter mail would be barcoded by either the Postal Service or the mailer. However, in April 1994, it announced that the barcoding goal date had slipped to the end of 1997. Automation increases the efficiency of mail processing by decreasing the volume that has to be sorted by relatively slower and more costly mechanized or manual processing—potentially leading to higher EXFC scores. Mechanized sorting on letter sorting machines, on the other hand, requires operators to memorize difficult sort schemes and key in ZIP Code information. This human intervention results in higher potential for mishandling mail, causing delays. With automated processing, barcoded letters are sorted in high-speed barcode sorters, often to the level of the street address, with limited human intervention. As automation becomes fully deployed, the Postal Service expects most mail to be already sorted by the time it gets to a carrier for delivery. Shortly after taking office in 1992, Postmaster General Runyon began a top-down restructuring of the Postal Service. This was part of a broad strategy to make the Service more competitive, accountable, and credible. One key component of the restructuring was the separation of mail processing and mail delivery at all levels of the organization below the Executive Vice President/Chief Operating Officer of the Postal Service. This action resulted in splitting accountability for processes critical to mail delivery service. 
The value of separating responsibility for the mail processing function (which takes place primarily at processing and distribution centers) from the mail delivery function (which takes place primarily at local post offices) has been controversial. The separation left no single manager with the responsibility and authority to coordinate and integrate the mail processing and delivery functions in the Washington, D.C., metropolitan area. Each manager’s primary focus became the fulfillment of his or her own individual responsibilities. Working with managers of other functions became secondary. Consequently, critical decisions affecting both mail processing and customer services in the Washington, D.C., area were not being made by one manager at the operating level of the organization. For example, when we visited one post office in Northern Virginia, local postal officials complained that too much unsorted and misrouted mail was routinely sent to local post offices in order to keep the Merrifield P&DC from having a backlog of unprocessed mail. On the day of our visit, these officials showed us a container of misrouted mail from Merrifield that included not only overnight First-Class Mail but also Priority Mail. The Postmaster noted that by the time this mail could be sent back to Merrifield to be correctly sorted, it would be at least 1 day late. Since there was no one manager with jurisdiction over processing and delivery functions in the Washington, D.C., metropolitan area, resolution of conflicts between the two functions could be accomplished only through the direct involvement of the area vice president, who had responsibility for six states and Washington, D.C. The Inspection Service also identified excessive misrouted mail as a significant problem in the Washington, D.C., area in its May 1994 report on mail conditions in the Mid-Atlantic Area. 
In a December 1994 Inspection Service report, it also cited the split in responsibilities between processing and delivery as a significant problem in the Washington, D.C., area. The report cited the absence of teamwork and cohesiveness among managers. The Inspection Service said that there needs to be a “glue” to hold the managers of the processing and delivery functions together in the Washington, D.C., metropolitan area. Additionally, representatives from the National Association of Letter Carriers and the American Postal Workers Union told us that the split in responsibilities between processing and delivery was a significant contributing factor to poor mail delivery service in the Washington, D.C., metropolitan area. In June 1994, the Postmaster General changed the management structure to increase the levels of teamwork and accountability in the Postal Service. He took this action in response to feedback from Members of Congress, postal customers, and employees regarding the separation of the customer service function and the processing and distribution function that followed the 1992 restructuring. The Postmaster General combined the responsibility for customer service and mail processing and distribution at a lower level in the organization—from the Chief Operating Officer/Executive Vice President to the area office level. Instead of each of the 10 areas having a manager for customer service and another for mail processing and distribution, one overall manager with the rank of Vice President was put in charge of both customer service and mail processing and distribution. On January 10, 1995, the Postal Service made an additional change designed to push accountability farther down in the organization. On that date, postal officials announced plans for establishing a position under the Mid-Atlantic Area Vice President that would oversee all processing and delivery functions in the Washington/Baltimore area. 
Several mail handling process problems contributed to the poor delivery service in the Washington, D.C., metropolitan area. These problems included (1) the unnecessary duplicative handling of much mail addressed to Northern Virginia, (2) the difficulty of meeting delivery standards in some outlying areas, (3) the arrival of mail too late for processing and delivery the next day, (4) the lack of a system for routinely pinpointing the causes of delays in specific pieces or batches of mail, and (5) the failure to follow established procedures. Mail addressed to two of the seven ZIP Code service areas in Northern Virginia is often processed by both the Merrifield and Dulles processing and distribution centers and is sometimes delayed by the unnecessary additional processing. This duplicative handling occurs because the Merrifield and Dulles centers are jointly responsible for processing mail addressed to the 220 and 221 ZIP Code service areas. This is partly a result of the way ZIP Codes were first assigned within the 220 and 221 delivery service areas. In 1963, when the ZIP Code service areas were first established, the Dulles facility did not exist; therefore, Merrifield was responsible for all of 220 and 221. At that time, postal officials at Merrifield assigned ZIP Codes using an alphabetic listing of all post offices in these two service areas. Because the assignments were made alphabetically, there was no clear geographic distinction between the 220 and 221 service areas. Subsequently, in 1992, when the Dulles facility became operational, there was no good way of isolating either the 220 or 221 service area for processing at Dulles. Therefore, both facilities assumed joint responsibility for processing mail addressed to 220 and 221.
A plan to restructure the ZIP Codes in these two service areas was developed at the headquarters staff level in 1991, but top management did not approve it because of concerns over reactions from postal customers about ZIP Code changes. Depending on the originating point and predetermined routing schedules, mail addressed to 220 or 221 is to go to either the Merrifield or Dulles centers for processing. The receiving center is to sort the mail to identify the mail that is to be delivered within its service area and then dispatch the remaining mail to the other center for further processing. Postal officials said this procedure results in excessive transportation between the two facilities and duplicative sorting, which can also translate into delayed mail. Postal officials were unable to say precisely how much mail was subjected to this duplicative processing but said it involved substantial quantities. As a partial solution to the problem of duplicative mail handling in the Northern Virginia area, the Postal Service has begun asking the primary feeders of overnight mail to Northern Virginia to sort that mail to a 5-digit level and transport it to the appropriate center in Northern Virginia for further processing. The Postal Service expects this change to reduce the duplicative handling of mail between the two centers, but it places more processing work on the other facilities. The Postal Service, in commenting on a draft of this report, said that it will be installing a Remote Bar Coding System site at the Dulles P&DC, which it said will virtually eliminate the need for duplicative handling of mail for some Northern Virginia ZIP Codes. Plant managers at the Southern Maryland and Northern Virginia P&DCs believe that consistent overnight delivery is difficult to achieve in certain outlying areas. They believe an extensive 1990 effort to revise delivery standards and establish more realistic overnight delivery service areas did not go far enough.
The plant manager at the Southern Maryland P&DC, in particular, believes that he has an excessively large overnight delivery service area, which he believes has an adverse impact on his EXFC scores. In 1990, in an effort to provide better mail delivery service by improving the Postal Service’s ability to consistently deliver mail within the standards, the Postal Service changed 6,389 (44 percent) of its 14,578 overnight delivery areas nationwide to 2-day service areas. Although this change relaxed the delivery standards for some areas, standards for other areas were unchanged. The plant manager at the Southern Maryland P&DC cited Leonardtown and California, Maryland, as examples of outlying locations where the overnight delivery standard was not relaxed and is, at best, challenging to meet. Mail from both of these locations is processed at the Southern Maryland processing and distribution center. The plant manager at Southern Maryland said mail from Leonardtown and California often does not arrive at the Southern Maryland center for processing until 10:00 or 11:00 p.m. He said the post offices were unable to get the mail to him earlier in the day because the carriers were often making deliveries and picking up mail until late in the evening. He said that because of the time required to process the mail through the facility, it is difficult to get the mail back out to Leonardtown and California in time for delivery the next day. Partly to address the delivery problem to outlying areas, the Postal Service is planning to process mail from Leonardtown and California, in addition to other Southern Maryland areas, at a facility in Waldorf (Charles County), Maryland, which is closer to Leonardtown and California. The Postal Service believes that by decentralizing processing it will be better able to serve the Southern Maryland mailing public and provide more reliable, consistent service.
In addition, to improve mail flow, the Postal Service is installing more “local only” collection boxes in high-traffic locations throughout the Washington, D.C., area. The ZIP Codes covered by that service are to be clearly displayed on the collection boxes. Customers using these boxes should receive overnight service because that mail will not leave the local area for processing. Mail also arrived late at area P&DCs for reasons other than the size of the service area. Each P&DC has established an operating plan specifying critical entry times for receipt of mail in order to meet established clearance and dispatch times at the P&DC. However, area plant managers told us that large quantities of mail, from mailers and other postal facilities, frequently arrived past the critical entry times. This compressed the amount of time that P&DCs had available for processing the mail. The area managers said they have few options other than to accept the mail and then rush to meet their clearance and dispatch times. They feel that to do otherwise would upset the delicate balance between providing customer service and meeting established time schedules. The Inspection Service identified mail arriving late at P&DCs as one of the major contributors to delayed mail. The Inspection Service also reported that other delays occurred because bulk business mail was sometimes worked out of sequence—i.e., the latest arriving mail was being worked first instead of last. Postal officials at the Southern Maryland P&DC said local mailers routinely deposited large amounts of bulk business mail on their docks late in the day and expected deliveries to be made the next day. To better plan for and manage its workload, Postal Service officials said customer service representatives were more actively working with major mailers in the area to get them to mail earlier in the day and also notify the Postal Service ahead of time when large mailings are expected to arrive.
Additionally, some of the mail processing that was being done at P&DCs is now being shifted to local post offices. Postal Service officials believe this will expedite mail distribution to carriers and improve service to customers. As of December 31, 1994, the Postal Service did not have a system that could be used to examine delayed mail and pinpoint where, in the processing and delivery stream, the mail fell behind schedule. Without being able to pinpoint problems in the mailstream, the Postal Service is forced to react to the effects of delivery problems on customer service instead of taking timely steps to avoid or reduce late deliveries. The Postal Service has nearly 40,000 post offices, stations, and branches that collect and deliver over 570 million pieces of mail daily. Between collection and delivery, mail is transported, sorted, and delivered by over 700,000 employees working in or out of over 349 mail processing and distribution facilities. A First-Class letter traveling from coast to coast passes through a myriad of mail processing, transportation, and delivery operations. Mail typically moves between processing steps in a distribution facility, or among facilities, in batches carried in large mail containers. The Postal Service has systems that use barcoding or other forms of automated identification of containers to assist in the control and movement of containers. However, these systems are not designed to provide operational data on a comprehensive basis that allow the Postal Service to track each mail container through the entire processing and distribution cycle. Consequently, postal management cannot track First-Class Mail that was delayed and gather related data to promptly determine when, where, and why it fell behind schedule. One floor supervisor at the Brentwood processing facility in Washington, D.C., explained the implications of this weakness. 
He said that any postal employee can examine a container of mail at any point in the processing and delivery cycle and determine whether that mail is on schedule. This is possible because each P&DC has an operating plan establishing “windows” for receiving, processing, and dispatching mail. Therefore, a mail handler can examine the postmark on a mailpiece, compare it to the mail processing timetable (operating plan), and determine whether or not the mailpiece is delayed. However, if the mailpiece is delayed, the critical factors that cannot be determined are when, where, and why the mailpiece fell behind schedule. In other words, there is no “history” of the mailpiece (or container of mailpieces) that would pinpoint breakdowns in the mailstream and allow the Service to take corrective actions to prevent future slowdowns. For example, at Southern Maryland, we noticed mail waiting to be processed that should already have been delivered. The supervisor in charge was unable to tell us if that mail was delayed before it arrived at Southern Maryland or became delayed somewhere within the plant, nor could he tell us why it was delayed. Without a diagnostic tool for tracking delayed mail to the source of the problem, corrective actions can be made only to the extent that breakdowns in the mailstream are significant enough to either become conspicuous to postal managers—such as large volumes of mail being consistently late from a particular facility—or cause EXFC or CSI ratings to drop. Although the Postal Service has not yet developed a system that can review the history of delayed mailpieces to identify points and causes of delays, it has taken steps to try to identify systemwide problems that could cause mail delays. For example, Postal Headquarters has set up a National Operations Management Center that allows officials to monitor mail flow across the nation and respond to performance problems and changing customer needs. 
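The supervisor's check is straightforward to compute; what cannot be recovered is the history. A minimal sketch of the postmark-versus-operating-plan comparison, assuming a hypothetical one-day processing window (the window and timestamps are invented for illustration, not actual Postal Service data):

```python
from datetime import datetime, timedelta

# Hypothetical operating-plan window: mail should clear processing
# within one day of its postmark. Actual operating plans specify
# separate receiving, processing, and dispatch windows.
PROCESSING_WINDOW = timedelta(days=1)

def is_delayed(postmark, now, window=PROCESSING_WINDOW):
    """A mail handler's check: is this piece behind schedule?"""
    return now - postmark > window

now = datetime(1994, 12, 15, 6, 0)
print(is_delayed(datetime(1994, 12, 14, 17, 0), now))  # False: within the window
print(is_delayed(datetime(1994, 12, 13, 17, 0), now))  # True: behind schedule
```

The check answers only whether a piece is late; with no recorded history, the when, where, and why remain unknown, which is the gap described above.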
Management also reports that it is identifying “pinch points,” which slow mail in the postal network, and rerouting mail when the need arises. Postal officials recognize the need for a capability to track delayed mail. They said that since most letters and flats are now barcoded, a logical next step would be the handling of batches of mail under some form of computer-assisted tracking and control system. According to Postal technicians, since all mail moves between processing steps in a distribution center, or among centers, in batches carried in some form of container, it is possible to identify those containers and their contents with a machine-readable code that would enable computer-based systems to monitor their movements. Accordingly, the Postal Service is developing a program for the automated identification and tracking of single high-value mailpieces or batches of mail in containers. This program, known as the Unit-Load Tracking Architecture (ULTRA), is still in an early formative stage and may take years to develop and implement. Under the ULTRA system, unique codes would be applied to letters, parcels, sacks, trays, and containers that would allow the Postal Service to track the units through the postal system. This comprehensive system could allow definitive identification of the points and causes of processing and delivery delays. In commenting on a draft of this report, the Postal Service said that it was also looking into other diagnostic technologies as means of improving its ability to identify underlying causes of delayed mail. Over the past few months, the Inspection Service reported many instances where failure to follow established mail processing procedures contributed to delays. 
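A container history of the kind ULTRA envisions could, in principle, answer those when, where, and why questions. The sketch below is illustrative only; the class, scan points, facility names, and timestamps are invented, not an actual ULTRA design:

```python
from datetime import datetime

class ContainerHistory:
    """Timestamped scan events for one mail container, recorded each
    time its machine-readable code is read."""

    def __init__(self, container_id):
        self.container_id = container_id
        self.events = []  # (timestamp, facility, operation)

    def scan(self, timestamp, facility, operation):
        self.events.append((timestamp, facility, operation))

    def longest_dwell(self):
        """Find the scan after which the container sat longest,
        the likeliest point of delay."""
        gaps = [
            (t2 - t1, facility, operation)
            for (t1, facility, operation), (t2, _, _)
            in zip(self.events, self.events[1:])
        ]
        return max(gaps) if gaps else None

history = ContainerHistory("C-0001")
history.scan(datetime(1994, 12, 13, 18, 0), "Merrifield P&DC", "arrival")
history.scan(datetime(1994, 12, 13, 21, 0), "Merrifield P&DC", "primary sort")
history.scan(datetime(1994, 12, 14, 14, 0), "Dulles P&DC", "arrival")
gap, facility, operation = history.longest_dwell()
# The 17-hour gap after the Merrifield primary sort pinpoints where
# this container fell behind.
```

With such a record, a supervisor finding delayed mail at Southern Maryland could determine whether the delay arose upstream or within the plant, instead of observing only that the mail is late.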
Many instances have been identified where mail was not picked up from collection boxes; various types of mail were commingled in the same container, causing double handling and reduced cancelling efficiency; color codes designating delivery dates were not used or were used improperly; and inaccurate reports were prepared on mail conditions. For example, the Washington, D.C., P&DC was not placing color codes on a large volume of its mail. This led to mail being worked out of sequence and sometimes delayed. The Inspection Service also identified improper color coding as a significant problem in the delivery units. The Inspection Service reported that significant progress has been made in following established procedures for collecting, separating, color coding, and properly reporting on mail conditions. According to Postal officials, these actions are being accomplished primarily through increased training and reminders to employees of the need to adhere to established procedures. In December 1994, several service improvement teams were in place. These teams comprised both craft and management employees from a variety of functions. A major part of the teams’ work is to examine mail flow processes and identify other weaknesses that may be contributing to late mail. Despite the potential benefits of operational changes, long-term improvements in delivery service will require labor and management to work together toward a common goal of continually improving customer service. Fundamental changes must occur in labor relations in order to increase employee commitment and reduce the conflicts between labor and management that currently exist. This is particularly true in the Washington, D.C., metropolitan area. Workforce management problems that were disruptive to mail handling operations have occurred more frequently in the Washington, D.C., metropolitan area than in most other parts of the country. 
Improving employee commitment is one of the Postmaster General’s corporate goals. In a recent study of labor relations, we found a negative labor climate that did not foster employee commitment. Our report disclosed that labor-management relations problems persist on the factory floor of postal facilities. A negative labor climate can impair both productivity and product quality. A number of studies have documented that there is a relationship between employees’ attitudes and performance. One of the most prevalent workforce management problems in the Washington, D.C., metropolitan area was running mail handling operations without a full complement of workers. Often, employees were unexpectedly absent or otherwise unavailable to do their normal work assignments. Unexpected absences often involved the use of sick leave. Employees can also be unavailable for their regular work if they have been injured or are otherwise considered by their physician to be medically incapable of performing normal duties. Some managers said that unusually high usage of sick leave and limited/light duty indicated possible abuse. Managers also said, and the Employee Opinion Survey (EOS) tends to support, that excessive employee absences and unavailability for regular duties are often brought about by substance abuse or poor employee attitudes. Postal Service data showed that employees in the Washington, D.C., metropolitan area experienced greater than average use of sick leave and a higher than normal use of limited duty and light duty work assignments. The EOS also suggested a greater than average level of perceived substance abuse. In addition, the EOS index suggested that Washington, D.C., area employee attitudes about postal management ranked among the lowest in the country. Figure V.1 shows that sick leave usage from 1992 through 1994 for the Northern Virginia and Capital clusters was higher than the national average.
The Northern Virginia sick leave usage rates, expressed as a percentage of total workhours, were 3.27, 3.11, and 3.29 during the period, while the Capital cluster rates were 3.56, 3.31, and 3.62, respectively. These usage rates were greater than the national averages, which were 3.22, 3.01, and 3.13 for the period. As figure V.2 shows, limited/light duty hours as a percent of total workhours were about twice the national average in the Capital cluster and about one and one-quarter times the national average in the Northern Virginia cluster. The EOS responses suggested that many employees believed there were substance abuse problems (alcohol and drugs) in the Postal Service, which could have caused attendance problems and poor employee performance. Locally, as shown in figure V.3, a higher than average percentage of employees in the Southern Maryland; Washington, D.C.; Merrifield, Virginia; and Suburban Maryland P&DCs believed alcohol abuse was a problem where they work. Postal Service employees also perceived drug abuse as a problem in the Washington, D.C., area, as shown in figure V.4. None of the local P&DCs reported lower than average perceptions of drug abuse. Employees in delivery units generally perceived that substance abuse was much less of a problem than did employees in the P&DCs. Employee attitudes can be a factor in the level of employee commitment. One measure of employee attitudes is the EOS Index—the average favorable response on 20 employee opinion survey questions. These questions deal with how managers and supervisors treat employees; respond to their problems, complaints, and ideas; and deal with poor performance and recognize good performance. As table V.1 shows, the postal workforce in the Washington, D.C., metropolitan area gave local management relatively low marks, placing most of the units in the area in the bottom 25 percent of all units nationwide. 
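The sick leave figures above are hours of sick leave expressed as a percentage of total workhours. A minimal sketch of the arithmetic; the workhour total below is invented solely for illustration, and only the percentages come from the report:

```python
def usage_rate(sick_leave_hours, total_workhours):
    """Sick leave usage as a percentage of total workhours."""
    return 100 * sick_leave_hours / total_workhours

# An invented workhour base showing how a rate like the Capital
# cluster's 1994 figure of 3.62 percent arises:
rate = usage_rate(36_200, 1_000_000)
print(rate)  # 3.62

# Comparison against the 1994 national average of 3.13 percent:
excess = rate - 3.13
print(round(excess, 2))  # 0.49 percentage points above the average
```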
[Table V.1 residue: EOS Index rankings covered customer service (post office) units and the Washington, D.C., Southern Maryland, and Suburban Maryland P&DCs.] The Washington, D.C., area was not unlike other large, urban areas with regard to the relationship between low employee morale and low service scores. As table V.2 shows, the EOS Index scores for most units in nine other large urban areas that we judgmentally selected for comparison purposes ranked in the bottom half of all units nationwide. Like the EOS Index scores, the EXFC and CSI scores for these nine big cities also were relatively low compared to scores in other areas of the country. Figures V.5 through V.7 show that EXFC scores for most of the nine cities have usually fallen below the national average. Figures V.8 through V.10 show that CSI scores for eight of the nine cities have also usually fallen below the national average. We recently reported, and the Postal Service has acknowledged, that improving labor-management relations is a long-term proposition. In our recently issued report on labor-management relations, we recommended that the Postal Service, the unions, and management associations develop a long-term agreement (at least 10 years) for changing the workroom climate for both processing and delivery functions. Postal Service efforts to address problems in Chicago illustrate that breakthrough improvements require a long-term effort. Responding to our 1990 letter highlighting our observations on the need for mail delivery service improvements in Chicago, the Postmaster General developed a plan for improving service. Four years later, service in Chicago remained poor. Chicago has a long history of low EXFC scores, and in early 1994 attention was again focused on its mail delivery service problems. About 40,000 pieces of undelivered mail were found in a letter carrier’s truck parked outside a post office in Chicago. The oldest envelopes bore postmarks from December 1993.
A month later the Chicago police discovered more than 100 pounds of burning mail beneath a viaduct on the Chicago South Side. That same day, another 20,000 pieces of undelivered mail—some up to 15 years old—were found behind the home of a retired carrier in southwest Chicago. When CSI quantified the level of customer dissatisfaction, Chicago ranked last 15 of the 16 times the survey has been conducted. The Postmaster General reacted by creating a 27-member Chicago Improvement Task Force to identify and correct service problems. The Postal Service reported a number of corrective actions instituted by the task force that were designed to improve mail delivery service. Similar to the situation in Washington, D.C., the task force found operations problems as well as problems with the attitudes of employees. Despite the task force’s corrective actions, Chicago has not made breakthrough improvement. Although there has been greater on-time performance, reduced delayed mail, fewer complaints, and less waiting time in line, Chicago’s EXFC performance for quarter 4, 1994, remained 6 points below its score in the same quarter in the prior year and 12 points below the national average. Customer satisfaction also remained poor at 51 percent. Operations improvements are vital, but they will not solve all delivery service problems. Short-term gains through operational improvements may eventually succumb to the obstacle to permanent improvement—namely, a negative labor climate. Long-term improvements require substantive improvements in labor-management relations. Since taking office in July 1992, the Postmaster General has been working to forge a labor-management partnership to change the culture in the Postal Service. 
His goal is to shift the Postal Service culture from one that is “operation driven, cost driven, authoritarian, and risk averse” to one that is “success-oriented, people oriented, and customer driven.” We previously reported that the Postmaster General developed a labor-management partnership through the National Leadership Team structure, held regular leadership meetings that included all Postal Service officers and the national presidents of the unions and management associations, and changed the management reward systems to encourage teamwork and organizational success. However, as we also previously reported, there is no overall agreement among the unions and management for change at the field operations level. They have been unable to come to terms on a clear framework or long-term strategy for ensuring that first-line supervisors and employees at processing plants and post offices buy into renewed organizational values and principles. In his November 30, 1994, statement before the Subcommittee on Federal Service, Post Office, and Civil Service, Senate Committee on Governmental Affairs, the Postmaster General testified that the Postal Service supports our September 1994 report recommendations calling for the Service, unions, and management associations to develop a long-term agreement on objectives and approaches for demonstrating improvements in the work climate of both processing and delivery operations. At the hearing, he proposed that the Leadership Team form a task force made up of leaders of the unions and management associations and key postal vice presidents. Mr. 
Runyon said the task force should have a 120-day agenda “to explore [GAO’s] recommendations, set up pilot projects, and move forward now to accelerate change in our corporate attitudes and culture.” While his labor-management summit proposal received the support of the rural carriers and the three management associations, the leaders of the three largest postal unions have not yet agreed to the summit. They said they are waiting until the current round of contract negotiations is completed before making a decision on the summit. Michael E. Motley, Associate Director James T. Campbell, Assistant Director Lawrence R. Keller, Evaluator-in-Charge Roger L. Lively, Senior Evaluator Charles F. Wicker, Senior Evaluator Lillie J. Collins, Evaluator Kenneth E. John, Senior Social Science Analyst | Pursuant to a congressional request, GAO reviewed mail delivery service in the Washington, D.C.
metropolitan area, focusing on the: (1) recent on-time delivery service problems for overnight first-class mail; (2) reasons why mail service is below the desired level; and (3) United States Postal Service's (USPS) actions to improve service in the area. GAO found that: (1) mail service in the Washington, D.C. area has declined in part due to an unexpected increase in mail volume; (2) labor-management relations in the Washington, D.C. area are among the worst in the country; (3) customer satisfaction in the area declined dramatically in 1994 because local units were unable to maintain mail service at previous levels due to employee shortages, poor labor-management relations, and the recent organizational change; (4) USPS has taken action to address unnecessary duplicate mail handling; (5) postal managers believe that metropolitan overnight delivery areas are too large for the current delivery network; and (6) USPS agreed with the conclusion that the key element in improving mail service is to improve labor-management relations. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Distance education is not a new concept, but in recent years, it has assumed markedly new forms and greater prominence. Distance education’s older form was the correspondence course—a home study course generally completed by mail. More recently, distance education has increasingly been delivered in electronic forms, such as videoconferencing and the Internet. Some of these newer forms share more features of traditional classroom instruction. For example, students taking a course by videoconference generally participate in an actual class in which they can interact directly with the instructor. Many postsecondary schools have added or expanded electronically-based programs, so that distance education is now relatively common across the entire postsecondary landscape. We estimate that in the 1999-2000 school year, about 1.5 million of the 19 million students involved in postsecondary education took at least one electronically transmitted distance education course. Education reports that an estimated 84 percent of four-year institutions will offer distance education courses in 2002. While newer forms of distance education may incorporate more elements of traditional classroom education than before, they can still differ from a traditional educational experience in many ways. For example, Internet-based distance education, in which coursework is provided through computer hookup, may substitute a computer screen for face-to-face interaction between student and instructor. Chat rooms, bulletin boards, and e-mail become common forms of interaction. Support services, such as counseling, tutoring, and library services, may also be provided without any face-to-face contact. As the largest provider of student financial aid to postsecondary students (an estimated $52 billion in fiscal year 2002), the federal government has a substantial interest in the quality of distance education.
Under Title IV of the HEA, the federal government provides grants, work-study wages, and student loans to millions of students each year. For the most part, students taking distance education courses can qualify for this aid in the same way as students taking traditional courses. Differences between distance education and traditional education pose challenges for federal student aid policies and programs. For example, in 1992, the Congress added requirements to the HEA to deal with problems of fraud and abuse at correspondence schools—the primary providers of distance education in the early 1990’s. These requirements placed limitations on the use of federal student aid at these schools due to poor quality programs and high default rates on student loans. Such problems demonstrate why it is important to monitor the outcomes of such forms of course delivery. In monitoring such courses, the federal government has mainly relied on the work of accrediting agencies established specifically for providing outside reviews of an institution’s educational programs. Our analysis of the NPSAS showed that the estimated 1.5 million postsecondary students who have taken distance education courses have different demographic characteristics when compared with the characteristics of postsecondary students who did not enroll in distance education. These differences included the following. Distance education students are older. As figure 1 demonstrates, students who took all their courses through distance education tended to be older, on average, when compared to other students. Distance education students are more likely to be married. Figure 2 shows that graduate and undergraduate students that took all of their courses through distance education are more likely to be married than those taking no distance education courses. Undergraduates taking distance education courses are more likely to be female.
Women represented about 65 percent of the undergraduate students who took all their courses through distance education. In contrast, they represented about 56 percent of undergraduates who did not take a distance education course. For graduate students, there was no significant difference in the gender of students who took distance education courses and those who did not. Distance education students are more likely to work full-time. As figure 3 shows, a higher percentage of distance education students work full-time when compared to students who did not take any distance education courses. This difference was greatest among graduate students where about 85 percent of the students that took all of their courses through distance education worked full-time compared to 51 percent of students who did not take any distance education courses. Distance education students are more likely to be part-time students. As might be expected, distance education students tend to go to school on a part-time basis. For undergraduates, about 63 percent of the students who took all their courses through distance education were part-time students while about 47 percent of the students who did not take any distance education courses were part-time students. This trend also occurred among graduate students (about 79 percent of those who took their entire program through distance education were part-time students compared with about 54 percent of those who did not take any distance education courses). Distance education students have higher average incomes. Figure 4 shows that in general, graduate students that took distance education courses tended to have higher average incomes than students that did not take any distance education courses. We found similar patterns for undergraduate students. In addition to the demographic characteristics of distance education students, NPSAS provides certain insights on the characteristics of institutions that offer distance education programs. 
Among other things, it provides data on the modes of delivery that institutions used to provide distance education and the types of institutions that offered distance education. Public institutions enrolled the most distance education students. For undergraduates, public institutions enrolled more distance education students than either private non-profit or proprietary institutions. Of undergraduates who took at least one distance education class, about 85 percent did so at a public institution (about 79 percent of all undergraduates attended public institutions), about 12 percent did so at private non-profit institutions (about 16 percent of all undergraduates attended private non-profit institutions), and about 3 percent did so at proprietary schools (about five percent of all undergraduates attended proprietary schools). For graduate students, public institutions also enrolled more—about 63.5 percent—distance education students than private non-profit or proprietary schools (32 and 4.5 percent, respectively). About 58 percent, 40 percent, and two percent of all graduate students attended public institutions, private non-profit, and proprietary schools, respectively. Institutions used the Internet more than any other mode to deliver distance education. Postsecondary institutions used the Internet more than any other mode to deliver distance education. At the three main types of institutions (public, private non-profit, and proprietary), more than half of the undergraduate students who took at least one distance education course did so over the Internet. Over 58 percent of undergraduate distance education students at public institutions used the Internet and over 70 percent of undergraduate distance education students at private non-profit and proprietary schools also used the Internet. Institutions that offered graduate programs also used the Internet as the primary means of delivering distance education courses. 
For graduate students who took at least one distance education class, 65 percent of students at public institutions used the Internet, compared with about 69 percent of students at private non-profit institutions, and about 94 percent of students at proprietary institutions. Institutions enrolled the most distance education students in subjects related to business, humanities, and education. For undergraduates, about 21 percent of students who took their entire program through distance education studied business and 13 percent studied courses related to the humanities. This is similar to patterns of students who did not take any distance education classes (about 18 percent studied business and about 15 percent studied humanities). For graduate students, about 24 percent of students who took their entire program through distance education enrolled in courses related to education and about 19 percent studied business. Again, this is similar to patterns of graduate students who did not take any distance education classes (about 23 percent studied education and about 17 percent studied business). Federal student aid is an important consideration for many students who take distance education courses, although not to the same degree as students in more traditional classroom settings. Students who took their entire program through distance education applied for student aid at a lower rate than students who did not take any distance education courses (about 40 percent compared with about 50 percent), and fewer also received federal aid (about 31 percent compared with about 39 percent). Nonetheless, even these lower percentages for distance education represent a substantial federal commitment. A number of issues related to distance education and the federal student aid program have surfaced and will likely receive attention when the Congress considers reauthorization of the HEA or when Education examines regulations related to distance education. 
Among them are the following: “Fifty percent” rule limits aid to correspondence and telecommunication students in certain circumstances. One limitation in the HEA—called the “50 percent rule”—involves students who attend institutions that provide half or more of their coursework through correspondence or telecommunications classes or who have half or more of their students enrolled in such classes. When institutions exceed the 50 percent threshold, their students become ineligible to receive funds from federal student aid programs. As distance education becomes more widespread, more institutions may lose their eligibility. Our initial work indicates about 20 out of over 6,000 Title IV-eligible institutions may face this problem soon or have already exceeded the 50 percent threshold. Without some relief, the students that attend these institutions may become ineligible for student aid from the federal government in the future. As an example, one institution we visited already offers more than half its courses through distance education; however, it remains eligible for the student aid program because it has received a waiver from Education’s Distance Education Demonstration Program. Without a change in the statute or a continuation of the waiver, more than 900 of its students will not be eligible for student aid from the federal government in the future. To deal with this issue, the House passed the Internet Equity and Education Act of 2001 (H.R. 1992) in October 2001. The House proposal allows a school to obtain a waiver for the 50 percent rule if it (1) is already participating in the federal student loan program, (2) has a default rate of less than 10 percent for each of the last three years for which data are available, and (3) has notified the Secretary of Education of its election to qualify for such an exemption, and has not been notified by the Secretary that such election would pose a significant risk to federal funds and the integrity of Title IV programs. 
The Senate is considering this proposal. Federal student aid policies treat living expenses differently for some distance education students. Currently, students living off-campus who are enrolled in traditional classes or students enrolled in telecommunications classes at least half-time can receive an annual living allowance for room and board costs of at least $1,500 and $2,500, respectively. Distance learners enrolled in correspondence classes are not allowed the same allowance. Whether to continue to treat these distance education students differently for purposes of federal student aid is an open policy question. Regulations Relating to “Seat” Time. Institutions offering distance education courses that are not tied to standard course lengths such as semesters or quarters have expressed difficulty in interpreting and applying Education’s “seat rules,” which are rules governing how much instructional time must be provided in order for participants to qualify for federal aid. In particular, a rule called the “12-hour rule” has become increasingly difficult to implement. This rule was put in place to curb abuses by schools that would stretch the length of their educational programs without providing any additional instruction time. Schools would do this to maximize the amount of federal aid their students could receive and pass back to the school in the form of tuition and fees. The rule defined each week of instruction in a program that is not a standard course length as 12 hours of instruction, examination, or preparation for examinations. Some distance education courses, particularly self-paced courses, do not necessarily fit this model. Further, the rule also produces significant disparities in the amount of federal aid that students receive for the same amount of academic credit, based simply on whether the program that they are enrolled in uses standard academic terms or not. 
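The two eligibility constraints described above, the statutory “50 percent rule” and the “12-hour rule,” reduce to simple threshold checks. A minimal sketch in Python (the function names and exact comparison semantics are illustrative assumptions, not Education’s regulatory text):

```python
def exceeds_fifty_percent_rule(distance_courses, total_courses,
                               distance_students, total_students):
    """An institution providing half or more of its coursework through
    correspondence/telecommunications classes, or enrolling half or more
    of its students in such classes, loses Title IV eligibility absent
    a waiver."""
    return (distance_courses / total_courses >= 0.5
            or distance_students / total_students >= 0.5)


def weeks_under_12_hour_rule(instructional_hours):
    """For programs without standard terms, each week of instruction is
    defined as 12 hours of instruction, examination, or preparation for
    examinations."""
    return instructional_hours // 12


# A school offering 60 of 100 courses at a distance crosses the threshold.
print(exceeds_fifty_percent_rule(60, 100, 300, 900))   # True
print(weeks_under_12_hour_rule(120))                   # 10 countable weeks
```

The disjunction in the first check mirrors the statute’s two independent triggers (share of coursework or share of enrollment), which is why a school can fail the rule even when only one measure crosses one-half.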
In August 2002, Education proposed replacing the 12-hour rule with a “one-day rule,” which would require one day of instruction per week for any course. This rule currently applies to standard term courses, and as proposed, it would cover, among other things, nonstandard term courses. Education plans to publish final regulations that would include this change on or before November 1, 2002. Some institutions that might provide nonstandard distance education courses remain concerned, however, because Education has not identified how the “one-day rule” will be interpreted or applied. In considering changes in policy that are less restrictive but that could improve access to higher education, it will be important to recognize that doing so may increase the potential for fraud if adequate management controls are not in place. While our work examining the use of distance education at Minority Serving Institutions (MSIs) is not yet completed, the preliminary data indicate that MSIs—and more specifically, minority students at MSIs—make less use of distance education than students at other schools. NPSAS includes data for a projectable number of students from Historically Black Colleges and Universities and Hispanic Serving Institutions, but it only includes one Tribal College. We plan to send a questionnaire to officials at all three MSI groups to gain a better understanding of their use of distance education technology. In the meantime, however, the available NPSAS data showed the following: Students at Historically Black Colleges and Universities tend to use distance education to a lesser extent than non-MSI students. About 6 percent of undergraduate students at Historically Black Colleges and Universities enrolled in at least one distance education course and about 1.1 percent took their entire program through distance education.
These rates are lower than students who took at least one distance education course or their entire program through distance education at non-MSIs. Hispanic students attending Hispanic Serving Institutions use distance education at a lower rate than their overall representation in these schools. About 51 percent of the undergraduates at Hispanic Serving Institutions are Hispanic, but they comprise only about 40 percent of the undergraduate students enrolled in distance education classes. This difference is statistically significant. Similarly, our analysis also shows that the greater the percentage of Hispanic students at the institution, the lower the overall rate of distance education use at that school. Since NPSAS includes data from only one Tribal College, we were unable to develop data on the extent that Tribal College students use distance education. However, our visits to several Tribal Colleges provide some preliminary insights. Our work shows that distance education may be a viable supplement to classroom education at many Tribal Colleges for a number of reasons. Potential students of many Tribal Colleges live in communities dispersed over large geographic areas—in some cases potential students might live over a hundred miles from the nearest Tribal College or satellite campus—making it difficult or impossible for some students to commute to these schools. In this case, distance education is an appealing way to deliver college courses to remote locations. Additionally, officials at one Tribal College told us that some residents of reservations may be place-bound due to tribal and familial responsibilities; distance education would be one of the few realistic postsecondary education options for this population. Also important, according to officials from some Tribal Colleges we visited, tribal residents have expressed an interest in enrolling in distance education courses. 
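The representation gap noted above for Hispanic Serving Institutions can be expressed as a simple participation ratio. A sketch in Python, using only the percentages quoted in the text (an illustration, not part of the NPSAS analysis):

```python
# Hispanic undergraduates at Hispanic Serving Institutions (from the text):
# about 51 percent of all undergraduates, but only about 40 percent of
# distance education enrollees.
overall_share = 0.51
distance_share = 0.40

# A ratio of 1.0 would indicate proportional participation.
ratio = distance_share / overall_share
print(f"participation ratio: {ratio:.2f}")
```

A ratio below 1.0 restates the finding that Hispanic students at these schools use distance education at a lower rate than their overall representation.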
The HEA focuses on accreditation—a task undertaken by outside agencies—as the main tool for ensuring quality in postsecondary programs, including those offered through distance education. The effectiveness of these accreditation reviews, as well as Education’s monitoring of the accreditation process, remains an important issue. To be eligible for federal funds, a postsecondary institution or program must be accredited by an agency recognized by Education as a reliable authority on quality. Education recognizes 58 separate accrediting agencies for this purpose, of which only 38 are recognized for Title IV student aid purposes. The 58 accrediting agencies operate either regionally or nationally, and they accredit a wide variety of institutions or programs, including public and private, non-profit two-year or four-year colleges and universities; graduate and professional programs; proprietary vocational and technical training programs; and non-degree training programs. Some accrediting agencies accredit entire institutions and some accredit specialized programs, departments, or schools that operate within an institution or as single purpose, freestanding institutions. The HEA and regulations issued by Education establish criteria under which Education will recognize an accreditation agency as a reliable authority regarding the quality of education. The HEA states that accrediting agencies must assess quality in 10 different areas, such as curriculum, student achievement, and program length. Under the HEA, an accrediting agency is required to include distance education programs when assessing quality. In doing so, an accrediting agency must consistently apply and enforce its standards with respect to distance education programs as well as other educational programs at the institution. Our analysis in this area is not as far along as it is for the other topics we are discussing today. 
We plan to review a number of accreditation efforts to determine the way in which accrediting agencies review distance education programs. We expect that our work will address the following issues: How well accrediting agencies are carrying out their responsibilities for reviewing distance education. The HEA does not contain specific language setting forth how distance learning should be reviewed. Instead, it identifies key areas that accrediting agencies should cover, including student achievement and outcomes, and it relies on accrediting agencies to develop their own standards for how they will review distance education programs. We will look at how accrediting agencies are reviewing distance education programs and the standards that are being used. How well Education is carrying out its responsibilities and whether improvements are needed in Education’s policies and procedures for overseeing accrediting agencies. Under the HEA, Education has authority to recognize those agencies it considers to be reliable authorities on the quality of education or training provided. Accrediting agencies have an incentive to seek Education’s recognition because without it, students at the institutions they accredit would not be eligible to participate in federal aid programs. We will conduct work to identify what improvements, if any, are needed in Education’s oversight of accrediting agencies. In closing, distance education has grown rapidly over the past few years and our work indicates that distance learning might present new educational opportunities for students. Congress and the Administration need to ensure that changes to the HEA and regulations do not increase the chances of fraud, waste, or abuse to the student financial aid programs. At the request of this Committee, and members of the House Committee on Education and the Workforce, we will continue our study of the issues that we have discussed today. Mr. Chairman, this concludes my testimony. 
I will be happy to respond to any questions you or other members of the Committee may have. | Increasingly, the issues of distance education and federal student aid intersect. About one in every 13 postsecondary students enrolls in at least one distance education course, and the Department of Education estimates that the number of students involved in distance education has tripled in just 4 years. As the largest provider of financial aid to postsecondary students, the federal government has a considerable interest in distance education. Overall, 1.5 million out of 19 million postsecondary students took at least one distance education course in the 1999-2000 school year. The distance education students differ from other postsecondary students in a number of respects. Compared to other students, they tend to be older and are more likely to be employed full-time while attending school part-time. They also have higher incomes and are more likely to be married. Many students enrolled in distance education courses participate in federal student aid programs. As distance education continues to grow, several major aspects of federal laws, rules, and regulations may need to be reexamined. Certain rules may need to be modified if a small, but growing, number of schools are to remain eligible for student aid. Students attending these schools may become ineligible for student aid because their distance education programs are growing and may exceed statutory and regulatory limits on the amount of distance education an institution can offer. In general, students at minority serving institutions use distance education less extensively than students at other schools. Accrediting agencies play an important role in reviewing distance education programs. They, and Education, are "gatekeepers" with respect to ensuring quality at postsecondary institutions--including those that offer distance education programs. |
This section describes NNSA’s nuclear security enterprise, lithium production, the process for qualifying lithium, DOE’s capital asset acquisition process and mission need statement development, and NNSA’s lithium production strategy. NNSA is responsible for the management of the nation’s nuclear weapons, nuclear nonproliferation, and naval reactor programs. NNSA relies on contractors to carry out these responsibilities and manage day- to-day operations at each of its eight sites. These sites include laboratories, production plants, and a test site. Together, these sites implement NNSA’s Stockpile Stewardship program that, among other things, includes operations associated with maintenance, refurbishment, and dismantlement of the nuclear weapons stockpile. As discussed previously, lithium is a key component of nuclear weapons and is therefore essential for the refurbishment of the nuclear weapons stockpile. The following NNSA sites are involved in processes or decisions that impact the supply of lithium: The NNSA Production Office is responsible for overseeing contractor performance at the Pantex Plant and Y-12 National Security Complex, including the majority of the physical work on weapon refurbishment. The Pantex Plant located near Amarillo, Texas, dismantles retired nuclear weapons. The Y-12 Nuclear Security Complex disassembles canned subassemblies (CSA) from dismantled weapons; these CSAs contain lithium components that are the source material for lithium production for refurbished weapons. NNSA’s Y-12 site is also responsible for lithium production, which involves recovering lithium-6 from disassembled weapons, cleaning it, and preparing the cleaned lithium into forms suitable for refurbished weapons. NNSA’s Los Alamos and Lawrence Livermore National Laboratories qualify, or approve, the lithium produced at Y-12 to ensure that it is suitable for use in refurbished weapons. 
NNSA program offices are responsible for overseeing and supporting the activities performed by its contractors. NNSA’s Office of Stockpile Management, within its Office of Defense Programs, oversees the maintenance, refurbishment, and dismantlement of nuclear weapons, including overseeing Y-12’s plans for meeting lithium demand. The lithium production process at NNSA’s Y-12 National Security Complex involves multiple steps and requires specialized equipment and a controlled environment, according to NNSA’s lithium production strategy. The lithium production process can be broken down into three stages: (1) lithium recovery from disassembled weapons, (2) lithium purification or cleaning, and (3) lithium forming and machining (see fig. 1).

Recovery (stage 1). The recovery of lithium source material from disassembled weapons is performed at Y-12 in building 9204-2E. Y-12 recovers lithium hydride and deuteride from CSAs it receives from the Pantex Plant.

Historic purification process (stage 2). The historic purification process relied on wet chemistry, conducted at Y-12 in building 9204-2. Using wet chemistry, Y-12 purified the lithium hydride and deuteride (source material) recovered from dismantled weapons using hydrochloric acid. The resulting purified lithium chloride salt was then stored in 55-gallon drums at Y-12 until it was needed for use. The lithium chloride was subjected to electrolysis to produce lithium metal, which was then placed in a reactor vessel with either hydrogen or deuterium gas for conversion to lithium hydride or deuteride. The bulk lithium hydride or deuteride resulting from this process was then ready for use as feedstock for the lithium forming and machining phase.

Current cleaning process (stage 2). The current cleaning process relies on DMM, which entails sanding and wiping the lithium hydride and deuteride (source material) removed directly from the disassembled weapons to remove impurities. This cleaned material becomes bulk material feedstock for the lithium forming and machining phase. The cleaning process is conducted in building 9202; the cleaned components are packaged and moved to building 9204-2 for forming and machining.

Lithium forming and machining (stage 3). Lithium forming and machining are conducted in building 9204-2 and involve preparing the purified or cleaned lithium feedstock for use in refurbished weapons. During this stage, the lithium hydride or deuteride (feedstock) is broken into pieces and fed into a crusher/grinder to pulverize it into a powder, which is then blended and loaded into molds for pressing. The resulting blanks are machined into high-precision components. Historically, the machine dust resulting from this process was purified using wet chemistry and reused. Now, Y-12 stores this dust for future use but cannot recycle it without wet chemistry capabilities.

Regardless of whether lithium undergoes DMM or wet chemistry, the resulting end product (i.e., lithium components suitable for refurbished weapons) must be qualified through a process approved by the design laboratories (Los Alamos and Lawrence Livermore National Laboratories). Qualification entails testing for chemical and mechanical homogeneity, density, and tensile properties, among other things. Although only the end product (the lithium component) must be qualified, Y-12 prepares for qualification by evaluating the lithium material throughout the production process. Y-12 may evaluate the source material (i.e., lithium components from retired weapons), the processes used to produce lithium (i.e., cleaning, machining), and the feedstock for the forming and machining (i.e., the purified or cleaned lithium). Wet chemistry produced a homogeneous feedstock that only had to be evaluated once for use in a given weapon system in production, regardless of the source material.
DMM feedstock, however, is not necessarily homogeneous, and the source material, which may contain impurities, must be evaluated separately for each weapon system in production. DOE Order 413.3B governs NNSA’s capital asset acquisition activities, including the Critical Decision (CD) process. The CD process breaks down capital acquisition into project phases that progress from a broad statement of mission need into well-defined requirements. Each critical decision point requires successful completion of the preceding phase and marks an authorization by DOE to increase the commitment of resources. Under Order 413.3B, the first two CDs—CD-0 (Approve Mission Need) and CD-1 (Approve Alternative Selection and Cost Range)—span the analysis of alternatives process, with the majority of the analysis conducted during CD-1 and ending with CD-1 approval. CD-0 corresponds to the preconceptual design process. DOE’s capital asset acquisition process, or critical decision process, is depicted in figure 2. DOE Order 413.3B and DOE’s Mission Need Statement Guide (G 413.3-17) provide direction and guidance for preparing a mission need statement. A mission need statement identifies the capability gap between the current state of a program’s mission and the mission plan. It is the first step in the identification and execution of a DOE capital asset project. DOE’s Mission Need Statement Guide includes nonmandatory approaches for meeting requirements and is not intended to be a requirements document. The purpose of the guide is to provide suggested content, definitions, and examples for creating a mission need statement that fulfills DOE Order 413.3B. Suggested content, according to the guide, includes, among other things, (1) a description of the capability gap, (2) alternatives, or approaches, for addressing the mission need, and (3) estimated cost and schedule ranges to acquire various alternatives.
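As a purely illustrative sketch (the class and names below are hypothetical, not DOE's), the gated progression described above, in which each critical decision requires successful completion of the preceding phase before DOE authorizes further resources, can be modeled as an ordered sequence of approvals:

```python
# Illustrative model of DOE Order 413.3B's gated Critical Decision (CD)
# sequence as described in the text. Only the first two CDs named in the
# text are listed; the class structure is a hypothetical sketch.

CD_PHASES = [
    ("CD-0", "Approve Mission Need"),  # preconceptual design
    ("CD-1", "Approve Alternative Selection and Cost Range"),
]

class CapitalAssetProject:
    def __init__(self, name):
        self.name = name
        self.completed = []  # CDs approved so far, in order

    def approve(self, cd):
        """Approve the next CD; each gate requires the preceding phase."""
        expected = CD_PHASES[len(self.completed)][0]
        if cd != expected:
            raise ValueError(f"cannot approve {cd}: next gate is {expected}")
        self.completed.append(cd)

project = CapitalAssetProject("Lithium Production Capability")
project.approve("CD-0")      # mission need approved first
try:
    project.approve("CD-0")  # approving out of order is rejected
except ValueError as e:
    print(e)                 # prints: cannot approve CD-0: next gate is CD-1
```

The point of the gate check is simply that commitment of resources increases only in sequence; skipping or repeating a decision point is not permitted under the order.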
NNSA’s lithium production strategy involves developing new lithium production capabilities in the long term and using existing capabilities until these long-term capabilities are available. As discussed previously, the lithium production strategy calls for the design and construction of a new lithium production facility that would provide lithium production capabilities beyond 2025. To that end, NNSA began the process of identifying a mission need for lithium capabilities in June 2014—the first step in the identification and execution of a DOE project—and finalized its mission need statement in January 2015. NNSA’s lithium production strategy for meeting lithium demand through 2025 includes five key elements: (1) increasing DMM cleaning capabilities and qualifying additional weapon systems to serve as lithium source material; (2) converting its inventory of lithium chloride into a usable form; (3) procuring available enriched lithium from an outside source; (4) implementing new technologies for, among other things, purifying machine dust; and (5) sustaining the existing facility through investments in infrastructure and operations to support lithium operations until a new facility is available. The strategy also discusses challenges associated with implementing the strategy and actions that may mitigate these challenges. NNSA has identified various challenges in its lithium production strategy that may impact its ability to meet demand for lithium through and beyond 2025. NNSA has also identified actions that may mitigate these challenges. The challenges pertain to three key areas: (1) insufficient supply of qualified lithium material, (2) catastrophic failure of buildings or equipment, and (3) potential delays in the availability of the proposed new lithium production facility (Lithium Production Capability facility). 
NNSA has identified challenges associated with its strategy for ensuring that it has a sufficient supply of lithium material for defense program requirements through and beyond 2025. NNSA’s supply of currently qualified lithium—lithium approved for use in weapon systems in refurbishment—will run out by 2020, according to the lithium production strategy. In April 2015, NNSA officials told us that, due to additional recent increases in demand, with no additional action to increase supply, this date has moved to 2018. According to Y-12 officials, about 50 percent of lithium is lost as machine dust in the machining process. Y-12 currently stores this dust for future use but cannot recycle it without certain wet chemistry capabilities. As a result, reliance on DMM alone will require approximately twice as much source lithium from dismantled weapons as when wet chemistry was in operation. According to NNSA’s lithium production strategy, however, increasing the supply of qualified lithium material may be a challenge for the following three reasons. First, dismantlement and disassembly schedule changes could delay or reduce the availability of lithium source material. Because NNSA’s weapons dismantlement and disassembly decisions drive the availability of source material for DMM, changes to the schedules could affect the available supply of lithium. According to Y-12 documents, NNSA’s decisions to hold certain weapons components for eventual, but not immediate, reuse and to hold some in its strategic reserve have decreased the amount of lithium material available. We previously found that NNSA’s retention of certain weapons components and uncertain policy decisions regarding when some will be released for disassembly pose challenges to Y-12’s ability to plan for future work. Y-12 officials told us that they estimate future supply and base their planning on NNSA’s dismantlement and disassembly schedule.
However, uncertainty in the dismantlement and disassembly schedule may make it difficult to determine whether a sufficient supply of lithium is available for production. Second, it may be more difficult to qualify lithium source material under Y-12’s current cleaning process (DMM)—which may reduce the supply of source material available. Because source material undergoing DMM is purified only through a surface cleaning (i.e., manual sanding and wiping), according to the lithium production strategy, ensuring that the end product can be chemically certified—part of the qualification—requires that the source material be selected from a supply of recycled lithium components known to possess sufficient chemical purity to meet specifications. In other words, not all potential sources of lithium will be of sufficient purity or quality, which may further reduce the available supply. Third, it is more time-consuming to qualify lithium under Y-12’s current cleaning process (DMM). According to Y-12 officials, qualifying lithium produced through DMM is more rigorous and time-consuming because the lithium source material recovered from each dismantled weapon system must be qualified separately. In addition, the feedstock—cleaned lithium ready for machining—must also be qualified. In contrast, when source material is purified using wet chemistry, the resulting feedstock is homogeneous, and therefore the source material and feedstock have to be qualified only once for use in a given weapon system. According to NNSA officials, with no additional action to increase supply, Y-12 may run out of qualified lithium by 2018. According to the lithium production strategy, Y-12 has plans and schedules in place to qualify, by the end of fiscal year 2017, additional weapon systems as sources for material. This would extend the supply of qualified DMM source material into the early 2020s.
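The arithmetic behind the "approximately twice as much source lithium" figure cited earlier in this section can be made explicit. This is a back-of-the-envelope sketch assuming the roughly 50 percent machine-dust loss reported by Y-12 officials; the function and its parameters are illustrative, not actual production data:

```python
# Back-of-the-envelope yield arithmetic for the machine-dust loss
# described in the text. The 50 percent dust figure is attributed to
# Y-12 officials; everything else here is an illustrative assumption.

def source_needed(components_required, dust_loss=0.50, recycling=False):
    """Units of source lithium needed per unit of finished components.

    With wet chemistry, machine dust is purified and reused, so the
    loss is recovered; without it (DMM alone), dust is stored but
    cannot be recycled back into feedstock.
    """
    if recycling:  # wet chemistry in operation: dust is recovered
        return components_required
    return components_required / (1.0 - dust_loss)

# Source material needed for one unit of finished components:
print(source_needed(1.0, recycling=True))   # wet chemistry: 1.0
print(source_needed(1.0, recycling=False))  # DMM alone: 2.0, i.e., twice
```

With a 50 percent loss and no recycling, each unit of finished components consumes two units of source material, which is why DMM alone roughly doubles the draw on dismantled weapons.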
Y-12 officials said that they are working with the design laboratories to streamline the qualification process—for example, to qualify multiple weapon systems as sources of DMM feedstock to multiple weapon systems in refurbishment. NNSA has identified the catastrophic failure of buildings or equipment as a challenge that could impact its ability to meet lithium demand until a new facility is available. For example, building 9204-2 is a key facility for lithium production. However, according to the lithium production strategy, the building, together with much of the equipment inside, has deteriorated and is beyond its expected life span. Specifically, the building has experienced both internal and external deterioration of concrete in the roofs, walls, and ceilings from exposure to corrosive liquids and processing fumes (see fig. 3). In March 2014, for example, a 300-pound slab of concrete fell from the ceiling into an active work area—an area that has since been roped off and is no longer in use (see fig. 4). Moreover, according to the lithium production strategy, the building was not built in accordance with current codes and standards, is costly to operate, and has multiple vulnerabilities that could threaten the entire production process. Y-12’s operations health risk assessments rate the equipment for two parts of the lithium production process conducted in 9204-2 as among the highest health risks at Y-12, according to the mission need statement for lithium production. Although certain parts of the DMM process are conducted in a different building (building 9202), moving material between buildings is inefficient and may not be sustainable if the use of DMM is to increase, according to the lithium production strategy. Specifically, DMM components are cleaned—manually sanded—in a closed container in building 9202. The cleaned components are packaged in sealed bags, placed in drums, and moved to building 9204-2 for crushing and grinding. 
As future demand increases and Y-12 meets this demand through increased use of DMM, according to the lithium production strategy, this process will strain the capacity of building 9202, and DMM cleaning capabilities will have to be installed in building 9204-2. NNSA has also identified potential delays in the availability of the proposed Lithium Production Capability facility as a challenge. According to the lithium production strategy, because building 9204-2 has been deteriorating rapidly in recent years and cannot be reasonably upgraded to ensure an enduring source of lithium components for the stockpile beyond 2025, the strategy calls for the design and construction of a new lithium production facility that would provide lithium production capabilities beyond 2025. Key elements of the strategy—such as qualifying additional weapon systems for use as source material for DMM in order to meet demand for lithium—are based on the assumption that the Lithium Production Capability facility will be designed and constructed from 2016 to 2023 and ready for use by 2025. However, the lithium production strategy notes that fiscal constraints could affect the availability of this facility in 2025. We have previously found that NNSA construction projects often experience schedule delays. To address the challenges it has identified, NNSA has identified several mitigating actions, which are presented in its lithium production strategy. Many of the same five elements discussed previously that make up the strategy for meeting demand for lithium through 2025 are also cited as mitigating actions intended to address challenges. Specifically, the lithium production strategy cites varying combinations of the following mitigating actions:
- Accelerate the design and construction of the Lithium Production Capability facility.
- Procure lithium from outside sources.
- Pursue outsourcing of lithium materials production.
- Convert the existing inventory of lithium chloride to a usable form of lithium.
- Identify and qualify additional weapon systems for use as lithium source material for DMM.
- Utilize leased or third-party-financed facilities for lithium production activities.
- Develop and deploy new purification and material production technologies and techniques, including machine dust recycling.
- Negotiate a dismantlement schedule that aligns the selected units for dismantlement, and the dismantlement schedule, with mission needs.
- Maintain spares and develop required specifications for backup of key process equipment.
- Maintain technical and operational skills and knowledge by establishing a prototype wet chemistry operation.

The mitigating actions identified in the lithium production strategy are in early stages of development and may bring additional challenges. For example, the strategy offers as a mitigating action the conversion of Y-12’s existing inventory of lithium chloride to lithium metal. However, NNSA cannot convert this material to lithium metal without restarting certain steps in the wet chemistry process or outsourcing the conversion of lithium chloride to lithium metal to an external vendor. With either option, as stated in the lithium production strategy, after the stored lithium chloride is converted to lithium metal, Y-12 plans to convert the lithium metal to lithium hydride on-site. According to the strategy, this would require a significant investment in the existing facility (building 9204-2) to address deferred maintenance and refurbish key equipment. NNSA did not develop a mission need statement for lithium production that is fully independent of a particular solution, contrary to the direction of DOE Order 413.3B. In January 2015, NNSA program officials submitted a statement of mission need, or CD-0, for lithium production for approval to the Deputy Administrator for Defense Programs, NNSA. This statement was approved on June 10, 2015.
As part of the preconceptual design (CD-0) approval process, the mission need—which DOE defines in Order 413.3B as a credible gap between current capabilities and those required to meet the goals articulated in the strategic plan—and functional requirements—the general parameters that the selected alternative must have to address the mission need—must be identified. The order directs that the mission need should be independent of a particular solution. According to the order and related guidance, this approach allows a program office the flexibility to explore a variety of solutions. NNSA’s mission need statement for lithium production, however, expresses the gap in terms of a particular solution—specifically, a new facility. The Lithium Production Capability mission need statement is a 24-page document that includes, among other things, a description of the capability gap, alternatives for addressing its mission need, and a section for estimated cost and schedule ranges. Specifically, the document describes the capability gap that exists due to the deteriorating condition of building 9204-2 and states that the mission need for lithium production is aligned with NNSA’s strategic plans—citing passages from NNSA’s strategic plan. For example, the document describes the primary capability gap as the loss of Y-12’s wet chemistry process due to the degraded condition of building 9204-2. 
The mission need statement details this gap in terms of functional and operational gaps, including (1) the continued physical deterioration of the building where lithium operations are being conducted and the resulting shortage of components; (2) the continuous deterioration of mechanical and electrical systems in the existing facility (building 9204-2), with increasingly unsustainable energy costs and greenhouse gas emissions, which will affect controlled work environments, ongoing operations, and delivery of mission work; (3) the inability to introduce new technologies into the facility due to its degraded condition; and (4) the facility’s noncompliance with current codes. NNSA’s mission need statement also characterizes the capability gap in terms of demand for lithium but devotes most of the document to describing the current condition of its existing lithium production facility. According to the mission need statement, specific lithium requirements are contained in the Fiscal Year 2015 Production and Planning Directive and the classified annexes of the Stockpile Stewardship and Management Plans. This is the only characterization in the mission need statement of the capability gap in terms of demand for lithium. The remaining discussion describes the capability gap in terms of the degraded condition of building 9204-2. Order 413.3B and related guidance do not state that the capability gap should be defined in terms of program requirements. NNSA’s mission need statement lists seven alternatives for addressing its mission need:
- do nothing;
- outsource the lithium processing capability;
- refurbish/repurpose one or more of the existing Y-12 facilities;
- lease suitable off-site facilities;
- secure third-party financing to build one or more new facilities;
- consider new modular facilities to transfer missions from an existing facility or facilities that are beyond repair; and
- build a complete and functioning facility at Y-12.
According to DOE Order 413.3B, the mission need should be independent of a particular solution and should not be defined by the equipment, facility, technological solution, or physical end-item. In addition, the DOE order states that the mission need should be described in terms of the general parameters of the solution, how it fits within the mission of the program, and why it is critical to the overall accomplishment of the department’s mission, including the benefits to be realized. However, some of the language used and information included in NNSA’s mission need statement suggests that NNSA may have given preference to a single alternative—building the Lithium Production Capability facility at Y-12—before identifying a mission need and conducting an analysis of alternatives. For example, the section describing the benefits of closing the capability gap includes phrases such as “an alternative facility that is code compliant” and “replacing the existing facilities with an alternative facility will significantly improve NNSA’s capability and efficiency in performing its Stockpile Stewardship and other national security missions at Y-12.” In addition, NNSA included in its mission need statement rough-order-of-magnitude estimates of the project cost and schedule ranges for only one alternative—build and equip a functioning facility at Y-12. According to its mission need statement, NNSA estimates that construction of the new facility will cost $302 million to $646 million (with $431 million “likely”) and includes a schedule range estimate for project completion between fiscal year 2024 and fiscal year 2026. Providing such estimates for only one alternative is contrary to DOE guidance, which states that a mission need statement should provide a rough-order-of-magnitude estimate of the project cost and schedule ranges to acquire various capability alternatives that address the stated mission need.
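To make the missing estimates concrete, the seven alternatives and the single set of rough-order-of-magnitude figures cited in the mission need statement can be tabulated as follows. The data structure and field names are illustrative; only the dollar and fiscal year figures come from the document:

```python
# Illustrative tabulation of the seven alternatives from NNSA's mission
# need statement. Only the "build at Y-12" alternative carries the
# rough-order-of-magnitude estimates cited in the text (cost in millions
# of dollars as low/likely/high; completion as a fiscal year range). The
# None entries reflect the six alternatives with no estimates at all.

alternatives = {
    "do nothing": None,
    "outsource lithium processing": None,
    "refurbish/repurpose existing Y-12 facilities": None,
    "lease suitable off-site facilities": None,
    "secure third-party financing for new facilities": None,
    "new modular facilities": None,
    "build a functioning facility at Y-12": {
        "cost_millions": (302, 431, 646),   # low / "likely" / high
        "completion_fy": (2024, 2026),
    },
}

estimated = [name for name, est in alternatives.items() if est is not None]
print(f"{len(estimated)} of {len(alternatives)} alternatives have estimates")
# prints: 1 of 7 alternatives have estimates
```

Seen this way, the asymmetry GAO describes is stark: six of the seven rows are empty, which is the basis for the concern that the statement leans toward a single predetermined solution.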
NNSA officials said that they did not include cost and schedule estimates for other alternatives because there is no DOE requirement to do so. These officials acknowledged that DOE guidance states that a mission need statement is to provide cost estimates for various alternatives, but noted that this provision is not a requirement. NNSA officials noted that they plan to analyze other alternatives for meeting the mission as part of CD-1. However, because NNSA’s mission need statement did not include rough-order-of-magnitude estimates of the project cost and schedule ranges for other alternatives, it appears to be biased toward a particular solution and may introduce bias into the rest of the analysis of alternatives process. This, in turn, could undermine the purpose of the CD process: to help ensure that NNSA chooses the best alternative that satisfies the mission need on the basis of selection criteria, such as safety, cost, or schedule. Giving preference to a particular solution may exclude serious consideration of other potential viable alternatives. In our December 2014 report on the analysis of alternatives process applied by NNSA, we found that conducting such an analysis without a predetermined solution is a best practice. In that report, DOE and NNSA officials acknowledged that unreliable analysis of alternatives is a risk factor for major cost increases and schedule delays for NNSA projects. We recommended that DOE incorporate best practices into its analysis of alternatives requirements to minimize the risk of developing unreliable analyses of alternatives and incurring major cost increases and schedule delays on projects. DOE agreed with our recommendation, but we noted in the report that DOE’s unspecified, open-ended date for responding to this recommendation may have indicated a lack of urgency or concern about the need to implement these recommendations. 
We are encouraged that NNSA officials plan to analyze alternatives for meeting the mission need for lithium production requirements as they proceed with the conceptual design phase of their capital asset acquisition process. However, by completing its preconceptual design (CD-0) phase with a mission need statement that is not fully independent of a particular solution, NNSA is not following DOE’s project management order and may limit objective consideration of the other six alternatives identified for meeting mission requirements. Having prepared cost and schedule estimate ranges for only one of the seven alternatives—thus demonstrating preference for that alternative—may affect the rest of NNSA’s analysis of alternatives process. This preference could potentially undermine NNSA’s ability to choose the best alternative that satisfies the mission need. To improve NNSA’s ability to choose the best alternative that satisfies the mission need for lithium production, we recommend that the Secretary of Energy request that NNSA’s Deputy Administrator for Defense Programs take steps to ensure that NNSA objectively consider all alternatives, without preference for a particular solution, as it proceeds with the analysis of alternatives process. Such steps could include clarifying the statement of mission need for lithium production so that it is independent of a particular solution. We provided a draft of this product to NNSA for comment. NNSA provided written comments, which are reproduced in full in appendix II, as well as technical comments, which we incorporated in our report as appropriate. In its comments, NNSA neither agreed nor disagreed with our recommendation. However, it stated that our conclusion that the agency has pre-selected an alternative for the Lithium Production Capability is not correct. 
It further stated that NNSA will conduct an Analysis of Alternatives, beginning in July 2015, and that it fully intends to evaluate multiple options, such as the use of an existing facility, the use of a new facility, or outsourcing. We maintain that our conclusion is well supported. We did not conclude that NNSA would not conduct an analysis of alternatives, but that its mission need statement for lithium production was not fully independent of a particular solution, and that demonstrating preference for one alternative—a replacement facility for lithium production—may affect the rest of NNSA’s analysis of alternatives process and could potentially undermine NNSA’s ability to choose the best alternative that satisfies the mission need. Such a focus may introduce a bias into the analysis of alternatives process. We stand by our recommendation that NNSA objectively consider all alternatives, without preference for a particular solution, as it proceeds with the analysis of alternatives process. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To describe the challenges the National Nuclear Security Administration (NNSA) has identified with its lithium production strategy, we reviewed NNSA and Y-12 National Security Complex documents related to lithium production and lithium requirements.
These documents included the Lithium Production Capability (LPC) CD-0 package—comprising the LPC Mission Need Statement, Y-12 National Security Complex, and the LPC Program Requirements Document—as well as the Lithium Materials Production Transition Implementation Plan; the Y-12 Materials Production Strategy; and the Building 9204-2 Ops Plan for Sustainment Activities. We also conducted a site visit to Y-12 and interviewed NNSA and Y-12 officials, as well as officials from the weapons design laboratories—Los Alamos and Lawrence Livermore National Laboratories. We coordinated with the Department of Energy’s (DOE) Office of the Inspector General (DOE-IG), which was conducting a related audit, to scope our work. Specifically, DOE-IG conducted an in-depth analysis of Y-12’s forecasting of lithium supply and demand; coordination among NNSA program offices responsible for funding and implementation of lithium matters; facility conditions and maintenance and their impact on lithium production; and Y-12’s lithium production strategy. To determine the extent to which NNSA developed a mission need statement for lithium production that is independent of a particular solution, in accordance with DOE direction and guidance, we identified the requirements and guidance by reviewing DOE Order 413.3B (“Program and Project Management for the Acquisition of Capital Assets”) and DOE G 413.3-17 (“Mission Need Statement Guide”). We also reviewed our previous report, DOE and NNSA Project Management: Analysis of Alternatives Could Be Improved by Incorporating Best Practices, to better understand the analysis of alternatives process. We then reviewed the Lithium Production Capability (LPC) CD-0 package; the Lithium Materials Production Transition Implementation Plan; the Y-12 Materials Production Strategy; and the Building 9204-2 Ops Plan for Sustainment Activities, and compared these documents with the direction and guidance.
We also conducted a site visit to Y-12 and interviewed NNSA and Y-12 officials regarding the mission need statement and overall strategy. We conducted this performance audit from October 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. David C. Trimble, (202) 512-3841 ([email protected]) In addition to the individual named above, Diane LoFaro, Assistant Director; Alisa Beyninson; Kevin Bray; R. Scott Fletcher; Cynthia Norris; Steven Putansu; Dan Royer; and Kiki Theodoropoulos made key contributions to this report.

An isotope of lithium is a key component of nuclear weapons and is essential for their refurbishment. NNSA halted certain aspects of its lithium production operation—conducted at its Y-12 site—in May 2013 due to the condition of the site’s 72-year-old lithium production facility. Y-12 management concluded that usable lithium could run out without additional actions. In response, NNSA developed a strategy that proposed a new lithium production facility by 2025 and identified “bridging” actions needed to meet demand through 2025. In January 2015, NNSA submitted for approval a mission need statement for lithium production capabilities. Senate Report 113-176 included a provision for GAO to review lithium production at NNSA’s Y-12 site. This report (1) describes the challenges NNSA has identified with its lithium production strategy, and (2) determines the extent to which NNSA developed a mission need statement that is independent of a particular solution, as called for in DOE’s directive on project management.
To do this work, GAO reviewed relevant agency directives, guidance, and other documents and interviewed agency officials. The National Nuclear Security Administration (NNSA) has identified various challenges in its lithium production strategy that may impact its ability to meet demand for lithium in the future, as well as actions that may mitigate these challenges. These challenges pertain to three key areas. First, NNSA may not have a sufficient supply of lithium material for defense program requirements. NNSA officials told GAO in April 2015 that, due to additional recent increases in demand, its supply of currently qualified lithium—lithium approved for use in weapon systems in refurbishment—will run out by 2018 without additional actions. Second, at NNSA's Y-12 National Security Complex in Oak Ridge, Tennessee, where lithium production operations are conducted, the existing lithium production facility and equipment are at risk of catastrophic failure. In March 2014, for example, a 300-pound slab of concrete fell from the ceiling into an active work area (this area is no longer in use). Third, fiscal constraints could cause delays in the construction of a new lithium production facility. NNSA, in its lithium production strategy, also identifies various actions that it could take to mitigate these challenges—including procuring lithium from outside sources and outsourcing certain aspects of the lithium production process. However, the mitigating actions are in early stages of development and may bring additional challenges. In developing and implementing its lithium production strategy, NNSA did not develop a mission need statement that is fully independent of a particular solution, contrary to the agency directive on Program and Project Management for the Acquisition of Capital Assets, which governs the design and construction of new facilities (DOE Order 413.3B). 
According to this directive, the mission need statement should be independent of a particular solution, and it should not be defined by the equipment, facility, technological solution, or physical end-item. This allows the program office responsible for the capital asset project to explore a variety of alternatives. In January 2015, NNSA program officials submitted a mission need statement for lithium production for approval to the Deputy Administrator for Defense Programs, NNSA. It was approved on June 10, 2015. The mission need statement included, among other things, a description of the capability gap, alternatives for addressing its mission need—such as building a new facility, leasing off-site facilities, or outsourcing lithium processing—and estimated cost and schedule ranges. However, the document expresses the capability gap in terms of a particular solution—specifically, a new facility. For example, it includes multiple references to an alternative facility to replace the existing facility, suggesting that NNSA gave preference to building a new facility. In addition, it did not include cost and schedule estimates for six of the seven alternatives presented in the mission need document. The mission need statement includes cost and schedule estimates only for the alternative of building a functioning facility at Y-12. NNSA officials told GAO that they plan to analyze other alternatives for meeting the mission need for lithium production. However, by seemingly giving preference to a particular solution in its mission need document, NNSA is not following DOE's project management order, which may preclude serious consideration of other potential viable alternatives. A mission need statement biased toward a particular solution may introduce bias into the rest of the analysis of alternatives process. 
GAO recommends that NNSA objectively consider all alternatives, without preference for a particular solution, as it proceeds with its analysis of alternatives process. NNSA neither agreed nor disagreed with GAO's recommendation; however, it disagreed with the conclusions. GAO continues to believe its conclusions are fair and well supported. |
Administered by SBA’s Office of Disaster Assistance (ODA), the Disaster Loan Program is the primary federal program for funding long-range recovery for nonfarm businesses that are victims of disasters. It is also the only form of SBA assistance not limited to small businesses. Small Business Development Centers (SBDC) are SBA’s resource partners that provide disaster assistance to businesses. SBA officials said that SBDCs help SBA by doing the following: conducting local outreach to disaster victims, assisting declined business loan applicants or applicants who have withdrawn their loan applications with applications for reconsideration or re-acceptance, assisting declined applicants in remedying issues that initially precluded loan approvals, and providing business loan applicants with technical assistance, including helping businesses reconstruct business records, helping applicants better understand what is required to complete a loan application, compiling financial statements, and collecting required documents. SBA can make available several types of disaster loans, including two types of direct loans: physical disaster loans and economic injury disaster loans. Physical disaster loans are for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. They are available to homeowners, renters, businesses of all sizes, and nonprofit organizations. Economic injury disaster loans provide small businesses that are not able to obtain credit elsewhere with necessary working capital until normal operations resume after a disaster declaration. Businesses of all sizes may apply for physical disaster loans, but only small businesses are eligible for economic injury loans. SBA has divided the disaster loan process into three steps: application, verification and loan processing, and closing. 
Applicants for physical disaster loans have 60 days from the date of the disaster declaration to apply for the loan, and applicants for economic injury disaster loans have 9 months. Disaster victims may apply for a disaster business loan through the disaster loan assistance web portal or by paper submission. The information from online and paper applications is fed into SBA's Disaster Credit Management System, which SBA uses to process loan applications and make determinations for its disaster loan program. SBA has implemented most of the requirements of the 2008 Act, which comprises 26 provisions with substantive requirements for SBA, including requirements for disaster planning and simulations, reporting, and plan updates (see app. I for a summary of the provisions). For example, SBA made several changes to programs, policies, and procedures to enhance its capabilities to prepare for major disasters. Section 12063 states that SBA should improve public awareness of disaster declarations and application periods, and create a marketing and outreach plan. In 2012, SBA completed a marketing and outreach plan that included strategies for identifying regional stakeholders (including SBDCs, local emergency management agencies, and other local groups such as business and civic organizations) and identifying regional disaster risks. SBA's plan stated that it would (1) develop webinars for specific regional risks and promote these before the traditional start of the season for certain types of disasters such as hurricanes; and (2) establish a recurring schedule for outreach with stakeholders when no disaster is occurring. Furthermore, the most recent Disaster Preparedness and Recovery Plan from 2016 outlines specific responsibilities for conducting region-specific marketing and outreach through SBA resource partners and others before a disaster as well as plans for scaling communications based on the severity of the disaster. 
(See below for more information about SBA’s Disaster Preparedness and Recovery Plan.) Section 12073 states that SBA must assign an individual with significant knowledge of, and substantial experience in, disaster readiness and planning, emergency response, maintaining a disaster response plan, and coordinating training exercises. In June 2008, SBA appointed an official to head the agency’s newly created Executive Office of Disaster Strategic Planning and Operations. SBA officials recently told us that the planning office, now named the Office of Disaster Planning and Risk Management, is under the office of the Chief Operating Officer. Although the organizational structure changed, the role of the director remains the same: to coordinate the efforts of other offices within SBA to execute disaster recovery as directed by the Administrator. Among the director’s responsibilities are to create, maintain, and implement the comprehensive disaster preparedness and recovery plan, and coordinate and direct SBA training exercises relating to disasters, including simulations and exercises coordinated with other government departments and agencies. Section 12075 states that SBA must develop, implement, or maintain a comprehensive written disaster response plan and update the plan annually. SBA issued a disaster response plan in November 2009 and the agency has continued to develop, implement, and revise the written disaster plan every year since then. The plan, now titled the Disaster Preparedness and Recovery Plan, outlines issues such as disaster responsibilities of SBA offices, SBA’s disaster staffing strategy, and plans to scale disaster loan-making operations. The plan is made available to all SBA staff as well as to the public through SBA’s website. SBA has taken actions to fully address other provisions, such as those relating to augmenting infrastructure, information technology and staff as well as improving disaster lending. 
For example, to improve its infrastructure, information technology, and staff, SBA put in place a secondary facility in Sacramento, California, to process loans during times when the main facility in Fort Worth, Texas, is unavailable. SBA also improved its Disaster Credit Management System, which the agency uses to process loan applications and make determinations for its disaster loan program, by increasing the number of concurrent users that can access it. Furthermore, SBA increased access to funds by making nonprofits eligible for economic injury disaster loans. SBA has not piloted or implemented three guaranteed disaster loan programs. The 2008 Act included three provisions requiring SBA to issue regulations to establish new guaranteed disaster programs using private-sector lenders: the Expedited Disaster Assistance Loan Program (EDALP) would provide small businesses with expedited access to short-term guaranteed loans of up to $150,000; the Immediate Disaster Assistance Program (IDAP) would provide small businesses with guaranteed bridge loans of up to $25,000 from private-sector lenders, with an SBA decision within 36 hours of a lender's application on behalf of a borrower; and the Private Disaster Assistance Program (PDAP) would make guaranteed loans available to homeowners and small businesses in an amount up to $2 million. In 2009, we reported that SBA was planning to implement requirements of the 2008 Act, including pilot programs for IDAP and EDALP. SBA requested funding for the two programs in the President's budget for fiscal year 2010 and received subsidy and administrative cost funding of $3 million in the 2010 appropriation, which would have allowed the agency to pilot about 600 loans under IDAP. SBA officials also told us that they performed initial outreach to lenders to obtain reactions to and interest in the programs. They believed such outreach would help SBA identify and address issues and determine the viability of the programs. 
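The pilot figures quoted above imply some simple per-loan arithmetic. The sketch below derives those implications from the numbers in the text; the per-loan funding and maximum guaranteed volume are inferences, not amounts SBA reported.

```python
# Arithmetic implied by the IDAP pilot figures quoted above.
# The derived values are inferences from the text, not SBA-reported numbers.

appropriation = 3_000_000  # fiscal year 2010 subsidy and administrative cost funding
pilot_loans = 600          # approximate number of loans the funding would support
max_loan = 25_000          # IDAP maximum loan amount

cost_per_loan = appropriation / pilot_loans  # implied funding per pilot loan
max_volume = pilot_loans * max_loan          # volume if every pilot loan hit the cap

print(f"Implied funding per pilot loan: ${cost_per_loan:,.0f}")  # $5,000
print(f"Maximum pilot loan volume:      ${max_volume:,}")        # $15,000,000
```

Under these assumptions, the $3 million appropriation works out to about $5,000 of subsidy and administrative funding per $25,000 bridge loan.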
In May 2010, SBA told us its goal was to have the pilot for IDAP in place by September 2010. Furthermore, the agency issued regulations for IDAP in October 2010. In 2014, we reported on the Disaster Loan Program (following Hurricane Sandy) and found that SBA had yet to pilot or implement the three programs for guaranteed disaster loans. In July 2014, SBA officials told us that the agency still was planning to conduct the IDAP pilot. However, based on lender feedback, SBA officials said that the statutory requirements, such as the 10-year loan term, made a product like IDAP undesirable and that lenders were unwilling to participate unless the loan term was decreased to 5 or 7 years. Congressional action would be required to revise statutory requirements, but SBA officials said they had not discussed the lender feedback with Congress. SBA officials also told us the agency planned to use IDAP as a guide to develop EDALP and PDAP, and until challenges with IDAP were resolved, it did not plan to implement these two programs. As a result of not documenting, analyzing, or communicating lender feedback, SBA risked not having reliable information—both to guide its own actions and to share with Congress—on what requirements should be revised to encourage lender participation. Federal internal control standards state that significant events should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. We concluded that not sharing information with Congress on challenges to implementing IDAP might perpetuate the difficulties SBA faced in implementing these programs, which were intended to provide assistance to disaster victims. Therefore, we recommended that SBA conduct a formal documented evaluation of lenders' feedback on implementation challenges and statutory changes that might be necessary to encourage lenders' participation in IDAP, and then report to Congress on these topics. 
In response to our recommendations, SBA issued an Advance Notice of Proposed Rulemaking in October 2015 to seek comments on the three guaranteed loan programs. In July 2016, SBA sent a letter to the Ranking Member of the House Committee on Small Business that discussed how the agency evaluated feedback on the three programs and explained the remaining challenges to address the statutory provisions for the three programs. Based on this action, we closed the recommendations for SBA to develop an implementation plan, formally evaluate lender feedback, and report to Congress on implementation challenges. SBA has yet to announce how it will proceed with the statutory requirements to establish these loan programs. SBA made several changes to its planning documents in response to recommendations in our 2014 report about the agency's response to Hurricane Sandy. In 2014, we found that after Hurricane Sandy, SBA did not meet its goal to process business loan applications (21 days from receipt to loan decision). SBA took an average of 45 days for physical disaster loan applications and 38 days for economic injury applications. According to SBA, the agency received a large volume of electronic applications within a few days of the disaster. While SBA created web-based loan applications to expedite the process and encouraged their use, the agency noted that it did not expect to receive such a high volume of loan applications early in its response and thus delayed increasing staffing. At the time of our 2014 report, SBA also had not updated its key disaster planning documents—the Disaster Preparedness and Recovery Plan and the Disaster Playbook—to adjust for the effects a large-volume, early surge in applications could have on staffing, resources, and forecasting models for future disasters. 
According to SBA’s Disaster Preparedness and Recovery Plan, the primary goals of forecasting and modeling are to predict application volume and application receipt as accurately as possible. Federal internal control standards state that management should identify risk (with methods that can include forecasting and strategic planning) and then analyze the risks for their possible effect. Without taking its experience with early application submissions after Hurricane Sandy into account, SBA risked being unprepared for such a situation in future disaster responses, potentially resulting in delays in disbursing loan funds to disaster victims. We therefore recommended that SBA revise its disaster planning documents to anticipate the potential impact of early application submissions on staffing, resources, and timely disaster response. In response to our recommendation, SBA updated its key disaster planning documents, including the Disaster Preparedness and Recovery Plan and Disaster Playbook, to reflect the impact of early application submissions on staffing for future disasters. For example, the documents note that the introduction of the electronic loan application increased the intake of applications soon after disasters. SBA received 83 percent of applications electronically in fiscal year 2015 and 90 percent in 2016. The documents also note that the electronic loan application has reduced the time available to achieve maximum required staffing and that SBA has revised its internal resource requirements model for future disasters to activate staff earlier based on the receipt of applications earlier in the process. Furthermore, our review of the most recent Disaster Preparedness and Recovery Plan from 2016 shows that SBA continues to factor in the effect of electronic loan application submissions on staffing requirements. 
In our November 2016 report, we reviewed the actions SBA took or planned to take to improve the disaster loan program, as discussed in its Fiscal Year 2015 Annual Performance Report. SBA focused on promoting disaster preparedness, streamlining the loan process, and enhancing online application capabilities (see table 1). We also reported in November 2016 that, according to SBA officials, the agency made recent enhancements to the disaster loan assistance web portal, such as a feature that allows a loan applicant to check the status of an application and the application's relative place in the queue for loan processing. The web portal also includes a frequently asked questions page, telephone and e-mail contacts for SBA customer service, and links to other SBA information resources. These enhancements may have had a positive impact on the agency's loan processing. For example, we reported that an SBA official explained that information from online applications is imported directly into the Disaster Credit Management System, reducing the likelihood of errors in loan applications, reducing follow-up contacts with loan applicants, and expediting loan processing. As we found in our November 2016 report, SBA published information (print and electronic) about the disaster loan process, but much of this information is not easily accessible from the disaster loan assistance web portal. SBA's available information resources include the following: The disaster business loan application form (Form 5) lists required documents and additional information that may be necessary for a decision on the application. The Fact Sheet for Businesses of All Sizes provides information about disaster business loans, including estimated time frames, in a question-and-answer format. The 2015 Reference Guide to the SBA Disaster Loan Program and the Three-Step Process Flier describe the three steps of the loan process, required documents, and estimated time frames. 
Partner Training Portal provides disaster-loan-related information and resources for SBDCs (at https://www.sba.gov/ptp/disaster). However, we found SBA had not effectively integrated these information resources into its online portals; much of the information was not easily accessible from the loan portal’s launch page or available on the training portal. For example, when a user clicks on the “General Loan Information” link in the loan portal, the site routes the user to SBA’s main website, where the user would encounter another menu of links. To access the fact sheet, the reference guide, and the three-step process flier, a site user would click on three successive links and then select from a menu of 15 additional links. Among the group of 15 links, the link for Disaster Loan Fact Sheets contains additional links to five separate fact sheets for various types of loans. According to SBA officials, SBA plans to incorporate information from the three-step loan process flier in the online application, but does not have a time frame for specific improvements. SBA officials also said that disaster-loan information is not prominently located on SBA’s website because of layout and space constraints arising from the agency’s other programs and priorities. We concluded that absent better integration of, and streamlined access to, disaster loan-related information on SBA’s web portals, loan applicants—and SBDCs assisting disaster victims— may not be aware of key information for completing applications. Thus, we recommended that SBA better integrate information (such as its reference guide and three-step process flier) into its portals. In response to our report, SBA stated in a January 2017 letter that the disaster loan assistance portal includes links to various loan-related resources and a link to SBA.gov, where users can access the SBA Disaster Loan Program Reference Guide and online learning center. 
However, SBA did not indicate what actions it would take in response to our recommendation. We plan to follow up with SBA on whether the agency plans to centrally integrate links to loan-related resources into its disaster loan assistance web portal and Partner Training Portal. We also found in our November 2016 report that SBA has not consistently described key features of the loan process in its information resources, such as the application form, fact sheet, and reference guide, and none of these resources include explanations for required documents (see table 2). The Paperwork Reduction Act has a broad requirement that an agency explain reasons for collecting information and use of the collected information. According to SBDCs we interviewed and responses from SBA and American Customer Satisfaction Index surveys, some business loan applicants found the process confusing due to inconsistent information about the application process, unexpected requests for additional documentation, and lack of information about the reasons for required documents. We concluded that absent more consistent information in print and online resources, loan applicants and SBDCs might not understand the disaster loan process. As a result, we recommended SBA ensure consistency of content about its disaster loan process by including information, as appropriate, on the (1) three-step process; (2) types of documentation SBA may request and reasons for the requests; and (3) estimates of loan processing time frames and information on factors that may affect processing time. In response to our report, SBA stated in a January 2017 letter that the agency provides consistent messaging about the time frame for making approval decisions on disaster loan applications: SBA's goal is to make a decision on all home and business disaster loan applications within 2–3 weeks. However, SBA did not indicate what actions it would take in response to our recommendation. 
We plan to follow up with SBA on whether the agency will take any action to ensure content is consistent across print and online resources, among other things. In our November 2016 report, we further found that some business loan applicants were confused about the financial terminology and financial forms required in the application. Three SBDCs we interviewed mentioned instances in which applicants had difficulty understanding the parts of the loan application dealing with financial statements and financial terminology. For example, applicants were not familiar with financial statements, did not know how to access information in a financial statement, and did not know how to create a financial statement. Although the loan forms include instructions, the instructions do not define the financial terminology. According to SBA officials, the agency’s customer service representatives can direct applicants to SBDCs for help. Two of the three SBDCs said these difficulties arose among business owners who did not have formal education or training in finance or related disciplines—and were attempting applications during high-stress periods following disasters. The Plain Writing Act of 2010 requires that federal agencies use plain writing in every document they issue. According to SBA officials, although the agency does not provide a glossary for finance terminology in loan application forms, the disaster loan assistance web portal has a “contextual help” feature that incorporates information from form instructions. SBA customer service representatives and local SBDCs also can help explain forms and key terms. SBA has taken other actions to inform potential applicants about its loan program, including holding webinars and conducting outreach. However, these efforts may not offer sufficient assistance or reach all applicants. 
We concluded that without explanations of financial terminology, loan applicants may not fully understand application requirements, which may contribute to confusion in completing the financial forms. Therefore, we recommended that SBA define financial terminology on loan application forms (for example, by adding a glossary to the “help” feature on the web portal). In response to our report, SBA stated in a January 2017 letter that the agency has been developing a glossary of financial terms used in SBA home and business disaster loan applications and in required supporting financial documents. Once completed, SBA stated that it will make the glossary available through the agency’s disaster loan assistance portal and the SBA.gov website. We plan to follow up with SBA once the agency completes the glossary. Chairman Chabot, Ranking Member Velázquez, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Marshall Hamlett (Assistant Director), Christine Ramos (Analyst in Charge), John McGrail, and Barbara Roesmann. Appendix I: Summary of Provisions in the Small Business Disaster Response and Loan Improvements Act of 2008 This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
| While SBA is known primarily for its financial support of small businesses, the agency also assists businesses of all sizes and homeowners affected by natural and other declared disasters through its Disaster Loan Program. Disaster loans can be used to help rebuild or replace damaged property or continue business operations. After SBA was criticized for its performance following the 2005 Gulf Coast hurricanes, the agency took steps to reform the program and Congress also passed the 2008 Act. After Hurricane Sandy (2012), questions arose on the extent to which the program had improved since the 2005 Gulf Coast Hurricanes and whether previously identified deficiencies had been addressed. This statement discusses (1) SBA implementation of provisions from the 2008 Act; (2) additional improvements to agency planning following Hurricane Sandy; and (3) SBA's recent and planned actions to improve information resources for business loan applicants. This statement is based on GAO products issued between July 2009 and November 2016. GAO also met with SBA officials in April 2017 to discuss the status of open recommendations and other aspects of the program. The Small Business Administration (SBA) implemented most requirements of the Small Business Disaster Response and Loan Improvements Act of 2008 (2008 Act). For example, in response to the 2008 Act, SBA appointed an official to head the disaster planning office and annually updates its disaster response plan. SBA also implemented provisions relating to marketing and outreach; augmenting infrastructure, information technology, and staff; and increasing access to funds for nonprofits, among other areas. However, SBA has not yet implemented provisions to establish three guaranteed loan programs. In 2010, SBA received an appropriation to pilot one program and performed initial outreach to lenders. 
However, in 2014, GAO found that SBA had not implemented the programs or conducted a pilot because of concerns from lenders about loan features. GAO recommended that SBA evaluate lender feedback and report to Congress about implementation challenges. In response, SBA sought comments from lenders and sent a letter to Congress that explained remaining implementation challenges. After Hurricane Sandy, SBA further enhanced its planning for disaster response, including processing of loan applications. In a 2014 report on the Disaster Loan Program, GAO found that while SBA encouraged electronic submissions of loan applications, SBA did not expect early receipt of a high volume of applications after Sandy and delayed increasing staffing. SBA also did not update key disaster planning documents to adjust for the effects of such a surge in future disasters. GAO recommended SBA revise its disaster planning documents to anticipate the potential impact of early application submissions on staffing and resources. In response, SBA updated its planning documents to account for such impacts. SBA has taken some actions to enhance information resources for business loan applicants but could do more to improve its presentation of online disaster loan-related information. In 2016, GAO found that SBA took or planned to take various actions to improve the disaster loan program and focused on promoting disaster preparedness, streamlining the loan process, and enhancing online application capabilities. However, GAO found that SBA had not effectively presented information on disaster loans (in a way that would help users efficiently find it), had not consistently described key features and requirements of the loan process in print and online resources, or clearly defined financial terminology used in loan applications. 
Absent better integration of, and streamlined access to, disaster loan-related information, loan applicants may not be aware of key information and requirements for completing the applications. Therefore, GAO recommended that SBA (1) integrate disaster loan-related information into its web portals to be more accessible to users, (2) ensure consistency of content about the disaster loan process across information resources, and (3) better define financial terminology used in the loan application forms. In January 2017, SBA indicated it was working on a glossary for the application. GAO plans to follow up with SBA about the other two open recommendations. |
To improve participation in the census among hard-to-count (HTC) groups as well as the general population, the Bureau implemented a number of outreach and enumeration activities from January 2008 through September 2010. In this report, we focus on the following four efforts: paid media, partnerships, service-based enumeration (SBE), and Be Counted/Questionnaire Assistance Centers (QAC). The four components of the outreach efforts, known collectively as the Integrated Communications Campaign, were paid media, a partnership program, public relations, and an educational program called Census in Schools. According to Bureau officials, the components were designed to work together to unify census messages and communicate them to diverse audiences via various outlets in order to improve mail response and reduce the differential undercount. An appropriation in the American Recovery and Reinvestment Act of 2009 (Recovery Act) allowed the Bureau to increase the communications campaign's initial budget of $410 million by an additional $220 million. The Bureau's regional census centers (RCC) were responsible for administering the partnership program, with partnership coordinators and team leaders at each RCC overseeing the work of the partnership specialists and partnership assistants. Local census offices played a more limited role in outreach efforts, and while the local census offices reported to RCCs, they had a different reporting structure than the partnership program. SBE was meant to help ensure that people without conventional housing were included in the count. From March 28 through March 30, 2010, the Bureau attempted to enumerate those without conventional housing at facilities where they received services or at outdoor locations, such as parked cars, tent encampments, and on the street. The Bureau developed a list of potential outdoor locations based on several sources, including 2000 Census data and input from community leaders. 
The Bureau’s Be Counted program, which ran from March 19 to April 19, 2010, was designed to reach those who may not have received a census questionnaire, including people who did not have a usual residence on April 1, 2010, such as transients, migrants, and seasonal farm workers. The program made questionnaires available at community centers, libraries, places of worship, and other public locations throughout the country. Individuals were to pick up the forms from these sites and mail the completed questionnaires to the Bureau. Some of the sites also included a staffed QAC to help people, especially those with limited English proficiency, complete their questionnaires. The Bureau refined its paid media efforts for 2010, in part to address challenges from the 2000 Census. For example, in 2000, to target advertising to certain population groups and areas, the Bureau used data on measures of civic participation, such as voting in elections. However, the Bureau noted that civic participation did not appear to be a primary indicator of an individual’s willingness to participate in the census. To better motivate participation among different population groups, for 2010 the Bureau used, among other data sources, actual participation data from the 2000 Census, as well as market and attitudinal research that identified five mindsets people have about the census. These mindsets ranged from the “leading edge” (those who are highly likely to participate) to the “cynical fifth” (those who are less likely to participate because they doubt the census provides tangible benefits and are concerned that the census is an invasion of privacy and that the information collected will be misused). The Bureau used this information to tailor its paid media efforts. Moreover, in 2000 the Bureau did not buy additional paid media in areas with unexpectedly low participation rates. 
For 2010, the Bureau set aside more than $7 million to rapidly target paid media in response to specific events leading up to the census or to areas with unexpectedly low mail participation rates. Overall, the Bureau budgeted about $297.3 million on paid media in 2010, about $57 million (24 percent) more than in 2000 in constant 2010 dollars. The Bureau’s 2010 paid media budget reflected several increases. On a unit cost basis, spending increased from an average of about $2.05 per housing unit in 2000 to $2.25 per housing unit in 2010, in constant 2010 dollars. Also, the Bureau increased the percentage of the budget for media development costs from 33 percent in 2000 to 43 percent in 2010. Table 1 compares the paid media spending in 2000 to 2010. According to the Bureau, the cost increased for paid media development in part because of the extensive research done to target the media to specific groups and areas and because advertising was created in 12 more languages than in 2000. For example, to determine where paid media efforts may have the greatest impact, the Bureau developed predictive models based on 2000 census data and the evaluations of the partnership and paid media efforts from 2000. The models were provided to its contractor, DraftFCB, to aid in making paid media decisions. By better targeting paid media buys by area and message, the Bureau expected to more effectively reach those who have historically been the hardest to count. However, according to the Bureau, two factors—the use of evaluations from 2000 that did not isolate the impact of paid media from other components of the Bureau’s outreach efforts, such as the partnership program, and the age of the data used—may have limited the model’s ability to predict where paid media efforts had the greatest impact. 
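The budget comparison above can be verified with a quick back-of-the-envelope calculation. The sketch below uses only the dollar figures cited in the report (all in constant 2010 dollars); the implied housing-unit counts it derives are our own inference for a consistency check, not figures from the report.

```python
# Consistency check of the paid media figures cited above
# (all dollar amounts in constant 2010 dollars).
budget_2010 = 297.3e6          # 2010 paid media budget
increase = 57e6                # reported increase over 2000
budget_2000 = budget_2010 - increase

pct_increase = increase / budget_2000 * 100
print(f"Increase over 2000: {pct_increase:.0f}%")   # report cites 24 percent

# Implied housing-unit counts (our own inference, not from the report):
units_2010 = budget_2010 / 2.25   # at $2.25 per housing unit
units_2000 = budget_2000 / 2.05   # at $2.05 per housing unit
print(f"Implied housing units, 2010: {units_2010 / 1e6:.0f} million")
print(f"Implied housing units, 2000: {units_2000 / 1e6:.0f} million")
```

The derived unit counts (roughly 132 million and 117 million housing units) are in line with the growth in the U.S. housing stock between the two censuses, suggesting the cited per-unit and total figures are internally consistent.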
In a further effort to reach HTC groups, in 2010 the Bureau budgeted more for paid media that targeted HTC groups, like non-English-speaking audiences, than for the national audience, which was not the case in 2000, as shown in table 2. Additionally, the Bureau strengthened its outreach efforts in 2010 by improving its monitoring and evaluation activities. For example, throughout the census the Bureau monitored the public's awareness and attitudes toward the census via surveys and by tracking relevant blogs. The Bureau used five sources of information, including national polls and actual mail participation rates, to monitor metrics such as individuals' understanding of the census, perceived benefits from participating in the census, and barriers to participating in the census. As a result, the Bureau used this information to identify markets and groups where additional outreach was needed. Table 3 compares key aspects of the 2000 and 2010 paid media activities. The Bureau generally implemented its 2010 paid media campaign as planned, targeting different segments of the HTC population. For example, to reach younger audiences, which are typically hard to count, the Bureau used new methods such as podcasts, YouTube videos, and social media networks such as Facebook and Twitter in addition to traditional TV and radio broadcasts. To reach people with limited English proficiency, the Bureau ran banner advertisements on, for example, Chinese-language Web sites that linked directly to the Chinese-language page of the Bureau's own Web site and targeted local radio advertisements to various ethnic audiences. Moreover, to reach audiences through their media habits and interests, the Bureau integrated census messages into regularly scheduled television programming in an attempt to appeal to people in new and more personal ways. For example, a Spanish-language soap opera made one of its characters an enumerator.
The Bureau also took advantage of its improved monitoring capacity and implemented a rapid response initiative to address markets with lagging mail participation rates or unforeseen events that might have affected response rates in certain markets. For example, as Census Day approached, the Bureau continuously tracked the public's attitudes toward the census to help determine the impact of its outreach activities. The Bureau found that while the percentage of people saying they would definitely participate in the census increased from about 50 percent in December 2009 to about 89 percent in March 2010, the data indicated that specific populations would have lower participation rates. As a result, the Bureau ran additional advertising targeted at the following groups, among others: 18- to 24-year-olds whose attitudes on their intent to participate in the census were not changing over time; English-speaking Hispanics who appeared less likely than Spanish-speaking Hispanics to understand the benefits of census participation; and Hasidic Jews in Brooklyn, New York, because mail participation rates were lagging in neighborhoods known to have significant Hasidic populations. Further, in late March, the Bureau identified 23 specific media markets with mail participation rates significantly below the national average. Following rapid response efforts in these areas, 13 of these markets showed a significant increase in mail participation rates compared to the national average. The Bureau originally budgeted $7.4 million for its rapid response efforts, but added approximately $28 million from a separate management reserve fund as data analysis showed a need for media intervention, for a total of about $35 million. Of this $35 million, about $31.8 million was allocated to new media purchases and about $3 million went to media production and other costs.
Of the $31.8 million, the Bureau budgeted about $17.3 million (54 percent) of the rapid response paid media funding for the general population and $14.5 million (45 percent) for specific ethnic and language audiences. The Bureau plans to assess the impact of the communications campaign on respondent attitudes and behaviors. For example, to determine how much it should invest in the paid media campaign, the Bureau held an experiment in 2010 where it flooded certain markets with more paid advertising than was used in other, similar markets. When the evaluation of this research is completed as scheduled in 2012, it could help the Bureau better determine whether greater levels of advertising would be cost-effective in terms of increasing the mail response rate of various races and ethnic groups. Moving forward, it will be important for the Bureau to use these evaluation results not only for planning 2020 Census-taking activities, but, as was the case for 2010, also for aiding in the development of a predictive model that could help the Bureau determine which media outlets provide the best return on investment in terms of raising awareness of the census and encouraging participation for specific demographic groups. The model could combine data from the 2000 and 2010 enumerations and inform allocation decisions for paid media. In designing the 2010 partnership program, the Bureau took a number of steps aimed at expanding its reach and addressing challenges from the 2000 Census. For example, in 2000, the Bureau hired about 600 partnership staff in the field who were responsible for mobilizing local support for the census by working with local organizations to promote census participation. However, we reported in 2001 that partnership specialists' heavy workload may have limited the level of support they were able to provide individual local census offices.
To help improve its ability to mobilize local support for 2010, the Bureau created a new position, the partnership assistant, and hired about 2,800 partnership staff, about five times the number of partnership staff hired in 2000. Thus, the Bureau increased the ratio of partnership staff per county and staff were not spread as thinly. Additionally, for 2000, the Bureau developed a database to track, plan, and analyze partnership efforts. We reported that the database was not user-friendly, which led to inefficiencies and duplication of effort. For 2010, the Bureau revamped the partnership database to make it more user-friendly and to improve management's ability to use the information to monitor the progress of partnership activities. For example, while the 2000 database was mainly a catalog of census partner organizations, the 2010 database was designed to enable the Bureau to more actively manage the program in part by generating reports on value-added goods and services that partners provided, such as free training space. Table 4 compares key aspects of the 2000 and 2010 partnership activities. Aided by the Recovery Act funding that allowed the Bureau to increase its presence in local communities, the Bureau's outreach efforts resulted in recruiting over 100,000 more partners and increasing by over 100 the number of languages spoken by partnership staff. The Bureau estimated that it would spend about $280 million on partnership program costs from fiscal years 2007 through 2011, including $120 million from the Recovery Act—an increase of 54 percent from 2000. To expand partnership activities in HTC areas, the Bureau used its allocation of Recovery Act-funded partnership staff in regions with large HTC populations. As a result, while in 2000 the average ratio was one partnership staff member for every five counties, in 2010 the average ratio was almost one partnership staff member for every county.
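The staffing ratios cited above can be sanity-checked with a short calculation. The U.S. county count used below (approximately 3,143 as of the 2010 Census) is an external figure we supply for illustration, not one taken from the report.

```python
# Back-of-the-envelope check of the partnership staffing ratios cited in the
# report. The U.S. county count (~3,143 in 2010) is an external assumption.
counties = 3143
staff_2000 = 600    # partnership staff hired for the 2000 Census
staff_2010 = 2800   # partnership staff hired for the 2010 Census

print(f"2010 vs. 2000 staffing: {staff_2010 / staff_2000:.1f}x")
# Report: "about five times the number of partnership staff hired in 2000"

print(f"2000: one staff member per {counties / staff_2000:.1f} counties")
# Report: "one partnership staff member for every five counties"

print(f"2010: {staff_2010 / counties:.2f} staff members per county")
# Report: "almost one partnership staff member for every county"
```

The derived ratios (roughly 4.7 times the staff, one per 5.2 counties in 2000, and 0.89 per county in 2010) line up with the report's rounded characterizations.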
Partnership specialists conducted outreach activities that addressed the concerns of HTC communities in their areas. For example, one partnership specialist in the Atlanta region organized a conference of leaders in the Vietnamese community to ease their concerns about the confidentiality of census data. Another partnership specialist in the Los Angeles region leveraged the credibility of several large national Iranian and Arab organizations to help convince local community leaders that the census was mandated by law and that their constituents should complete and return census forms. Further, a local census office manager (LCOM) in the Dallas region told us that partnership specialists worked to get a letter from the mayor that helped enumerators gain access to local gated communities and apartment complexes. During the 2000 Census, LCOMs we surveyed said that the reporting structure for partnership specialists may have led to communication and coordination hurdles between the partnership staff and local census office staff. As a result, we recommended that the Bureau explore ways to increase the coordination and communication between the partnership specialists and the LCOMs. To address coordination and communication challenges in 2010, the Bureau developed additional guidance for partnership specialists and LCOMs, revised partnership training materials, and held meetings between regional operations staff and partnership staff to discuss ways to enhance communications. For example, the Bureau revised the LCOMs' handbook to explain that partnership specialists and local census office staff have a responsibility to work together to ensure that they do not duplicate each other's efforts. In addition, the partnership training manual specifically stated that partnership specialists should participate in local census office management meetings, provide management teams with their schedules of planned meetings and activities in advance, and update LCOMs on their completed activities.
Moreover, most of the partnership staff we interviewed reported working closely or having mutually supportive relationships with local census office staff. For example, partnership staff in the Atlanta and Charlotte regions said that they attended training with local census office staff, and one partnership specialist told us that training gave them a better understanding of the roles and responsibilities of local census offices. However, LCOMs we surveyed provided a more mixed view of the coordination and communication between the partnership program and local census offices. On the one hand, 39 percent of 395 LCOMs responding to our March survey said they were generally or very satisfied with partnership staff's assistance with local challenges. In addition, some managers provided positive comments in the open-ended section of the survey about partnership staff's assistance. For example, one LCOM commented that partnership staff assisted with local census office recruiting activities, such as setting up and providing materials for promotional events. In another example, a manager from the Boston region said that the local census office staff and the partnership specialist worked as one team and contributed to the success of the census. These results varied regionally, with more satisfaction in the Bureau's Boston, Los Angeles, and Dallas regions than in the Philadelphia and New York regions. On the other hand, the results of our survey of LCOMs also highlight areas for improvement. In March, 50 percent of 393 LCOMs responding said they were generally or very dissatisfied with coordination between local census offices and partnership staff, and a similar level of dissatisfaction was found in a follow-up survey we conducted in May after the nonresponse follow-up operation started.
Among the responses of those LCOMs who elaborated on their satisfaction with coordination between local census offices and partnership staff, a key theme was a lack of cooperation or interaction between the partnership and local census office staffs. A manager from the Chicago region said that though the partnership specialist was good, the organizational structure and upper management did not allow for proper interaction. The manager said that at first, communication between the local census office staff and the partnership specialist was prohibited by the partnership specialist team leader, which impeded the local census office’s ability to make valuable community connections. One reason for the coordination challenges between local census offices and partnership staff could be their different reporting structures. As shown in figure 1, LCOMs and partnership specialists report to different officials, and the official who oversees both positions is two levels above the LCOM and three levels above the partnership specialist. According to Bureau officials, this reporting structure was established to allow partnership specialists to coordinate their efforts with other partnership specialists in the same geographical areas and share common problems and solutions. Further, some partnership specialists were responsible for reaching out to specific ethnic groups in areas covered by different local census offices, making it logistically difficult for the specialists to report to one local census office. But among the LCOMs who elaborated on their responses to our survey, a key theme was dissatisfaction with this reporting structure. For example, one manager reported that the partnership program and local census office operations are too disconnected, adding that at times both partnership staff and local census office staff were doing the same tasks. 
The manager said that the partnership program was an essential part of a successful census, but only when performed in conjunction with local census office operations. Another manager said that the partnership program needs a direct link to the local census office and suggested that a position such as an assistant manager for partnership be added to the local census office staff. Such a position, the manager explained, would solidify the communication between the partnership program and the local census office. Regardless of the management structure, what is clear is that more positive experiences seemed to result when LCOMs and partnership specialists dovetailed their efforts. Better communication between partnership specialists and LCOMs may have enhanced the Bureau's capacity to reduce duplicative efforts, close any gaps in outreach to community organizations with significant HTC populations, and leverage opportunities to achieve a more complete and accurate count. The partnership tracking database could also benefit from refinements. Despite improvements, partnership staff raised concerns about its user-friendliness similar to those reported in 2000. In 2010, all the partnership specialists we interviewed reported that data entry was time-consuming, and 8 of the 11 partnership staff we interviewed reported that they needed help with data entry in order to keep the database current. The Bureau expected to use the partnership database to more accurately monitor and improve partnership efforts nationally; thus the difficulty partnership staff found in updating the system is noteworthy. Initially, no partnership assistants were authorized to access the database because the Bureau wanted to ensure that data were entered into the system consistently. The Bureau was also concerned about the additional costs associated with purchasing licenses for the large number of partnership assistants.
However, in response to regional partnership staff’s concerns over the partnership specialists’ struggles to update the database in a timely manner, the Bureau procured approximately 400 licenses for select partnership assistants in August 2009. But in interviews with partnership specialists from March through May 2010, they told us that they continued to experience difficulty meeting the data entry requirements. Further, Bureau managers could not be sure if information in the partnership database was up-to-date. Bureau officials told us that they expected partnership specialists to immediately log any contact they had with a partner into the database. However, our analysis of reports from the database showed, on average, that about 35 percent of users did not update the database on a weekly basis from March 4 through April 22, 2010. According to Bureau headquarters officials responsible for managing the partnership program, because the partnership data were not always current, they took the extra step of organizing weekly telephone calls between headquarters and regional partnership staff in order to gain the most up-to-date information on partnership activities. More current information during a crucial time period around Census Day, April 1, could have better positioned the Bureau to quickly identify and address problem areas. Further, Bureau managers would likely have had better data for redeploying partnership resources to low responding areas with significant HTC populations during different census operations. Although the Bureau developed English and foreign language promotional materials—both in hard copy and for the Bureau’s Web page—for partnership specialists and assistants to use when recruiting partner organizations, the materials were not available when partnership specialists were first hired. 
Eight of the 11 partnership specialists and assistants we interviewed reported that because promotional materials were not available when needed, it was more difficult for them to build relationships with potential partners. Specifically, the Bureau began hiring partnership specialists in January 2008. However, delivery of the promotional materials did not start until April 2009, more than a year after partnership specialists first came on board. Although this still left a year until Census Day, by not having promotional materials on hand when partnership staff first began their work, the Bureau may have missed opportunities to develop and strengthen relationships with organizations that had the ability to influence census participation among HTC groups. Further, three of the eight partnership staff who worked with non-English-speaking communities said it was difficult to obtain in-language materials when needed. For example, one partnership employee in the Los Angeles region reported being unable to engage Korean churches until after January 2010 when the needed in-language materials first became available (according to Bureau officials, in-language materials took longer to develop than English-language materials because of the need to ensure accurate translations). Bureau officials acknowledged that the schedule for hiring partnership staff and the delivery of promotional materials were not well aligned. In the interim, the Bureau provided partnership staff with talking points to help them reach out to organizations in the early phase of the program. Moving forward, it will be important for the Bureau to take a fresh look at recurring problems in the partnership program, as well as reconsider time frames for the availability of promotional materials.
Through improving communication and coordination between partnership and local census office staff, developing a user-friendly database to more effectively monitor the program's progress, and ensuring that promotional materials are available for distribution when partnership specialists are first hired, the Bureau would better position itself to promote the census to HTC populations. To improve its ability to count individuals without conventional housing, the Bureau made a number of improvements to SBE, many of which were designed to address challenges experienced in 2000. For example, in 2000, SBE enumerators were not trained to enumerate all types of SBE facilities, which limited the times when enumeration could occur. In response to service providers' requests for more flexibility on scheduling enumeration during the 3-day operation, the Bureau trained census workers to enumerate all types of SBE facilities. This change made training more consistent nationwide and enabled the Bureau to better accommodate last-minute schedule changes. Further, in some cases in 2000, the supply of census forms and training materials provided to the local offices was not adequate. In 2010, the Bureau reduced the number of form types used for enumerating individuals at SBE facilities from four to a single multipurpose form. According to Bureau officials, this change allowed them to provide an adequate number of forms to local census offices and also helped increase efficiency. The Bureau took several steps that helped it identify a larger number of SBE facilities in 2010 than in 2000, thereby positioning the Bureau to conduct a more complete count. The actual number of SBE facilities the Bureau enumerated in 2000 was 14,817, whereas for 2010 the Bureau had plans to enumerate 64,626 sites, more than four times the number enumerated in 2000.
The steps included working more closely with local and national partner organizations and assigning partnership assistants a role in identifying service-providing facilities. The Bureau also developed better guidance for partnership assistants to identify targeted nonsheltered outdoor locations (TNSOLs), relying in part on input from partner organizations, such as church groups and service providers that were familiar with outdoor areas where people often spent the night. Further, the Bureau used public mailings and technology, such as the Internet, to find a broader spectrum of facilities, as compared to local telephone listings that were used in 2000. Table 5 compares key aspects of the 2000 and 2010 SBE operations. The Bureau generally implemented the SBE operation as planned, completing the 3-day operation on schedule, and spending $10.9 million, slightly more than the $10.6 million budgeted for the operation. However, while the overall budget estimate for the 2010 SBE operation was more accurate than in 2000, the actual costs for local census offices in urban HTC areas were almost double the amount budgeted ($3.6 million in actual costs against the $1.9 million budgeted). Bureau officials said they will examine the data further to determine why the budget was exceeded in urban HTC areas. We have noted the Bureau's difficulties in developing accurate cost estimates for several other Bureau operations, and the cost overrun in urban HTC areas is another example of this. As in 2000, our observers noted that enumerators were professional, responsible, knowledgeable, and highly committed to fulfilling their responsibilities. For example, during heavy rain in the Boston area, enumerators remained focused on counting individuals living under overhangs and stairwells, despite the difficult conditions. Our observers in Brooklyn reported the same of enumerators there, although enumeration of the outdoor locations was delayed one night because of adverse weather conditions.
Further, one of our observers reported that in Los Angeles, cultural advocates—individuals the Bureau hired to accompany enumerators and facilitate access to certain communities—helped ease potentially tense situations. As described below, based on our observations and the results of the LCOM survey, SBE generally went well, and in some areas the Bureau appears to have addressed challenges it experienced in 2000. Enumerators we spoke with reported having enough forms in 68 of 78 sites we visited. Also, 76 percent of 359 LCOMs who responded to our question on the timing of the delivery of questionnaires and other enumeration supplies were generally or very satisfied. In contrast, during the 2000 Census, our observers noted that the timing of questionnaires and training materials was not always adequate at the locations they visited, which impeded enumerators' ability to conduct their work in a timely manner. Our observers reported that facilities were prepared for SBE enumeration in 35 of 56 visits to SBE facilities. Furthermore, 73 percent of 356 LCOMs who responded to our question about the readiness of SBE facilities were generally or very satisfied. In instances where facilities were not prepared, there appears to have been an expectation or communication gap. Despite advance visits from the Bureau, one representative at a Baltimore facility said she was not aware that census workers were expected, and would not allow enumeration to take place because it would disrupt the individuals' dinner and medication treatments. She was not receptive to the workers returning later the same evening. In another case, a Boston facility manager was not aware that the enumeration was to take place, but allowed the census workers to proceed. Bureau officials said that in some instances facility staff may not have communicated previous agreements for conducting the enumeration to new or other staff on duty at the time of the enumeration.
Of the LCOMs we surveyed, 65 percent of 359 LCOMs were generally or very satisfied that the content of SBE training materials was tailored to accommodate local conditions, such as taking into account whether an area was urban or rural. In 2000, enumerators expressed concern that the training they received did not always adequately prepare them for the wide range of scenarios they encountered. Despite these successes, the Bureau experienced some procedural and operational challenges during SBE implementation, some of which were similar to the Bureau’s experience in 2000. The Bureau’s policy referred to in its SBE enumeration manual stipulates that when individuals state that they have already been enumerated elsewhere, the enumerator still must attempt to complete a questionnaire. While enumerators adhered to this procedure at about two-thirds of the facilities we visited, we found that in 26 of 78 visits enumerators did not attempt to enumerate individuals who told them they had already completed a questionnaire at another location. When individuals refuse to be enumerated, regardless of the reason, the Bureau’s guidance instructs enumerators to ask the facility’s contact person for information about the individual. If a contact person is not available, the enumerator should attempt to complete as much of the questionnaire as possible through observation. By not always following these procedures, enumerators may have missed individuals who should have been enumerated and the extent to which accuracy of the count was affected is unknown. As mentioned previously, Bureau officials visited SBE facilities to make agreements with service providers on conducting the actual enumeration. Our observers noted that in 15 of 78 site visits, enumerators did not arrive as scheduled at shelter locations. 
One of these instances occurred in Washington, D.C., where the facility manager had instructed the clientele who typically frequent that location to make an effort to be present when the enumerator arrived. According to the facility manager, the enumerator did not arrive at the scheduled time. In another instance, a facility manager at a Boston site told our observers that she was concerned that enumerators had arrived earlier than the agreed-upon time. She explained that her clientele consisted of emotionally disturbed women, many of whom had fears of authority. Thus, she said she would have preferred more time to prepare the women for the impending visit. When enumerators do not fulfill commitments, the missed appointments and the need to reschedule could make the enumeration more burdensome to service providers and detract from the Bureau’s reputation. The mobile nature of the SBE population and other factors make it difficult to precisely determine the number of enumerators that should be sent to a particular site, and sending either too many or too few enumerators each has its consequences. Although the Bureau has guidance on staffing ratios for enumerating different types of group quarters, including service-based facilities, it did not always result in optimal levels of staffing at shelters and TNSOLs. Overstaffing can lead to unnecessarily higher labor costs and poor productivity, while understaffing can affect the Bureau’s ability to obtain a complete count at a particular site. Our observers and those in the Department of Commerce’s Office of Inspector General both reported overstaffing as an issue at SBE locations. For example, at one of our SBE site visits, approximately 30 enumerators reported to the same shelter in Atlanta to conduct the enumeration. Unsure of how to proceed, the census enumerators waited for over an hour before a crew leader instructed over half of the enumerators present to leave, at which point no work had taken place. 
Similarly, the Department of Commerce Inspector General’s staff observed long periods of inactivity at sites and increased operational costs as a result. Also, while most LCOMs we surveyed were satisfied with SBE staffing levels, pockets of dissatisfaction existed at some locations. Of the 361 LCOMs responding to our survey in April, 81 percent were generally or very satisfied with the number of enumerators hired to complete the SBE workload, 10 percent were generally or very dissatisfied, and 9 percent were neither satisfied nor dissatisfied. Among the responses from managers who elaborated on our question about their satisfaction level with the SBE operation, a key theme that emerged was overstaffing. One manager, elaborating on his response, said that he sent a detailed cost and benefit document to higher-level Bureau officials to demonstrate that the number of enumerators needed for the SBE operation in his local area should be reduced, but his request was denied. In another instance, a manager said he was required to train and hire at least 100 more enumerators than he felt were necessary. Given the Bureau’s constitutional mandate to enumerate the country’s entire population and the difficulty of enumerating the SBE population, it is not unreasonable for the Bureau to err on the side of over- rather than understaffing SBE to help ensure a complete count. Going forward, as part of the Bureau’s plans to examine SBE costs, schedule, training, and staffing, it will be important for the Bureau to determine the factors that led to less-than-optimal staffing levels and use the information to help determine staffing levels for SBE in 2020.
For 2010, the Bureau developed plans that, according to Bureau officials, were designed to address challenges that the Be Counted/QAC programs faced during the 2000 Census, such as (1) visibility of sites, (2) ability of the public to find where the Be Counted/QAC sites were located, and (3) monitoring of site activity. In 2000, for example, several sites we visited lacked signs publicizing the sites’ existence, which greatly reduced visibility. In some sites, census questionnaires were in places where people might not look for them, such as the bottom of a shelf. We reported that the Bureau had problems with keeping site information current, and as a result, changes in the information about the program’s site location or points of contact were not always available to the public. To address these issues, in 2010, the Bureau created banners for display in public areas of Be Counted/QAC sites, developed a Web page with locations and hours of the sites, and updated the guidance for site selection. Table 6 compares key aspects of the 2000 and 2010 Be Counted/QAC programs. The Bureau generally implemented the Be Counted/QAC program as planned. The Bureau opened around 38,000 sites, conducted the Be Counted/QAC program as scheduled from March 19 through April 19, and completed the Be Counted/QAC program under budget. The Bureau reported spending $38.7 million versus the $44.2 million budgeted. Bureau officials commented that the program came in under budget in part because the Bureau staffed the sites with one QAC representative for 15 hours a week, rather than with 1.5 representatives, as originally budgeted. This allowed the Bureau to spend less on payroll and training, according to officials. Overall, the majority of the 51 sites we visited were staffed as planned and census materials and forms were available at most sites in multiple languages.
Further, the Bureau’s preliminary data for 2010 show overall activity at Be Counted/QAC sites increased, with about 1 million more forms picked up in 2010, compared to the approximately 1.7 million forms in 2000—an increase of 62 percent. Visibility is key to the effectiveness of Be Counted/QAC sites because it is directly related to people’s ability to find them. According to the Bureau’s Be Counted job aid guidance, Be Counted clerks in local census offices were responsible for monitoring sites and ensuring that banners were displayed at Be Counted/QAC locations. In many locations we visited, the Bureau’s efforts to raise the visibility of sites were evident to our observers. For example, 23 of the 51 Be Counted/QAC sites visited were displaying the banners the Bureau developed to advertise the existence of the Be Counted/QAC sites. More generally, however, there were areas for improvement. For example, our observers noted problems with “street-level” visibility in 26 of 51 Be Counted/QAC sites visited. At one site in Atlanta, for instance, no signs were visible from the main road to publicize the existence of the Be Counted site. In addition, our observers visited two sites in Brooklyn that were not visible from the street. In some cases, the banners provided by the Bureau to advertise the location of a site were not used or displayed prominently upon entering a location that housed a site. At another site in Washington, D.C., our observers noted that the banner was rolled up and leaning against a file cabinet and consequently was not clearly visible to the public. In addition, Be Counted/QAC sites were sometimes in obscure locations within the buildings in which they were housed. For example, at sites located in the basement or rear of the building, we observed no signage directing people to the Be Counted/QAC site. Further, forms and materials available at Be Counted/QAC sites were not always clearly identified and thus could have been overlooked.
Figure 2 is an example of a Be Counted site in Brooklyn that was prominently visible at a library. Importantly, the banner was clearly displayed to draw attention to the site, and the time that staff would be in attendance was also obvious. In contrast, figure 3 shows a Be Counted site in Fresno, California, that was difficult to find in a barbershop. Note that the area had no signage to draw attention to the site and the forms were scattered about and difficult to find. In those instances when the Be Counted/QAC sites were not clearly visible to the public, the Bureau may have missed one of the last opportunities to directly enumerate individuals. Moving forward, the Bureau should consider more effective ways to monitor site visibility at Be Counted/QAC sites. For example, the Bureau could include visibility as one of the areas to monitor when census staff conduct their regular monitoring of the Be Counted sites. Along with visibility, the procedures used to select Be Counted/QAC sites are also key to the effectiveness of the program because they affect the extent to which sites are easily accessible to targeted populations. To improve selection of Be Counted/QAC sites in 2010, the Bureau revised its guidance on Be Counted/QAC site criteria by emphasizing locating sites in HTC areas and specifying the types of local census office areas where sites should be located (e.g., urban/HTC and urban/metropolitan). However, the guidance did not provide direction on identifying sites in locations with the likelihood of higher levels of activity, which would increase the potential for individuals to pick up Be Counted forms. Nevertheless, Bureau officials said they encouraged staff to take advantage of locations that were free of charge as well as those likely to have higher levels of activity. Activity levels at the Be Counted/QAC sites varied based on information from Bureau staff and our observations.
QAC representatives at 8 of 43 QAC-only sites visited told us that their sites had moderate to high levels of activity, while 12 of 43 QAC representatives told us their sites had low levels of activity. For example, a QAC representative at one facility in Phoenix and another in Atlanta said they had to frequently restock Be Counted forms and that they provided many people with assistance. Another QAC representative in Dallas said that he assisted up to 30 people in one day at the Be Counted/QAC site he staffed. Conversely, a QAC representative in Miami said that the LCOM was considering the site for closure because very few people visited the location and used the services. Similarly, a firefighter at a Dallas QAC site observed that the site was open for 11 days, that no one visited the site during this time, and that the box containing materials accompanying the questionnaires (i.e., pens and language reference documents) was unopened. Additionally, during a June debriefing, where QAC representatives discussed their experiences with Bureau officials, the QAC representatives commented on the problem of low activity at some Be Counted/QAC sites, according to Bureau officials. Preliminary data on forms returned and checked in also revealed changes in activity levels at Be Counted/QAC sites for 2010. For example, an average of 20 forms were returned and checked in from each Be Counted/QAC site in 2010, down from an average of 28 in 2000. Given that the operation was conducted over a 30-day period, that translates to less than 1 form per day per site. While this difference might reflect the fact that the address list in 2010 was better than in 2000 and that fewer households were missed, it also indicates that the operation was very resource intensive relative to the number of forms that were returned.
According to Bureau planning guidance, both local census office staff and partnership specialists were jointly responsible for identifying Be Counted/QAC sites, and local census office staff were responsible for monitoring the sites. However, a number of LCOMs we surveyed in May expressed concern about assistance from partnership specialists in identifying Be Counted/QAC sites. While 32 percent of 369 LCOMs who responded to our survey were generally or very satisfied with the assistance they received from partnership specialists for identifying sites, 57 percent of managers responding indicated that they were generally or very dissatisfied. Among the responses of those LCOMs who elaborated on their satisfaction level with the partnership program, one key theme that emerged was dissatisfaction with the Be Counted/QAC sites identified. For example, one LCOM commented that many of the Be Counted/QAC sites were in poor locations and were not in areas with the highest need. To the extent that the Be Counted/QAC sites were established in locations with low activity, the result was lower productivity and higher costs to the Bureau in the form of wages paid to census employees to staff and monitor the sites. There were also opportunity costs in monitoring a site with low activity when a site in a different location could have produced better results. The Be Counted/QAC program, in concept, may be a reasonable effort to include people who might have otherwise been missed by the census. However, it was also a resource-intensive operation in which relatively few questionnaires, on average per site, were generated, once the cost and effort of identifying, stocking, staffing, monitoring, and maintaining the sites are considered. More will be known about the effectiveness of the Be Counted/QAC program when the Bureau determines how many Be Counted/QAC forms resulted in adding people and new addresses to the census. 
Similar to SBE, the Bureau plans to assess the Be Counted/QAC program by examining costs, schedule, training, and staffing. Moving forward, it will also be important for the Bureau to explore ways to maximize the Be Counted/QAC program’s ability to increase the number of forms returned and checked in from the target population for the 2020 Census and, ultimately, determine whether fewer but more strategically placed sites could produce more cost-effective results. In 2010, the Bureau was better positioned to reach out to and enumerate HTC populations compared to 2000 in large part because its plans addressed a number of the challenges experienced in the previous decennial. For example, the Bureau focused more of its resources on targeting paid media efforts to HTC groups, employed partnership staff with a wider range of language capabilities, and developed a more comprehensive list of service-providing facilities that likely enhanced its capacity to enumerate people lacking conventional housing. Further, from an operational perspective, the Bureau generally implemented its HTC outreach and enumeration efforts consistent with its operational plans, completing them within schedule and budget. Overall, while the full impact of these efforts will not be known until after the Bureau completes various assessments, including an evaluation of the extent and nature of any under- and overcounts, the Bureau’s rigorous effort to raise awareness, encourage participation, and enumerate HTC populations likely played a key role in holding mail participation rates steady in 2010 for the overall population, a significant achievement given the various factors that were acting against an acceptable mail response in 2010. Still, certain aspects of the Bureau’s outreach and enumeration of HTC populations need attention. 
Key focus areas for outreach efforts include (1) ensuring the Bureau is using paid media efficiently to improve response rates, (2) improving the coordination between partnership and local census office staff to leverage opportunities to achieve a more accurate and complete count, (3) improving the partnership database to enhance its use as a management tool, and (4) making promotional materials available to partnership staff when they begin their work to improve their ability to develop relationships with partner organizations. For enumeration activities, by determining the factors that lead to the SBE staffing issues at some locations and revising site selection guidance for Be Counted/QAC sites based on visitation and other applicable data, the Bureau may increase the overall value of special enumeration activities. More generally, the Bureau invested more resources in reaching out to and enumerating HTC groups in 2010 but achieved the same overall participation rate as in 2000. This trend is likely to continue as the nation’s population gets larger, more diverse, and more difficult to count. As the Bureau looks toward the next national headcount, it plans to use the results of its evaluations for input into 2020 planning. At the same time, it will be important for the Bureau to go beyond that and use 2010 evaluation results to gain a better understanding of the extent to which the various special enumeration activities aimed at HTC groups produced a more complete and accurate census. More specifically, better information on the value added by each special enumeration activity could help the Bureau allocate its resources more cost effectively. This may include changing existing programs to increase efficiency or undertaking new special enumeration efforts altogether. 
To help improve the effectiveness of the Bureau’s outreach and enumeration efforts, especially for HTC populations, should they be used again in the 2020 Census, we recommend that the Secretary of Commerce require the Under Secretary for Economic Affairs as well as the Director of the U.S. Census Bureau to take the following seven actions.

To improve the Bureau’s marketing/outreach efforts:
- Use evaluation results, response rate, and other data to develop a predictive model that would inform decisions on how much and how best to allocate paid media funds for 2020.
- Develop mechanisms to increase coordination and communication between the partnership and local census office staff. Possible actions include offering more opportunities for joint training, establishing protocols for coordination, and more effectively leveraging the partnership contact database to better align partnership outreach activities with local needs.
- Improve the user-friendliness of the partnership database to help ensure more timely updates of contact information and enhance its use as a management tool.
- Ensure that promotional materials, including in-language materials for the partnership program, are available when partnership staff are first hired.

To improve some of the Bureau’s key efforts to enumerate HTC populations:
- Assess visitation, response rate, and other applicable data on Be Counted/QAC locations and use that information to revise site selection guidance for 2020.
- Determine the factors that led to the staffing issues observed during SBE and take corrective actions to ensure more efficient SBE staffing levels in 2020.
- Evaluate the extent to which each special enumeration activity improved the count of traditionally hard-to-enumerate groups and use the results to help inform decision making on spending for these programs in 2020.

On December 8, 2010, the Secretary of Commerce provided written comments on the draft report, which are reprinted in appendix I.
The Department of Commerce generally agreed with the overall findings and recommendations of the report. In addition, the department noted that its Economics and Statistics Administration (ESA) has management oversight responsibility for the Bureau and asked that we include ESA in our recommendation. We revised the report to reflect this comment. We are sending copies of this report to the Secretary of Commerce, the Director of the U.S. Census Bureau, the Under Secretary for Economic Affairs, and interested congressional committees. The report also is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Signora May, Assistant Director; Peter Beck; David R. Bobruff; Benjamin C. Crawford; Shaunessye Curry; Kathleen Drennan; Elizabeth Fan; Robert Gebhart; Guillermo Gonzalez; Thomas Han; Paul Hobart; Brian James; Paul Kinney; Elke Kolodinski; Kirsten B. Lauber; Veronica Mayhand; Karine McClosky; Catherine Myrick; Keith O’Brien; Michael Pahr; Melanie Papasian; Rudolfo Payan; Stacy Spence; Barbara Steel-Lowney; Travis Thomson; Cheri Y. Truett; Timothy Wexler; Monique B. Williams; Carla Willis; and Katherine Wulff made key contributions to this report.

2010 Census: Data Collection Operations Were Generally Completed as Planned, but Long-standing Challenges Suggest Need for Fundamental Reforms. GAO-11-193. Washington, D.C.: December 14, 2010.
2010 Census: Follow-up Should Reduce Coverage Errors, but Effects on Demographic Groups Need to Be Determined. GAO-11-154. Washington, D.C.: December 14, 2010.
2010 Census: Cooperation with Enumerators Is Critical to Successful Headcount. GAO-10-665T. Washington, D.C.: April 30, 2010.
2010 Census: Plans for Census Coverage Measurement Are on Track, but Additional Steps Will Improve Its Usefulness. GAO-10-324. Washington, D.C.: April 23, 2010.
2010 Census: Data Collection Is Under Way, but Reliability of Key Information Technology Systems Remains a Risk. GAO-10-567T. Washington, D.C.: March 25, 2010.
2010 Census: Operational Changes Made for 2010 Position the U.S. Census Bureau to More Accurately Classify and Identify Group Quarters. GAO-10-452T. Washington, D.C.: February 22, 2010.
2010 Census: Efforts to Build an Accurate Address List Are Making Progress, but Face Software and Other Challenges. GAO-10-140T. Washington, D.C.: October 21, 2009.
2010 Census: Census Bureau Continues to Make Progress in Mitigating Risks to a Successful Enumeration, but Still Faces Various Challenges. GAO-10-132T. Washington, D.C.: October 7, 2009.
2010 Census: Communications Campaign Has Potential to Boost Participation. GAO-09-525T. Washington, D.C.: March 23, 2009.
2010 Census: Fundamental Building Blocks of a Successful Enumeration Face Challenges. GAO-09-430T. Washington, D.C.: March 5, 2009.
2010 Census: Census Bureau Needs Procedures for Estimating the Response Rate and Selecting for Testing Methods to Increase Response Rate. GAO-08-1012. Washington, D.C.: September 30, 2008.
2010 Census: The Bureau’s Plans for Reducing the Undercount Show Promise, but Key Uncertainties Remain. GAO-08-1167T. Washington, D.C.: September 23, 2008.

| To overcome the long-standing challenge of enumerating hard-to-count (HTC) groups such as minorities and renters, the U.S. Census Bureau (Bureau) used outreach programs, such as paid advertising, and partnered with thousands of organizations to enlist their support for the census.
The Bureau also conducted Service-Based Enumeration (SBE), which was designed to count people who frequent soup kitchens or other service providers, and the Be Counted/Questionnaire Assistance Center (QAC) program, designed to count individuals who believed the census had missed them. As requested, GAO assessed how the design of these efforts compared to 2000 and the extent to which they were implemented as planned. GAO reviewed Bureau budget, planning, operational, and evaluation documents; observed enumeration efforts in 12 HTC areas; surveyed local census office managers; and interviewed Bureau officials. The Bureau better positioned itself to reach out to and enumerate HTC populations in 2010 in part by addressing a number of key challenges from 2000. The Bureau's outreach efforts were generally more robust compared to 2000. For example, compared to 2000, the Bureau used more reliable data to target advertising; focused a larger share of its advertising dollars on HTC groups, such as non-English-speaking audiences; and strengthened its monitoring abilities so that the Bureau was able to run additional advertising in locations where mail response rates were lagging. The Bureau also significantly expanded the partnership program by hiring about 2,800 partnership staff in 2010 compared to around 600 in 2000. As a result, staff were not spread as thin. The number of languages they spoke increased from 35 in 2000 to 145 for the 2010 Census. Despite these enhancements, the outreach efforts still faced challenges. For example, while most of the partnership staff GAO interviewed reported having mutually supportive relationships with local census offices, about half of the local census office managers surveyed were dissatisfied with the level of coordination, noting duplication of effort in some cases. Additionally, a tracking database that partnership staff were to use to help manage their efforts was not user-friendly, nor was it kept current.
The Bureau also improved the key enumeration programs aimed at HTC groups and the efforts were generally implemented as planned, but additional refinements could improve them for 2020. For example, the Bureau expanded SBE training by teaching staff how to enumerate all types of SBE facilities, which gave the Bureau more flexibility in scheduling enumerations, and advance visits helped enhance service providers' readiness for the enumeration. Nevertheless, while most local census office managers were satisfied with SBE staffing levels, pockets of dissatisfaction existed and observers noted what appeared to be a surplus of enumerators with little work to do in some locations. While overstaffing can lead to unnecessarily higher labor costs, understaffing can also be problematic because it can affect the accuracy of the overall count, and it will be important for the Bureau to review the results of SBE to staff SBE efficiently in 2020. For the Be Counted/QAC program, the Bureau addressed visibility and site selection challenges from 2000 by developing banners to prominently display site locations and hours of operation and updating site selection guidance. For 2010, the Bureau opened around 38,000 sites and completed the monthlong operation under budget. However, the Bureau experienced recurring challenges with ensuring that the sites were visible from street level and were in areas with potential for high levels of activity, and the overall effort was resource intensive relative to the average of 20 forms that were returned and checked in from each site. Moving forward, it will be important for the Bureau to explore ways to maximize the program's ability to increase the number of forms checked in for 2020. 
GAO recommends that the Bureau take steps to improve the effectiveness of its outreach and enumeration activities aimed at HTC groups, including developing a predictive model to better allocate paid advertising funds, improving coordination between partnership and local census staff, revisiting SBE staffing guidance, and ensuring Be Counted/QAC sites are more visible and optimally located. Commerce generally agreed with the overall findings and recommendations. |
DHS invests in major acquisition programs to develop capabilities intended to improve its ability to execute its mission. DHS generally defines major programs as those expected to cost at least $300 million over their respective life cycles, and many are expected to cost more than $1 billion. DHS Acquisition Management Directive 102-01 (AD 102) and DHS Instruction Manual 102-01-001 (Guidebook), which includes 12 appendixes, establish the department’s policies and processes for managing these major acquisition programs. DHS issued the initial version of AD 102 in 2008 in an effort to establish an acquisition management system that effectively provides required capability to operators in support of the department’s missions. AD 102 establishes that DHS’s Chief Acquisition Officer—currently the Under Secretary for Management (USM)—is responsible for the management and oversight of the department’s acquisition policies and procedures. The USM, Deputy Secretary, and Component Acquisition Executives (CAE) are the Acquisition Decision Authorities for DHS’s major acquisition programs. Table 1 identifies how DHS categorizes the 77 major acquisition programs it identified in 2011. The Acquisition Decision Authority is responsible for reviewing and approving the movement of DHS’s major acquisition programs through four phases of the acquisition life cycle at a series of five predetermined Acquisition Decision Events. These five Acquisition Decision Events provide the Acquisition Decision Authority an opportunity to assess whether a major program is ready to proceed through the life-cycle phases. The four phases of the acquisition life cycle, as established in AD 102, are:

1. Need phase: Department officials identify that there is a need, consistent with DHS’s strategic plan, justifying an investment in a new capability and the establishment of an acquisition program to produce that capability;

2. Analyze/Select phase: The Acquisition Decision Authority designates a qualified official to manage the program, and this program manager subsequently reviews alternative approaches to meeting the need and recommends a best option to the Acquisition Decision Authority;

3. Obtain phase: The program manager develops, tests, and evaluates the selected option; during this phase, programs may proceed through ADE 2B, which focuses on the cost, schedule, and performance parameters for each of the program’s projects, and ADE 2C, which focuses on low rate initial production issues; and

4. Produce/Deploy/Support phase: DHS delivers the new capability to its operators and maintains the capability until it is retired. This phase includes sustainment, which begins when a capability has been fielded for operational use; sustainment involves the supportability of fielded systems through disposal, including maintenance and the identification of cost reduction opportunities. This phase tends to account for up to 70 percent of life-cycle costs.

Figure 1 depicts the acquisition life cycle. An important aspect of the Acquisition Decision Events is the review and approval of key acquisition documents critical to establishing the need for a major program, its operational requirements, an acquisition baseline, and testing and support plans. AD 102—and the associated DHS Instruction Manual 102-01-001 and appendixes—provide more detailed guidance for preparing these documents than DHS’s predecessor policy. See table 2 for descriptions of the key acquisition documents requiring department-level approval before a program moves to the next acquisition phase. Level 2 programs’ life cycle cost estimates do not require department-level approval.
- Chief Financial Officer,
- Chief Procurement Officer,
- Chief Information Officer,
- Chief Human Capital Officer,
- Chief Administrative Services Officer,
- Chief Security Officer,
- CAE responsible for the program being reviewed, and
- User representatives from component(s) sponsoring the capability.

The Office of Program Accountability and Risk Management (PARM) is responsible for DHS’s overall acquisition governance process, supports the IRB, and reports directly to the USM. PARM, which is led by an executive director, develops and updates program management policies and practices, oversees the acquisition workforce, provides support to program managers, and collects program performance data. In March 2012, PARM issued its first Quarterly Program Accountability Report, which provided an independent evaluation of major programs’ health and risks. The department’s program management offices are responsible for planning and executing DHS’s individual programs within cost, schedule, and performance goals. The program managers provide the IRB key information by preparing required acquisition documents that contain critical knowledge about their respective programs, facilitating the governance process. Nearly all of DHS’s program management offices are located within 12 of the department’s component agencies, such as the Transportation Security Administration, or U.S. Customs and Border Protection. Within these components, CAEs are responsible for establishing acquisition processes and overseeing the execution of their respective portfolios. Additionally, under AD 102, the USM can delegate Acquisition Decision Authority to CAEs for programs with life-cycle cost estimates between $300 million and $1 billion. Figure 2 depicts the relationship between acquisition managers at the department, component, and program level.
The Office of Program Analysis and Evaluation (PA&E), within the Office of the Chief Financial Officer (OCFO), is responsible for advising the USM, among others, on resource allocation issues. PA&E coordinates with DHS’s Office of Policy on the department’s long-term strategic planning efforts, analyzing budget submissions, cost estimates, and resource constraints. PA&E also oversees the development of the Future Years Homeland Security Program (FYHSP). DHS is required to submit the FYHSP to Congress annually with each budget request. The FYHSP is DHS’s 5-year funding plan for programs approved by the Secretary that are to support the DHS strategic plan. The FYHSP provides a detailed account of time-phased resource requirements for each component, as well as programs’ cost estimates, milestones, and performance measures. Nearly all of the program managers we surveyed reported their programs had experienced significant challenges that increased the risk of poor outcomes, particularly cost growth and schedule slips. Sixty-eight of the 71 programs that responded to our survey reported that they experienced funding instability, faced workforce shortfalls, or their planned capabilities changed after initiation. Most program managers reported a combination of these challenges, as illustrated in figure 3. We have previously reported that these challenges increase the likelihood acquisition programs will cost more and take longer to deliver capabilities than expected. Although DHS lacks the reliable cost estimates and realistic schedules needed to accurately measure program performance, it has submitted some cost information to Congress, and PARM conducted an internal review of its major acquisition programs in March 2012. We used this information and our survey results to identify 42 programs that experienced cost growth, schedule slips, or both. Cost information DHS submitted to Congress provides insight into the magnitude of the cost growth for 16 of the 42 programs.
Using this information, we found total project costs increased from $19.7 billion in 2008 to $52.2 billion in 2011, an aggregate increase of 166 percent. See figure 4. We have previously reported that cost growth and schedule slips can lead to reduced capabilities, decreasing the value provided to the operator—as well as the value of the resources invested in the programs. This poor performance threatens the department’s ability to successfully field the capabilities it is pursuing. Prior to entering the Obtain phase, programs are to establish the specific capabilities they plan to develop to improve DHS’s ability to execute its mission. Forty-three survey respondents reported that their programs changed planned capabilities after the initiation of design and development activities, which occurs between ADE 2B and testing. We have previously found that both increases and decreases in planned capabilities are associated with cost growth and schedule slips. We have found that increasing planned capabilities can lead to cost growth or schedule slips because programs are more costly to change after they begin development activities. Alternatively, we have stated that programs may choose to decrease their planned capabilities in response to cost growth or schedule slips in an effort to maintain affordability or deliver certain capabilities when needed. At DHS, we found that more than half of the 43 programs that reported changing their capabilities had experienced cost growth or schedule slips, regardless of whether their planned capabilities increased, decreased, or both. See figure 5. The 43 survey respondents that reported their planned capabilities changed identified five key reasons for the changes. Nineteen of the 43 survey respondents reported more than one reason. See figure 6. 
Survey respondents identified operator input as the most common reason for increasing planned capabilities after the initiation of development efforts, even though officials at the department, component, and program levels all said operator input at the initiation of design and development is very useful. For example, in 2011, we reported that the U.S. Citizenship and Immigration Services’ Transformation program did not fully define its planned capabilities before it awarded a contract to develop a new system to enhance the adjudication of applications. After the contract was awarded, the program office worked with those officials most familiar with adjudication operations and discovered that the functions were more complex than expected. As a result, the program office revised the requirements, and the deployment date for key capabilities slipped from April 2011 to October 2012. Alternatively, DHS program managers identified funding availability as the most common reason for decreasing planned capabilities after the initiation of development efforts. In the past, we have stated that agencies may reduce planned capabilities in this manner when their programs experience cost growth. Decreasing planned capabilities in response to affordability concerns may be fiscally responsible, but as a result, operators may not receive the capability originally agreed upon to address existing capability gaps. DHS is required to establish out-year funding levels for programs annually in the FYHSP. Changes to planned out-year funding levels create funding instability, which we have previously found increases the risk of cost growth, schedule slips, and capability shortfalls. Sixty-one survey respondents reported that their programs have experienced funding instability, and we found that 44 of the 61 programs had also realized cost growth, schedule slips, or capability reductions.
Additionally, 29 survey respondents reported that their programs had to resequence the delivery of certain capabilities. For example, Coast Guard officials told us they deferred some of the HH-60 helicopter’s capabilities because of funding constraints across their portfolio of programs. The Coast Guard delayed delivery of dedicated radar to search the surface of the water in order to replace critical components, such as main rotor blades, as planned. Figure 7 identifies how program managers reported funding instability has affected their programs. Forty-five of the 61 survey respondents that reported their programs experienced funding instability also reported reasons for the funding instability. Twenty-two survey respondents reported more than one reason. See figure 8. Eighteen survey respondents reported that their program experienced a funding decrease because of another program’s funding needs. We have previously reported that agencies often change funding levels in this manner when they commit to more programs than they can afford. A PA&E official told us that DHS’s resource requirements exceed the department’s funding levels, and that the department has allowed major acquisition programs to advance through the acquisition life cycle without identifying how they will be funded. Furthermore, a PA&E official stated that DHS has not been able to determine the magnitude of its forthcoming funding gap because cost estimates are unreliable. The director of the department’s cost analysis division determined that only 12 major acquisition programs met most of DHS’s criteria for reliable cost estimates when it reviewed the components’ fiscal year 2013 budget submissions. In 2010, we reported that DHS officials had difficulty managing major programs because they lacked accurate cost estimates. 
Given the fiscal challenges facing the federal government, funding shortfalls may become an increasingly common challenge at DHS, leading to further cost growth that widens the gap between resource requirements and available funding. DHS acquisition policy establishes that each program office should be staffed with personnel who have appropriate qualifications and experience in key disciplines, such as systems engineering, logistics, and financial management. Fifty-one survey respondents reported that their programs had experienced workforce shortfalls—specifically a lack of government personnel—increasing the likelihood their programs will perform poorly in the future. We have previously reported that a lack of adequate staff in DHS program offices—both in terms of skill and staffing levels—increased the risk of insufficient program planning and contractor oversight, which is often associated with cost growth and schedule slips. Figure 9 below identifies the functional areas where DHS acquisition programs reported workforce shortfalls. We found that 29 of the 51 DHS programs that identified workforce shortfalls had also experienced cost growth or schedule slips. Officials told us that workforce shortfalls have led to insufficient program planning, hindering the development of key acquisition documents intended to inform senior-level decision making. For example, CAEs and program managers said that workforce shortfalls limited program management offices’ interaction with stakeholders and operators, and delayed or degraded test plans and cost estimates. In addition, a PARM official explained that DHS has had to rely on contractors to produce cost estimates because of workforce shortfalls, and the quality of these cost estimates has varied.
The USM has stated that properly staffing programs is one of DHS’s biggest challenges, and we have previously reported that the capacity of the federal government’s acquisition workforce has not kept pace with increased spending for increasingly complex purchases. PARM officials told us that the IRB’s program reviews include assessments of the program office workforce, but that the IRB considers staffing issues a relatively low priority, and we found the IRB has formally documented workforce-related challenges for only 11 programs. DHS acquisition policy reflects many key program management practices, and implementing it consistently would provide critical knowledge that would help leaders make better informed investment decisions when managing individual programs. This knowledge would help DHS mitigate the risks of cost growth and schedule slips resulting from funding instability, workforce shortfalls, and planned-capability changes. However, as of April 2012, the department had only verified that four programs documented all of the critical knowledge required to progress through the acquisition life cycle. In most instances, DHS leadership has allowed programs it has reviewed to proceed with acquisition activities without meeting these requirements. Officials explained that DHS’s culture has emphasized the need to rapidly execute missions more than sound acquisition management practices, and we have found that most of the department’s major programs are at risk of cost growth and schedule slips as a result. In addition, they lack the reliable cost estimates, realistic schedules, and agreed-upon baseline objectives that DHS acknowledges are needed to accurately track program performance, limiting DHS leadership’s ability to effectively manage those programs and provide information to Congress. DHS recognizes the need to implement its acquisition policy more consistently, but significant work remains.
In 2005, we reported that DHS established an investment review process that adopted many practices to reduce risk and increase the chances for successful outcomes. In 2010, we reported that AD 102 provided more detailed guidance for preparing key acquisition documents than the department’s predecessor policy. In October 2011, DHS updated the Guidebook and its appendixes, and we have found that it establishes a knowledge-based acquisition policy for program management that is largely consistent with key practices. A knowledge-based approach to capability development allows developers to be reasonably certain, at critical points in the acquisition life cycle, that their products are likely to meet established cost, schedule, and performance objectives. Implementing this approach consistently would give DHS leadership the information needed to make sound investment decisions, and it would help DHS address the significant challenges we identified across its acquisition programs: funding instability, workforce shortfalls, and planned-capability changes. Over the past several years, our work has emphasized the importance of obtaining key knowledge at critical points in major system acquisitions and, based on this work, we have identified eight key practice areas for program management. These key practice areas are summarized in table 3, along with our assessment of DHS’s acquisition policy. In our past work examining weapon acquisition issues and best practices for product development, we have found that leading commercial firms pursue an acquisition approach that is anchored in knowledge, whereby high levels of product knowledge are demonstrated by critical points in the acquisition process. See GAO-11-233SP. Legend: ● DHS policy reflects key practices; ◕ DHS policy substantially reflects key practices; ◑ DHS policy partially reflects key practices. We found that DHS’s acquisition policy generally reflects key program-management practices, including some intended to help develop knowledge at critical points in the acquisition life cycle.
Furthermore, the revised policy the department issued in October 2011 better reflects two key practice areas by bolstering exit criteria and taking steps to establish an adequate acquisition workforce. Specifically, the revised Guidebook and its appendixes require that refined cost estimates be reviewed at major milestones after the program baseline has been established, and used to determine whether a program has developed appropriate knowledge to move forward in the acquisition life cycle. These reviews can help reduce risk and the potential for unexpected cost and schedule growth. Additionally, the revised policy establishes that major program offices should be staffed with personnel with appropriate qualifications and experience in key acquisition disciplines. We have previously identified that the magnitude and complexity of the DHS acquisition portfolio demands a capable and properly trained workforce and that workforce shortfalls increase the risk of poor acquisition outcomes. The policy revisions could help mitigate this risk. However, there are three areas where DHS could further enhance acquisition oversight: The policy requires that DHS test technologies and manufacturing processes, but it does not require that 1) programs demonstrate technologies in a realistic environment prior to initiating development activities at the outset of the Obtain phase, or 2) manufacturing processes be tested prior to production. These practices decrease the risk that rework will be required, which can lead to additional cost growth and schedule slips. The policy requires that DHS establish exit criteria for programs moving to the next acquisition phase, and standardizes document requirements across all major programs, but it does not require that 1) exit criteria be quantifiable to the extent possible, or 2) consistent information be used across programs when approving progress within the Obtain phase, specifically at ADE 2B and 2C. 
These practices decrease the risk that a program will make an avoidable error because management lacks information needed to leverage lessons learned across multiple program reviews. The policy requires that program managers be certified at an appropriate level, but it does not state that they should remain with their programs until the next major milestone when possible. This practice decreases the risk that program managers will not be held accountable for their decisions, such as proceeding without reliable cost estimates or realistic schedules. PARM officials generally acknowledged DHS has opportunities to strengthen its program-management guidance. Officials reported that they are currently in the process of updating AD 102, which they plan to complete by the end of fiscal year 2012. They also plan to issue revisions to the associated guidebook and appendixes in phases. PARM officials told us that they plan to structure the revised acquisition policy by function, consolidating guidance for financial management, systems engineering, reporting requirements, and so forth. PARM officials anticipate that this organization will make it easier for users to identify relevant information as well as streamline the internal review process for future updates. DHS acquisition policy establishes several key program-management practices through document requirements. AD 102 requires that major acquisition programs provide the IRB documents demonstrating the critical knowledge needed to support effective decision making before progressing through the acquisition life cycle. For example, programs must document that they have assessed alternatives to select the most appropriate solution through a formal Analysis of Alternatives report, which must be approved by component-level leadership. Figure 10 identifies acquisition documents that must be approved at the department level and their corresponding key practice areas.
DHS acquisition policy requires these documents, but the department generally has not implemented its acquisition policy as intended, and in practice the department has not adhered to key program management practices. DHS’s efforts to implement the department’s acquisition policy have been complicated by the large number of legacy programs initiated before the department was created, including 11 programs that PARM officials told us were in sustainment when AD 102 was signed. We found that the department has only approved four programs’ required documents in accordance with DHS policy: the National Cybersecurity and Protection System, the Next Generation Network, the Offshore Patrol Cutter, and the Passenger Screening Program. Additionally, we found that 32 programs had none of the required documents approved by the department. See figure 11. Since 2008, DHS leadership—through the IRB or its predecessor body the Acquisition Review Board—has formally reviewed 49 of the 71 major programs that responded to our survey. It permitted 43 of those programs to proceed with acquisition activities without verifying the programs had developed the knowledge required for AD 102’s key acquisition documents. See figure 12. Officials from half of the CAE offices we spoke to reported that DHS’s culture has emphasized the need to rapidly execute missions more than sound acquisition management practices. PARM officials agreed, explaining that DHS has permitted programs to advance without department-approved acquisition documents because DHS had an operational need for the promised capabilities, but the department could not approve the documents in a timely manner. PARM officials explained that, in certain instances, programs were not capable of documenting knowledge, while in others, PARM lacked the capacity to validate that the documented knowledge was adequate. 
In 2008 and 2010, we reported that several programs were permitted to proceed with acquisition activities on the condition they complete key action items in the future. However, PARM officials told us that many of these action items were not addressed in a timely manner. Additionally, program managers reported that there has been miscommunication between DHS headquarters and program offices regarding implementation of the acquisition policy, and we found that DHS headquarters and program managers often had a different understanding of whether their programs were in compliance with AD 102. For example, DHS headquarters officials told us that 19 of the 40 programs that reported through our survey they had department-approved acquisition program baselines (APB) in fact did not. Because DHS has not generally implemented its acquisition policy, senior leaders lack the critical knowledge needed to accurately track program performance: (1) department-approved APBs, (2) reliable cost estimates, and (3) realistic schedules. Specifically, at the beginning of 2012, DHS leadership had approved APBs for less than one-third of the 63 programs we reviewed that are required to have one based on their progression through the acquisition life cycle. Additionally, we found that none of the programs with a department-approved APB also met DHS’s criteria for both reliable cost estimates and realistic schedules, which are key components of the APB. This raises questions about the quality of those APBs that have been approved, as well as the value of the DHS review process in practice. Figure 13 identifies how many programs currently have department-approved APBs, reliable cost estimates, and realistic schedules. The APB is a critical tool for managing an acquisition program.
According to DHS’s acquisition Guidebook, the program baseline is the agreement between program, component, and department level officials, establishing how systems will perform, when they will be delivered, and what they will cost. In practice, when the Acquisition Decision Authority approves a program’s APB, among other things, it is concurring that the proposed capability is worth the estimated cost. However, we found that DHS plans to spend more than $105 billion on programs lacking current, department-approved APBs. Specifically, when DHS submitted the FYHSP to Congress in 2011, it reported that 34 of the 43 programs lacking department-approved APBs were expected to cost $108.8 billion over their acquisition life cycles. DHS did not provide cost estimates for the other 9 programs because the data were unreliable. In addition to overall cost, schedule, and performance goals, the APB also contains intermediate metrics to measure a program’s progress in achieving those goals. These intermediate metrics allow managers to take corrective actions earlier in the acquisition life cycle. DHS’s lack of APBs, PARM officials explained, makes it more difficult to manage program performance. In March 2012, PARM reported that 32 programs had experienced significant cost growth or schedule slips in its internal Quarterly Program Accountability Report. However, DHS has only formally established that 8 of its programs have fallen short of their cost, schedule, or performance goals, because approximately three-quarters of the programs PARM identified lack the current, department-approved APBs needed to authoritatively measure performance. To accurately assess a program’s performance, managers need accurate cost and schedule information. However, DHS acquisition programs generally do not have reliable cost estimates and realistic schedules, as required by DHS policy.
In June 2012, the department reported to GAO that its senior leaders lacked confidence in the performance data they receive, hindering their efforts to manage risk and allocate resources. Additionally, only 12 program offices reported that they fully adhered to DHS’s scheduling guidance, which requires that programs sequence all activities, examine the effects of any delays, update schedules to ensure validity, and so forth. Eight of these programs lacked department-approved APBs. DHS’s lack of reliable performance data not only hinders its internal acquisition management efforts, but also limits Congressional oversight. Congress mandated the department submit the Comprehensive Acquisition Status Report (CASR) to the Senate and House Committees on Appropriations as part of the President’s fiscal year 2013 budget, which was submitted in February 2012. However, DHS did not submit the CASR until August 2012. Congress mandated DHS produce the CASR in order to obtain information necessary for in-depth congressional oversight, including life-cycle cost estimates, schedules, risk ratings, and out-year funding levels for all major programs. The CASR has the potential to greatly enhance oversight efforts by establishing a common understanding of the status of all major programs. In April 2012, PARM officials told us that DHS had begun to implement its acquisition policy in a more disciplined manner. They told us that they had adequate capacity to review programs, and would no longer advance programs through the acquisition life cycle until DHS leadership verified the programs had developed critical knowledge. For example, in February 2012, the IRB denied a request from the BioWatch Gen 3 program—which is developing a capability to detect airborne biological agents—to solicit proposals from contractors because its draft APB was not valid.
PARM officials said they are using a risk-based approach to prioritize the approval of the department’s APBs. Specifically, they explained that one of their fiscal year 2011 initiatives was to attain department-level approval of APBs for all Level 1 programs in the Obtain phase of the acquisition life cycle. However, we found only 8 of the 19 programs PARM said fell into this category had current, department-approved APBs as of September 2012. In an effort to improve the consistency of performance data reported by program managers, PARM officials stated that they are establishing scorecards to assess cost estimates and standard work breakdown structures for IT programs. The PARM officials also explained that CAEs’ performance evaluations now include an assessment of the completeness and accuracy of performance data reported for their respective programs. However, DHS must overcome significant challenges in order to improve the reliability of performance data and meet key requirements in the department’s acquisition policy. For example, department and component-level officials told us that program managers do not report on their programs in a consistent manner. Additionally, DHS officials told us that they lack cost estimating capacity throughout the department and that they must rely heavily on contractors, which do not consistently provide high-quality deliverables. In August 2012, a PARM official stated that DHS was currently in the process of hiring eight additional government cost estimators to support programs. DHS acquisition policy does not fully reflect several key portfolio-management practices, such as allocating resources strategically, and DHS has not yet reestablished an oversight board to manage its investment portfolio across the department. As a result, DHS has largely made investment decisions on a program-by-program and component-by-component basis.
The widespread risk of poorly understood cost growth, coupled with the fiscal challenges facing the federal government, makes it essential that DHS allocate resources to its major programs in a deliberate manner. DHS plans to develop stronger portfolio-management policies and processes, but until it does so, DHS programs are more likely to experience additional funding instability in the future, which will increase the risk of further cost growth and schedule slips. These outcomes, combined with a tighter budget, could prevent DHS from developing needed capabilities. In our past work, we have found that successful commercial companies use a disciplined and integrated approach to prioritize needs and allocate resources. As a result, they can avoid pursuing more projects than their resources can support, and better optimize the return on their investment. This approach, known as portfolio management, requires companies to view each of their investments as contributing to a collective whole, rather than as independent and unrelated. With this enterprise perspective, companies can effectively (1) identify and prioritize opportunities, and (2) allocate available resources to support the highest priority—or most promising—opportunities. Over the past several years, we have examined the practices that private and public sector entities use to achieve a balanced mix of new projects, and based on this work, we have identified four key practice areas for portfolio management, summarized in table 4, along with our assessment of DHS acquisition policy. We found that DHS’s acquisition policy reflects some key portfolio-management practices. DHS has not designated individual portfolio managers, but it requires that the department’s Chief Acquisition Officer—currently the USM—be supported by the IRB, which includes officials representing key functional areas, such as budget, procurement, IT, and human capital.
DHS’s acquisition policy also establishes that requirements, acquisition, and budget processes should be connected to promote stability. However, as acknowledged by DHS officials, the policy does not reflect several other key portfolio-management practices: The policy does not empower portfolio managers to decide how best to invest resources. This practice increases the likelihood resources will be invested effectively, and that portfolio managers will be held accountable for outcomes. The policy does not establish that investments should be ranked and selected using a disciplined process. This practice increases the likelihood the portfolio will be balanced with risk spread across products. The policy does not establish that (1) resource allocations should align with strategic goals, or (2) the investment review policy should use long-range planning. These practices increase the likelihood that the right amount of funds will be delivered to the right projects, maximizing return on investments. The policy does not require portfolio reviews (1) annually to consider proposed changes, (2) as new opportunities are identified, or (3) whenever a program breaches its objectives. These practices provide opportunities for leaders to increase the value of investments, determine whether or not the investments are still relevant and affordable, and help keep programs within cost and schedule targets. PARM officials acknowledge that the department does not currently have a policy that addresses these key portfolio-management practices. Further, they told us that there has been less focus on portfolio management than program management to date because the acquisition process is still relatively immature. As a result, DHS largely makes investment decisions on a program-by-program and component-by-component basis.
In our work at the Department of Defense, we have found this approach hinders efforts to achieve a balanced mix of programs that are affordable and feasible and that provide the greatest return on investment. PARM officials anticipate that DHS will improve its portfolio-management guidance in the future by formalizing its proposed Integrated Investment Life Cycle Model (IILCM). In January 2011, DHS presented a vision of the IILCM as a means to better integrate investment management functions, including requirements development, resource allocation, and program governance. DHS explained that the IILCM would ensure mission needs drive investment decisions and establish a common framework for monitoring and assessing the department’s investments. The IILCM would be implemented through the creation of several new department-level councils, as illustrated in figure 14, which would identify priorities and capability gaps. In 2003, DHS established the Joint Requirements Council (JRC) to identify crosscutting opportunities and common requirements among DHS components, and help determine how DHS should use its resources. However, as we have previously reported, the JRC stopped meeting in 2006. In 2008, we recommended that the JRC be reinstated, or that DHS establish another joint requirements oversight board. At that time, DHS officials recognized that strengthening the JRC was a top priority. The department has proposed the creation of a Capabilities and Requirements Council (CRC) to serve in a similar role as the JRC, but the CRC is not yet established. In the absence of a JRC, or the proposed CRC, DHS budget officials explained it is difficult to develop a unified strategy to guide trade-offs between programs because of the diversity of the department’s missions. Poor program outcomes, coupled with a tighter budget, could prevent DHS from developing needed capabilities.
In our work at the Department of Defense, we have found that agencies must prioritize investments, or programs will continually compete for funding by promising more capabilities than they can deliver while underestimating costs. We have also found that, in such an environment, success was measured in terms of keeping a program alive rather than efficiently delivering the capabilities needed. It appears the lack of prioritization is affecting DHS in the same way. As discussed earlier in our assessment of program challenges, 18 of the department’s programs reported DHS decreased their out-year funding levels because of another program’s funding needs, and 61 programs reported they experienced some form of funding instability. Until recently, the responsibility for balancing portfolios has fallen on components. However, DHS policy officials noted that component-level officials have a relatively limited perspective focused on those programs under their authority, making it more difficult to ensure the alignment of mission needs to department-level goals. Additionally, component-level officials can only make trade-offs across the portion of the DHS portfolio that falls under their purview, limiting opportunities to increase the department’s return on its investments. The USM and PARM officials have stated they recognize the value of portfolio management, and they have taken some steps to fill the gap left without a functioning JRC or CRC. A PARM official stated that, starting in 2012, PARM is collaborating with the Offices of the Chief Information, Financial, and Procurement Officers, as well as the Office of Policy, to conduct portfolio reviews from a functional, cross-component perspective. In the past, PARM’s portfolio reviews focused on each component individually.
This new functional approach is establishing portfolios based on departmentwide missions, such as domain awareness or screening, and PARM officials intend to produce trade-off recommendations for prioritizing funding across different components. They also intend to use functional portfolio reviews to provide greater insight into the effects of funding instability, and the USM has stated that the portfolio reviews will inform the department’s fiscal year 2014 budget. DHS intends for the proposed CRC to make trade-offs across the functional portfolios. PARM’s Quarterly Program Accountability Report (QPAR), issued in March 2012, also has the potential to inform DHS’s portfolio management efforts. In developing the QPAR, PARM used a standardized set of five criteria to measure the value of each program: mission alignment, architectural maturity, capability gap, mission criticality, and DHS benefit. This allowed PARM to identify 48 high-value and 13 low-value programs. However, the QPAR does not recommend using the information to prioritize resource allocations, which would address a key portfolio management practice. Further, DHS’s widespread lack of department-approved Mission Need Statements (MNS) undermines efforts to improve portfolio management and prioritize investments. The MNS links capability gaps to the acquisitions that will fill those gaps, making it a critical tool for prioritizing programs. The MNS also provides formal executive-level acknowledgment that there is a mission need justifying the allocation of DHS’s limited resources. However, only about 40 percent of DHS’s major acquisition programs have a department-approved MNS. DHS has introduced seven initiatives that could improve acquisition management by addressing longstanding challenges we have identified—such as funding instability and acquisition workforce shortfalls—which DHS survey respondents also identified in 2012.
Implementation plans are still being developed for all of these initiatives, and DHS is still working to address critical issues, particularly capacity questions. Because of this, it is too early to determine whether the DHS initiatives will be effective, as we have previously established that agencies must sustain progress over time to address management challenges. DHS is also pursuing a tiered-governance structure that it has begun to implement for IT acquisitions. Before the department can regularly delegate ADE decision authority through this tiered-governance structure, DHS must successfully implement its seven acquisition management initiatives and apply its knowledge-based acquisition policy on a more consistent basis to reduce risks and improve program outcomes. In 2005, we identified acquisition management as a high-risk area at DHS. Since then, we have issued multiple reports identifying acquisition management challenges. In 2008, we made several recommendations intended to help DHS address those challenges, and in September 2010, we provided DHS a list of specific acquisition management outcomes the department must achieve to help address the high-risk designation. This list largely drew from our past recommendations, and stressed that the department must implement its knowledge-based acquisition policy consistently. DHS has generally concurred with our recommendations, but still faces many of the same challenges we have previously identified. In 2011, DHS began to develop initiatives to address these challenges, and DHS has continued to evolve these plans in 2012. In January 2011, DHS produced the initial iteration of its Integrated Strategy for High Risk Management in order to measure progress in addressing acquisition management challenges we had identified, as well as financial management, human capital, IT, and management integration issues. The department subsequently produced updates in June 2011, December 2011, and June 2012. 
These updates present the department’s progress in developing and implementing its initiatives. Additionally, in December 2011, DHS issued the Program Management and Execution Playbook (Playbook), which expounded on some of those initiatives, and introduced a vision for a “more mature, agile, and effective process for program governance and execution.” Figure 15 identifies seven key DHS initiatives and how they correspond to acquisition management challenges we have identified. As envisioned, the DHS initiatives would better position the department to implement its knowledge-based acquisition policy on a more consistent basis to reduce risks and ultimately improve individual program outcomes. The initiatives would also help address challenges identified by survey respondents in 2012, particularly funding instability and acquisition workforce shortfalls. Additionally, the IILCM would enhance DHS’s ability to effectively manage its acquisition portfolio as a whole. DHS has made progress implementing some of the initiatives intended to address the challenges we have identified. In June 2012, DHS reported that all of its components had an approved CAE in place and the Procurement Staffing Model had been completed. In August 2012, DHS told us that eight Centers of Excellence had been chartered. However, from January 2011 to June 2012, the schedules for four of the seven initiatives slipped by at least 6 months, including the schedule for the IILCM, which slipped by a year. In March 2012, an official responsible for the IILCM initiative stated that many acquisition officials throughout the department do not yet understand the intended benefits of the IILCM. Thirty-two survey respondents reported that they were not at all familiar with the initiative, as opposed to nine that reported they were very familiar with the IILCM. Additionally, officials from three CAE offices, including two CAEs, told us that they were not familiar with the IILCM. 
Previously, we have reported that it is important to involve employees and obtain their ownership when transforming organizations. DHS has a diverse, critical, and challenging mission that requires it to respond to an ever-evolving range of threats. Given this mission, it is important that DHS maintain an agile and flexible management approach in its day-to-day operations. However, DHS must adopt a more disciplined and systematic approach for managing its major investments, which are intended to help meet critical mission needs. DHS has taken some steps to improve investment management, but most of its major acquisition programs continue to cost more than expected, take longer to deploy than planned, or deliver less capability than promised. These outcomes are largely the result of DHS's lack of adherence to key knowledge-based program management practices, even though many are reflected in the department's own acquisition policy. DHS leadership has authorized and continued to invest in major acquisition programs even though the vast majority of those programs lack foundational documents demonstrating the knowledge needed to help manage risks and measure performance. This limits DHS's ability to proactively identify and address the challenges facing individual programs. Further, although the department's acquisition policy contains many key practices that help reduce risks and increase the chances for successful outcomes, the policy does not include certain program management practices that could further enhance acquisition management. For example, the policy does not require that programs demonstrate technologies in a realistic environment prior to initiating development activities, or that exit criteria be quantifiable to the extent possible. Cost growth and schedule slips at the individual program level complicate DHS's efforts to manage its investment portfolio as a whole.
When programs encounter setbacks, the department has often redirected funding to troubled programs at the expense of others, which in turn are more likely to struggle. Additionally, DHS acquisition policy does not fully reflect key portfolio-management practices that would help improve investment management across the department. For example, the policy does not empower portfolio managers to invest resources in a disciplined manner or establish that investments should be ranked and selected using a disciplined process. DHS acknowledges the importance of having strong portfolio-management practices. However, DHS does not have a process to systematically prioritize its major investments to ensure that the department’s acquisition portfolio is consistent with DHS’s anticipated resource constraints, which is particularly important because of the diversity of the department’s missions. Since 2008, we have emphasized the need for DHS to re-establish an oversight board dedicated to addressing portfolio management challenges. DHS has produced plans to establish such a board, but the concept is still under development. It is essential that DHS take a more disciplined acquisition management approach moving forward, particularly as the department must adjust to a period of governmentwide funding constraints. Without greater discipline, decisionmakers will continue to lack critical information and the department will likely continue to pay more than expected for less capability than promised, which will ultimately hinder DHS’s day-to-day operations and its ability to execute its mission. Further, Congress’s ability to assess DHS funding requests and conduct oversight will remain limited. To its credit, DHS has undertaken a variety of initiatives over the past two years designed to address the department’s longstanding acquisition management challenges, such as increasing acquisition management capabilities at the component-level. 
However, more disciplined program and portfolio management at the department-level is needed before DHS can regularly delegate major milestone decision authority to component-level officials. Widespread challenges—including funding instability and acquisition workforce shortfalls—cost growth, and schedule slips indicate how much further DHS must go to improve acquisition outcomes. We recommend that the Secretary of Homeland Security direct the Under Secretary for Management to take the following five actions to help mitigate the risk of poor acquisition outcomes and strengthen the department's investment management activities:

Modify DHS acquisition policy to more fully reflect the following program management practices: require that (1) programs demonstrate technologies in a realistic environment prior to initiating development activities, and (2) manufacturing processes be tested prior to production; require that (1) exit criteria be quantifiable to the extent possible, and (2) consistent information be used across programs at ADE 2B and 2C; and state that program managers should remain with their programs until the next major milestone when possible;

Modify DHS acquisition policy to more fully reflect the following portfolio management practices: empower portfolio managers to decide how best to invest resources; establish that investments should be ranked and selected using a disciplined process; establish that (1) resource allocations should align with strategic goals, and (2) the investment review policy should use long-range planning; and require portfolio reviews (1) annually to consider proposed changes, (2) as new opportunities are identified, and (3) whenever a program breaches its objectives;

Ensure all major acquisition programs fully comply with DHS acquisition policy by obtaining department-level approval for key acquisition documents before approving their movement through the acquisition life cycle;

Once the department's acquisition programs comply with DHS acquisition policy,
prioritize major acquisition programs departmentwide and ensure that the department's acquisition portfolio is consistent with DHS's anticipated resource constraints; and Clearly document that department-level officials should not delegate ADE decision authority to component-level officials for programs lacking department-approved APBs or not meeting agreed-upon cost, schedule, and performance thresholds. DHS provided us with written comments on a draft of this report. In its comments, DHS concurred with all five of our recommendations and noted that two should be closed based on actions taken. The department's written comments are reprinted in appendix V. DHS also provided technical comments that we incorporated into the report as appropriate. DHS identified specific actions the department would take to address three of our recommendations. DHS stated that it was in the process of revising its policy to more fully reflect key program management practices. Additionally, DHS stated that it would continue to mature and solidify the portfolio review process over the next few years, and that it would revise its policy to reflect this process. DHS anticipates that this effort will also help the department prioritize its major acquisition programs departmentwide, and help ensure that the department's acquisition portfolio is consistent with anticipated resource constraints. DHS concurred with and requested we close our recommendation that the department ensure all acquisition programs fully comply with DHS acquisition policy by obtaining department-level approval for key acquisition documents before approving their movement through the acquisition life cycle. DHS stated that, in effect, its executive review board is approving a program's documents when it advances the program, thus satisfying this recommendation.
As we noted in our report, DHS officials told us in April 2012 that the department has begun to implement its acquisition policy in a more disciplined manner and that it will no longer advance programs through the acquisition life cycle until DHS leadership verifies the programs have developed critical knowledge. However, it would be premature to close this recommendation until DHS demonstrates, over time, the consistent verification of the critical knowledge captured in key documents, especially as we found that nearly all of the department's major acquisition programs lack at least some of these acquisition documents. DHS also concurred with and requested we close our recommendation that the department clearly document that department-level officials should not delegate ADE decision authority to component-level officials for programs lacking department-approved APBs or not meeting agreed-upon cost, schedule, and performance thresholds. DHS stated that it amended AD 102 to clarify that decision authority for any program that breaches an approved APB's cost, schedule or performance parameters will not be delegated to component-level officials, thus satisfying this recommendation. However, the amendment DHS provided does not include this language or clearly document the department's stated position. For this reason, it would be premature to close this recommendation at this time. In addition to commenting on our recommendations, the department made a number of observations on our draft report. For example, DHS stated that the report references many practices that occurred prior to the time period of the audit, and that the department has made measurable progress on a number of fronts. While we reviewed investment management activities going back to November 2008 to coincide with the issuance of AD 102, we also accounted for progress made through August 2012 by assessing ongoing DHS initiatives intended to address investment management challenges in the future.
DHS also noted that our survey of 71 programs captured valuable information, but suggested the survey data cannot be generalized and expressed concern that it would be used as the basis for a recommendation. To clarify, none of the recommendations in this report are based on the survey data. In the absence of reliable program data, we surveyed program managers to obtain their perspectives on challenges facing the department’s acquisition programs, and we obtained responses from 92 percent of the major acquisition programs DHS identified in 2011. DHS noted that programs can experience cost growth and schedule slips without a “breach.” We recognize the validity of this point and our findings are consistent with this position. DHS incorrectly suggested that our data sources for quantifying cost growth – the Future Years Homeland Security Programs (FYHSP) issued in 2008 and 2011 – did not consistently account for costs beyond the initial five-year period. However, these two FYHSPs aggregated funding levels for each program to produce a total project cost. To measure total project cost growth for the 16 programs, as depicted in figure 4, we compared the total project costs reported in the 2008 FYHSP to the total project costs reported in the 2011 FYHSP. Thus, we measured changes in total project costs, not just costs over two different five-year periods. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until September 19, 2012. At that time, we will send copies to the Secretary of Homeland Security. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix VI. The objectives of this review were to assess the Department of Homeland Security’s (DHS) acquisition management activities. Specifically, we assessed the extent to which: (1) DHS’s major acquisition programs face challenges that increase the risk of poor outcomes; (2) DHS has policies and processes in place to effectively manage individual acquisition programs; (3) DHS has policies and processes in place to effectively manage its portfolio of acquisition programs as a whole; and (4) DHS has taken actions to resolve the high-risk acquisition management issues we have identified in previous reports. To answer these questions, we reviewed 77 of the 82 programs DHS included in its fiscal year 2011 Major Acquisition Oversight List (MAOL), which identified each program the department designated a major acquisition in 2011. We excluded 5 programs that were canceled in 2011; these are identified in appendix IV. The 77 selected programs were sponsored by 12 different components and departmental offices. To determine the extent to which major DHS acquisition programs face challenges increasing the risk of poor outcomes, we surveyed the program managers for all 77 programs, and received usable responses from 71 programs (92 percent response rate). Appendix III presents the survey questions we asked, and summarizes the responses we received. The web-based survey was administered from January 12, 2012, to March 30, 2012. Respondents were sent an e-mail invitation to complete the survey on a GAO web server using a unique username and password. During the data collection period, nonrespondents received a reminder e-mail and phone call. Because this was not a sample survey, it has no sampling errors. 
The practical difficulties of conducting any survey may also introduce nonsampling errors, such as difficulties interpreting a particular question, which can introduce unwanted variability into the survey results. We took steps to minimize nonsampling errors by pretesting the questionnaire in person with program management officials for five different programs, each in a different component. We conducted pretests to make sure that the questions were clear and unbiased, the data and information were readily obtainable, and that the questionnaire did not place an undue burden on respondents. Additionally, a senior methodologist within GAO independently reviewed a draft of the questionnaire prior to its administration. We made appropriate revisions to the content and format of the questionnaire after the pretests and independent review. All data analysis programs used to generate survey results were independently verified for accuracy. To determine the extent to which major DHS acquisition programs face challenges increasing the risk of poor outcomes, we also reviewed the 2008 and 2011 versions of the Future Years Homeland Security Program (FYHSP), all acquisition decision memoranda documenting DHS executive review board decisions from November 2008 to April 2012, the Office of Program Accountability and Risk Management's (PARM) initial Quarterly Program Accountability Report (QPAR), issued March 2012, and other management memos identifying available program-performance data. The survey results and documentation review allowed us to identify program performance, and the reasons for any poor performance. We also interviewed individuals at the component and department-level to enhance our understanding of common challenges. At the component level, we interviewed six of the eight Component Acquisition Executives who had been designated by the USM, and interviewed representatives of the remaining two.
At the department level, we interviewed policy, budget, and acquisition oversight officials, including the Deputy Assistant Secretary for the Office of Strategic Plans, the department’s Chief Information Officer, the Executive Director of PARM, and the Director of Program Analysis and Evaluation (PA&E). These officials provided a strategic perspective on program management challenges, and shared valuable insights regarding the limitations of available program performance data. Based on their input, we chose to use FYHSP data to calculate cost growth for individual programs where possible because the document is provided to Congress and constitutes DHS’s most authoritative, out-year funding plan. To determine the extent to which DHS policies and processes are in place to effectively manage individual acquisition programs, as well as the department’s acquisition portfolio as a whole, we identified key acquisition management practices and assessed the extent to which DHS policies and processes reflected those practices. We identified the key practices through a review of previous GAO reports, which are listed in appendix II. We compared DHS Acquisition Directive 102-01 (AD 102), an associated guidebook—DHS Instruction Manual 102-01-001—and the guidebook’s 12 appendixes to those key practices, and identified the extent to which they were reflected in the department’s acquisition policy using a basic scoring system. If the DHS policy reflected a particular key practice, we assigned the policy a score of 5 for that practice. If the policy did not reflect the key practice, we assigned it a score of 1. We then took the average score for all the key practices in a particular area—as identified in appendix II—to establish an overall score for each key practice area. We concluded that key practice areas that scored a 5 were reflected in the policy, scored a 4 were substantially reflected, scored a 3 were partially reflected, and scored a 2 were minimally reflected. 
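The scoring approach described above can be sketched in a few lines of code. This is an illustrative reconstruction, not GAO's actual tooling: the mapping of a score of 1 to "not reflected" is implied rather than stated, and rounding fractional averages to the nearest whole score is our assumption, since the report does not say how non-integer averages were categorized.

```python
# Sketch of the key-practice scoring system: each practice in an area is
# scored 5 if the DHS policy reflects it and 1 if it does not, and the
# area's overall score is the average across its practices.
REFLECTED, NOT_REFLECTED = 5, 1

LABELS = {
    5: "reflected",
    4: "substantially reflected",
    3: "partially reflected",
    2: "minimally reflected",
    1: "not reflected",  # implied lower bound; not named in the report
}

def area_score(practice_flags):
    """practice_flags: list of booleans, True if the policy reflects that practice."""
    scores = [REFLECTED if flag else NOT_REFLECTED for flag in practice_flags]
    return sum(scores) / len(scores)

def area_label(practice_flags):
    # Rounding to the nearest whole score is an assumption on our part.
    return LABELS[round(area_score(practice_flags))]

# Example: a key practice area with four practices, three reflected in AD 102.
print(area_score([True, True, True, False]))  # (5+5+5+1)/4 = 4.0
print(area_label([True, True, True, False]))  # "substantially reflected"
```

Under this sketch, an area with all practices reflected averages 5.0 ("reflected"), while an area with one of four reflected averages 2.0 ("minimally reflected"), matching the categories defined in the text.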
We subsequently met with PARM officials to discuss our analysis, identify relevant sections of the policy that we had not yet accounted for, and solicit their thoughts on those key practices that were not reflected in the policy. In order to assess DHS’s processes for implementing its policy, we surveyed program managers, and interviewed component and department-level officials. We also reviewed DHS’s plans for the Integrated Investment Life Cycle Model (IILCM), which is being designed to better integrate the department’s investment management functions. Further, we reviewed all acquisition decision memoranda documenting DHS executive review board decisions from November 2008 to April 2012, the March 2012 QPAR, and other management memos identifying available program-performance data, and any limitations of that data. To determine the extent to which DHS has taken actions to resolve the high-risk acquisition management issues we have identified in previous reports and this audit, we reviewed the first three versions of the DHS Integrated Strategy for High Risk Management—issued in January, June, and December 2011. We also reviewed the DHS Program Management and Execution Playbook, issued in December 2011. We identified initiatives intended to improve acquisition management, the department’s progress in implementing those initiatives, and enduring challenges confronting the department. We also surveyed program managers, and interviewed component and department-level officials to obtain their perspectives on the initiatives. We conducted this performance audit from August 2011 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To determine the extent to which the Department of Homeland Security (DHS) has policies and processes in place to effectively manage individual acquisition programs, and the department’s acquisition portfolio as a whole, we identified key acquisition management practices established in our previous reports examining DHS, the Department of Defense, NASA, and private sector organizations. The specific program- and portfolio-management practices, as well as the reports where we previously identified the value of those practices, are presented below. The following list identifies several key practices that can improve outcomes when managing a portfolio of multiple programs. Information Technology: Critical Factors Underlying Successful Major Acquisitions. GAO-12-7. Washington, D.C.: October 21, 2011. Acquisition Planning: Opportunities to Build Strong Foundations for Better Services Contracts. GAO-11-672. Washington, D.C.: August 9, 2011. NASA: Assessments of Selected Large-Scale Projects. GAO-11-239SP. Washington, D.C.: March 3, 2011. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-11-233SP. Washington, D.C.: March 29, 2011. Defense Acquisitions: Strong Leadership Is Key to Planning and Executing Stable Weapon Programs. GAO-10-522. Washington, D.C.: May 6, 2010. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010. Defense Acquisitions: Many Analyses of Alternatives Have Not Provided a Robust Assessment of Weapon Systems Options. GAO-09-665. Washington, D.C.: September 24, 2009. Department of Homeland Security: Billions Invested in Major Programs Lack Appropriate Oversight. GAO-09-29. Washington, D.C.: November 18, 2008. GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. 
Washington, D.C.: March 2009. Defense Acquisitions: Sound Business Case Needed to Implement Missile Defense Agency's Targets Program. GAO-08-1113. Washington, D.C.: September 26, 2008. Defense Acquisitions: A Knowledge-Based Funding Approach Could Improve Major Weapon System Program Outcomes. GAO-08-619. Washington, D.C.: July 2, 2008. Defense Acquisitions: Realistic Business Cases Needed to Execute Navy Shipbuilding Programs. GAO-07-943T. Washington, D.C.: July 24, 2007. Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD's Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007. Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 30, 2005. NASA's Space Vision: Business Case for Prometheus 1 Needed to Ensure Requirements Match Available Resources. GAO-05-242. Washington, D.C.: February 28, 2005. Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity. GAO-04-394G. Washington, D.C.: March 2004. Executive Guide: Leading Practices in Capital Decision-Making. GAO/AIMD-99-32. Washington, D.C.: December 1998. To help determine the extent to which major Department of Homeland Security (DHS) acquisition programs face challenges increasing the risk of poor outcomes, we surveyed the program managers for all 77 programs, and received usable responses from 71 programs (92 percent response rate). The web-based survey was administered from January 12, 2012, to March 30, 2012. We present the survey questions we asked and summarize the responses we received below. 3. In what phase(s) of the DHS Acquisition Directive (AD) 102 acquisition lifecycle is your program currently? (select all that apply) 1. Need (Prior to Acquisition Decision Event (ADE) 1) 2. Analyze/Select (Between ADE 1 and ADE 2) 4.
Production/deploy/support (Post ADE 3)
5. Which of the following DHS-provided opportunities, if any, has your program management office used to understand DHS acquisition guidance, AD 102; and if used, how useful was the opportunity, if at all? (Opportunities asked about were training sessions on AD 102 hosted by DHS headquarters; manuals and templates for implementing AD 102 provided by DHS headquarters; and direct support for your program from the DHS Acquisition Program Management Division (APMD)/Program Analysis and Risk Management (PARM).)
6. How clear or unclear is the DHS AD 102 acquisition guidance and framework for managing the following types of acquisitions?
7. How clear or unclear is DHS AD 102 acquisition guidance, including the guidebook and appendices, regarding each of the following?
8. How clear or unclear is DHS AD 102 acquisition guidance, including the guidebook and appendices, on how to develop each of the following key acquisition documents (e.g. Operational Requirements Document (ORD), Test and Evaluation Master Plan (TEMP))?
9. How long is the average component and DHS review period for key acquisition documents required by AD 102 (e.g. MNS, ORD, LCCE and APB)?
10. After an Acquisition Review Board (ARB) review, how adequately does an Acquisition Decision Memo (ADM) communicate action items?
11. How has the introduction of AD 102 helped or hindered your ability to manage your program's cost and schedule and the overall acquisition program?
12. If you would like to elaborate on any of your previous responses regarding the clarity and/or implementation of DHS acquisition guidance (AD 102), please use the following space.
13. Does your program have a DHS-approved Acquisition Program Baseline (APB)?
13a. If your program does not have a DHS-approved APB, please explain why it does not have one in the box below.
14. How does your program's current projected cost compare against its DHS-approved APB?
15. How does your program's current projected schedule compare to its DHS-approved APB?
16. How do your program's current planned system capabilities compare to its DHS-approved APB?
17. How frequently, if at all, does your program management office use each of the following performance metrics to monitor your program's progress?
22. When setting operational requirements, which of the following processes best describes your program's efforts to consider alternatives at the program level?
26. Which of the following are reasons your program's KPPs have changed or been redefined since development activities began (ADE 2A)?
27. Since your program's design and development activities began (ADE 2A), how have each of the following factors affected your planned capabilities, if at all?
29. Prior to the initiation of low-rate initial production (ADE 2C), how many reliability goals were met, if any, by production-representative prototypes demonstrated in the intended environment?
30. Has the program used an independent testing authority?
32. Does your program use a five-year funding plan to project resource needs?
33. Did your program's funding levels in each of the following budget documents meet your program's required funding needs as reflected in your APB (e.g. at program start, the component's commitment in a Resource Allocation Plan (RAP) and DHS's commitment in a Resource Allocation Decision (RAD))?
35. If your program has experienced funding instability (e.g. a change in planned out-year funding from one five-year funding plan to the next five-year funding plan), did it affect your program in each of the following ways?
36. If a gap existed between FY11 enacted funding and FY11 required funding, how effectively were you, as a program manager, able to directly communicate the impact on your program to component leadership (component head, CAE, etc.) and DHS leadership (Deputy Secretary, USM, PARM officials, etc.)?
37. If you would like to elaborate on how resource allocation (i.e. funding) has affected the program's ability to achieve cost, schedule and performance goals, please use the following space.
38. Since the program was initially staffed, how many program managers have overseen the program management office (PMO)?
39. What is the number of government FTEs in your PMO for each of the following functional areas: business functions (includes auditing, business, cost estimating, financial management, property management, and purchasing) and engineering and technical (includes systems planning, research, development and engineering; life cycle logistics; test and evaluation; production, quality and manufacturing; and facilities engineering)?
47. How helpful, if at all, will the following DHS initiatives be in helping you manage your acquisition program?
Establishing the Integrated Investment Life Cycle Model (IILCM) Not at all helpful 6 Empowering the Component Acquisition Executives (CAEs) Not at all helpful 7 Establishing Functional Coordination Office (e.g. Screening Coordination Office) Creating Executive Steering Councils for program governance Not at all helpful 8 Forming the Capabilities and Requirements Council Not at all helpful 8 Developing APEX, a decision support tool owned by PARM to capture and synthesize information from nPRS and IMS Not at all helpful 9 48. Please use the following space to describe any additional actions that DHS could implement that would help you better manage your acquisition program (i.e. improvements for acquisition governance and document development). 49. Please identify any significant challenges affecting your program's ability to achieve program objectives (i.e. cost, schedule, and capabilities) that have not been adequately addressed above. 50. If you would like, please identify any practices your program has found significantly helpful in managing your program. Table 6 below identifies the 71 major Department of Homeland Security (DHS) acquisition programs that responded to our survey. It consists of all the programs DHS included in its 2011 Major Acquisition Oversight List, with the exception of the 6 programs that did not respond to our survey (see table 7), and the 5 programs that were cancelled in 2011 (see table 8). Table 6 also identifies whether each program’s Mission Need Statement (MNS), Operational Requirements Document (ORD), Acquisition Program Baseline (APB), Integrated Logistics Support Plan (ILSP), and Test and Evaluation Master Plan (TEMP) have been approved at the department level. Table 7 identifies the programs that were included in DHS’s 2011 Major Acquisition Oversight List, but did not respond to our survey. Table 8 identifies the programs that were included in DHS’s 2011 Major Acquisition Oversight List, but were cancelled in 2011. 
In addition to the contact named above, Katherine Trimble (Assistant Director), Nathan Tranquilli (Analyst-in-Charge), John Crawford, David Garcia, Jill Lacey, Sylvia Schatz, Rebecca Wilson, Candice Wright, and Andrea Yohe made key contributions to this report. | DHS invests extensively in major acquisition programs to develop new systems that help the department execute its many critical missions. In 2011, DHS reported to Congress that it planned to invest $167 billion in these major acquisition programs. We previously found that DHS had not managed its investments effectively, and its acquisition management activities have been on GAO's High Risk List since 2005. This report addresses the extent to which (1) major DHS acquisition programs face key challenges; (2) DHS has policies and processes to effectively manage individual acquisition programs; (3) DHS has policies and processes to effectively manage its portfolio of acquisition programs as a whole; and (4) DHS has taken actions to address the high-risk acquisition management issues GAO has identified in previous reports. GAO surveyed all 77 major program offices DHS identified in 2011 (92 percent response rate), reviewed available documentation of acquisition decisions from November 2008 to April 2012, and interviewed officials at DHS headquarters and components. Nearly all of the Department of Homeland Security (DHS) program managers GAO surveyed reported their programs had experienced significant challenges. Sixty-eight of the 71 respondents reported they experienced funding instability, faced workforce shortfalls, or their planned capabilities changed after initiation, and most survey respondents reported a combination of these challenges.
DHS lacks the data needed to accurately measure program performance, but GAO was able to use survey results, information DHS provided to Congress, and an internal DHS review from March 2012 to identify 42 programs that experienced cost growth, schedule slips, or both. GAO gained insight into the magnitude of the cost growth for 16 of the 42 programs, which increased from $19.7 billion in 2008 to $52.2 billion in 2011, an aggregate increase of 166 percent. DHS acquisition policy reflects many key program management practices that could help mitigate program risks. It requires programs to develop documents demonstrating critical knowledge that would help leaders make better informed investment decisions when managing individual programs. However, DHS has not consistently met these requirements. The department has only verified that four programs documented all of the critical knowledge the policy requires to proceed with acquisition activities. Officials explained that DHS's culture has emphasized the need to rapidly execute missions more than sound acquisition management practices. Most major programs lack reliable cost estimates, realistic schedules, and agreed-upon baseline objectives, limiting DHS leadership's ability to effectively manage those programs and provide information to Congress. DHS recognizes the need to implement its acquisition policy more consistently, but significant work remains. DHS acquisition policy does not fully reflect several key portfolio management practices, such as allocating resources strategically, and DHS has not yet re-established an oversight board to manage its investment portfolio across the department. As a result, DHS has largely made investment decisions on a program-by-program and component-by-component basis. The widespread risk of poorly understood cost growth, coupled with the fiscal challenges facing the federal government, makes it essential that DHS allocate resources to its major programs in a deliberate manner.
DHS plans to develop stronger portfolio-management policies and processes, but until it does so, DHS programs are more likely to experience additional funding instability, which will increase the risk of further cost growth and schedule slips. These outcomes, combined with a tighter budget, could prevent DHS from developing needed capabilities. DHS has introduced seven initiatives that could improve acquisition management by addressing longstanding challenges GAO and DHS survey respondents have identified, such as funding instability and acquisition workforce shortfalls. Implementation plans are still being developed, and DHS is still working to address critical issues. Because of this, it is too early to determine whether the DHS initiatives will be effective, as GAO has previously established that agencies must sustain progress over time to address management challenges. DHS is also pursuing a tiered-governance structure, but it must reduce risks and improve program outcomes before regularly delegating major milestone decision authority. GAO recommends that DHS modify its policy to better reflect key program and portfolio management practices, ensure acquisition programs fully comply with DHS acquisition policy, prioritize major acquisition programs departmentwide and account for anticipated resource constraints, and document prerequisites for delegating major milestone decision authority. DHS concurred with all of GAO's recommendations, and noted its progress on a number of fronts, which is accounted for in the report. |
Established in 1956, DI is an insurance program that provides monthly cash benefits to workers who are unable to work because of severe long term disability. Workers who have worked long enough and recently enough are insured for coverage under the DI program. To meet the definition of disability under the DI program, an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or to result in death and (2) prevents the individual from engaging in substantial gainful activity (SGA). Individuals are considered to be engaged in SGA if they have countable earnings above a certain dollar level. Once a person is on the disability rolls, benefits continue until (1) the beneficiary dies, (2) the beneficiary becomes eligible for Social Security retirement benefits at full retirement age, (3) SSA determines that the beneficiary is no longer eligible for benefits because his or her earned income exceeds the SGA level, or (4) SSA decides that the beneficiary’s medical condition has improved to the point that he or she is no longer considered disabled. In 2002, SSA paid about $60 billion in DI cash benefits to 5.5 million disabled workers, with average monthly benefits amounting to $834 per person. In addition to receiving cash assistance, beneficiaries automatically qualify for Medicare after 24 months of DI entitlement. During the 1970s, as the number of disability awards increased significantly and resulted in substantial cost increases for the DI program, the Congress became concerned about the growth of the DI program and program rules that provided disincentives to returning to work. To encourage DI beneficiaries to return to work—and, potentially, to leave the benefit rolls—the Congress has, over the years, enacted legislation providing various work incentives. 
Such incentives include a trial work period, during which beneficiaries may earn any amount for 9 months within a 60-month period and still receive full cash and medical benefits, and continued Medicare coverage, which allows beneficiaries to maintain Medicare eligibility for at least 39 months following a trial work period as long as the medical disability continues. In an effort to further address these issues, the Congress, in 1980, required SSA to conduct demonstration projects to evaluate the effectiveness of policy alternatives that could encourage DI beneficiaries to reenter the workforce. A key aspect of this demonstration authority is SSA's ability to waive DI and Medicare program rules to the extent needed in conducting these projects. The legislation granting DI demonstration authority also identified a variety of policy alternatives for SSA to consider testing, including (1) alternative ways of treating DI beneficiaries' work-related activity, such as methods allowing for a reduction in benefits based on earnings, and (2) modifications in other rules, such as the trial work period and Medicare eligibility waiting period, that may serve as obstacles to DI beneficiaries returning to work. In addition, this legislation identified several requirements pertaining to the design and evaluation of DI demonstration projects. In particular, these projects were required to be of sufficient scope and carried out on a wide enough scale to permit a thorough evaluation of the policy alternatives studied, such that the results would be generally applicable to the operation of the DI program. The law additionally required SSA to submit reports to the Congress announcing the initiation of DI demonstration projects as well as periodic reports describing the status of these projects and a final report on all projects carried out under the demonstration authority.
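The 9-months-within-60-months trial work period rule described above can be sketched as a simple rolling-window count. The function names and integer month indexing below are illustrative assumptions for exposition, not SSA's actual systems logic:

```python
def trial_work_months_used(work_months, as_of_month):
    """Count trial-work months falling in the rolling 60-month
    window that ends at as_of_month. Months are represented as
    integer indexes (e.g. months since entitlement)."""
    window_start = as_of_month - 59  # 60-month window, inclusive
    return sum(1 for m in set(work_months)
               if window_start <= m <= as_of_month)

def trial_work_period_complete(work_months, as_of_month, limit=9):
    """The trial work period is used up once 9 such months
    fall within a single 60-month period."""
    return trial_work_months_used(work_months, as_of_month) >= limit
```

For example, a beneficiary with work months 3-5, 10-12, and 20-22 has used all 9 trial-work months as of month 22, while work months that fall more than 60 months back no longer count toward the limit.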
SSA was directed to make recommendations, when appropriate, for legislative or administrative changes in its reports to the Congress. Another important aspect of SSA’s DI demonstration authority is that unlike other SSA research activities, which are funded through congressional appropriations, these projects can be paid for with DI Trust Fund and Old-Age and Survivors Insurance Trust Fund monies. Therefore, SSA is not required to obtain congressional approval for DI demonstration expenditures, although it is required to receive approval from the Office of Management and Budget for an annual apportionment of Trust Funds for these demonstrations. SSA’s DI demonstration authority has always been granted on a temporary basis and therefore has been subject to periodic review and renewal by the Congress. After initially granting this authority for a 5-year period, the Congress subsequently renewed it several times, in 1986, 1989, 1994, 1999, and 2004. The renewal of SSA’s authority has sometimes been delayed so that SSA has, on several occasions, gone without DI demonstration authority. For example, after its demonstration authority expired in June 1996, SSA was not again granted DI demonstration authority until December 1999. Most recently, the Congress extended this demonstration authority through December 2005. In addition to granting this general DI demonstration authority, the Congress may enact legislative mandates for SSA to conduct specific DI demonstration projects. For example, the Ticket to Work and Work Incentives Improvement Act of 1999 required SSA to conduct a demonstration to assess the effectiveness of a benefit offset program under which DI benefits are reduced by $1 for every $2 in earnings (above a certain level) by a beneficiary. SSA’s authority to conduct this demonstration is similar in some respects to the authority it has under its general DI demonstration statute. 
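The $1-for-$2 benefit offset that the Ticket to Work Act directed SSA to test can be expressed as a short calculation. The dollar figures below are illustrative assumptions (the statute leaves the earnings level as "a certain level," and the $834 figure is simply the average monthly benefit cited earlier):

```python
def offset_benefit(full_monthly_benefit, countable_earnings, earnings_threshold):
    """Reduce the DI benefit by $1 for every $2 of countable
    earnings above the threshold, never below zero."""
    excess = max(0.0, countable_earnings - earnings_threshold)
    return max(0.0, full_monthly_benefit - excess / 2.0)

# Hypothetical $800 threshold for illustration:
print(offset_benefit(834.0, 700.0, 800.0))   # earnings below threshold -> 834.0
print(offset_benefit(834.0, 1000.0, 800.0))  # $200 excess -> $100 offset -> 734.0
```

The point of such an offset is that benefits phase out gradually with earnings rather than being cut off entirely once earnings exceed the SGA level.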
For instance, the statute allows waiver of DI and Medicare program provisions to carry out this benefit offset demonstration. However, some differences exist between the two authorities. In particular, the benefit offset demonstration authority provides a more detailed and comprehensive list of demonstration objectives for SSA to fulfill than does SSA’s general authority. For example, the benefit offset demonstration authority lists six “matters to be determined,” which include assessments of project costs; savings to the Trust Funds; and project effects on employment outcomes such as wages, occupations, benefits, and hours worked. Regardless of the authority under which they are carried out, demonstration projects examining the impact of social programs are inherently complex and difficult to conduct. Measuring outcomes, ensuring the consistency and quality of data collected at various sites, establishing a causal connection between outcomes and program activities, and separating out the influence of extraneous factors raise formidable technical and logistical problems. Thus, these projects generally require a planned study and considerable time and expense. Adding to these complexities are other administrative or statutory requirements affecting SSA’s DI demonstrations. For example, SSA’s policy is that its demonstration projects must not make those who participate in the project worse off, which could limit the specific types of policy alternatives the agency can study or the methods used to study such alternatives. Although the legislation granting DI demonstration authority does not prescribe the use of particular methodological approaches, SSA has repeatedly recognized that the law’s general requirements for demonstration evaluations require SSA to conduct these projects in a rigorous manner that provides the agency with a reliable basis for making policy recommendations. 
Rigorous methods are required to estimate the net impact of a tested disability policy option because many other factors, such as the economy, can influence whether a beneficiary returns to work. In an August 2002 report to the SSA Commissioner, an SSA advisory panel stated that it is widely agreed that experimental designs, “when feasible from operational and budgetary perspectives and when they can be conducted without serious threats to their validity, are the best methodology for determining the effects of changes in government programs.” In addition, SSA officials and other researchers have noted the advantages of experimental designs in providing policymakers with more clear-cut results that are less subject to debate than results derived from other methods. However, when experimental designs are not feasible or desirable, the use of quasi-experimental designs offers a reasonably rigorous evaluation alternative that may, under certain circumstances, offer advantages over experimental designs. Other factors may also limit the rigor of DI demonstrations, including insufficient sample sizes, inconsistency in demonstration design or implementation across multiple project sites, and deficiencies in data collection. Such design, implementation, and evaluation weaknesses may hamper SSA’s use of project results as a basis for making policy recommendations because they limit the agency’s ability to (1) control for factors external to the demonstration, (2) generalize demonstration results to a wider population of DI beneficiaries, and (3) isolate the effects of specific policy interventions from the overall effects produced by a demonstration. The Office of Program Development and Research (OPDR) is the entity within SSA that develops and implements demonstration projects for the DI and Supplemental Security Income (SSI) Programs. 
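The core of the experimental design favored by SSA's advisory panel can be illustrated with a minimal sketch: random assignment lets external factors such as the economy affect treatment and control groups alike, so the difference in mean outcomes isolates the policy's net impact. The rates simulated below are assumed values for illustration, not SSA data:

```python
import random

def net_impact(treatment_outcomes, control_outcomes):
    """Experimental-design impact estimate: the difference in mean
    outcomes (e.g. return-to-work rates) between randomly assigned
    treatment and control groups."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_outcomes) - mean(control_outcomes)

# Simulated illustration: a 10% baseline return-to-work rate,
# plus an assumed 5-percentage-point effect of the tested policy.
random.seed(0)
control = [1 if random.random() < 0.10 else 0 for _ in range(5000)]
treatment = [1 if random.random() < 0.15 else 0 for _ in range(5000)]
estimate = net_impact(treatment, control)
```

With adequate sample sizes the estimate converges on the true 5-point effect; the design, implementation, and evaluation weaknesses discussed above (small samples, inconsistent sites, missing control groups) are precisely what prevent this kind of clean attribution.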
OPDR program and research staff—sometimes with the assistance of outside research organizations—identifies the broad outlines and requirements of disability program demonstration projects, including the basic objectives, scope, and methodological standards for these projects. SSA then issues formal notices requesting public or private sector organizations to submit offers to conduct the demonstration projects, which may include development of a detailed design plan, provision of technical support, collection of project data, or evaluation of project results. On the basis of SSA’s review of submitted proposals and bids, the agency may enter into grants, cooperative agreements, or contractual arrangements with one or more organizations to carry out demonstration projects. For example, a single demonstration may involve cooperative agreements with states to design and implement projects as well as contracts with one or more research institutions to provide technical assistance to the states and evaluate demonstration results. Project managers in OPDR have the primary responsibility for overseeing demonstration projects to ensure that they meet SSA’s technical and programmatic requirements. OPDR collaborates with SSA’s Office of Acquisition and Grants in issuing formal project notices and solicitations and, subsequently, in overseeing grant or contract performance. SSA has not used its demonstration authority to extensively evaluate a wide range of DI policy areas dealing with return to work. Until very recently, SSA has focused its demonstration efforts primarily on a relatively narrow set of policy issues dealing with the provision of vocational rehabilitation and employment services, despite being given the authority to assess a much broader range of policy alternatives. 
Even in the area of vocational rehabilitation and employment issues, SSA's use of DI demonstration authority has not been comprehensive and, therefore, did not extensively address key policy issues that the agency is currently grappling with under its Ticket to Work program. SSA's recently initiated or proposed demonstrations have begun to address a broader range of policy issues. However, the agency has no systematic processes or mechanisms for ensuring that it is adequately identifying and prioritizing those issues that could best be addressed through use of its demonstration authority. The DI demonstration projects that SSA has conducted since 1980 have not extensively addressed a wide range of return-to-work policy issues. Since first being granted DI demonstration authority 24 years ago, SSA has used this authority to complete four projects, with another project nearing completion. Total costs for these projects amount to at least $107 million, of which about $42 million was paid for from the Old-Age and Survivors Insurance and Disability Insurance (OASDI) Trust Funds. The legislation granting DI demonstration authority to SSA provided the agency with an opportunity to examine a broad set of return-to-work policy alternatives and even identified some specific alternatives for SSA to consider studying, including (1) reducing, rather than terminating, benefits based on earnings; (2) lengthening the trial work period; (3) decreasing the 24-month waiting period for Medicare benefits; (4) altering program administration; (5) earlier referral of beneficiaries for rehabilitation; and (6) using employers and others to stimulate new forms of vocational rehabilitation. The projects SSA has conducted thus far have focused predominantly on the latter category of issues involving vocational rehabilitation and have focused to a lesser extent—or not at all—on other key policy issues affecting return to work (see table 1).
More specifically, examination of policy alternatives dealing with the provision of vocational rehabilitation and employment services has been the primary objective of four of the five completed or nearly completed demonstrations. Although two of these projects also examined other DI return-to-work policy issues—such as the possible effects of changes in program work incentives and alterations in the provision of medical benefits—they did so to only a limited extent. None of the projects looked at other potentially significant DI policy issues, such as the possibility of changing SSA’s benefit structure to allow for a reduction in benefits, rather than a complete cutoff of benefits, based on earnings. Furthermore, SSA has not used its DI demonstration authority to comprehensively examine issues involving vocational rehabilitation, including key policy issues with which the agency is currently grappling. For example, SSA did not extensively test key elements of what eventually became the Ticket to Work program. Although the ticket program was not formally proposed by SSA in a legislative package until 1997, as early as 1989, in an annual report to the Congress on SSA’s demonstration activities, SSA noted that among its ideas for improving SSA’s ability to assist beneficiaries in returning to work was a voucher program that could be used to pay for vocational rehabilitation services from private providers. SSA told the Congress that such a program, as well as other possible policy changes, would need to be thoroughly tested as a prerequisite to developing a new nationwide program. However, only one project completed under SSA’s DI demonstration authority—Project Referral System for Vocational Rehabilitation Providers (Project RSVP), initiated in 1997—addressed an issue directly relevant to the ticket program, namely, the use of a contractor to perform certain administrative functions for an expanded vocational rehabilitation referral and reimbursement program. 
But our review of project documentation and our discussions with SSA officials indicate that Project RSVP was more of an effort to make an operational change in the way SSA managed its vocational rehabilitation program than a study to evaluate the advantages and disadvantages of such a change. In fact, we could not identify any end product or final results for this project. SSA also made another attempt, ultimately unsuccessful, to directly address issues related to establishment of a ticket program. In the Omnibus Budget Reconciliation Act of 1990, the Congress mandated that SSA use its DI demonstration authority to assess the advantages and disadvantages of permitting DI beneficiaries to select from among both public and private vocational rehabilitation providers. But in January 1993, SSA reported to the Congress that it would be unable to conduct this demonstration because of an insufficient number of providers willing to participate in the project. SSA explained that the performance-based reimbursement provisions of the proposed project appeared to be the reason why providers were reluctant to participate. Despite the Congress’ expressed interest in these issues, SSA did not attempt to identify alternative ways to carry out such a demonstration. In particular, given that SSA remained very interested in the expanded use of private rehabilitation providers for the DI program, the difficulties encountered in recruiting providers for the demonstration should have highlighted the need for SSA to further study the issue of provider reimbursement before proceeding with any policy initiatives in this area. SSA’s current Deputy Commissioner for Disability and Income Security Programs told us that if SSA had used its demonstration authority to study these types of issues in the 1990s, SSA might have been able to identify and possibly resolve these issues then rather than struggling to do so now. 
In addition, such information could have been helpful in the Congress’ consideration of the ticket legislation’s merits as it deliberated whether to enact this program. In contrast to the completed and nearly completed demonstration projects, SSA’s more recent projects, which are generally in the early planning or proposal stages, represent a much more wide-ranging set of demonstrations (see table 2). For example, the projects, as currently described, will deal with a variety of issues such as early provision of cash and medical benefits and a change in the benefit payment structure to allow a benefit offset for beneficiaries earning above the SGA level. This more comprehensive approach to demonstrations is due in part to legislative changes. For example, the Ticket to Work Act mandated that SSA conduct a benefit offset demonstration and also permitted SSA, for the first time, to conduct demonstrations involving DI applicants, thereby allowing SSA to test ideas such as early provision of cash and medical benefits and vocational rehabilitation services to individuals who have not yet entered the disability rolls. In addition, SSA has recently placed a high priority on conducting disability demonstration projects that examine the key issues affecting beneficiaries’ return to work. This priority was reflected in the SSA Commissioner’s September 25, 2003, testimony before the House Committee on Ways and Means, Subcommittee on Social Security, in which she announced several new demonstrations as part of a broader strategy to improve the DI and SSI programs. SSA estimates that these recently proposed and initiated projects will cost about $357 million, $293 million of which will be paid for from the OASDI Trust Funds. Despite SSA’s recent broadening of the scope of its projects, the agency does not have in place any systematic processes for identifying and assessing potential issues that could be well suited for study under SSA’s demonstration project authority. 
Therefore, there is no assurance that the agency will, in future demonstration efforts, maintain its current focus on a broad array of return-to-work policy issues. Our discussions with SSA officials and review of a study examining earlier demonstration efforts indicate that the agency’s agenda for demonstration projects is subject to significant change over time resulting, in part, from changes in executive branch and SSA leadership and senior management. The effects of such changes may include termination of projects or significant delays and modifications in their planning and implementation. For example, in its 1994 report examining SSA’s Research Demonstration Program, the agency’s Inspector General noted that changes in SSA leadership had disrupted the accomplishment of RDP objectives. The disability research and advisory officials we spoke with also indicated that SSA’s project priorities and decisions are significantly influenced by larger political and organizational changes, which may prevent SSA from focusing on long term research objectives. One advisory official noted that these difficulties in long-term planning have occurred despite the fact that the Congress—in making SSA an independent agency and establishing a 6-year term for the SSA Commissioner—intended that SSA would be better able to engage in the type of long-range planning required to address its program needs. SSA’s approach for identifying and prioritizing demonstrations has varied through the years. Soon after being granted DI demonstration authority in 1980, SSA developed a detailed demonstration research plan to directly address the policy issues identified in SSA’s authorizing legislation. 
However, our discussions with SSA officials and review of internal agency documents indicate that the plan was never acted upon because of competing organizational priorities and concerns over the potential cost of the demonstrations and possible technical limitations, such as the adequacy of systems support. Consequently, as its DI demonstration authority was due to expire in 1985, SSA had not used it to conduct any demonstrations. In the second half of the 1980s, after its demonstration authority was renewed, SSA changed course. Partly on the basis of solicitation of ideas from the public, SSA identified priority areas dealing mostly with vocational rehabilitation and employment services issues for which it would issue grants to public and private organizations to conduct demonstrations. The specific priority areas identified changed from year to year as SSA attempted to stimulate, test, and coordinate effective approaches toward employment assistance. In its required 1991 annual report to the Congress on its DI demonstration activities, SSA said that it was proceeding with broader testing of key elements of a comprehensive employment and rehabilitation system. But our review of agency documents and discussions with SSA officials indicate that SSA has not developed a formal, comprehensive, long-term agenda for conducting DI demonstration projects. Senior SSA officials told us that the agency’s current demonstration project decisions are, to some extent, based on discussions with outside research, advocacy, and other groups. But SSA has no formal mechanisms and requirements in place to ensure that the agency obtains such input and to decide how such input should be factored in with other considerations in determining the agency’s demonstration priorities. The need for explicit planning concerning SSA research, including demonstrations, has been identified in past reviews of SSA’s disability programs. 
For example, in 1998, the Social Security Advisory Board (SSAB) noted the need for SSA to develop a comprehensive, long-range research and program evaluation plan for DI and SSI that would guide the agency’s research and define priorities. SSAB also said that SSA’s research plan should reflect broad consultation with the Congress, other agencies, SSAB, and others and recommended the establishment of a permanent research advisory panel to advise in the development of a long-range plan. In a 1996 report on SSA’s disability programs, the National Academy of Social Insurance noted the “dearth of rigorous research on the disability benefit programs” since the 1980s and said that SSA needed a comprehensive, long-range research program to address this deficiency. In addition, officials from disability research, advisory, and advocacy groups told us that they believe the establishment of a formal research agenda or an advisory panel with regard to demonstration projects would be helpful in ensuring that SSA adequately identifies its demonstration priorities and maintains its commitment to these priorities even in the face of political or administrative changes. SSA’s demonstration projects have had little influence on the agency’s and the Congress’ consideration of DI policy issues. This is due, in part, to methodological limitations that have prevented SSA from producing project results that are useful for reliably assessing DI policy alternatives. In addition, SSA lacks a formal process for fully considering the potential policy implications of its demonstration results. Furthermore, SSA’s reports on demonstration projects have not fully apprised the Congress of project results and their policy implications. 
The demonstration projects SSA has conducted under its DI demonstration authority have generally not been designed, implemented, or evaluated in a rigorous enough manner to allow the agency to reliably assess the advantages and disadvantages of specific policy alternatives. While SSA’s major DI demonstrations have varied significantly in their methodological rigor, all of them have experienced at least some significant methodological limitations. For example, SSA’s first major DI demonstration, the Research Demonstration Program, was characterized by a number of fundamental design and evaluation flaws such as the limited scope and small sample sizes of the RDP projects and the limited use of control groups. In its 1994 report on the RDP, the Department of Health and Human Services’ (HHS) Inspector General noted that because of such limitations, “grantees were unable to conduct research that SSA deemed necessary for definitive tests of alternatives to help beneficiaries obtain work.” In addition, SSA did not develop a plan for evaluating the overall RDP results as part of its initial project design. In a required 1994 annual report to the Congress on its demonstration activities, SSA acknowledged that the lack of a rigorous project design and the omission of a strong evaluation component limited the ways in which the project results could be generalized. But SSA also described a number of “observations” that resulted from the RDP and noted that this project had helped to identify the agency’s future demonstration priorities. However, given the significant limitations of the RDP, it is unlikely that its results could have provided a reliable basis for effectively establishing such priorities. In its next major DI demonstration effort, Project Network, which was initiated as the RDP projects were being completed, SSA avoided many of the major shortcomings of the RDP. 
For example, Project Network was rigorously designed, using an experimental approach based on the random assignment of beneficiaries to treatment and control groups. As a result, this project produced some reasonably clear results, which SSA thoroughly evaluated in an effort to assess the overall impact of the tested policy alternatives. Despite its generally rigorous design, Project Network also had some limitations that may have, to some extent, reduced its usefulness for policy consideration. For example, in examining the effects of a case management approach for providing vocational rehabilitation services, Project Network used four different service delivery models. Although the Project Network evaluation provided information on the overall effects of a case management approach, it did not provide a basis for reliably assessing and comparing the separate effects of the four models even though such an assessment may have provided useful information for policy considerations. In addition, Project Network did not produce results that could be generalized to the larger population of beneficiaries, which, in turn, limited SSA’s ability to assess whether the tested policy should be implemented on a nationwide basis. As was the case with Project Network, SSA has made a significant effort under its State Partnership Initiative demonstration to avoid some of the problems encountered under the RDP. For example, SSA contracted with two research institutions to design an evaluation plan for the demonstration and to provide assistance with technical issues and data collection to the various states conducting this demonstration. Our discussions with SSA and contractor officials who have been involved in this demonstration as well as our own review of SPI project documents indicate that the efforts of the contractors appear to have introduced a certain degree of rigor in the design, implementation, and, potentially, evaluation of this demonstration. 
For example, SSA’s contractors have indicated that the SPI “core evaluation” will likely produce useful results regarding the effects on beneficiary employment of the overall package of policy alternatives tested under the demonstration. But despite these efforts, the SPI design also has a number of limitations that could substantially reduce the usefulness of its results for evaluating the effects of the demonstration’s individual policy alternatives. For example, SSA gave each of the 12 participating states significant discretion in designing and conducting projects, which resulted in 12 distinct state projects. Each project tested different combinations of policy alternatives, applied different research methods to study these alternatives, and used varying approaches to select beneficiaries for participation in the project. SSA officials told us that such differences across projects make it unlikely that SPI will produce final results that allow for reliable evaluations of specific policy alternatives on a national level. SSA and one of its SPI contractors have also noted other potential limitations in the design and implementation of SPI, such as problems with the quality of states’ data collection, that may detract from SSA’s ability to evaluate specific policy alternatives. SSA officials currently responsible for planning and conducting DI demonstrations acknowledged that the agency’s past demonstrations have generally not provided useful information for policy making largely because of the limited rigor with which these projects were conducted. However, they emphasized that the agency has, over the past couple of years, placed a new emphasis on ensuring that DI demonstrations are rigorously designed so that the results can be used to effectively evaluate specific policy options and develop recommendations. 
In particular, the officials noted the importance of using, whenever feasible, an experimental approach in its demonstration projects and of ensuring that demonstration results can be generalized to the larger population of DI beneficiaries. The officials also emphasized the need for SSA to hire additional staff with the expertise needed to carry out methodologically rigorous demonstration projects. Aside from the SPI demonstration, all of SSA’s other current DI demonstrations are in the early design phase or have been proposed only recently. Therefore, we were not able to assess the methodological rigor of these projects. However, our review of SSA’s request for proposal (RFP) for its Benefit Offset demonstration indicates that SSA is making a serious effort to comprehensively and rigorously study this policy issue. For example, SSA has proposed using an experimental design with random assignment to treatment and control groups. Nevertheless, the scope and complexity of SSA’s proposal suggest that this will be a very challenging project for SSA to carry out successfully, and that the agency will need to ensure that its project design avoids some of the pitfalls that have limited the usefulness of past demonstrations, such as insufficient sample size and lack of uniformity in tested interventions across sites. SSA does not have procedures or processes in place to ensure that project results—regardless of any limitations that they may have—are fully considered by senior officials within the agency for their policy implications or their implications for future SSA research and demonstrations. Without such processes, projects that begin with the support of senior managers under one administration may not receive adequate attention from a new group of senior managers under a future administration. 
Our discussions with current and former SSA officials and with officials from disability research, advocacy, and advisory organizations indicate that such shifting priorities have been the norm for SSA’s DI demonstration projects. For example, several of these officials told us that when Project Network was completed in 1999, its results were not formally reviewed and considered by senior SSA managers, in part because of the changes in presidential administrations and in senior agency leadership that had occurred since the start of the project. Officials from one of the groups we spoke with told us that SSA’s consideration of project results could be improved by the establishment of a panel to review project results and explore their policy implications. An additional factor that could limit SSA’s consideration of demonstration results is the lack of an adequate historical record—reflecting the outcomes and the problems or issues encountered—of the various projects that the agency has conducted under its demonstration authority. SSA has not maintained a formal record of its disability demonstration project activities and results, so basic information on these projects—such as project notices, design documents, and evaluation documents—is in some cases no longer available. As a result, information on some projects can be obtained only by relying on the recollection of SSA employees who were around when the study was conducted. While formal document retention requirements may not dictate that SSA maintain such information, several SSA officials told us that the agency would benefit from an institutional record of demonstration activity. According to these officials, such a record would constitute a body of knowledge that the agency should be building to improve DI return-to work policies. This becomes even more important in light of the expected retirement of a large percentage of SSA staff during this decade. 
In addition to having shortcomings in its consideration of demonstration results, SSA has not sufficiently communicated the status and results of its demonstration projects to the Congress. Although SSA had been required to issue various reports to the Congress regarding its DI demonstration projects, it has not always produced such reports. For example, although SSA was required to submit final reports on the use of its demonstration authority in 1985, 1990, 1993, and 1996, the only final report that SSA submitted was in 1996. In addition, SSA did not submit annual reports on its demonstration activities in 7 of the 16 years in which these reports were required. Furthermore, when these reports have been produced, they have not provided all of the information needed to fully inform the Congress of demonstration activities and results. For example, our review of these reports indicates that they have frequently lacked key information such as a discussion of a project’s potential policy implications, its limitations, and the costs of conducting the project. In allowing SSA to waive program provisions and use OASDI Trust Fund dollars, SSA’s DI demonstration authority provides the agency with a special, and potentially very valuable, means of studying policy alternatives to improve the agency’s return-to-work programs. SSA has spent tens of millions of dollars from the OASDI Trust Funds to conduct these projects—in addition to tens of millions of dollars from SSA’s general appropriations—and expects to spend hundreds of millions more within the next 10 years. While these amounts may be small as a percentage of the total Trust Funds, they nevertheless represent a substantial use of increasingly limited federal resources. After having this authority for more than two decades, SSA has yet to use it to propose or assess major policy options that could result in savings to the Trust Funds. 
Because SSA’s use of its DI demonstration authority has yet to achieve the Congress’ intended results—and because SSA is permitted to draw on increasingly limited Trust Funds to conduct these demonstrations—we believe it is important for the Congress to maintain close oversight of SSA’s use of this authority. We also believe that such oversight would be a greater challenge if the Congress were to grant this demonstration authority on a permanent basis. As the DI Trust Fund approaches exhaustion, the need for programmatic improvements becomes increasingly urgent. As part of a broader effort to address this need, SSA has recently initiated or proposed a number of DI demonstration projects that, according to SSA officials, are geared toward producing useful and methodologically sound results. Such results could provide an important basis for SSA to address some of the long-standing issues that have led GAO to identify federal disability programs as a high-risk area. However, the challenges SSA has historically faced in conducting demonstration projects and the potential for changing priorities to adversely affect long-range research plans suggest that, in the long run, SSA may be unable to fulfill these demonstration goals. This is especially likely if SSA continues its informal approach to prioritizing and planning demonstrations and assessing their results. Without more formal mechanisms for establishing its commitment to effective and thorough DI demonstrations—including the submission of regular reports to the Congress on the results and implications of its demonstration projects—SSA will be unable to ensure that the extensive amount of time, effort, and funding devoted to these demonstrations is well spent. 
To help ensure the effectiveness of SSA’s DI demonstration projects, we recommend that the Commissioner of Social Security take the following actions:

- Develop a formal agenda reflecting the agency’s long-term plans and priorities for conducting DI demonstration projects. In establishing this agenda, SSA should consult broadly with key internal and external stakeholders, including SSA advisory groups, disability researchers, and the Congress.

- Establish an expert panel to review and provide regular input on the design and implementation of demonstration projects from the early stages of a project through its final evaluation. Such a panel should include SSA’s key research personnel as well as outside disability experts and researchers. SSA should establish guidelines to ensure that its project plans and activities adequately address the issues or concerns raised by the panel or provide a clear rationale for not addressing such issues.

- Establish formal processes to ensure that, at the conclusion of each demonstration project, SSA fully considers and assesses the policy implications of its demonstration results and clearly communicates SSA’s assessment to the Congress. Such processes should ensure that SSA consults sufficiently with internal and external experts in its review of demonstration project results and that SSA issues a report to the Congress clearly identifying (1) major project outcomes, (2) major project limitations, (3) total project costs, (4) any policy options or recommendations, (5) expected costs and benefits of proposed options or recommendations, and (6) any further research or other actions needed to clarify or support the project’s results. Another key aspect of such formal processes should be a requirement that SSA maintain a comprehensive record of DI demonstration projects. 
This record would help SSA in establishing an empirically based body of knowledge regarding possible return-to-work strategies and in deriving the full value of its substantial investments in demonstration projects. To facilitate close congressional oversight and provide greater assurance that SSA will make effective use of its DI demonstration authority, the Congress should consider the following actions:

- Continue to provide DI demonstration authority to SSA on a temporary basis but allow SSA to complete all projects that have been initiated prior to expiration of this authority. This would provide SSA with greater certainty and stability in its efforts to plan and conduct demonstration projects while preserving the Congress’ ability to periodically reassess and reconsider SSA’s overall use of DI demonstration authority.

- Require that SSA periodically provide a comprehensive report to the Congress summarizing the results and policy implications of all of its DI demonstration projects. The due date for this report could either coincide with the expiration of SSA’s DI demonstration authority or, if this authority is made permanent or extended for a period greater than 5 years, be set for every 5 years. Such reports could serve as a basis for the Congress’ assessment of SSA’s use of its demonstration authority and its consideration of whether this authority should be renewed.

- Establish reporting requirements that more clearly specify what SSA is expected to communicate to the Congress in its annual reports on DI demonstrations. Among such requirements could be a description of all SSA projects that the SSA Commissioner is considering conducting or is conducting some preliminary work on. For each demonstration project that the agency is planning or conducting, SSA should provide clear information on the project’s specific objectives, potential costs, key milestone dates (e.g., actual or expected dates for RFP, award of contracts or grants, start of project operations, completion of operations, completion of analysis, and final report), potential obstacles to project completion, and the types of policy alternatives that SSA might consider pursuing depending on the results of the demonstration. This would provide the Congress with a more complete understanding of the direction and progress of SSA in its efforts to fulfill its DI demonstration requirements.

- More clearly specify the methodological and evaluation requirements for DI demonstrations to better ensure that such projects are designed in the most rigorous manner possible and that their results are useful for answering specific policy questions and for making, where appropriate, well-supported policy recommendations. Such requirements should not be entirely prescriptive given the need for SSA to have sufficient flexibility for choosing the right methodological approach based on the specific circumstances and objectives of a particular demonstration project. However, the requirements could call for SSA to choose, to the extent practical and feasible, the most rigorous methods possible in conducting these demonstrations. Whatever methods are ultimately selected, SSA should be sure that the methods used will allow for a reliable assessment of the potential effect on the DI program of the individual policy alternatives being studied. Finally, SSA’s legislative requirements could be revised to include a more explicit list of project objectives—such as assessments of specific employment outcomes, costs and benefits, and Trust Fund savings—similar to the language that was included under Sections 302(b)(1) and (b)(2) of the Ticket to Work and Work Incentives Improvement Act. 
In commenting on a draft of this report, SSA agreed with our recommendations. SSA agreed that in the past it has not used its demonstration authority to extensively evaluate DI policy but noted that its recently initiated or proposed demonstrations will play a vital role in testing program and policy changes. SSA also agreed that the use of experts in developing demonstration projects is very useful and commented that it has used the expertise of particular individuals on an ad hoc basis and plans to continue to use the advice and recommendations of experts in the development of future demonstrations. Finally, SSA agreed that a central source of information regarding the results and policy implications of disability demonstrations needs to be established and stated that it planned to fully analyze the results of demonstration projects to inform DI policy decisions. SSA’s comments appear in appendix II. Copies of this report are being sent to the Commissioner of SSA, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix III. To address the mandated objectives, we reviewed legislation authorizing the Social Security Administration (SSA) to conduct Disability Insurance (DI) demonstration projects, Congressional reports related to this legislation, and SSA regulations governing DI demonstration activities. We also examined internal SSA memorandums and planning documents discussing proposals to conduct demonstration projects and the nature, purpose, requirements, and distinguishing features of SSA’s demonstration authority. 
We interviewed a wide range of current and former SSA officials who have had involvement in or responsibility for conducting disability program demonstration projects, including officials from the Office of Disability and Income Security Programs (ODISP) and two offices operating under ODISP—the Office of Program Development and Research and the Office of Employment Support Programs—as well as officials from the Office of the Chief Actuary, the Office of Acquisition and Grants, the Office of Budget, the Office of Strategic Management, and the Office of Research, Evaluation, and Statistics. We also interviewed officials from disability research, advisory, and advocacy organizations. In addition, we examined other reviews of SSA’s disability demonstration and research programs, including prior GAO and Inspector General reports and reports from disability research and advisory groups. We also reviewed SSA budget documents identifying agency spending on disability program demonstrations and SSA testimony describing agency priorities related to the DI program in general and demonstration projects in particular. In addition, we examined SSA’s strategic plan, annual performance plans, and annual accountability reports. To obtain detailed information on SSA’s DI demonstration projects, we reviewed various documents related to SSA’s design, implementation, and evaluation of demonstration projects including agency reports to the Congress; public notifications of demonstration projects issued in the Federal Register; contract, grant, and cooperative agreement solicitation and award notices issued in the Federal Register or in the Commerce Business Daily; and project reports submitted to SSA by grantees or contractors, including project design and evaluation documents. 
We used information from these sources to identify key characteristics and outcomes of each project, including its broad goals, specific study objectives, types of program waivers applied, methodology, actual or expected costs, funding sources, major project milestones including actual or expected initiation and completion dates, project duration, involvement of outside contractors and grantees, key project strengths and limitations, and final project results, including any recommendations that may have been made. The type and extent of information we obtained for each demonstration project varied widely, in large part because SSA has not maintained comprehensive documentation on its prior demonstrations. In addition, documentation on SSA’s more recent demonstrations was very limited given that these projects are in the early planning and design stages. To provide a broader context for understanding SSA’s use of its demonstration authority, we reviewed other federal agencies’ legislative authorities for conducting demonstration and research activities. We also examined reports from GAO and other organizations that evaluated demonstration and research projects conducted by other federal agencies or that identified key evaluation and methodological issues related to such projects. We performed our work at SSA headquarters in Baltimore, Maryland, and at various locations in Washington, D.C. We conducted our work between October 2003 and August 2004 in accordance with generally accepted government auditing standards. The following individuals also made important contributions to this report: Jacquelyn D. Stewart, Erin M. Godtland, Corinna A. Nicolaou, Daniel A. Schwimer, Ronald La Due Lake, Michele C. Fejfar. 
| Since 1980, the Congress has required the Social Security Administration (SSA) to conduct demonstration projects to test the effectiveness of possible program changes that could encourage individuals to return to work and decrease their dependence on Disability Insurance (DI) benefits. To conduct these demonstrations, the Congress authorized SSA, on a temporary basis, to waive certain DI and Medicare program rules and to use Social Security Trust Funds. The Congress required GAO to review SSA's use of its DI demonstration authority and to make a recommendation as to whether this authority should be made permanent. SSA has not used its demonstration authority to extensively evaluate a wide range of DI policy areas dealing with return to work. Despite being given the authority to assess a broad range of policy alternatives, SSA has, until very recently, focused its demonstration efforts mostly on a relatively narrow set of policy issues--those dealing with the provision of vocational rehabilitation and employment services. SSA's recently proposed or initiated demonstrations have begun to address a broader range of policy issues, such as provisions to reduce, rather than terminate, benefits based on earnings above a certain level. However, the agency has no systematic processes or mechanisms for ensuring that it is adequately identifying and prioritizing those issues that could best be addressed through use of its demonstration authority. For example, the agency has not developed a formal demonstration research agenda explicitly identifying its broad vision for using its DI demonstration authority and explaining how ongoing or proposed demonstration projects support achievement of the agency's goals and objectives. SSA's demonstration projects have had little impact on the agency's and the Congress' consideration of DI policy issues. 
This is due, in part, to methodological limitations that have prevented SSA from producing project results that are useful for reliably assessing DI policy alternatives. In addition, SSA has not established a formal process for ensuring that its demonstration results are fully considered for potential policy implications. For example, SSA does not maintain a comprehensive record of its demonstration results that could be used to build a body of knowledge for informing policy decisions and planning future research. Furthermore, SSA's reporting of demonstration project results has been insufficient in ensuring that the Congress is fully apprised of these results and their policy implications. |
PRWORA overhauled the nation’s welfare system by abolishing the previous welfare program, AFDC, and creating the TANF block grant. PRWORA established four broad goals for TANF, which included (1) providing assistance to needy families so that children may be cared for in their own homes or in the homes of relatives; (2) ending dependence of needy parents on government benefits by promoting job preparation, work, and marriage; (3) preventing and reducing the incidence of out-of-wedlock pregnancies; and (4) encouraging the formation and maintenance of two- parent families. Unlike the previous program, TANF gives states great flexibility to design programs that meet these goals. However, while states have flexibility, the programs they design must meet several federal requirements that emphasize the importance of work and the temporary nature of TANF. For example, PRWORA requires that parents receiving assistance engage in work, as defined by the state, after receiving assistance for 24 months, or earlier, at state option. In exercising their option, 28 states require immediate participation in work, and 9 other states require participation in work within 6 months of receiving cash assistance, resulting in great interstate variation in program provisions. Further, despite the programmatic flexibility authorized by TANF, states must meet federal data reporting requirements by submitting quarterly reports that include information from administrative records about those receiving welfare and those terminated from assistance, as well as an annual report, to HHS. The annual report contains information about program characteristics, such as states’ activities used to prevent out-of- wedlock pregnancy. In 1995, we reported that the block grants enacted as part of the Omnibus Budget Reconciliation Act of 1981 (OBRA) carried no uniform federal information requirements. 
We found that the program information states collected was designed to meet individual states’ needs and that, as a result, it was difficult to aggregate states’ experiences and speak from a national perspective on the block grant activities or their effects. Without uniform information definitions and collection methodologies, it was difficult for the Congress to compare state efforts or draw meaningful conclusions about the relative effectiveness of different strategies. In a second examination of federal block grant programs, we reported that problems in information and reporting under many block grants—the Education Block Grant, the Community Services Block Grant, and the Alcohol, Drug Abuse, and Mental Health Services Block Grant—have limited the Congress’ ability to evaluate them. However, for the TANF Block Grant, the regulations require that states submit the quarterly TANF Data Report and the TANF Financial Report or be subject to statutory penalties. For these reports, HHS provides data reporting specifications including timing, format, and definitions for such data topics as family composition, employment status, and earned and unearned income. These specifications facilitate the use of HHS’ TANF administrative data for welfare reform research by improving the data’s comparability from state to state. Several national surveys and data collected for state and local studies of welfare reform also are potential sources of data for an assessment of TANF. A number of national surveys that collect information about welfare receipt have been used in the past by researchers to analyze welfare reform or have been developed to assess current welfare reform. Four surveys—the Survey of Income and Program Participation (SIPP), the Current Population Survey (CPS), the National Longitudinal Survey of Youth (NLSY), and the Panel Study of Income Dynamics (PSID)—have been used in past research on the AFDC program and the low-income population in general. 
Both the SIPP and the PSID have updated their questionnaires to include questions that pertain to welfare reform specifically, including questions about the work participation requirements and penalties for not complying with these and other program rules. Moreover, two national surveys are designed specifically to answer questions about welfare reform. The U.S. Census Bureau, at the direction of the Congress, is conducting a longitudinal survey of a nationally representative sample of families, with emphasis on eligibility for and participation in welfare programs, employment, earnings, the incidence of out-of-wedlock births, and adult and child well-being. This survey, the Survey of Program Dynamics, was designed to help researchers understand the impact of welfare reform on the well-being of low-income families and children. Similarly, the Urban Institute has been conducting a multiyear project monitoring program changes and fiscal developments, along with changes in the well-being of children and families. Part of this project includes a nationally representative survey of 50,000 people called the National Survey of America’s Families (NSAF) that is collecting information on the well-being of adults and children as welfare reform is implemented. With the change in the fundamental structure of the nation’s welfare program, there have been several efforts by private research organizations to document the policies states have adopted under TANF. The Center for Law and Social Policy and the Center on Budget and Policy Priorities, in collaboration, have created the State Policy Documentation Project to document policies in all 50 states and the District of Columbia. Available on the Web, the State Policy Documentation Project contains information about state policies contained in statutes, regulations, and caseworker manuals, but it does not describe state practices. 
In addition, the Urban Institute has developed and made available to the public a database that documents changes in state program rules since 1996. Prior to and since TANF’s implementation, a considerable body of research about the low-income population has been conducted to examine the circumstances of families affected by welfare reform, the effectiveness of welfare reform initiatives, and the implementation of TANF at the state level. HHS has played a major role in laying the foundation for this welfare reform research. During the early 1990s, HHS granted waivers to states that allowed them to test various welfare reform provisions. In return, states were required to evaluate the effectiveness of the waiver provisions by randomly assigning welfare recipients to either participate in the waiver program or not. With the passage of TANF, states were given the option to continue their waiver evaluations as originally designed or modify the evaluation design. Several states opted to continue with their original random assignment design, while others modified their evaluation designs to focus on examining the implementation of the waivers or describe participants’ employment, earnings, and well-being. Because some elements of the waivers granted to states were incorporated into many TANF programs, the waiver evaluations provide useful insights into issues and designs for research about TANF. However, according to HHS, one aspect of waiver policies may mean that some waiver evaluations may not represent TANF requirements completely. TANF established work requirements for all adult recipients, but states could delay adhering to these requirements under their TANF program, in part or whole, if the requirements were inconsistent with state waiver policies. 
Under the Job Opportunities and Basic Skills Training (JOBS) program, work requirements were mandatory for a work-ready or able-bodied population, excluding a number of subgroups such as those caring for young children and the disabled. For the most part, states that continued the original random assignment design maintained some or all of the JOBS exemptions from work requirements and applied these exemptions in determining who was subject to time-limited assistance. Consequently, while these states’ waivers may incorporate other work policies prescribed under TANF, these policies would not be expected to affect the exempt population. In contrast, in states that do not claim JOBS exemptions from work requirements, all adults are subject to work requirements and time limits on assistance. Thus, while testing TANF-like policies, evaluations that continued the random assignment design may not fully reflect the experience, outcomes, or impacts of fully implemented TANF requirements. In addition to the waiver evaluations, HHS, as well as private foundations, has provided funding for demonstration programs across the country. The demonstration programs are pilot projects designed to measure the effects of a particular strategy, rather than an entire program, on welfare recipients or those eligible to receive welfare. Many of these demonstration programs were intended to increase employment, decrease out-of-wedlock pregnancy, or promote marriage. For example, in the late 1980s, several demonstration programs aimed at decreasing teen pregnancy among welfare recipients were developed. One program, the New Chance Demonstration, randomly assigned teen mothers receiving welfare to participate in a program that offered education or training classes and other support services and then compared the accomplishments of these teen mothers with those of teen mothers who did not participate in the program. 
Given states’ greater responsibility for welfare programs under PRWORA and the larger number of people leaving the welfare rolls, there has been general interest among program administrators and state and local policymakers about the condition of those who are no longer receiving TANF, otherwise known as “leavers.” In response to this concern, a growing body of research about leavers has been initiated at both the state and federal levels. Generally, researchers have found that once low-income families leave welfare, they become hard to keep track of. Moreover, we previously reported that studies of former TANF recipients’ status differ in important ways, including geographic scope, the time period covered, and the categories of families studied, which limits the comparability of the data across states. In order to facilitate cross-state study comparisons, the Office of the Assistant Secretary for Planning and Evaluation (ASPE) within HHS has issued guidance to states and the research community on developing comparable measures for commonly reported outcomes and defined these outcomes. In fiscal year 1998, ASPE awarded approximately $2.9 million in grants to 10 states and three large counties to study leavers, followed by additional grants in fiscal years 1999 and 2000. ASPE also has encouraged the researchers to use comparable measures. Research is also being conducted to examine the effects of welfare reform in metropolitan areas or neighborhoods. This area of research is important because the caseload decline in urban areas has been substantially lower than in other areas of the country. Moreover, urban areas can have higher unemployment rates and a greater concentration of poverty than suburban or rural communities; thus, insights gathered from these studies will be useful in understanding the potential for the success of welfare reform in the event of an economic downturn. 
For example, one study—the Three City Study—will survey primarily low-income, single-mother families from poor and moderate-income areas in Boston, Chicago, and San Antonio, with half of those surveyed being TANF recipients. The survey will collect information on adult and family well-being, employment, and welfare receipt three times within 4 years. Finally, a body of welfare reform research examines the implementation of TANF at the state and local levels. Since PRWORA has not only granted states greater responsibility for providing cash assistance but also changed the nature of cash assistance, it is important to learn how states and localities are coping with these changes. Much of the research about program implementation focuses on challenges faced by state, and in some cases local, administrators in implementing TANF. Typically, in this research qualitative data are collected by visiting state or local TANF agencies; reviewing program records; and interviewing agency officials, caseworkers, and clients. For example, the State Capacity Study conducted by the State University of New York, Rockefeller Institute of Government, is collecting data in 20 states about the implementation of TANF at the state level, such as the structure of government services and information systems used to track clients. Because we expect much of the reauthorization debate to focus on TANF’s four legislative goals, the framework for our data assessment was based on those goals. To assess whether data exist to address the goals, we first created a list of “descriptive” and “effect” research questions relevant to each goal. 
Descriptive questions concern a low-income individual’s or family’s status or behavior, such as the receipt of TANF cash assistance or support services like transportation, housing, child care, or health services; an adult’s employment status and earnings; and a family’s reliance on non-TANF government benefits, such as Food Stamps, Medicaid, or the Earned Income Tax Credit. Effect questions concern the extent to which changes in an individual’s or family’s status or behavior, such as obtaining employment, earning income, avoiding out-of-wedlock births, or forming a two-parent family, are the result of the TANF program. These research questions represent the broad issues that the Congress will consider during TANF’s reauthorization. To summarize our findings, we identified data categories associated with TANF’s goals, some of which are more narrowly focused than the research questions. The data categories represent combinations of topics we found in the data, such as employment and earnings or family and child well-being, that were associated with the research questions. Figure 1 shows the relationships among TANF’s goals, the research questions, and the data categories, several of which are associated with more than one question. We then compared the data categories with the HHS administrative data, the data collected by national surveys, and the data derived from existing and planned studies. Our assessment of the data’s usefulness for determining TANF’s progress is based on the data’s strengths and weaknesses, the design of the survey or study for which the data were gathered, and the topics to which the data related. The criteria we used in assessing the strengths and weaknesses of survey data included survey sample size, the attrition rate of respondents from whom data were collected over time, and survey response rate. For administrative data, we examined the geographic scope and the comparability of the data among states.
The design features examined included what the data collection method was, whether the data were collected at one point in time or at different points in time, and whether the data were used for descriptive analysis of TANF or AFDC program recipients and their families or analysis of the program’s effects. Data that can be used for descriptive analysis are useful for research that addresses questions in the descriptive column of figure 1, and data that can be used for analyses of effect are useful for questions in the effect column of the figure. Together, national surveys, HHS administrative data, and data from state and local studies of welfare reform address TANF’s four legislative goals. The national data provide extensive information related to TANF’s goals of providing assistance to needy families and ending dependency on government benefits through job preparation, work, and marriage. State and local data not only address the same goals as the national data but in some cases also provide information related to the goals of preventing out-of-wedlock pregnancies and promoting family formation. National data provide detailed descriptive information related to two of TANF’s goals, but limited information related to TANF’s goals of preventing out-of-wedlock pregnancies and promoting family formation. HHS administrative data and the six national surveys we examined—the CPS, NLSY, NSAF, PSID, SIPP, and SPD—provide descriptive information related to TANF’s goal of providing assistance to needy families, including information about the change in size and composition of the TANF caseload and the use of noncash assistance by current and former TANF recipients (see fig. 2). National data also address TANF’s goal of ending dependence on government benefits by describing the circumstances of those receiving TANF and those who are no longer receiving TANF.
HHS administrative records and national surveys provide descriptive information about TANF recipients’ participation in work activities, employment status, earnings, and other family well-being measures. HHS administrative records contain information only about whether a recipient is working and how much income that individual earns, while national surveys collect more detailed employment and earnings data, such as the types of jobs held and the hourly wage. National data are also available about family well-being measures, which provide information about how TANF’s focus on work and marriage may be changing the lives of low-income families. For instance, national surveys have information about the amount of personal income spent on health and housing, whether recipients or former recipients rent or own housing, and the well-being of children of welfare recipients. Several of the national surveys provide information about children’s school attendance or developmental status, while SIPP and SPD also collect data about the number of births to teenagers. SIPP is the only national survey we examined that contains information about whether parents have had to terminate their parental rights or give a child up for adoption. National data related to the goals of preventing out-of-wedlock pregnancy and promoting family formation are limited. While all the national data sets include information about recipients’ and nonrecipients’ marital status, only HHS administrative records contain information about out-of-wedlock births among the TANF caseload. However, states did not begin reporting this information to HHS until fiscal year 2000. Aside from information about welfare reform in general, national surveys and HHS collect information about several different groups of individuals affected by TANF, including those who remain on assistance, those who no longer receive TANF, those who are diverted from TANF, and those who are eligible but choose not to participate. 
HHS administrative data and all six national surveys collect data about current and former TANF recipients, but the type of information collected about these individuals differs. As figure 3 shows, only the NSAF and SIPP have data about those diverted from TANF, while the NLSY, NSAF, PSID, SIPP, and SPD have data about individuals who are eligible to receive TANF but do not. The state and local data we reviewed can be classified into four categories that complement and, in some cases, fill in gaps not covered by the national data. Waiver data come from evaluations that tested the effects of programs implemented by states under waivers approved by HHS prior to TANF. Demonstration data come from studies that tested the effectiveness of particular strategies aimed at individuals either receiving welfare or eligible to receive welfare. Leavers data come from administrative records and surveys that describe the circumstances of those who left welfare. Finally, metropolitan and community-based data come from studies that, in general, describe the circumstances of low- income families and TANF participants in specific metropolitan areas, neighborhoods, or communities. Waiver data have been used to examine the effects of TANF-like provisions on welfare recipients’ employment status, birth rates, and marital status, as shown in figure 4. Several states have been evaluating the waiver provisions in their welfare programs by randomly assigning welfare recipients to either the waiver program or AFDC. Waiver programs require participants to follow provisions that later were required or permitted under TANF, such as being required to work or risk losing eligibility for benefits or being allowed to receive welfare for only a limited time. Most of the waiver program evaluations collected data used to analyze the effect of waivers on welfare receipt, employment, and income. 
Data from several of the evaluations have also been used to analyze the effects of waivers on out-of-wedlock pregnancy or family formation. With the passage of PRWORA, several states incorporated their waiver provisions into their TANF program and have been collecting data about the experiences of participants in the program. Some of these states chose not to continue their evaluations as originally designed, instead conducting modified evaluations that typically involved studies that will provide information on the experience of implementing the program. For example, Montana is surveying TANF participants to collect data about the duration of their welfare receipt, the types of noncash assistance they use, and their employment. Demonstration data provide information on topics that are similar to those addressed by waiver data and have also been used to analyze the effects of programs on their participants, but demonstration data differ in two key ways. First, most demonstration data, including all data related to pregnancy prevention and family formation, were collected before PRWORA was enacted. Second, demonstration data were collected for studies focused on how a particular approach affected program participants. In fact, many of the demonstration data we examined were used entirely to assess the effects of various strategies on participants’ employment status and earnings, which helps to distinguish the effects of particular provisions included in a program like TANF. Leavers data provide descriptive information about those who have left welfare. This information includes the length of time an individual received TANF, reasons for leaving welfare, types of noncash assistance used, and employment and earnings information. In addition, some leavers data sets contain information about former recipients’ marital status, and a few have data about the number of pregnancies and births among former recipients. 
Metropolitan and community-based data cover some of the same issues as the other data categories, including information about TANF work requirements and time limits. Although the same issues are addressed, the data are collected in large cities or neighborhoods in order to examine the circumstances of welfare recipients in areas that may have high concentrations of poverty or limited access to jobs. In addition, metropolitan and community-based data provide information about groups other than TANF recipients and former recipients—including individuals diverted from TANF and those who are eligible to participate in TANF but do not. Although existing data provide rich information about the lives of families who are receiving or have received TANF, the strengths and weaknesses of these data affect their usefulness for understanding welfare under the TANF block grant. National data can be analyzed to gain a descriptive picture of what has happened under TANF for the nation as a whole. However, of the seven national data sets we reviewed, only two can be used to describe the well-being of families receiving TANF within individual states. Although waiver and demonstration data can be analyzed to gain information about TANF’s effects, these analyses can be done within only a limited number of states and disparate localities. We examined nearly 40 data sets that could be analyzed for information about the circumstances of former recipients. However, only a subgroup of these data sets met criteria that allowed the sample to be generalized statewide. These data sets represented 15 states. In some cases, the value of survey data collected from those who left welfare was limited because few former recipients actually responded to the surveys: in some cases, former recipients could not be located, and in other cases they chose not to answer the questions posed to them. 
Metropolitan and community-based data can be analyzed to describe changes over time in the lives of welfare recipients in urban centers. Much of this data collection will continue beyond 2001. The strength of the national data is that they were collected from samples selected randomly from the nation’s population and include low-income families and TANF recipients in numbers sufficient to allow reliable estimates about these groups. In addition, most of the national data were collected for the same individuals over time, allowing changes in welfare recipients’ employment, earnings, and well-being to be tracked across programs implemented at different times. However, all the national surveys have participants who drop out of the survey sample over time, and this may limit how well the samples represent the nation’s welfare recipients. National data are collected from random samples that contain low-income families and TANF recipients. Because samples from national surveys are selected randomly, they are, at the time of selection, representative of the population at large, including the welfare population. In addition, all the national data sets we reviewed have sample sizes large enough to allow reliable estimates about the nation’s low-income and TANF populations—as sample size increases, the degree of precision of the estimates made using that sample also increases (see table 1). As shown in figure 5, two national data sources collect data on individuals at one point in time; others collect data on the same individuals across time. In both cases, the data can be used for comparisons between groups of individuals living under welfare provisions implemented at different time periods. Five national surveys—the CPS, NLSY, PSID, SIPP, and SPD—collect data from the same individuals over time. For the SIPP, the Census Bureau, after a specified period, changes the group of individuals from whom data are collected.
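The relationship between sample size and estimate precision noted above can be illustrated with the standard margin-of-error formula for an estimated proportion. The sample sizes below are hypothetical and are not drawn from table 1; the sketch simply shows why larger samples permit more reliable estimates.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95-percent margin of error for an estimated proportion p from a
    simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Precision improves as sample size grows (illustrative sizes only).
for n in (500, 2_000, 8_000, 32_000):
    print(f"n={n:>6}: +/- {margin_of_error(n):.1%}")
```

Quadrupling the sample size halves the margin of error, which is why surveys with small within-state samples cannot support reliable state-level estimates even when the national sample is large.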
For example, the 1993 SIPP panel followed a group of individuals through 1996. In 1996, a new group was randomly selected and followed through 2000. Data collected over time could be analyzed to describe how people cycle on and off TANF, how their use of benefits changes over time, and how their family well-being changes. In addition, comparisons could be made between groups covered by different welfare provisions. For example, AFDC recipients included in the 1993-96 SIPP panel could be compared with TANF recipients who were part of the 1996-2000 SIPP panel. The NSAF, as well as HHS administrative records, has collected data from different samples of individuals in different years. For example, in 1997 one group of people completed the NSAF; another group completed the survey in 1999. In cases such as these, the samples from different years can be compared with each other to look for changes across time. For those national surveys that collect information about changes in welfare across time, the likelihood that survey participants will drop out over time increases, potentially affecting how well the data actually represent all members of the nation’s low-income and TANF populations. In general, the greater the attrition rate, the less likely a sample is to be representative of the larger population from which it was drawn. Those who have continued participating in the survey may be different from those who stopped or dropped out. As surveys that collect data over time, the NLSY, PSID, SIPP, and SPD all have experienced sample loss, as shown in table 2. Concerns about attrition are especially significant for the SPD, because it was designed specifically to track welfare recipients from AFDC through TANF. Census has tried mathematically adjusting available responses to compensate for the survey’s sample loss, but this adjustment has not sufficiently remedied the problem, according to a Census official. 
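The mathematical adjustment mentioned above typically takes the form of reweighting: respondents who resemble those who dropped out receive larger weights so the remaining sample still approximates the original one. The sketch below illustrates the general idea with hypothetical subgroup labels and retention rates; it is not the Census Bureau's actual procedure.

```python
def attrition_weights(retention_by_group: dict) -> dict:
    """Assign each subgroup a weight equal to the inverse of its
    probability of remaining in the panel (groups with zero retention
    cannot be reweighted and are dropped)."""
    return {g: 1.0 / r for g, r in retention_by_group.items() if r > 0}

# Hypothetical retention rates: 80 percent of higher-income panelists
# stayed in the survey, but only 50 percent of low-income panelists did.
weights = attrition_weights({"higher_income": 0.80, "low_income": 0.50})
print(weights)  # each remaining low-income respondent counts double
```

The limitation the Census official describes follows from this design: reweighting can only compensate for attrition to the extent that those who stayed truly resemble those who left within each subgroup.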
Census will take steps to lessen attrition through intensive follow-up with survey dropouts to enlist their participation and through the use of monetary incentives for future respondents to the survey. For national surveys, the response rate—the share of the people in the survey sample who were asked to respond and actually did—has been large enough to allow the survey results to be generalized beyond those who completed the survey, with the exception of the 1999 NSAF. Most practitioners of survey research, including GAO, require at least a 70- to 75-percent response rate before survey data can be generalized beyond those who completed the survey. As table 3 shows, the response rate for all the national surveys except the 1999 NSAF was at or above the 70-percent standard. Given the survey’s response rate, using the 1999 NSAF survey data would require determining whether patterns in who responded and who did not respond existed and what this means for how well respondents represent the original sample. For those surveys that collect data on the same individuals over time, response rates sometimes are considered in conjunction with rates of attrition. The major limitation of most existing national data is that they cannot be used for state-level analyses. In general, national data sources have state sample sizes that are too small to allow reliable generalizations about TANF recipients within individual states. The NLSY, PSID, SIPP, and SPD collect data not from states per se, but from regions that, in some cases, include more than one state. Thus, while these data can be analyzed to provide a descriptive picture of TANF for the nation, they cannot be used within states for descriptive analyses or to analyze the effects of states’ TANF provisions. This does not mean that researchers do not use these data sources for state-level analyses.
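The 70-percent generalizability standard described above reduces to a simple calculation. The survey names and counts below are hypothetical; the sketch only shows how a response rate is computed and screened against the standard.

```python
def response_rate(completed: int, eligible: int) -> float:
    """Response rate: completed interviews divided by the number of
    eligible sample members asked to respond."""
    return completed / eligible

# Hypothetical surveys, not figures from table 3.
surveys = {"Survey A": (1_820, 2_400), "Survey B": (1_500, 2_400)}
for name, (done, asked) in surveys.items():
    rate = response_rate(done, asked)
    verdict = "meets" if rate >= 0.70 else "falls below"
    print(f"{name}: {rate:.0%} ({verdict} the 70-percent standard)")
```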
For example, some researchers combine several years of CPS data to obtain adequate sample sizes within states for state-level analyses. However, Census, which administers the CPS, SIPP, and SPD, does not recommend using data from these surveys for state-level analyses, because doing so when sample sizes are small may produce findings that are not reliable. Two national data sources, HHS administrative records and the NSAF survey, can be used for state-level analysis, but with limitations. HHS administrative records provide data from all 50 states and the District of Columbia. However, the reporting requirements for these data are not completely standardized across states, so that how a variable is defined may vary among states. For example, each state may define the work or work-related activities in which TANF recipients participate as they think appropriate to the state program. Like HHS administrative records, NSAF survey data can be used for state-level analyses. NSAF has samples large enough to allow state-level analyses in 13 states, representing 58 percent of the fiscal year 1999 national TANF caseload; this is not the case in the 37 remaining states. For example, the number of low-income children surveyed for the 1997 NSAF ranged from a low of 760 to a high of 1,813 in each of the 13 states where NSAF collected samples large enough to permit state-level analysis. However, the number of low-income children surveyed in the 37 remaining states averaged 35 per state, a number too small to allow reliable conclusions about the children of TANF recipients in any of these states. Even if the issue of sample sizes within states were resolved, obstacles to using the national data to analyze TANF’s effects within states would still exist. The lack of information about the choices states have made about TANF policies and program rules has been identified as one of the challenges to using national data to analyze TANF’s effects.
However, research organizations have collected this information. The Center for Law and Social Policy has worked with the Center on Budget and Policy Priorities to document policies in all 50 states and the District of Columbia, and the Urban Institute has developed a state database that documents state program rules. Yet, even with this information, using national data to measure state-level effects poses challenges. The first challenge is deciding with whom TANF recipients should be compared. To test TANF’s effects, the employment, earnings, and well-being of individuals in the program must be compared with those of individuals who are not in the program. In the case of TANF, it would be difficult to determine what group should provide the point of comparison. Because waivers introduced TANF-like policies and program rules while AFDC was still in effect, it would be difficult to select a group of welfare recipients whose experiences with welfare were not influenced by TANF. The second challenge is determining the effect of any single welfare provision given the multiple provisions that make up states’ TANF programs. For example, TANF recipients are required to work, and states must impose penalties or sanctions when recipients do not comply with work requirements. In such cases, it would be difficult to separate the combined effects of work requirements and any penalties or sanctions that were imposed into the individual effects of each. A third challenge is detecting the long-term effects of state programs that have been recently implemented. Although PRWORA was enacted in 1996, states implemented their TANF programs at different points in time. Some states were still refining their TANF programs at the beginning of 1998. Consequently, the long-term effects of TANF may not yet be realized. Finally, state-level analyses may not be the best way to measure TANF’s effects in every state.
Some states have further devolved TANF to localities, and different localities may implement a state’s TANF provisions differently. In total, 17 states have given local governments responsibility for TANF program design and implementation. The strength of the waiver and demonstration data is that they can be used to analyze TANF’s effects, but with few exceptions these data were collected from city and county samples rather than statewide samples. (See app. II for the localities examined.) Most of the waiver and demonstration data were collected as part of experiments—studies that randomly assigned welfare recipients to groups that were subject to different welfare provisions. Experiments, when done correctly, are recognized as the most rigorous way of determining the extent to which an observed outcome can be attributed to the program itself, rather than to differences among the program participants. Over half of the waiver data sets and virtually all of the demonstration data sets we reviewed consisted of data from experiments. Of the waiver data sets, about half were collected from city and county samples, with the others being collected from statewide samples. All of the demonstration data sets were collected from city and county samples. Overall, 6 of the 54 waiver and demonstration data sets that could be used for analyses of effect were collected from statewide samples. According to the project directors of two waiver evaluations, the high cost of conducting rigorous program evaluations may explain, in part, why data sets used to analyze TANF’s effects tend to use samples from cities and counties and not entire states. Given limited resources, researchers may choose to conduct rigorous evaluations in selected cities or counties rather than sacrifice rigor to evaluate a program statewide. 
Data sources we reviewed for both the Vermont and Iowa waiver evaluations mentioned budget constraints as a factor that led researchers to limit their data collection efforts. Another limitation of the waiver and demonstration data is that most often they were collected prior to the implementation of TANF. This is not surprising given that in many cases the waiver provisions and the demonstration projects were intended to test provisions before they were adopted and implemented. However, the provisions tested may not have been those ultimately adopted by the state. Finally, in almost all cases in which waiver evaluations and demonstration projects collected survey data, response rates were above the 70-percent standard (see table 4). The strength of the leavers data is that in most cases, they were collected from statewide samples. However, in some cases, leavers data collected using surveys may not be representative of a state's leaver population. Although we reviewed nearly 40 leavers data sets, on the basis of the type of data available, response rates, and the absence of significant differences between survey respondents and nonrespondents, we concluded that state-level analyses could be done for 15 states using the data sets we examined. To be representative of a state's leavers population, survey data need to meet the 70-percent standard for response rates, or, through a comparison of survey respondents with nonrespondents, show that the two groups do not differ significantly. When a state has both administrative data and survey data available, the administrative data could be used in place of survey data that are not representative. As figure 6 shows, Arkansas, Florida, Georgia, North Carolina, and South Carolina have either survey data that meet the standard for response rates or data from survey respondents who were not significantly different from nonrespondents.
Arizona, Colorado, the District of Columbia, Illinois, Kansas, Missouri, Virginia, Washington, and Wisconsin have both administrative data and survey data. The response rate for the District of Columbia, Illinois, Kansas, Virginia, and Wisconsin was below 70 percent, but for Virginia, a comparison of respondents with nonrespondents revealed no significant differences between the two groups. Although New York has no survey data, its administrative data provide information about the state’s leavers. California, Massachusetts, and Texas are the three states for which, given the available data, state-level analyses of leavers cannot be done. We previously reported that eight leavers studies covering seven states had collected adequate information to allow the study findings to be generalized to the states’ welfare populations. Thus 4 states—Indiana, Maryland, Oklahoma, and Tennessee—can be added to the list of 15 states we identify in figure 6 as having data that can be generalized statewide. In appendix II we list all the sources we reviewed that provide data on those who have left welfare. Some researchers may wish to compare those who left TANF with those who left AFDC on outcomes such as employment, earnings, and well-being. Contrasting outcomes for these two groups would require deciding which AFDC leavers provide the best point of comparison. Many factors specific to the year in which recipients left the welfare rolls would influence their employment prospects, wages, and well-being. For example, labor markets and economic conditions in a given year would influence former recipients’ employment opportunities. Historical influences such as these would complicate the issue of selecting a comparable group of AFDC leavers and TANF leavers. The strength of the metropolitan and community-based data is that they can be used in descriptive analyses that provide information about how the lives of low-income families and TANF participants have changed over time. 
Because data collection is occurring over time, in some cases it has yet to be completed. For example, the Los Angeles Family and Neighborhood Study (LA FANS) is collecting data about participation in welfare programs from residents of 65 neighborhoods in Los Angeles County over a 4-year period. LA FANS began data collection in January 2000 and will continue data collection through 2004. Most of the materials we reviewed regarding metropolitan and community-based data sets did not report information about attrition rates. When response rates were reported, they were above the 70-percent standard. Figure 7 shows the time periods for which the data are or will be available for different metropolitan areas and communities. Three of the metropolitan and community-based data sources have measures that can be used to analyze TANF’s effects, even though the data were not collected as part of an experiment. For example, data from the Fragile Families study can be used to examine TANF’s effects by drawing comparisons between the 3,675 unmarried parents and the 1,125 married parents who compose the survey sample in cities with populations over 200,000. Data collection for Fragile Families began in 1998 and will continue through 2004. The data have already been used to examine differences in relationship quality between married and unmarried couples, including whether a father gave money to or helped a mother in a nonmonetary way during pregnancy. The current body of research on TANF addresses many issues of interest to the Congress but does not provide a comprehensive national picture of TANF. However, existing national data and data from state and local studies could be pieced together to develop a descriptive picture of what has happened to TANF participants in all 50 states. In addition, within a limited number of states and various cities and counties, existing data can be used to conduct analyses of TANF’s effects. 
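The representativeness screen described earlier for the leavers data (the 70-percent response-rate standard, a respondent/nonrespondent comparison as a fallback, and administrative data as a substitute) can be sketched as a simple decision rule. This is only an illustrative sketch; the function and field names are hypothetical and are not part of any study's actual tooling.

```python
# Illustrative decision rule for whether a state's leavers data can
# support statewide analyses, following the screens described above.
# All names are hypothetical; the 70-percent response-rate threshold
# comes from the report.

def leavers_data_usable(response_rate=None,
                        respondents_match_nonrespondents=False,
                        has_admin_data=False):
    """Return (usable, basis) for statewide leavers analyses."""
    # Survey data are representative if the response rate meets the
    # 70-percent standard ...
    if response_rate is not None and response_rate >= 0.70:
        return True, "survey (meets 70-percent standard)"
    # ... or if respondents do not differ significantly from
    # nonrespondents (as in Virginia).
    if respondents_match_nonrespondents:
        return True, "survey (respondents comparable to nonrespondents)"
    # Otherwise administrative data, where available, can substitute
    # (as in the District of Columbia, Illinois, Kansas, and Wisconsin).
    if has_admin_data:
        return True, "administrative records"
    # States such as California, Massachusetts, and Texas fail all screens.
    return False, "no representative data"

print(leavers_data_usable(response_rate=0.75))   # meets the standard
print(leavers_data_usable(response_rate=0.60))   # fails all screens
```

A state falling below the response-rate standard is not automatically excluded; as the figure 6 discussion shows, the fallback comparisons and administrative records rescue several states.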
National survey data can be used with data from HHS administrative records for descriptive analyses of TANF's progress nationwide. HHS administrative data can be used for analyses within each of the 50 states, and national survey data can be analyzed for national trends. These analyses could be compared to examine the extent to which the employment experiences, for example, of current and former TANF recipients in individual states conform with or depart from the experiences of such individuals identified with national survey data. This comparison could be extended to the individual states and localities covered by the NSAF data, waiver and demonstration data, leavers data, and metropolitan and community-based evaluation data. While piecing the data together in this way would build on their strengths, each data type still has limitations. Specifically, national survey data provide national samples useful for comparing the lives of welfare recipients covered by welfare provisions implemented at different times. However, attrition or low response rates may affect the degree to which these samples represent all members of the nation's low-income and TANF population. Within each of the 50 states, HHS administrative data can be analyzed to gain insight into current recipients' use of noncash benefits, among other things, but the lack of standardized reporting requirements would complicate comparisons across states. Supplemental descriptive analyses for individual states can be done using NSAF survey data, leavers data, waiver and demonstration data, and metropolitan and community-based data. In addition, like the national survey data, many of these data represent multiple measures over time. However, these analyses in many cases can be generalized only to cities and counties and not to entire states. Existing data can also be analyzed to gain information about TANF's effects.
Although the 1997 and 1999 NSAF survey samples do not include pre-TANF welfare recipients, the samples do include other populations, such as low-income families who do not participate in TANF, whose employment, earnings, and well-being can be compared with those of TANF recipients, assuming adequate sample sizes for both groups. Moreover, because NSAF has sample sizes in 13 states large enough to allow state-level analyses, the employment, earnings, and well-being of TANF recipients in those states can be considered in relation to the state's TANF programs and policies. However, using the NSAF data for such analyses would require resolving the challenges to analyzing effects described earlier in this report. Similarly, although most of the metropolitan and community-based evaluation samples do not include pre-TANF welfare recipients, other populations represented in the study samples could be compared with TANF recipients. Finally, waiver and demonstration data can be analyzed to gain information about TANF's effects, keeping in mind that this information is about the effects of programs and provisions often implemented prior to TANF and implemented in cities and counties rather than entire states. The data available for addressing TANF's goals will provide useful information, but with some limitations. Given the costs, some limitations may be difficult to overcome. Our examination of the data raised three issues. First, for a comprehensive assessment of TANF, it is important to have data for a representative sample of TANF recipients and nonrecipients that allow for analyses of effect at the state level. The federal government has made an investment in national surveys, which either in whole or in part are intended to gather information about the lives of TANF recipients. One of these, the SPD, was funded as a means to gather data about TANF recipients.
For another, the SIPP, the Census Bureau added a special section of questions about welfare and reworded questions so that they would better capture respondents’ participation in state programs. However, even with these efforts, none of Census’ surveys currently being administered can be used for state-level analyses of TANF’s effects because of small sample sizes within individual states. In addition, the SPD has a high attrition rate. The Census Bureau plans to take steps to improve response to the SPD through intensive follow-up with survey dropouts to enlist their participation and through monetary incentives for future respondents to participate in the survey. However, the issue of small sample sizes at the state level will remain unresolved. Second, HHS has encouraged state agencies to study the effects of their TANF programs through the AFDC waiver requirement for experimental studies and subsequent research initiatives. Moreover, our examination of data indicates that, because of the variability in TANF program provisions across states, analysis of TANF’s effects at the state and local levels can be done with the greatest confidence. However, even when conducted at the state and local levels, studies designed to examine TANF’s effects tend to be costly, time-consuming, and impractical to implement in every state. In some cases, conducting an evaluation for an entire state is determined to be so expensive that data collection is limited to a portion of the state. For example, the evaluation of Vermont’s waiver program focused on 6 of 12 welfare service districts. The evaluation’s 42-month follow-up survey was administered to only these 6 district offices and, owing to cost constraints, included a subset of the sample for whom administrative records, rather than survey responses, were collected. 
Policymakers, federal and state officials, and the welfare reform research community will need to seek ways to balance the need for information about TANF’s effects with the resource demands of rigorous studies. Third, both qualitative and quantitative data may be needed to understand what has happened to former TANF recipients. Leavers are a difficult population to track, and, in some cases, using multiple methods of quantitative data collection has not necessarily increased the number of former recipients who could be located or who responded to surveys. In fact, in some of the studies we reviewed, the low rate of success in gathering data from these individuals makes the data’s usefulness questionable. Surveys that used only one mode of data collection, such as telephoning former TANF recipients, generally had the lowest response rates. Some leavers’ studies followed telephone surveys with personal interviews of those who could not be reached by phone or who did not respond. However, even the use of multiple modes of data collection did not always ensure high response rates. Given the difficulties inherent in collecting quantitative data from this group, other data collection strategies that use local communication networks to identify families as well as interviews of respondents in their homes may be needed to gain information about the lives of TANF leavers. In commenting on a draft of this report, HHS said that the report will be of help to the Congress and other interested parties. In its technical comments, HHS expressed concern that in highlighting the importance of statewide samples, we understated the value of data from local samples. In response to this concern, we have noted in the report not only that findings from local samples are important but also that, in some cases, they provide data only recently available from national surveys. We concur with HHS that a sample need not be statewide in order for findings to be useful. 
However, we have emphasized the value of data that can be generalized to the state level because of the Congress’ interest in a picture of TANF’s progress nationwide. HHS’ comments appear in appendix IV. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Honorable Tommy G. Thompson, Secretary of Health and Human Services; appropriate congressional committees; and other interested parties. We also will make copies available to others on request. If you or your staff have any questions about this report, please contact me on (202) 512-7202 or David D. Bellis on (415) 904-2272. Another GAO contact and staff acknowledgments are listed in appendix V. This appendix discusses in more detail our scope and methodology for identifying, selecting, and assessing studies and surveys that might provide data to help researchers as they seek to describe what has happened to recipients of Temporary Assistance for Needy Families (TANF) and to estimate the effect of welfare reform on them. Because no comprehensive list of data sources for welfare reform research exists, we used a judgmental sampling method for our assessment of data resources. We began our work by examining six key critiques of welfare reform research that had been issued, in draft or final form, by the fall of 1999. The six critiques listed in figure 8 both gave us insight into issues that will probably arise in assessing TANF and identified studies that may be potential sources of data for an assessment of TANF. We started the development of a list of data sources from three of the critiques—the Research Forum’s report and its related on-line database, the National Research Council’s interim report, and Peter Rossi’s paper. 
To ensure that this list was comprehensive, we consulted with officials at the Department of Health and Human Services (HHS) about important bodies of work in the welfare reform research field. We also conducted follow-up interviews with HHS project officers and experts in the welfare reform research community to ensure that we had identified the most relevant national surveys and studies, particularly those that might have data about family, marriage, and pregnancy issues. As a result of these discussions and an examination of the original list, we designed a judgmental sample of potential data sources for welfare reform research that included the following categories: national surveys and HHS’ TANF administrative data; studies that collected data about the major TANF subpopulations in three or more states or municipalities; studies of TANF leavers; HHS’ waiver evaluations; and studies listed on the websites of HHS’ Administration for Children and Families (ACF), HHS’ Office of the Assistant Secretary for Planning and Evaluation (ASPE), and the Welfare Information Network of the Finance Project. We then began to develop lists of the surveys and studies in each of the sample’s categories. The national surveys included in our list were the Current Population Survey (CPS), the National Longitudinal Survey of Youth (NLSY), the National Survey of America’s Families (NSAF), the Panel Study of Income Dynamics (PSID), the Survey of Income and Program Participation (SIPP), and the Survey of Program Dynamics (SPD). We used information from ASPE and from the National Conference of State Legislatures to identify leavers studies sponsored by HHS or states. Similarly, we used information from ACF to ensure that our list contained the body of research funded by ACF that focused on waivers implemented by state welfare agencies prior to TANF’s authorization. As we added items to the list, we continually checked to avoid any duplication. 
This comparison involved our judgment, as some lists were of projects or studies and others were of study reports. Because we relied on multiple reviews of the body of work undertaken in the welfare reform research community, we believe that the list of 443 entries we compiled included the key sources of data. We selected surveys and studies systematically from this list within each sample category. We were interested in surveys or studies that were as comprehensive as possible in geographic coverage and topics addressed. Thus, we selected all of the national surveys and the HHS administrative data. We also selected all studies on the original list that by their description appeared to have produced data concerning the major subpopulations affected by TANF in three or more states, municipalities, or counties. This resulted in 55 studies and surveys. We then selected studies that pertained to individual states in the following way. First we selected all leavers studies financed by ASPE. Of the leavers studies listed by the National Conference of State Legislatures and those mentioned in an article authored by Brauner and Loprest, we included only those that had not been included in our previous report or were not from a state that already had an ASPE-funded study. In states that had issued multiple reports for their leavers studies for people who left welfare in different years, we selected the most recent study. When a state had no ASPE-funded study or any listed by the National Conference of State Legislatures or Brauner and Loprest, but did have a report available on its Web site, we selected the Web report. Waiver studies generally produced several reports. We selected for review the most recently issued waiver report because the data topics examined were similar in the initial and later reports. 
After selecting these types of studies and surveys, we removed from our list studies that did not appear to contain data that could answer our research questions or that used data from one of the national surveys on our list. In summary, we excluded literature searches, reviews of research on state policies or programs, technical assistance projects focused on improving or evaluating information systems or databases, and studies based on data from a national survey that we had included in our list. A list of 239 studies remained. Finally, we obtained advice from five welfare experts about which of these 239 studies we should include. Ultimately, we selected 17 of these studies. In all, we judgmentally selected 141 national surveys and studies that yielded 187 data sets to review. A complete list of the national surveys and studies that we examined for data is provided in appendix II. Identifying data resources for a comprehensive assessment of TANF required criteria that could be used to assess data sets. The first step in this process was to express each of TANF’s goals as a research question. In looking at the goals themselves, it is evident that some express expected results—for example, that work and marriage will improve the well-being of low-income families. Assessing TANF’s progress toward these expected results required, in part, questions about TANF’s effects. However, some of TANF’s goals focus on its general purpose—for example, providing assistance to needy families. In this case, assessing TANF’s progress required research questions that are descriptive, that is, questions that ask what public assistance looks like under TANF. To translate TANF’s goals into research questions, we considered the nature of each of TANF’s goals and formulated questions to represent key issues the Congress will consider at reauthorization. 
As shown in figure 1, we created corresponding questions that asked for descriptions of what has happened under TANF, the effects of TANF, or both. We then specified the information, or data topics, necessary to address our research questions. We developed a data collection instrument that listed the data topics associated with each question and used the instrument to record the data topics found in each data set examined. It is important to note that what we identified as data topics were not equivalent to specific measures. In other words, our coding captured the fact that a certain data source collected measures on employment. It did not capture the specific manner in which employment was measured. In addition to data topics, we collected such pertinent information as the unit of analysis, population, sampling method, sample size, dates covered by data collection, and design of the study for which data were gathered. We recorded response rates and attrition rates when they were relevant given the method of data collection. We also looked to see if data had been or were being collected for a comparison or control group. To summarize our findings, we identified data categories related to TANF’s goals, some of which represented the research questions and others of which were more narrowly focused. The narrowly focused data categories represented combinations of data topics, such as employment and earnings or family and child well-being, that were associated with the research questions. We took this approach for a variety of reasons. First, in making a judgment that data were available to address particular questions, we required that certain data topics be present in combination and, for effect questions, that the data were collected using control groups or comparison groups. However, a data source could provide relevant data topics, even though the data topic could not be used to address the particular question we had posed. 
Rather than discount the value of these data topics, we decided to note their availability. Second, in many cases, the same data topics and data categories were being used to address different questions. For example, as figure 1 shows, the data categories associated with employment were related to 5 of our 11 questions. Presenting our findings in terms of data categories allowed us to report on all of the data topics, including those that were not available in the combinations needed to address a research question. Finally, to assess how the data might be used for an assessment of TANF, we considered three attributes of the data. We considered the geographic scope of the sample; the data topics included in the data set; and whether or not the data could be used for descriptive analyses or analyses of effect, given the design of the study. In determining the geographic scope of the sample, we looked at the sampling method and sample size, as well as at response rates and attrition rates, since both affect how well a sample represents a population. We relied on the design of the study, the data topics included in a data set, and how researchers had used the data to make a judgment about whether the data could be used for descriptive analyses or analyses of effect. We coded data as being usable for analyses of effect when they came from a study that made comparisons between groups, one of which served as the treatment group and the other of which represented the absence of the treatment (the comparison group). In deciding whether a study included a treatment and a comparison group, we recognized that such groups could be formed through experimental designs, quasi-experimental designs, or statistical modeling. Because this assessment is based on a judgmental sample and the data needs of an assessment of TANF's progress are derived from TANF's legislative objectives, several study limitations should be considered.
First, while every attempt was made to be comprehensive in sample design and selection, some relevant data sources may have been omitted. Second, framing the data needs for an assessment of TANF's progress around TANF's objectives, which focus on the behavior and well-being of low-income children and families, excluded from consideration the bodies of welfare reform research concerned with institutions, including studies of TANF's implementation at the state and local levels and descriptions of TANF program policies and practices. Third, the study's focus on identification of quantitative data resulted in our eliminating data from most studies that used qualitative data collection methods. Fourth, because our bibliographic sources for surveys and studies included both existing and planned surveys and studies, complete documentation for data sets was not always available. Finally, because our coding focused on whether a certain data source collected measures on specific topics, but not on the precise measures used, we did not assess whether measures were comparable across studies.
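The coding rule described in this appendix, under which a data set supports analyses of effect only when its study design compares a treatment group with a comparison group, can be sketched as follows. The identifiers are hypothetical and illustrative only; they are not part of GAO's actual data collection instrument.

```python
# Illustrative sketch of the coding rule used to classify data sets.
# Per the methodology above, a data set supports analyses of effect only
# when the study design forms a treatment group and a comparison group;
# experimental designs, quasi-experimental designs, and statistical
# modeling all qualify as ways of forming such groups.

EFFECT_DESIGNS = {"experimental", "quasi-experimental", "statistical-modeling"}

def classify_data_set(design, has_comparison_group):
    """Return the kinds of analyses a coded data set can support."""
    uses = ["descriptive"]  # any coded data set supports description
    if has_comparison_group and design in EFFECT_DESIGNS:
        uses.append("effect")
    return uses

print(classify_data_set("experimental", True))             # both uses
print(classify_data_set("cross-sectional survey", False))  # descriptive only
```

Note that the coding captured only whether a source collected measures on a topic, not the specific measures used, so a rule like this says nothing about comparability of measures across studies.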
Waiver evaluation reports reviewed and the types of data each used:
- The Family Transition Program: Implementation and Three-Year Impact of Florida's Initial Time-Limited Welfare Program (Administrative; Survey)
- Iowa's Family Investment Program: Impacts During the First 3½ Years of Welfare Reform (Administrative)
- Reforming Welfare and Rewarding Work: Final Report on the Minnesota Family Investment Program (Administrative; Surveys (2))

In addition to those named above, the following individuals made important contributions to this report: Patrick DiBattista designed the data collection instrument used to assess the 187 data sets reviewed, oversaw data collection, and designed and conducted the analysis of the data's strengths and weaknesses; Andrea Sykes played a major role in data collection and developed the analysis of the data's availability to address TANF's goals; Stephen Langley III also played a major role in data collection, provided consultation on multivariate analysis issues, and prepared the report's methodology appendix; James Wright provided guidance on study design and measurement; and Gale Harris, Kathryn Larin, and Heather McCallum provided consultation on TANF policy and implementation issues.

The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013
Orders by visiting: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC
Orders by phone: (202) 512-6000; fax: (202) 512-6061; TDD: (202) 512-2537

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone.
A recorded menu will provide information on how to obtain these lists. Web site: http://www.gao.gov/fraudnet/fraudnet.htm e-mail: [email protected] 1-800-424-5454 (automated answering system)
The two largest federal school meal programs, the NSLP and the SBP, aim to address problems of hunger, food insecurity, and poor nutrition by providing nutritious meals to children in schools. The NSLP, established in 1946, and the SBP, permanently established in 1975, provide nutritionally balanced low-cost or free lunches and breakfasts in participating schools. At the federal level, these programs are administered by FNS as part of its strategic goal to improve the nation's nutrition and health, and the department has laid out plans to increase access to, and utilization of, these school meal programs. At the state level, the NSLP and SBP are typically administered by state education agencies, which operate the programs through agreements with SFAs. SFAs, in turn, administer the school meal programs at individual schools. SFAs must offer meals that meet federal nutritional requirements, operate the food service on a nonprofit basis, and follow the record-keeping and claims procedures required by USDA. As shown in fig. 1, SFAs receive cash reimbursements from FNS through the state agency for each meal they serve, based on the type of meal served (lunch or breakfast) and the meal category (free, reduced price, or full price). In addition, unless they are eligible for free meals, students pay a full-price or reduced-price fee to SFAs for each meal they receive, depending on their household income. To supplement the federal reimbursement, some state agencies also use state funds to provide cash reimbursements to SFAs based on the number of meals they serve. In school year 2008-2009, FNS per meal reimbursement rates ranged from 24 cents for a full-price lunch to $2.57 for a free lunch (see table 1). The majority of the meals served through the NSLP and SBP are provided for free or at a reduced price to low-income students. In fiscal year 2008, about half of the school lunches served were provided for free and about 10 percent were provided at a reduced price.
Similarly, about 71 percent of the school breakfasts served were provided for free and about 10 percent were provided at a reduced price (see fig. 2). The laws governing the school lunch and breakfast programs establish maximum charges for reduced-price meals, but SFAs set their own fees for full-price meals. School districts are required to determine whether students are eligible to receive free or reduced-price school meals based on federal poverty guidelines. Students are eligible for free meals if their household income is less than or equal to 130 percent of the federal poverty level, or if they are homeless, runaway, or migrant, as defined in the law. Students are eligible for reduced-price meals if their household income is greater than 130 percent and less than or equal to 185 percent of the federal poverty level (see table 2). Typically, parents submit school meal applications to school districts each school year, including self-reported household income, household size, and information on whether the household participates in any other federal nutrition assistance programs. Districts review school meal applications and certify students as being eligible for free or reduced-price meals, and are required by FNS to annually verify the accuracy of their eligibility determinations for a sample of free and reduced-price meal applicants. If students’ household income is above 185 percent of the federal poverty level, they pay the full-price fee for school meals set by the SFA. According to USDA, nearly half of the households that received free or reduced-price school lunches from mid-November to mid-December 2007 faced food insecurity, in that they had difficulty providing enough food for all their members because of a lack of resources. Specifically, ERS analyzed data from an annual food security survey conducted by the U.S. 
Census Bureau in December 2007 and found that 47 percent of the households that received free or reduced-price school lunches in the month prior to the survey faced food insecurity at some time during 2007. Overall, ERS found that the NSLP reached 33.6 percent of the 13 million food insecure households in the United States in the month prior to the survey. While a typical school district participating in the NSLP or SBP collects fees from eligible students who receive reduced-price meals, districts with ERP programs have chosen to provide free meals to reduced-price-eligible students and bear the cost of the reduced-price fees that these students otherwise would have paid (for a comparison of fees and reimbursements for districts with and without an ERP lunch program, see fig. 3). Both typical school districts and districts with ERP programs collect full-price meal fees from other students and receive a cash reimbursement from FNS for each meal they serve, based on the type of meal served (lunch or breakfast) and the meal category (free, reduced price, or full price). As shown in table 3, in recent years, participation in school meals has increased overall and among students in all three meal categories. A variety of factors may affect the number of students participating in school meals, such as economic conditions, changes in student enrollment, improvements in food quality and meal choices, and school meal program marketing efforts. Despite these increases in participation, some students who are certified as being eligible to receive free or reduced-price meals do not participate in school meals, as shown in figure 4. According to FNS, in fiscal year 2008, about 81 percent (15.4 million) of the approximately 19 million students certified as eligible for free meals participated in school lunch and about 39 percent (7.5 million) of these students participated in school breakfast. 
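Stepping back to the eligibility rules described above, the income thresholds can be expressed as a short classification rule. The sketch below is an illustration of the thresholds as stated in this report, not actual district certification software; the function name is hypothetical, and the federal poverty guideline for the household’s size must be supplied by the caller.

```python
def meal_category(household_income, poverty_guideline, categorically_eligible=False):
    """Classify a student into a school meal category per the federal thresholds.

    Illustrative only; 'poverty_guideline' is the federal poverty level
    for the household's size, supplied by the caller.
    """
    if categorically_eligible:  # homeless, runaway, or migrant students
        return "free"
    ratio = household_income / poverty_guideline
    if ratio <= 1.30:
        return "free"      # income at or below 130% of the poverty level
    if ratio <= 1.85:
        return "reduced"   # above 130% and at or below 185%
    return "full"          # above 185%: pays the SFA's full-price fee
```

For example, with a hypothetical poverty guideline of $20,000, a household income of $26,000 (exactly 130 percent) would qualify for free meals, $35,000 (175 percent) for reduced-price meals, and $40,000 (200 percent) for neither.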
Similarly, about 72 percent (3.1 million) of the approximately 4.3 million students certified as eligible for reduced-price meals participated in school lunch, and about 24 percent (1 million) of these students participated in school breakfast. A recent Mathematica Policy Research study identified school type (elementary school, middle school, or high school) and student attitudes toward school meals as factors affecting both the breakfast and lunch participation of students who are certified as eligible for free or reduced-price meals. This study found that when controlling for other factors, high school students are less likely to participate in school meals than middle school students, and middle school students are less likely to participate in school meals than elementary school students. This study also found that students who are satisfied with the taste of school meals are much more likely to participate in school meals than students who are not. Some individual schools and districts have implemented programs that provide free meals to all students regardless of income. These schools and districts still receive a cash reimbursement from FNS for each meal they serve, based on the type of meal served (lunch or breakfast) and the meal category (free, reduced price, or full price) and are still required to determine student eligibility for free and reduced-price meals and report the number of meals they serve by meal category. However, these schools and districts do not collect reduced-price fees or full-price fees from students and therefore need to make up for this lost revenue in other ways. Because the federal reimbursement is significantly higher for free and reduced-price meals than for full-price meals, these programs may not be as costly an alternative for schools with a very high percentage of students eligible for free or reduced-price meals relative to schools with a lower percentage of these students.
Two USDA special assistance provisions of the NSLP and the SBP allow participating schools and districts to provide reimbursable, universal free meals to all participating students regardless of their household income. These special assistance provisions are intended to reduce the administrative burden for individual schools and districts by allowing them to process school meal applications and determine eligibility for free and reduced-price meals less frequently. For additional information about universal free meals programs, see appendix II. Acting on their own initiative, at least 5 states and 35 school districts eliminated the reduced-price fee for breakfast, lunch, or both meals in school year 2008-2009, primarily to increase participation or reduce hunger. We identified 5 statewide ERP programs in Colorado, Maine, Minnesota, Vermont, and Washington, and 35 district-level programs in 19 other states out of approximately 14,000 districts nationwide. (See fig. 5.) The 5 state programs included more than 1,400 districts. The states and districts with ERP programs included both small and large districts based on student enrollment, with an average percentage of reduced-price-eligible students similar to the national average of 9 percent across nearly 14,000 districts. (See table 4.) State- and district-level officials we interviewed most often cited reducing hunger and food insecurity or increasing participation of low-income students as primary reasons for implementing the ERP programs. State officials from 4 of the 5 states cited reducing hunger and food insecurity, through increasing participation of low-income students, as the primary reason for implementing the ERP programs. For example, an official in 1 of the 5 states said the state had ranked high in the nation for hunger and food insecurity several years ago, and the official thought the ERP program would be one way to help address this problem.
Similarly, in our survey, almost all district officials cited reducing hunger and food insecurity and increasing participation of reduced-price-eligible students as major or moderate reasons for implementing the ERP programs as well. (See fig. 6.) SFA officials we interviewed in one district said the district started its ERP program to help those students who were not eating breakfast or lunch because their families could not afford either meal, even at the reduced price, much less both meals. Some state- and district-level officials we interviewed or surveyed also cited the intention to improve academic performance and increase overall participation as major reasons for implementing these ERP programs. One state implemented its statewide program primarily in response to the view that eating breakfast is related to academic success. States and districts implemented ERP programs in various ways. For example, state- and district-level officials said they eliminated reduced-price fees for either breakfast, lunch, or both meals. (See table 5.) There was also some variation in the grades included in the state- and district-level ERP programs. Officials from all 5 states we interviewed and most of the 35 districts we surveyed eliminated the reduced-price fee for at least one meal in all grades. However, Colorado and Washington provided free lunch to reduced-price-eligible students in specific grades in addition to breakfast for all grades; Colorado’s lunch ERP program was limited to kindergarten through second grade, and in Washington ERP lunch was limited to kindergarten through third grade. One district ERP program was limited to eighth grade and below, and some district ERP programs included preschool, while others did not. In addition, some states and districts used ERP programs in combination with other free meal programs. Four of the 5 states with ERP programs included schools or districts with universal free meal programs.
For example, while Colorado schools provide ERP for breakfast, Denver Public Schools, the state’s largest school district, has offered universal free breakfast for the last few years. Thirteen of the 35 districts with ERP programs for one meal also had universal programs for the other meal. For example, the Hillsborough County School District in Florida provides lunch through its ERP program but has also offered free breakfast to all students through a universal free meals program since 2002. Unlike ERP programs that only subsidize the fees paid by students eligible for reduced-price meals, universal free meal programs also subsidize the fees paid by students for full-price meals, and the cost is borne by the SFA. One state official and SFA officials in most districts we surveyed reported that their ERP programs have increased the rate of participation among students who are eligible for reduced-price meals. For example, according to a Washington official, after the state implemented its ERP program for school breakfast in September 2006, the breakfast participation rate of reduced-price-eligible students increased from about 19 percent (15,373 students) in October 2005 to about 25 percent (21,644 students) in October 2006. In addition, officials in Maine and Vermont, which both implemented state ERP programs in September 2008, told us preliminary data suggest that these programs have increased the participation of reduced-price-eligible students. Similarly, in our school district survey, SFA officials in 28 of the 31 districts with ERP breakfast programs reported that these programs have increased the participation of reduced-price-eligible students in school breakfast, while officials in 2 districts reported no change in breakfast participation and one district official did not know whether breakfast participation had changed.
Officials in 20 of the 23 districts with ERP lunch programs reported that these programs have increased the participation of reduced-price-eligible students in school lunch, while again officials in 2 districts reported no change in lunch participation and one district official did not know whether lunch participation had changed. In a separate survey question, some SFA officials provided data indicating that the rate of participation among reduced-price-eligible students increased, on average, by 9 percentage points in breakfast and 11 percentage points in lunch, since their ERP programs were implemented (see table 6). The increase in the participation rate among reduced-price-eligible students in these districts may not be entirely due to the ERP programs, as participation rates may vary even in districts without ERP programs over time, but at least some of the increases in participation appear to be a result of the ERP programs themselves. SFA officials in districts with ERP programs reported that the average increase in the lunch participation rate among reduced-price-eligible students (11 percentage points) was greater than the average increase in this participation rate among students in the free (5 percentage points) or full-price (5 percentage points) meal categories for their districts. Further, in the four districts that implemented their ERP programs in school year 2007-2008 and provided participation data—2 of these districts had ERP programs for breakfast and lunch, and 2 districts limited their ERP programs to breakfast—the increase in the breakfast participation rate (2 to 11 percentage points) and lunch participation rate (7 to 10 percentage points) among reduced-price-eligible students was greater than the national change in these participation rates (less than a 1 percentage point change each for breakfast and lunch). 
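The participation rates and percentage-point comparisons above follow from the same basic calculation. The sketch below is illustrative only (the function name is hypothetical), using figures cited earlier in this report.

```python
def participation_rate_pct(participants, certified_eligible):
    """Participation rate among certified-eligible students, in percent."""
    return 100.0 * participants / certified_eligible

# FY2008 national figures cited earlier: about 15.4 million of roughly
# 19 million students certified as free-eligible participated in lunch
free_lunch_rate = participation_rate_pct(15_400_000, 19_000_000)  # ~81%

# Percentage-point change, the measure used in the comparisons above,
# e.g., Washington's ERP breakfast rate rising from about 19% to 25%
breakfast_pp_change = 25.0 - 19.0  # a 6 percentage-point increase
```

Note that a percentage-point change is a simple difference between two rates, not a relative (percent) change; the report's comparisons use percentage points throughout.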
Two states and most school districts with ERP programs observed no effect on school meal program errors related to student eligibility or meal counting. Implementing an ERP program would generally not be expected to have an effect on school meal program errors, because school districts and SFAs are required to follow the same administrative procedures regardless of whether or not they collect reduced-price fees. According to FNS officials, districts that eliminated the reduced-price fee are still required to process school meal applications and certify students as being eligible for reduced-price meals under federal poverty guidelines, and SFAs are still required to count the number of reduced-price meals they serve and report this meal count to FNS. In 2 of the 5 states with ERP programs, officials said they believe that these programs had no effect on school meal program errors. In 2 other states, officials told us that they were unable to determine whether the ERP programs had an effect on errors. In the fifth state, an official said that districts’ meal-counting errors increased temporarily because of the implementation of the state ERP program, which required districts to change the way they reported to the state the number of meals served. However, this official told us that these errors have since returned to their previous levels. In our survey, SFA officials in 32 of the 35 school districts with ERP programs reported that these programs had no effect on errors related to student eligibility, and officials in 31 districts reported that these programs had no effect on meal-counting errors. ERP programs involve additional costs to states and school districts, as well as to the federal government. The state or school district implementing the ERP program bears the cost of the reduced-price fee—no more than 30 cents for each breakfast served and 40 cents for each lunch served—that otherwise would have been paid by reduced-price-eligible students.
Across the 5 states with ERP programs, officials told us that the costs for them to implement these programs ranged from about $144,000 to about $3 million per year, and across the 4 school districts with ERP programs we interviewed, SFA officials said that program costs ranged from about $12,000 to about $370,000 per year. In addition, both state- and district-level ERP programs involve an additional cost to the federal government because these programs generally lead to increased participation among reduced-price-eligible students, thus increasing the reimbursement that FNS provides to states. In addition to the FNS reimbursement, some states also provide a reimbursement to SFAs based on the number of meals they serve. In these cases, the increased participation among reduced-price-eligible students associated with ERP programs involves additional costs to states. While increased federal reimbursements partially offset program costs for the state and district ERP programs that experienced increased participation, all 5 state ERP programs used state appropriations to cover their remaining program costs, and districts used a variety of revenue sources to manage their remaining program costs. Increased FNS reimbursements can offset program costs when the amount of the per meal reimbursement exceeds the cost to the SFA of producing the meal. In our survey, SFA officials in 21 of the 35 districts with ERP programs said that they received an increased reimbursement amount from FNS as a result of increased participation. For example, an SFA official from the Grand Rapids Public Schools told us that the total additional cost to the district associated with the ERP program is about $92,000 per year, but the net cost of the program is about $64,000 per year, because the SFA experienced an increase of about $28,000 per year in its FNS reimbursement as a result of increased participation. 
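The cost offset described in the Grand Rapids example is a straightforward subtraction. A minimal sketch (illustrative only; the function name is hypothetical):

```python
def net_erp_cost(additional_cost, added_fns_reimbursement):
    """Annual net ERP program cost after the increased FNS reimbursement."""
    return additional_cost - added_fns_reimbursement

# Grand Rapids figures cited above: about $92,000 in total additional
# annual cost, offset by about $28,000 in increased FNS reimbursement
net = net_erp_cost(92_000, 28_000)  # 64_000, the ~$64,000 net cost cited
```

The offset only reduces, rather than eliminates, the district's cost because the extra reimbursement is earned on newly served meals, which themselves cost money to produce.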
SFA officials in 2 of these 21 districts told us that increased participation also allowed them to obtain additional state funding. For example, because the Salt Lake City School District receives state liquor tax funding based on the number of lunches served by the SFA, the increased participation associated with the ERP program also resulted in additional state funding. While SFA officials in 16 districts told us that the additional revenue from increased participation covered program costs, officials in 3 districts told us that it did not cover program costs, and officials in 2 districts said they did not know whether it covered program costs. Several SFA officials told us that their districts covered program costs by supplementing increased reimbursement revenue from FNS with school district revenue from à la carte sales, catering, or other district funds, and one of these officials also reported increasing the full-price meal fee to help cover costs. Even so, officials in the 2 districts we identified that had discontinued ERP programs told us they did so because they were unable to continue to cover program costs. Some SFA officials identified factors that minimized the additional costs associated with implementing ERP programs. A few SFA officials noted that their districts were already bearing the cost of the reduced-price fee for some students prior to implementing ERP programs because reduced-price-eligible students participating in school meals were often unable to pay this fee. For example, an SFA official in 1 district said that over 33 percent of reduced-price-eligible students were receiving meals but were not paying the reduced-price fee. Also, some districts experienced economies of scale because ERP programs increased participation but did not increase their labor costs.
Specifically, SFA officials in these districts told us that they were able to serve meals to more students without hiring additional staff or increasing work hours for existing staff, because the additional number of meals served at each school was relatively small. Similarly, in our survey, SFA officials in 30 of the 35 districts with ERP programs reported that these programs had no effect on or decreased the overall workload of kitchen and cashier staff at participating schools. SFA officials in nearly all of the school districts we surveyed reported that ERP programs either had no effect on or decreased the overall administrative burden on district staff (see table 7). Several officials who reported that ERP programs decreased this administrative burden explained that district staff no longer spend time trying to collect unpaid meal charges from reduced-price-eligible students who receive school meals but are unable to pay the reduced-price fee. One of these officials further explained that prior to the implementation of the ERP program, students who were unable to pay the reduced-price fee would charge these meals and build up a balance of unpaid meal fees, and staff would then spend time trying to collect these fees from parents. Several officials noted that attempts to collect these fees were sometimes unsuccessful, and one official said he believed that the cost of the administrative time spent trying to collect these fees was greater than the value of the fees themselves. Most of the SFA officials we surveyed reported that ERP programs have had a generally positive effect on students’ attitudes about and parents’ level of satisfaction with the school meal programs (see table 8). SFA officials in several districts also reported other benefits. 
One official told us she believes that the ERP program has increased administration and faculty support for the school meal programs, and another official noted that the program has increased the school board’s level of satisfaction with the school meal programs. SFA officials in several other school districts noted that their ERP programs have been well received by their communities. Some SFA officials we surveyed told us they believe that ERP programs have improved students’ academic performance, although they did not conduct research on the effect of these programs on academic performance (see table 9). Officials in more than half of the districts (19 of 35) responded that they did not know what effect their ERP programs had on academic performance. One SFA official noted that it would be difficult to link improvements in academic performance to ERP programs because there are many factors that affect academic performance. Even so, some research studies indicate that participation in school breakfast may be associated with improvements in performance on standardized tests and math grades as well as improvements in school attendance and punctuality. Supportive legislators and nonprofit organizations played a major role in establishing ERP programs at the state level, and support from school boards and superintendents was a major factor in establishing programs at the district level. Officials that we spoke with from all 5 states cited strong support from key legislators and various nonprofit organizations concerned with child nutrition and hunger as a major factor in establishing an ERP program under state law. For example, an official from the state of Colorado told us that the state school nutrition association had contacted state legislators to promote the elimination of reduced-price fees for school meals, and one legislator was particularly supportive of implementing a statewide ERP program. 
A Washington state official told us that a coalition of several organizations contacted every member of the state Ways and Means Committees to promote legislation that would eliminate reduced-price fees. As shown in figure 7, most SFA officials from the district-level programs we identified reported that supportive school boards and superintendents were major factors in helping implement their ERP programs. We also asked state- and district-level officials we interviewed and surveyed about the effect that a number of other factors might have had on the implementation of ERP programs. Specifically, we asked about a lack of program funding, limited information on program development, and requirements to continue annual certification of student eligibility for reduced-price meals, but in general few states and districts indicated that these were major factors that hindered implementation. See figure 8 for district survey responses. For state ERP programs, lack of funding was not a major factor largely because funds were appropriated by the state legislature when these programs were established. However, at least one official indicated that the state’s decision to limit the number of grades covered by the ERP program for lunch may have been due to funding restrictions. Regarding program development, while officials in one state found information on other state ERP programs to have been very helpful, another state official cited unique circumstances as one reason why the information was not that helpful in developing her state’s ERP program. Regarding the district-level ERP programs, these districts were generally committed to making their programs work, had the support they needed, and were able to succeed. However, the number of districts that may have tried to implement ERP programs and been unsuccessful is not known. 
Finally, most state- and district-level program officials did not see continuing to certify reduced-price-eligible students as a major hindrance because systems to capture this information were already in place. Funding for the state ERP programs may be vulnerable to across-the-board budget cuts, but most district-level SFA officials reported less dependence on state funding and more options for managing ERP program costs. Officials from all 5 state programs indicated that dedicated state appropriations were a primary source of ERP funding, and officials from 4 of these states indicated that a loss of state funding would be a threat to the continuation of their programs. While state or local budget cuts might also affect district funding, especially in the current fiscal environment, some district-level ERP programs might be better situated to withstand such cuts. For example, we asked several of the SFA officials from district-level ERP programs that we interviewed what would happen to their programs if their funds were cut. Even under declining fiscal conditions, when we conducted our interviews during the latter part of 2008 and early 2009, the officials indicated that the SFAs would explore ways to raise additional revenue or reduce expenditures so that they could continue to cover ERP program costs. Further, officials from several SFAs that we interviewed indicated that the net costs of their district-level ERP programs were less than 1 percent of their annual expenditures. Specifically, an SFA official from the Grand Rapids Public Schools—with annual expenditures of about $8.3 million—estimated the annual net cost of its ERP program at about $64,000 per year. An SFA official from the Great Neck Public Schools in New York told us that the average number of students participating in reduced-price lunch was only about 7.6 percent (221 of 2,921) of the total number of students who participated in school lunch on a daily basis.
In addition, SFA officials told us that they have flexibility to potentially offset revenue losses. For example, the SFA official from Great Neck told us that her district’s ERP program had previously covered costs through revenue generated by à la carte sales, but noted that recent declines in sales may require the district to begin using reserve funds to cover program costs. Despite potential fiscal challenges at the time we conducted our interviews and survey, all 5 states and 30 of the 35 districts surveyed reported that they plan to continue their ERP programs in the future. The other 5 districts had not decided to discontinue their programs, but said they did not know if the programs would continue. Some state- and district-level officials believe that there is an even greater need for this type of program at a time when some families are experiencing increased economic hardship. However, state and local fiscal conditions have continued to deteriorate since we began our audit work and the effect of the changes in the economic climate on ERP programs is unknown. We provided a draft of this report to USDA for review and comment. USDA did not provide written comments. However, FNS provided us with technical comments that helped clarify our report’s findings, which we incorporated where appropriate. We are sending copies of this report to relevant congressional committees and other interested parties and will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. 
To provide a better understanding of the experiences of states and school districts with programs that eliminated the reduced-price fee (known as ERP programs), this report presents information on the following questions: (1) What is known about the state and local jurisdictions that have eliminated the reduced-price fee for the school lunch or breakfast programs? (2) What have been the experiences of state and local jurisdictions that have eliminated reduced-price fees with respect to factors such as participation, errors, and costs? (3) What factors may help or hinder the establishment or continuation of programs that eliminate reduced-price fees? To answer these questions, we identified states and school districts that have implemented ERP programs and collected information about their experiences. We are not aware of any prior research that has rigorously studied ERP programs at the state or district level. We conducted semistructured phone interviews with state child nutrition officials from the 5 states we identified as having ERP programs (Colorado, Maine, Minnesota, Vermont, and Washington). We also conducted a Web-based survey of local school food authority (SFA) officials in 51 school districts initially identified as having ERP programs and gathered in-depth information from 4 of these districts through site visits or phone interviews. In addition to collecting information from these states and school districts, we interviewed officials at the U.S. Department of Agriculture’s (USDA) Food and Nutrition Service (FNS) and Economic Research Service (ERS) as well as representatives of child nutrition advocacy organizations and professional associations, reviewed relevant studies, and conducted semistructured phone interviews with SFA officials in 2 school districts we identified that had discontinued ERP programs. 
We conducted our work from August 2008 to July 2009 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product. To learn about the experiences of states with ERP programs, we conducted semistructured phone interviews with officials in the 5 states we identified as having these programs: Colorado, Maine, Minnesota, Vermont, and Washington. In August 2008, FNS provided us with a preliminary list of 4 states that had implemented ERP programs and a fifth state with pending legislation that, if enacted, would create a state ERP program. However, FNS is not necessarily aware of all the states with ERP programs, because states are not required to report this information to FNS. We contacted this fifth state and determined that it had already implemented a state ERP program. In each of these 5 states, we interviewed child nutrition officials responsible for administering the school meal programs at the state level. There may be additional states with ERP programs that we did not identify as part of this study. We also conducted a Web-based survey of SFA officials in 51 school districts initially identified as having implemented these programs. We identified school districts with ERP programs using a preliminary list of these districts, by state, provided by FNS in August 2008. This list included 43 districts in 16 states. However, FNS is not necessarily aware of all the districts with ERP programs, because districts are not required to report this information to FNS. 
We conducted follow-up with child nutrition officials in Washington, D.C., and the 28 states for which no information was provided, as well as officials in 5 states for which information was incomplete or needed clarification, and officials in 5 states for which contact information for district-level SFA officials was either missing or needed clarification. As a result of our follow-up efforts, we removed 9 districts from the original FNS list and added 17 new districts, for a total of 51 districts. We surveyed SFA officials in all 51 districts included in this revised list. There may be additional school districts with ERP programs that we did not identify as part of this study. Because the universe of districts with ERP programs is unknown, the results of our survey cannot be generalized to all districts with ERP programs. We conducted the survey from December 2008 to March 2009, and achieved a response rate of 83 percent. We received survey responses from SFA officials in 44 school districts, 35 of whom confirmed that their districts had implemented ERP programs. To increase the survey response rate, we conducted follow-up by both e-mail and phone with all nonrespondents. The questionnaire asked SFA officials about the number of students eligible for reduced-price meals; the meals and grades covered by the ERP programs; the reasons they implemented these programs; the duration of these programs; the effects of the programs on participation, errors, and costs; the factors that helped or hindered program implementation; and whether or not they plan to continue the ERP programs in the future. While we did not validate specific information that SFA officials reported in our survey, we reviewed their responses and conducted follow-up as necessary to determine that the data were complete, reasonable, and sufficiently reliable for the purposes of this report. Because we did not select a probability sample, our survey results do not have sampling errors. 
However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions. We took steps to minimize nonsampling errors, such as pretesting the draft questionnaire. Specifically, we pretested the draft questionnaire by phone with SFA officials in 5 school districts—1 district in each of the states of Arizona, Indiana, Tennessee, Utah, and Virginia—in September through December 2008. In the pretests, we were generally interested in the clarity of the questions and the logical flow of the questionnaire. For example, we wanted to ensure that the definitions used in the questionnaire were clear and understandable, the categories provided in closed-ended questions were complete, and the ordering of questions was logical. We made revisions to the questionnaire in response to each of the pretests. In addition, using a Web-based survey minimized nonsampling errors because this format eliminated the need for and the errors associated with a manual data entry process. Specifically, the Web-based survey allowed respondents to enter their responses directly into the survey Web site and automatically created a database record for each respondent. To further minimize errors, the programs used to analyze the survey data were independently verified to ensure the accuracy of this work. To gather in-depth information from several school districts with ERP programs, we conducted site visits with 2 school districts (Grand Rapids Public Schools, Michigan and Salt Lake City School District, Utah) and semistructured phone interviews with two other school districts (Great Neck Public Schools, New York and Hillsborough County School District, Florida), all of which also responded to our Web-based survey. 
We selected these districts based on the following criteria: (1) variation in the duration of the ERP program, (2) variation in the percentage of students eligible for free or reduced-price meals, and (3) variation in location and city size. In each of these districts, we interviewed SFA officials responsible for administering the school meal programs. In addition to collecting information from states and school districts with ERP programs, we interviewed officials at FNS and ERS as well as representatives of child nutrition advocacy organizations and professional organizations, including the Food Research and Action Center (FRAC) and the School Nutrition Association (SNA). We also conducted semistructured phone interviews with SFA officials in 2 school districts we identified (Milpitas Unified School District, California, and Tacoma Public Schools, Washington) that had discontinued ERP programs. Additionally, we reviewed relevant studies, such as USDA’s NSLP/SBP Access, Participation, Eligibility, and Certification (APEC) Study, USDA’s School Lunch and Breakfast Cost Study–II, and a Mathematica Policy Research study conducted for ERS on the factors associated with school meal participation and the relationships between different participation measures. Some schools and districts have chosen to provide universal free meals to all participating students regardless of their household income under two alternative special assistance provisions of the National School Lunch Program (NSLP) and the School Breakfast Program (SBP), known as Provision 2 and Provision 3. These special assistance provisions in the SBP and NSLP are intended to reduce the administrative burden for individual schools and districts by allowing them to process school meal applications and determine eligibility for free and reduced-price meals less frequently. 
Specifically, these schools and districts are only required to process applications and determine eligibility in the first year (base year) of a 4-year or 5-year period. Although these provisions are intended to reduce the administrative burden, participating schools and districts bear the costs of providing free meals to students who qualify for reduced-price or full-price meals. Participating schools and districts still receive cash reimbursements from FNS based on the meal category for which students are eligible. According to FNS, during the 2007-2008 school year, more than 2,900 schools—about 3 percent of the 95,331 schools participating in the NSLP in that year—were participating in Provision 2 or Provision 3. Table 10 compares key aspects of standard school meal programs, Provision 2 programs, Provision 3 programs, and ERP programs. Heather McCallum Hahn (Assistant Director) and Dan Alspaugh (Analyst-in-Charge) managed all aspects of the assignment. Caitlin Croake and Rosemary Torres Lerma made significant contributions to all aspects of this report. In addition, Luann Moy provided technical support in design and methodology, survey research, and statistical analysis; Susan Baker provided statistical analysis; James Rebbe provided legal support; Mimi Nguyen provided graphic design assistance; and Susannah Compton assisted in the message and report development. | In fiscal year 2008, about 31 million children participated in the National School Lunch Program and more than 10 million children participated in the School Breakfast Program each school day. The U.S. Department of Agriculture's (USDA) Food and Nutrition Service (FNS) spent $11.7 billion on the school meal programs in that year. The majority of school meals are provided for free or at a reduced price to low-income students.
Some states and school districts have chosen to implement programs that eliminate the reduced-price fee (known as ERP programs) and instead provide free meals to students eligible for the reduced fee. GAO was asked to provide information on (1) what is known about the states and districts that have eliminated the reduced-price fee for school meals, (2) the experiences of states and districts that have ERP programs with respect to participation, errors, and costs, and (3) the factors that may help or hinder the establishment or continuation of ERP programs. To obtain this information, GAO interviewed FNS officials, interviewed officials from state- and district-level programs, and conducted a Web-based survey of the 35 districts identified as having ERP programs. However, because the universe of ERP programs is unknown, survey results cannot be generalized to all districts with ERP programs. USDA did not provide formal written comments, but FNS provided technical comments, which were incorporated where appropriate. GAO identified 5 states and an additional 35 school districts in 19 other states that eliminated the reduced-price fee for school meals, primarily to increase participation or reduce hunger. States and districts eliminated reduced-price fees for either breakfast or lunch or, in some cases, for both meals. Further, some ERP programs included all grades, and some covered only the early school years. One state- and most district-level officials GAO interviewed or surveyed reported that ERP programs have increased the rate of participation among students who are eligible for reduced-price meals. Participation may increase for a number of reasons; however, for those districts that implemented ERP programs in the most recently completed school year (2007-2008) and provided participation data, their average increase in the participation rate among reduced-price-eligible students was greater than the national change in this rate over the same year. 
ERP programs involve additional costs to states and districts, as they bear the cost of the reduced-price fees that these students otherwise would have paid. For the state and district ERP programs that experienced increased participation, FNS reimbursements, and thus federal costs, also increased. While the increased reimbursements partially offset program costs, state ERP programs covered their remaining costs with state funds and districts used a variety of revenue sources. The majority of district-level officials reported that their districts experienced benefits from the ERP programs, such as a decrease in the burden on staff of collecting fees from reduced-price-eligible students who charged their school meals and built up a balance of unpaid fees. State officials GAO interviewed cited support from legislators and nonprofit organizations in establishing ERP programs in state law. Supportive school boards and superintendents were a major factor in establishing district-level programs. Most state officials indicated that a loss of state funding would threaten program continuation, while some district-level officials indicated they would try to raise additional revenue or reduce expenditures to cover program costs. As of late 2008, officials from all 5 states and most district-level ERP programs planned to continue their programs. |
The SSI program is authorized by title XVI of the Social Security Act. To qualify for SSI, an individual must meet financial eligibility and age or disability criteria. Generally, SSA determines an applicant’s age and financial eligibility; the state’s Disability Determination Service determines an applicant’s initial medical eligibility. The maximum monthly benefit for an individual was $458 in 1995 and increased to $470 in 1996. An individual is ineligible for SSI in any given month if throughout that month he or she is an inmate of a public institution (42 U.S.C. 1382(e)(1)(A)). The title XVI regulation defines an inmate of a public institution as a person who can receive substantially all of his or her food and shelter while living in a public institution. SSA operating instructions provide that a prison is a public institution. SSI recipients may receive their payments in one of several ways: (1) SSI checks can be mailed to them at their residences or, in some cases, to post office boxes; (2) SSI checks can be direct-deposited into recipients’ checking or savings accounts; or (3) SSI checks can be sent to recipients’ representative payees—individuals or organizations that receive checks on behalf of SSI recipients who are unable to manage their own affairs (including legally incompetent people, alcoholics, drug addicts, and children). A representative payee is responsible for dispensing the SSI payment in a manner that is in the best interest of the recipient. Many events can affect a recipient’s eligibility or payment amount. SSA requires that recipients voluntarily report these events and also monitors and periodically reviews recipients’ financial eligibility. SSI recipients are responsible for reporting information that may affect their eligibility or payment amounts. If the recipient has a representative payee, the payee is responsible for reporting such information to SSA.
Significant events to be reported include a change in income, resources, marital status, or living arrangements, such as admission to or discharge from a public institution. A redetermination is a review of financial eligibility factors to ensure that recipients are still eligible for SSI and receiving the correct payment. A redetermination addresses financial eligibility factors that can change frequently, such as income, resources, and living arrangements. Redeterminations are either scheduled or unscheduled. They are conducted—by mail, telephone, or face-to-face interview—at least every 6 years, but may be conducted more frequently if SSA determines that changes in eligibility or erroneous payments are likely. The redetermination process includes a question about whether the recipient spent a full calendar month in a hospital, nursing home, other institution, or any place other than the recipient’s normal residence. Since the SSI program was established, SSA has recognized the potential for erroneous payments if SSI recipients become residents of public institutions, including state and federal prisons and county and local jails. SSA headquarters has established computer-matching agreements with state prison systems and the federal Bureau of Prisons. Under these agreements, the participating states and the Bureau can regularly provide automated prisoner information to SSA. SSA matches the information against its payment records to identify SSI recipients incarcerated in state and federal prisons. According to information provided by SSA, the process of matching prisoner information against the SSI payment records is a cost-effective way to identify SSI recipients who are in prison. However, to succeed, SSA determined it is essential that field offices work closely with public institutions, both county and local, to facilitate the flow of information concerning the SSI population. 
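At its core, the match SSA performs under these agreements is a lookup of reported inmates' SSNs in its SSI payment records. A minimal sketch, assuming hypothetical field names and simple in-memory structures rather than SSA's actual systems:

```python
# Illustrative sketch only: field names ('ssn', 'admitted') and the
# dict-based payment store are hypothetical, not SSA's actual systems.

def find_incarcerated_recipients(prisoner_reports, ssi_payments):
    """Return (ssn, report) pairs for reported inmates who appear in the
    SSI payment records and so may be receiving erroneous payments.

    prisoner_reports: iterable of dicts, each with an 'ssn' key.
    ssi_payments: dict mapping SSN -> current payment record.
    """
    hits = []
    for report in prisoner_reports:
        ssn = report.get("ssn")
        if ssn and ssn in ssi_payments:  # skip records with no usable SSN
            hits.append((ssn, report))
    return hits
```

In practice, each hit would be referred to a field office to verify the incarceration before any payment is suspended.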
Accordingly, SSA has, for years, instructed its field offices to (1) maintain regular contact (for example, regular visits) with prisons in their areas and (2) establish procedures for promptly obtaining information on events, such as admissions and discharges, that affect SSI eligibility and payment determinations. On May 24, 1996, the Commissioner of Social Security sent draft legislation to the Congress. This proposed legislation is designed to promote the timely implementation of SSI provisions requiring cessation of payments to prisoners. The legislation would authorize the Commissioner to enter into agreements with willing state and local “correctional facilities.” Under these agreements, the Commissioner would pay the facility for each report of a newly admitted inmate who has been a Social Security or SSI beneficiary but is not, as a prisoner, entitled to payments. In August 1996, the Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act of 1996. The act authorizes the Commissioner of SSA to enter into agreements with interested institutions. Under these agreements, the institutions would provide SSA with the names, SSNs, and other information about their inmates. SSA, subject to the terms of the agreements, would pay an institution for each inmate who SSA subsequently determines is ineligible for SSI. The act specifies, however, that the institution’s primary purpose must be to confine individuals for offenses punishable by confinement for more than 1 year. This 1-year requirement would seem to preclude SSA from entering into agreements with, as well as making payments to, county and local jails, which generally incarcerate prisoners for shorter periods. Overall, in the jail systems we reviewed, we detected a total of $5 million in erroneous SSI payments to prisoners. This includes $3.9 million to 2,343 current prisoners in 12 jail systems and $1.1 million to 615 former prisoners in 2 jail systems.
Typically, an erroneous payment continued for 6 months or less and totaled about $1,700. SSA was unaware that many of these payments had occurred. SSA had made erroneous payments to 2,343 prisoners, who were incarcerated in the 12 jail systems at the time of our work. These 2,343 prisoners represent about 4 percent of the prisoners with verified SSNs in these jail systems. As shown in table 1, SSA made payments to some prisoners in each of the 12 jail systems. The percentage of prisoners who received SSI payments differed somewhat among these jail systems, ranging from 2 to about 7.7 percent. In addition, there were 926 SSI recipients in jail at the time of our review who had not yet been there for 1 full calendar month. Collectively, these 926 prisoners were being paid about $387,000 a month. To the extent these prisoners remain in jail for at least 1 calendar month and SSA remains unaware of their incarceration, SSI payments made after a full month of incarceration would be erroneous. As of the date we reviewed each of the 12 systems, we estimate that SSA had paid $3,888,471 to the 2,343 current prisoners (see table 2). The average amount paid to an individual prisoner varies among the jail systems, but the overall average is approximately $1,700. Some payments are much larger. Erroneous payments to individual prisoners ranged from less than $100 to over $17,000. We determined that 136 prisoners received in excess of $5,000, including 19 who received more than $10,000. The percentage of current prisoners by range is shown in figure 1. Large erroneous payments to prisoners occurred because SSA paid some of them for long periods of time. For example, one SSI recipient was arrested on June 27, 1993, and was still in jail on November 30, 1995. SSA paid this prisoner monthly for this entire period. The erroneous monthly payments totaled about $13,000. As of November 30, 1995, this SSI recipient was still in jail and SSA was continuing to pay him.
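The totals and averages above can be cross-checked with simple arithmetic; this sketch uses only the figures reported in the text (the $1.1 million former-prisoner total is approximate as reported):

```python
# Cross-check of the reported totals (all input values taken from the text).
current_total = 3_888_471      # paid to current prisoners in the 12 jail systems
current_prisoners = 2_343
former_total = 1_100_000       # approximate: "$1.1 million" to 615 former prisoners

# Overall erroneous payments, in millions of dollars.
overall_millions = round((current_total + former_total) / 1_000_000, 1)

# Average erroneous payment per current prisoner.
avg_current = round(current_total / current_prisoners)

print(overall_millions)  # 5.0, matching the "$5 million" overall figure
print(avg_current)       # 1660, consistent with "approximately $1,700"
```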
We determined that 85 percent of the 2,343 current prisoners had received erroneous payments for a period of 6 months or less at the time of our review. However, some were paid for longer periods. We found a total of 94 prisoners who had been paid for more than 1 year, including 13 who were paid for more than 2 years. The range of months during which payments continued is shown in table 3. The erroneous payments to current prisoners are likely to increase. Based on a review of SSA’s records, we estimate that at the time of our review, SSA was unaware that 1,570 of the 2,343 recipients were in jail. SSA therefore continued to erroneously pay them. But SSA had stopped paying the remaining 773 and, for some of them, established an overpayment. We obtained information from two jail systems for 15,998 former prisoners who were released from jail between January 1 and June 30, 1995. We determined that of these former prisoners, 615 (3.8 percent) received SSI while incarcerated. In total, these former prisoners received about $1.1 million in payments. The number of former prisoners, total erroneous payments, and average amount to individual prisoner by jail system are shown in table 4. Included in the count of 419 former prisoners in Cook County are 17 who were also in our population of current prisoners. This indicates that these 17 were in prison and received SSI payments on at least two occasions. In Cook County, where we had data for both current and former prisoners, erroneous payments to former prisoners were higher. In that county, about 73 percent of the former prisoners were erroneously paid $1,000 or more, compared with 48 percent of the current prisoners. The difference is predictable because former prisoners have completed their time in the county or local jail and current prisoners have not. In Wayne County, where we had data only on former prisoners, 38 percent of the former prisoners were erroneously paid $1,000 or more.
Based on a review of SSA’s records, we estimate that SSA is unaware that it erroneously paid 454 (74 percent) of the 615 former prisoners (see table 5). As of December 1995, SSA was making SSI payments to 340 of the 454 former prisoners. However, SSA was not recovering these payments by withholding a portion of the current payments. Our review suggests that many of the erroneous payments to prisoners stem from the fact that SSA field offices were not following existing instructions, which direct field offices to contact county and local jails to detect incarcerated SSI recipients. Other reasons for such payments include SSI recipients (or their representative payees) not reporting incarcerations and redeterminations not identifying some incarcerated SSI recipients. At the start of our review, we contacted 23 county and local jail systems to determine if they were regularly providing prisoner information to SSA. Only 1 county was doing so, although a few said SSA contacted them occasionally to determine if specific people were incarcerated. In addition, 1 other county indicated that it initiated contact with SSA, but had not provided data. SSA had contacted 6 additional systems about regularly obtaining information on prisoners, but these had not yet provided any data. The remaining 15 systems reported that they had not been contacted by SSA about regularly providing information on prisoners. For example, according to an SSA branch office manager, no one from SSA had visited the jails in the office’s service area in more than 20 years. Our review of SSA records indicates that although some SSI recipients or their representative payees report incarceration to SSA as required, many do not. We determined that of the 615 former prisoners who were erroneously paid, 217 had representative payees while in prison. We also determined that of these representative payees, 164 did not report the SSI recipient’s incarceration.
About 87 percent of the representative payees who did not report were relatives; 1 percent were social agencies or other types of public and private organizations; and 12 percent were “other” types. Similar reporting problems were noted for current prisoners. In the redetermination process, SSA attempts to verify that recipients remain financially eligible for SSI and receive the correct payment. SSA records indicate that while in jail, 88 prisoners each had one redetermination and 4 prisoners each had two or more. We found that 32 of these 92 prisoners continued to be incarcerated and receive SSI payments after the redeterminations. According to SSA records, 22 of these redeterminations involved face-to-face contact between an SSA employee and the recipient or the representative payee. According to SSA officials, it is possible for inmates who are temporarily free, on work release or some other similar arrangement, to appear for a redetermination and subsequently return to jail. In addition, representative payees may complete the redeterminations, including face-to-face, on behalf of the SSI recipient. The identity of the actual individual who appeared at the face-to-face redetermination is not included in SSA’s computerized record, and a detailed review to determine who appeared at the interview was beyond the scope of our work. SSA’s operating instructions contain provisions for field offices to contact local jails in order to obtain prisoner data from them. However, SSA only recently began implementing this program systematically. According to agency officials and internal documents, most of the jails nationwide had been contacted by April 1996 to obtain information on current prisoners and future admissions, but not on former prisoners. 
According to agency officials, in March 1995, SSA field offices were instructed to contact local jails in their service areas and report to their regional offices concerning which jails would agree to provide SSA with prisoner data. However, the field offices did not consistently comply with these instructions, these SSA officials stated. In October 1995, after the start of our review, SSA headquarters issued a follow-up memo to the regional offices, directing them to instruct their field offices to (1) complete a detailed census of all jails in their jurisdictions and (2) report to headquarters by November 30, 1995. It was during this period of time that the agency initiated a concerted effort to contact all county and local jails nationwide. According to agency officials, prisons and jails are being contacted in the following order: (1) all state prisons, (2) the 25 largest county and local jails nationwide, and (3) all other county and local jails. According to SSA documents, as of March 1996, SSA had identified 3,878 county and local jails: SSA had obtained written agreements covering 2,647 of these and had agreements pending with 235. In addition, 843 jails were already reporting to SSA or held prisoners for less than 30 days; 153 jails had not responded or had refused to cooperate. SSA has requested that facilities it has contacted provide lists of their inmates to the local field offices. The agency has offered flexible reporting guidelines for frequency and format of the lists (computerized or on paper). In general, SSA has requested that facilities that have provided data to it previously or on a trial basis continue providing data. In addition, SSA has requested that facilities that have not provided any lists in the past provide (1) a current census of their inmates and (2) continuing lists of new admissions to the facility. Specifically, we found that SSA has contacted the 25 largest jail systems in the country and requested prisoner data from them. 
Most of these systems had agreed to supply SSA with prisoner data beginning in early to mid-1996. One system (Orange County, Calif.) began providing data in April 1995, and another system (New York City) has agreed to a pilot project including data beginning with January 1995. For many years, SSA has lacked an effective program to detect SSI recipients in county and local jails. It has relied primarily on (1) the recipients or their representative payees to voluntarily report incarceration and (2) redeterminations. Neither of these mechanisms has been completely effective; as a result, SSA has erroneously paid millions of dollars to thousands of prisoners in county and local jails. SSA was unaware of most of these payments. The number of SSI recipients who received SSI while in jail, including those with representative payees and those with redeterminations, raises numerous questions, including whether payments were obtained fraudulently. SSA’s recent initiative—to obtain better information on SSI recipients currently in county and local jails—is a positive step. However, the effort is not comprehensive enough. In general, SSA has begun to obtain information on current prisoners and new admissions. But SSA has not attempted to develop information, when available, on SSI recipients who may have been incarcerated and received payments in prior years. We found that this information is available and can provide SSA the means to identify and initiate recovery of many more erroneous payments. In order to identify SSI recipients who have been erroneously paid in prior years, we recommend that the Commissioner of SSA direct SSA field offices to obtain information from county and local jails on former prisoners. SSA should then process this information to (1) determine if it made erroneous payments to any of these former prisoners, (2) establish overpayments for the ones it paid, and (3) attempt to recover all erroneous payments. 
SSA commented on a draft of our report in a letter, dated July 16, 1996, and acknowledged that investigation of the productivity of securing information on former prisoners appears desirable and worthy of further examination. However, SSA expressed concerns about the availability of data, the potential negative effect of requests for more data on existing reporting arrangements with county and local jail officials, the cost-effectiveness of processing data on former prisoners who may no longer be receiving SSI payments, and other matters. SSA believes these concerns need to be resolved before implementing our recommendations. (The full text of SSA’s comments is included in app. III.) During its recent initiative to identify current prisoners, SSA identified local officials who know what data are available and can be provided. It should not be difficult or time-consuming, therefore, for SSA to contact these officials and determine if information on former prisoners is available. In addition, to identify information on former prisoners, SSA need not establish that “the majority” of county and local jail systems have such information, given that the largest jail systems account for the majority of prisoners. During the course of its initiative, SSA expanded the number of agreements with local correctional facilities to report prisoner information. According to SSA, some of these facilities were initially reluctant to enter into these agreements because SSA does not have the authority to pay for this information. However, unlike information on current prisoners, which requires monthly or quarterly reporting, information on former prisoners only requires a onetime effort by the local jail systems. Therefore, SSA need not assume that requesting such data will jeopardize existing agreements. 
If county and local jail systems are initially reluctant to provide data on former prisoners, SSA could emphasize the potential benefit to state programs (such as the recovery of erroneously paid state supplements) that such data exchanges may provide. We agree that SSA stands a better chance of recovering erroneous payments if the former prisoner is still receiving SSI. However, the fact that he or she is not currently receiving SSI should not prevent the implementation of our recommendation. To ensure program integrity, SSA has a responsibility to identify erroneous payments and collect overpayments. Once established, overpayments made to former prisoners remain in the record and could be recovered if the person again begins to receive SSI. Furthermore, SSA has the authority to recover SSI debts through a tax refund offset. SSA also took issue with the fact that we reported that until recently, identifying prisoners was not a priority at SSA. According to SSA, however, policies and operating procedures call for field offices to (1) maintain contacts with local institutions and (2) determine prisoner eligibility for payments. In our review, we found that field offices had not been following this guidance. We made minor changes to the text of the report to clarify this point. SSA also expressed concern about a statement in the report that erroneous payments to prisoners may be partially due to the vulnerability of redeterminations to abuse. Although we do not discuss the redetermination process in great detail, our review of SSA records indicates that 32 of 92 prisoners in our sample continued to receive benefits after a redetermination. If this process had been working as intended, SSA would have determined that these prisoners were no longer eligible to receive benefits. We made minor changes to the text of the report to clarify this point. 
We are sending copies of this report to interested congressional Committees and Subcommittees; the Director, Office of Management and Budget; and other interested parties. This report was prepared under the direction of Christopher C. Crissman, Assistant Director. Other GAO contacts and staff acknowledgments are listed in appendix IV. To determine if jail systems provide information on prisoners to SSA, we contacted 23 large county and local jail systems that met the following criteria: (1) a minimum average daily prisoner population of at least 1,000, with emphasis on the largest U.S. metropolitan areas, (2) geographic dispersion, and (3) populous SSA regions. Of the 23 systems we contacted, we subsequently requested data from the 13 that met the following additional criteria: (1) an ability to provide us with automated data tapes suitable for matching, (2) willingness to provide the data at no cost, and (3) not currently providing SSA with prisoner data. Based on the above criteria, between September 1995 and January 1996, we obtained automated data on current prisoners from 12 county and local jail systems. They collectively represent about 20 percent of the county and local prisoner population nationwide. The jail systems that provided data to us are in 10 states, in 6 of SSA’s 10 regions. The jail systems that provided current prisoner data to us were: Broward County (Fla.); Cook County (Ill.); Dade County (Fla.); Hamilton County (Ohio); Harris County (Tex.); King County (Wash.); Los Angeles County (Calif.); Maricopa County (Ariz.); New York City; Orange County (Fla.); Santa Clara County (Calif.); and Shelby County (Tenn.). In addition, during February and March 1996, we obtained data on former prisoners from Cook County and from Wayne County (Mich.). From 12 of the county and local jail systems, we obtained data for prisoners who were under their jurisdiction on specific dates. The dates were selected by the jail systems, based on their available resources. 
Jail systems also supplied available personal identifiers, including name, Social Security number (SSN), date of birth, place of birth, mother’s maiden name (or next of kin), ethnicity or race, home address, and date of incarceration. We received information on a total of 97,813 current prisoners and eliminated duplicate records. This reduced the initial universe to 79,595 prisoners. We processed the information on these prisoners through SSA’s Enumeration Verification System (EVS), which uses key variables (name and date of birth) to verify the SSNs provided or determine an SSN if none is provided. We obtained verified SSNs for 53,420 of the 79,595 prisoners. We could not verify SSNs for the remaining 26,175 prisoners. To determine which prisoners had SSI records, we matched the verified SSNs against the Supplemental Security Record. We identified 12,951 prisoners with SSI records. We analyzed these 12,951 records to determine if any of the prisoners received benefits while they were incarcerated; we then extracted and analyzed the records of these prisoners. To test the accuracy of the current prisoner data provided by the counties, we selected a random sample of 240 current prisoners we had identified as having been paid SSI benefits while incarcerated (20 prisoners from each of 12 counties). We supplemented the random sample with 100 judgmentally selected cases (considering large payments to prisoners, long periods of incarceration, SSI eligibility date versus incarceration date, and other such factors). We requested that the jail systems verify (1) the booking date (the first day the prisoner was incarcerated) and (2) whether the prisoner was continuously incarcerated between the booking date and the date on which the jail created the list of inmates in its system. We requested that the jails verify the information from a source other than that used to produce the original data. The results of our random sample indicate that overall, our data were reliable. 
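The matching steps described above (deduplicate the jail records, keep only records whose SSN could be verified, then match the verified SSNs against SSI records) can be sketched in a few lines. This is a hypothetical illustration only: `verified_ssns` stands in for the output of SSA's Enumeration Verification System, `ssi_ssns` for SSNs appearing on the Supplemental Security Record, and the record fields are assumptions.

```python
# Hypothetical sketch of the record-matching pipeline described above; the
# data structures are invented for illustration and do not reflect SSA systems.

def match_prisoners_to_ssi(prisoner_records, verified_ssns, ssi_ssns):
    """Return the sorted SSNs of prisoners who appear on the SSI rolls."""
    # Step 1: eliminate duplicate records (keyed on all identifying fields).
    unique = {tuple(sorted(r.items())): r for r in prisoner_records}.values()

    # Step 2: keep only records whose SSN could be verified (the EVS step).
    verified = {r["ssn"] for r in unique if r["ssn"] in verified_ssns}

    # Step 3: match verified SSNs against SSI records.
    return sorted(verified & ssi_ssns)
```

Applied to the figures in the text: step 1 reduced 97,813 records to 79,595, step 2 verified 53,420 SSNs, and step 3 identified 12,951 prisoners with SSI records.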
For five counties, no errors were found for the sample cases. For three counties, one case that could not be verified was found for each. For three other counties, minor errors were found in the data. For the final county, some of the information we had originally been provided was incorrect. At that time, the county had not yet entered the release dates for some prisoners into its computer system. As a result, the original information showed 123 SSI recipients in jail on November 16, 1995 (the date on which the county produced the original data), when they actually had been released before that date. We eliminated these cases from our review. Of the original 20 randomly selected cases in this county, 10 were unaffected, with the original information being correct. To obtain information on former prisoners, we asked two county systems (Wayne and Cook) to provide us with automated lists of all the prisoners released from their systems in the first 6 months of 1995. We received information on 16,821 prisoners, with no duplicate records. We processed these data through EVS, and obtained 15,998 verified SSNs. We matched the verified SSNs against the Supplemental Security Record to detect former prisoners who received SSI, and extracted and analyzed their records. The 10 SSA regions are shown in figure II.1. As discussed in appendix I, we obtained our data from county and local jail systems in 10 states—New York, Florida, Tennessee, Ohio, Illinois, Texas, Arizona, California, Washington, and Michigan—in 6 regions—II, IV, V, VI, IX, and X. In addition to those named above, the following also made important contributions to this report: Jeremy Cox, Evaluator; Mary Ellen Fleischman, Evaluator; James P. Wright, Assistant Director (Study Design and Data Analysis); and Jay Smale, Social Science Analyst (Study Design and Data Analysis). | Pursuant to a congressional request, GAO determined whether the Social Security Administration (SSA) is making erroneous supplemental security income (SSI) payments to prisoners in county and local jail systems.
GAO found that: (1) a total of $5 million has been erroneously paid to prisoners in local and county jail systems; (2) these erroneous payments are the result of SSA field offices' inability to obtain prisoner information on a regular basis, SSI recipients' failure to report their incarceration, and SSA inability to verify recipients' eligibility for SSI; (3) the Commissioner of Social Security has sent draft legislation to Congress that would authorize payment to each correctional facility reporting newly admitted SSI beneficiaries; (4) erroneous payments to individual prisoners range from $100 to more than $17,000; (5) 136 prisoners have received more than $5,000 in erroneous SSI payments and 19 prisoners have received more than $10,000 in erroneous SSI payments; and (6) SSA is requesting its field offices to obtain prisoner information from both county and local jail systems and emphasizing the importance of monitoring field offices' compliance with this procedure. |
Polar-orbiting satellites provide data and imagery that are used by weather forecasters, climatologists, and the military to map and monitor changes in weather, climate, the oceans, and the environment. Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series, which is managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force. Currently, there is one operational POES satellite and two operational DMSP satellites that are positioned so that they can observe the earth in early morning, midmorning, and early afternoon polar orbits. In addition, the government is also relying on a European satellite, called Meteorological Operational, or MetOp, in the midmorning orbit. With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program, NPOESS, is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring. To manage this program, DOD, NOAA, and NASA formed the tri-agency Integrated Program Office, located within NOAA. Within the program office, each agency has the lead on certain activities: NOAA has overall program management responsibility for the converged system and for satellite operations; the Air Force has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the cost of funding NPOESS, while NASA funds specific technology projects and studies. 
In addition, an Executive Committee— made up of the administrators of NOAA and NASA and the Under Secretary of Defense for Acquisition, Technology, and Logistics—is responsible for providing policy guidance, ensuring agency support and funding, and exercising oversight authority. The Executive Committee manages the program through a Program Executive Officer who oversees the NPOESS program office. Since the program’s inception, NPOESS costs have grown to $13.95 billion, and launch schedules have been delayed by up to five years. In addition, as a result of a 2006 restructuring of the program, the agencies reduced the program’s functionality by removing 2 of 6 originally planned satellites and one of the orbits. The restructuring also decreased the number of instruments from 13 (10 sensors and 3 subsystems) to 9 (7 sensors and 2 subsystems), with 4 of the sensors providing fewer capabilities. The restructuring also led agency executives to mitigate potential data gaps by deciding to use a planned demonstration satellite, called the NPOESS Preparatory Project (NPP) satellite, as an operational satellite providing climate and weather data. However, even after this restructuring, the program is still encountering technical issues, schedule delays, and the likelihood of further cost increases. Over the past year, selected components of the NPOESS program have made progress. Specifically, three of the five instruments slated for NPP have been delivered and integrated on the spacecraft; the ground-based satellite data processing system has been installed and tested at both of the locations that are to receive NPP data; and the satellites’ command, control, and communications system has passed acceptance testing. However, problems with two critical sensors continue to drive the program’s cost and schedule. 
Specifically, challenges with the development, design, and workmanship of a key sensor, the Visible/Infrared Imager Radiometer Suite (VIIRS), have led to additional cost overruns and delayed the instrument's delivery to NPP. In addition, problems discovered during environmental testing on another key sensor, the Cross-track Infrared Sounder (CrIS), led the contractor to further delay its delivery to NPP and added further unanticipated costs to the program. To address these issues, the program office halted or delayed activities on other components (including the development of a sensor planned for the first NPOESS satellite, called C1) and redirected those funds to fixing VIIRS and CrIS. As a result, those other activities now face cost increases and schedule delays. Program officials acknowledge that NPOESS will cost more than the $13.95 billion previously estimated, but they have not yet adopted a new cost estimate. Program officials estimated that program costs will grow by about $370 million due to recent technical issues experienced on the sensors and the costs associated with halting and then restarting work on other components of the program. In addition, the costs associated with adding new information security requirements to the program could reach $200 million. The current estimate also does not include approximately $410 million for operations and support costs for the last two years of the program's life cycle (2025 and 2026). Thus, we anticipate that the overall cost of the program could grow by about $1 billion from the current $13.95 billion estimate—especially given the fact that difficult integration and testing of the sensors on the NPP and C1 spacecraft has not yet occurred. Program officials reported that they plan to revise the program's cost estimate over the next few weeks and to submit it for executive-level approval by the end of June 2009.
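The roughly $1 billion figure follows directly from the cost components cited above; a quick check of the arithmetic (amounts in millions of dollars, taken from the text):

```python
# Components of the anticipated NPOESS cost growth, in millions of dollars,
# as cited in the text above.
sensor_fixes_and_restarts = 370   # VIIRS/CrIS issues and halted/restarted work
information_security = 200        # new security requirements (could reach this)
operations_2025_2026 = 410        # operations and support for 2025 and 2026,
                                  # excluded from the current estimate

growth = sensor_fixes_and_restarts + information_security + operations_2025_2026
print(growth)           # 980, i.e., roughly $1 billion
print(13_950 + growth)  # 14930, against the $13.95 billion baseline
```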
As for the program’s schedule, program officials estimate that the delivery of VIIRS to the NPP contractor will be delayed, resulting in a further delay in the launch of the NPP satellite to January 2011, a year later than the date estimated during the program restructuring—and seven months later than the June 2010 date that was established last year. In addition, program officials estimated that the first and second NPOESS satellites would be delayed by 14 and 5 months, respectively, because selected development activities were halted or slowed to address VIIRS and CrIS problems. The program’s current plans are to launch C1 in March 2014 and the second NPOESS satellite, called C2, in May 2016. Program officials notified the Executive Committee and DOD’s acquisition authority of the schedule delays, and under DOD acquisition rules, are required to submit a new schedule baseline by June 2009. These launch delays have endangered our nation’s ability to ensure the continuity of polar-orbiting satellite data. The final POES satellite, called NOAA-19, is in an afternoon orbit and is expected to have a 5-year lifespan. Both NPP and C1 are planned to support the afternoon orbit. Should the NOAA-19 satellite fail before NPP is launched, calibrated, and operational, there would be a gap in satellite data in that orbit. Further, the delays in C1 mean that NPP will not be the research and risk reduction satellite it was originally intended to be. Instead, it will have to function as an operational satellite until C1 is in orbit and operational—and if C1 fails on launch or in early operations, NPP will be needed to function until C3 is available, currently planned for 2018. The delay in the C2 satellite launch affects the early morning orbit. There are three more DMSP satellites to be launched in the early and midmorning orbits, and DOD is revisiting the launch schedules for these satellites to try to extend them as long as possible. 
However, an independent review team, established to assess key program risks, recently reported that the constellation of satellites is extremely fragile and that a single launch failure of a DMSP, NPOESS, or the NPP satellite could result in a gap in satellite coverage from 3 to 5 years. Although the program’s approved cost and schedule baseline is not achievable and the polar satellite constellation is at risk, the Executive Committee has not yet made a decision on how to proceed with the program. Program officials plan to propose new cost and schedule baselines in June 2009 and have reported that they are addressing immediate funding constraints by deferring selected activities to later fiscal years in order to pay for VIIRS and CrIS problems; delaying the launches of NPP, C1, and C2; and assessing alternatives for mitigating the risk that VIIRS will continue to experience problems. Without an executive-level decision on how to proceed, the program is proceeding on a course that is deferring cost growth, delaying launches, and risking its underlying mission of providing operational weather continuity to the civil and military communities. While the NPOESS Executive Committee has made improvements over the last several years in response to prior recommendations, it has not effectively fulfilled its responsibilities and does not have the membership and leadership it needs to effectively or efficiently oversee and direct the NPOESS program. Specifically, the DOD Executive Committee member with acquisition authority does not attend Committee meetings—and sometimes contradicts the Committee’s decisions, the Committee does not aggressively manage risks, and many of the Committee’s decisions do not achieve desired outcomes. Independent reviewers, as well as program officials, explained that the tri-agency structure of the program makes it very difficult to effectively manage the program. 
Until these shortfalls are addressed, the Committee is unable to effectively oversee the NPOESS program—and important issues involving cost growth, schedule delays, and satellite continuity will likely remain unresolved. We and others, including the Department of Commerce’s Inspector General in a 2006 report, have reported that the Committee was not accomplishing its job effectively. However, since then, the Committee has met regularly on a quarterly basis and held interim teleconferences as needed. The Committee has also sought and reacted to advice from external advisors by, among other actions, authorizing a government program manager to reside onsite at the VIIRS contractor’s facility to improve oversight of the sensor’s development on a day-to-day basis. More recently, the Executive Committee sponsored a broad-based independent review of the NPOESS program and is beginning to respond to its recommendations. As established by the 1995 and 2008 memorandums of agreement signed by all three agencies, the members of the NPOESS Executive Committee are (1) the Under Secretary of Commerce for Oceans and Atmosphere; (2) the Under Secretary of Defense for Acquisition, Technology, and Logistics; and (3) the NASA Administrator. Because DOD has the lead responsibility for the NPOESS acquisition, the Under Secretary of Defense for Acquisition, Technology, and Logistics was also designated as the milestone decision authority—the individual with the authority to approve a major acquisition program’s progression in the acquisition process, as well as any changes to the cost, schedule, and functionality of the acquisition. The intent of the tri-agency memorandums was that acquisition decisions would be agreed to by the Executive Committee before a final acquisition decision is made by the milestone decision authority. However, DOD’s acquisition authority has never attended an Executive Committee meeting. 
This individual delegated the responsibility for attending the meetings—but not the authority to make acquisition decisions—to the Under Secretary of the Air Force. Therefore, none of the individuals who attend the Executive Committee meetings for the three agencies have the authority to approve the acquisition program baseline or major changes to the baseline. As a result, agreements between Committee members have been overturned by the acquisition authority, leading to significant delays. To provide the oversight recommended by best practices, including reviewing data and calling for corrective actions at the first sign of cost, schedule, and performance problems and ensuring that actions are executed and tracked to completion, the Executive Committee holds quarterly meetings during which the program’s progress is reviewed using metrics that provide an early warning of cost, schedule, and technical risks. However, the Committee does not routinely document action items or track those items to closure. Some action items were not discussed in later meetings, and in cases where an item was discussed, it was not always clear what action was taken, whether it was effective, and whether the item was closed. According to the Program Executive Officer, the closing of an action item is not always explicitly tracked because it typically involves gathering information that is presented during later Committee meetings. Nonetheless, by not rigorously documenting action items—including identifying the party responsible for the action, the desired outcome, and the time frame for completion—and then tracking the action items to closure, the Executive Committee is not able to ensure that its actions have achieved their intended results and to determine whether additional changes or modifications are still needed. This impedes the Committee’s ability to effectively oversee the program, direct risk mitigation activities, and obtain feedback on the results of its actions. 
Best practices call for oversight boards to take corrective actions at the first sign of cost, schedule, and performance slippages in order to mitigate risks and achieve successful outcomes. The NPOESS Executive Committee generally took immediate action to mitigate the risks that were brought before them; however, a majority of these actions were not effective—that is, they did not fully resolve the underlying issues or result in a successful outcome. The Committee’s actions on the sensor development risks accomplished interim successes by improving the government’s oversight of a subcontractor’s activities and guiding next steps in addressing technical issues—but even with Committee actions, the sensors’ performance has continued to falter and affect the rest of the program. Independent reviewers reported that the tri-agency structure of the program complicated the resolution of sensor risks because any decision could be revisited by another agency. Program officials explained that interagency disagreements and differing priorities make it difficult to effectively resolve issues. When NPOESS was restructured in June 2006, the program included two satellites (C1 and C2) and an option to have the prime contractor produce the next two satellites (C3 and C4). In approving the restructured program, DOD’s decision authority noted that he reserved the right to use a different satellite integrator for the final two satellites, and that a decision on whether to exercise the option was to be made in June 2010. To prepare for this decision, DOD required a tri-agency assessment of alternative management strategies. This assessment was to examine the feasibility of an alternative satellite integrator, to estimate the cost and schedule implications of moving to an alternative integrator, and within one year, to provide a viable alternative to the NPOESS Executive Committee. 
To address DOD’s requirement, the NPOESS Program Executive Officer sponsored two successive alternative management studies; however, neither of the studies identified a viable alternative to the existing satellite integrator. The Program Executive Officer plans to conduct a final assessment of alternatives prior to the June 2010 decision on whether to exercise the option to have the current system integrator produce the next two NPOESS satellites. Program officials explained that the program’s evolving costs, schedules, and risks could mean that an alternative that was not viable in the past would become viable. For example, if the prime contractor’s performance no longer meets basic requirements, an alternative that was previously too costly to be considered viable might become so. In the report being released today, we are making recommendations to improve the timeliness and effectiveness of acquisition decision-making on the NPOESS program. Specifically, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to attend and participate in NPOESS Executive Committee meetings. 
In addition, we are recommending that the Secretaries of Defense and Commerce and the Administrator of NASA direct the NPOESS Executive Committee to take the following five actions: (1) establish a realistic time frame for revising the program’s cost and schedule baselines; (2) develop plans to mitigate the risk of gaps in satellite continuity; (3) track the Committee’s action items from inception to closure; (4) improve the Committee’s ability to achieve successful outcomes by identifying the desired outcome associated with each of the Committee actions, as well as time frames and responsible parties, when new action items are established; and (5) improve the Committee’s efficiency by establishing time frames for escalating risks to the Committee for action so that they do not linger unresolved at the program executive level. In written comments on a draft of our report, NASA and NOAA agreed with our findings and recommendations and identified plans to implement them. DOD concurred with one and partially concurred with our other recommendations. For example, regarding our recommendation to have the appropriate official attend Executive Committee meetings, the agency partially concurred and noted that the Under Secretary for Acquisition, Technology, and Logistics would evaluate the necessity of attending future Executive Committee meetings. DOD also reiterated that the Under Secretary of the Air Force was delegated authority to attend the meetings. While we acknowledge that the Under Secretary delegated responsibility for attending these meetings, it is an inefficient way to make decisions and achieve outcomes in this situation. In the past, agreements between Executive Committee members have been overturned by the Under Secretary, leading to significant delays in key decisions. The full text of the three agencies’ comments and our evaluation of those comments are provided in the accompanying report. 
In summary, continued problems in the development of critical NPOESS sensors have contributed to growing costs and schedule delays. Costs are now expected to grow by as much as $1 billion over the prior life cycle cost estimate of $13.95 billion, and problems in delivering key sensors have led to delays in launching NPP and the first two NPOESS satellites— by a year or more for NPP and the first NPOESS satellite. These launch delays have endangered our nation’s ability to ensure the continuity of polar-orbiting satellite data. Specifically, if any planned satellites fail on launch or in orbit, there would be a gap in satellite data until the next NPOESS satellite is launched and operational—a gap that could last for 3 to 5 years. The NPOESS Executive Committee responsible for making cost and schedule decisions and addressing the many and continuing risks facing the program has not yet made important decisions on program costs, schedules, and risks—or identified when it will do so. In addition, the Committee has not been effective or efficient in carrying out its oversight responsibilities. Specifically, the individual with the authority to make acquisition decisions does not attend Committee meetings, the Committee does not aggressively manage risks, and many of the Committee’s decisions do not achieve desired outcomes. Until the Committee’s shortfalls are addressed, important decisions may not be effective and issues involving cost increases, schedule delays, and satellite continuity may remain unresolved. Mr. Chairman and members of the Subcommittee, this concludes our statement. We would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at [email protected]. Other key contributors to this testimony include Colleen M. Phillips, Assistant Director; Kate Agatone; Neil Doherty; Kathleen S. 
Lovett; Lee McCracken; and China R. Williams. | The National Polar-orbiting Operational Environmental Satellite System (NPOESS)--a tri-agency acquisition managed by the Department of Commerce's National Oceanic and Atmospheric Administration (NOAA), the Department of Defense (DOD), and the National Aeronautics and Space Administration (NASA)--is considered critical to the United States' ability to maintain the continuity of data required for weather forecasting (including severe weather events such as hurricanes) and global climate monitoring. Since its inception, NPOESS has experienced escalating costs, schedule delays, and technical difficulties. As the often-delayed launch of its demonstration satellite (called the NPOESS Preparatory Project--NPP) draws closer, these problems continue. GAO was asked to summarize its report being released today that (1) identifies the status and risks of key program components, (2) assesses the NPOESS Executive Committee's ability to fulfill its responsibilities, and (3) evaluates efforts to identify an alternative system integrator for later NPOESS satellites. The NPOESS program's approved cost and schedule baseline is not achievable and problems with two critical sensors continue to drive the program's cost and schedule. Costs are expected to grow by about $1 billion from the current $13.95 billion cost estimate, and the schedules for NPP and the first two NPOESS satellites are expected to be delayed by 7, 14, and 5 months, respectively.
These delays endanger the continuity of weather and climate satellite data because there will not be a satellite available as a backup should a satellite fail on launch or in orbit--loss of a Defense Meteorological Satellite Program (DMSP) satellite, an NPOESS satellite, or NPP could result in a 3 to 5 year gap in data continuity. Program officials reported that they are assessing alternatives for mitigating risks, and that they plan to propose a new cost and schedule baseline by the end of June 2009. However, the Executive Committee does not have an estimate for when it will make critical decisions on cost, schedule, and risk mitigation. While the NPOESS Executive Committee has made improvements over the last several years in response to prior recommendations, it has not effectively fulfilled its responsibilities and does not have the membership and leadership it needs to effectively or efficiently oversee and direct the NPOESS program. Until its shortfalls are addressed, the Committee will be unable to effectively oversee the NPOESS program--and important issues involving cost growth, schedule delays, and satellite continuity will likely remain unresolved. The NPOESS program has conducted two successive studies of alternatives to using the existing system integrator for the last two NPOESS satellites, but neither identified a viable alternative to the current contractor. Program officials plan to conduct a final study prior to the June 2010 decision on whether to proceed with the existing prime contractor. |
Our analysis of FDIC data showed that while the profitability of most minority banks with assets greater than $100 million nearly equaled the profitability of all similarly sized banks (peers), the profitability of smaller minority banks and African-American banks of all sizes did not. Profitability is commonly measured by return on assets (ROA), or the ratio of profits to assets, and ROAs are typically compared across peer groups to assess performance. Many small minority banks (those with less than $100 million in assets) had ROAs that were substantially lower than those of their peer groups in 2005 as well as in 1995 and 2000. Moreover, African-American banks of all sizes had ROAs that were significantly below those of their peers in 2005 as well as in 1995 and 2000 (African-American banks of all sizes and other small minority banks account for about half of all minority banks). Our analysis of FDIC data identified some possible explanations for the relatively low profitability of some small minority banks and African-American banks, such as relatively higher reserves for potential loan losses and administrative expenses and competition from larger banks. Nevertheless, the majority of officials from banks across all minority groups were positive about their banks' financial outlook, and many saw their minority status as an advantage in serving their communities (for example, in providing services in the language predominantly used by the minority community). The bank regulators have adopted differing approaches to supporting minority banks, and, at the time of our review, no agency had assessed the effectiveness of its efforts through regular and comprehensive surveys of minority banks or outcome-oriented performance measures. FDIC—which supervises more than half of all minority banks—had the most comprehensive program to support minority banks and led an interagency group that coordinates such efforts.
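The ROA comparison described above reduces to a simple ratio and a peer-group average. A minimal sketch with hypothetical figures (the banks and numbers here are invented for illustration; only the method, ROA compared against a peer-group average, comes from the text):

```python
def roa(profits, assets):
    """Return on assets: the ratio of profits to assets."""
    return profits / assets

def gap_to_peers(bank_roa, peer_roas):
    """Bank's ROA minus its peer group's average ROA (negative means underperforming)."""
    return bank_roa - sum(peer_roas) / len(peer_roas)

# A hypothetical small bank earning $0.5M on $80M in assets, against a peer
# group of similarly sized banks averaging roughly 1 percent ROA.
small_bank = roa(profits=0.5, assets=80.0)   # 0.00625, i.e., 0.625 percent
peers = [roa(1.0, 95.0), roa(0.9, 90.0)]     # about 1.05 and 1.0 percent
print(gap_to_peers(small_bank, peers) < 0)   # True: below its peers
```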
Among other things, FDIC has designated officials in the agency’s headquarters and regional offices to be responsible for minority bank efforts, held periodic conferences for minority banks, and established formal policies for annual outreach to the banks it regulates to make them aware of available technical assistance. OTS also designated staff to be responsible for the agency’s efforts to support minority banks, developed outreach procedures, and focused its efforts on providing technical assistance. OCC and the Federal Reserve, while not required to do so by Section 308 of FIRREA, undertook some efforts to support minority banks, such as holding occasional conferences for Native American banks, and were planning additional efforts. FDIC proactively sought to assess the effectiveness of its support efforts; for example, it surveyed minority banks. However, these surveys did not address key activities, such as the provision of technical assistance, and the agency had not established outcome-oriented performance measures for its support efforts. Furthermore, none of the other regulators comprehensively surveyed minority banks on the effectiveness of their support efforts or established outcome-oriented performance measures. Consequently, the regulators were not well positioned to assess the results of their support efforts or identify areas for improvement. Our survey of minority banks identified potential limitations in the regulators’ support efforts that likely would be of significance to agency managers and warrant follow-up analysis. About one-third of survey respondents rated their regulators’ efforts for minority banks as very good or good, while 26 percent rated the efforts as fair, 13 percent as poor or very poor, and 25 percent responded “do not know.” FDIC-regulated banks were more positive about their agency’s efforts than banks that other agencies regulated. 
However, only about half of the FDIC-regulated banks and about a quarter of the banks regulated by other agencies rated their agency’s efforts as very good or good. Although regulators may emphasize the provision of technical assistance to minority banks, less than 30 percent of such institutions said they had used such agency services within the last 3 years. Therefore, the banks may have been missing opportunities to address problems that limited their operations or financial performance. As we found in our 1993 report, some minority bank officials also said that examiners did not always understand the challenges that the banks may face in providing services in their communities or operating environments. Although the bank officials said they did not expect special treatment in the examination process, they suggested that examiners needed to undergo more training to improve their understanding of minority banks and the customer base they serve. To allow the regulators to better understand the effectiveness of their support efforts, our October 2006 report recommended that the regulators review such efforts and, in so doing, consider employing the following methods: (1) regularly surveying the minority banks under their supervision on all efforts and regulatory areas affecting these institutions; or (2) establishing outcome-oriented performance measures to evaluate the extent to which their efforts are achieving their objectives. Subsequent to the report’s issuance, the regulators have reported taking steps to better assess or enhance their minority bank support efforts. For example, all of the regulators have developed surveys or are in the process of consulting with minority banks to obtain feedback on their support efforts. I also note that some regulators plan to provide additional training to their examiners on minority bank issues. These initiatives are positive developments, but it is too soon to evaluate their effectiveness. 
We encourage agency officials to ensure that they collect and analyze relevant data and take steps to enhance their minority bank support efforts as may be warranted. Many minority banks are located in urban areas and seek to serve distressed communities and populations that financial institutions traditionally have underserved. For example, after the Civil War, banks were established to provide financial services to African-Americans. More recently, Asian-American and Hispanic-American banks have been established to serve the rapidly growing Asian and Hispanic communities in the United States. In our review of regulators’ lists of minority banks, we identified a total minority bank population of 195 for 2005 (see table 1). Table 2 shows that the distribution of minority banks by size is similar to the distribution of all banks by size. More than 40 percent of all minority banks had assets of less than $100 million. Each federally insured depository institution, including each minority bank, has a primary federal regulator. As shown in table 3, FDIC serves as the primary federal regulator for more than half of minority banks—109 of the 195 banks, or 56 percent—and the Federal Reserve regulates the fewest. The federal regulators primarily focus on ensuring the safety and soundness of banks and do so through on-site examinations and other means. Regulators may also close banks that are deemed insolvent and posing a risk to the Deposit Insurance Fund. FDIC is responsible for ensuring that the deposits in failed banks are protected up to established deposit insurance limits. While the regulators’ primary focus is bank safety and soundness, laws and regulations can identify additional goals and objectives. Recognizing the importance of minority banks, Section 308 of FIRREA outlined five broad goals toward which FDIC and OTS, in consultation with Treasury, are to work to preserve and promote minority banks. 
These goals are: preserving the present number of minority banks; preserving their minority character in cases involving mergers or acquisitions of minority banks; providing technical assistance to prevent insolvency of institutions that are not currently insolvent; promoting and encouraging the creation of new minority banks; and providing for training, technical assistance, and education programs. Technical assistance is typically defined as one-to-one assistance that a regulator may provide to a bank in response to a request. For example, a regulator may advise a bank on compliance with a particular statute or regulation. Regulators also may provide technical assistance to banks that is related to deficiencies identified in safety and soundness examinations. In contrast, education programs typically are open to all banks regulated by a particular agency or all banks located within a regulator’s regional office. For example, regulators may offer training for banks to review compliance with laws and regulations. As shown in figure 1, our 2006 report found that, according to FDIC data, most minority banks with assets exceeding $100 million had ROAs in 2005 that were close to those of their peer groups, while many smaller banks had ROAs that were significantly lower than those of their peers. Minority banks with more than $100 million in assets accounted for 58 percent of all minority banks, while those with less than $100 million accounted for 42 percent. Each size category of minority banks with more than $100 million in assets had a weighted average ROA that was slightly lower than that of its peers, but in each case their ROAs exceeded 1 percent. By historical banking industry standards, an ROA of 1 percent or more generally has been considered to indicate an adequate level of profitability. 
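The ROA measure and the weighted-average comparison described above can be sketched in a few lines. The bank figures below are hypothetical, invented for illustration only (they are not FDIC data); the 1 percent threshold is the historical benchmark for adequate profitability cited above.

```python
# Hypothetical illustration of the ROA measure described above;
# the bank figures are invented, not actual FDIC data.

def return_on_assets(net_income, total_assets):
    """ROA: the ratio of profits (net income) to total assets."""
    return net_income / total_assets

def weighted_average_roa(banks):
    """Asset-weighted average ROA for a group of banks, equivalent to
    the group's total net income divided by its total assets."""
    total_income = sum(b["net_income"] for b in banks)
    total_assets = sum(b["assets"] for b in banks)
    return total_income / total_assets

ADEQUATE_ROA = 0.01  # the ~1 percent historical benchmark cited above

group = [
    {"net_income": 1_200_000, "assets": 100_000_000},  # ROA 1.2%
    {"net_income": 300_000,   "assets": 60_000_000},   # ROA 0.5%
]
avg = weighted_average_roa(group)
print(f"{avg:.2%}")  # 0.94% -- just below the 1 percent benchmark
```

Because the average is asset-weighted, the larger bank's stronger ROA pulls the group average up, yet the group as a whole still falls short of the benchmark.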
We found that the profitability of the larger minority banks—Hispanic-American, Asian-American, Native American, and women-owned—was close to, and in some cases exceeded, the profitability of their peers in 2005. In contrast, small minority banks (those with assets of less than $100 million) had an average ROA of 0.4 percent, and their peers had an average ROA of 1 percent. Our analysis of FDIC data for 1995 and 2000 also indicated some similar patterns, with minority banks with assets greater than $100 million showing levels of profitability that generally were close to those of their peers, or ROAs of about 1 percent, and minority banks with assets of less than $100 million showing greater differences with their peers. The profitability of African-American banks generally has been below that of their peers in all size categories (see fig. 2). For example, African-American banks with less than $100 million in assets—which constitute 61 percent of all African-American banks—had an average ROA of 0.16 percent, while their peers averaged 1.0 percent. Our analysis of FDIC data for 2000 and 1995 also found that African-American banks of all sizes had lower ROAs than their peers. Our analysis of 2005 FDIC data also suggests some possible reasons for the differences in profitability between some minority banks and their peers. For example, our analysis of 2005 FDIC data showed that African-American banks with assets of less than $300 million—which constitute 87 percent of all African-American banks—had significantly higher loan loss reserves as a percentage of their total assets than the average for their peers (see fig. 3). Although having higher loan loss reserves may be necessary for the safe and sound operation of any particular bank, they lower bank profits because loan loss reserves are counted as expenses. We also found some evidence that higher operating expenses might affect the profitability of some minority banks. 
Operating expenses—expenditures for items such as administrative expenses and salaries—typically are compared to an institution’s total earning assets, such as loans and investments, to indicate the proportion of earning assets that banks spend on operating expenses. As figure 4 indicates, many minority banks with less than $100 million in assets had higher operating expenses than their peers in 2005. Academic studies we reviewed generally reached similar conclusions. Officials from several minority banks we contacted also described aspects of their operating environment, business practices, and customer service that could result in higher operating costs. In particular, the officials cited the costs associated with providing banking services in low-income urban areas or in communities with high immigrant populations. Bank officials also told us that they focus on fostering strong customer relationships, sometimes providing financial literacy services. Consequently, as part of their mission these banks spend more time and resources on their customers per transaction than other banks. Other minority bank officials said that their customers made relatively small deposits and preferred to do business in person at bank branch locations rather than through potentially lower-cost alternatives, such as over the phone or the Internet. Minority bank officials also cited other factors that may have limited their profitability. In particular, in response to Community Reinvestment Act (CRA) incentives, the officials said that larger banks and other financial institutions were increasing competition for minority banks’ traditional customer base. The officials said that larger banks could offer loans and other financial services at more competitive prices because they could raise funds at lower rates and take advantage of operational efficiencies. 
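The operating-expense comparison described above can be expressed the same way. The dollar figures below are hypothetical, used only to illustrate how the ratio is computed.

```python
# Hypothetical illustration of the operating-expense measure described
# above; the dollar figures are invented, not actual bank data.

def operating_expense_ratio(operating_expenses, earning_assets):
    """Operating expenses (administrative costs, salaries) as a share
    of total earning assets (loans and investments)."""
    return operating_expenses / earning_assets

# A bank spending $4M to support $80M in earning assets devotes
# 5 percent of those assets to operating expenses.
ratio = operating_expense_ratio(4_000_000, 80_000_000)
print(f"{ratio:.1%}")  # 5.0%
```

Because the denominator is earning assets rather than total assets, the ratio shows how much of the bank's income-producing base is consumed by the cost of running the institution.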
In addition, officials from some African-American and Hispanic banks cited attracting and retaining quality staff as a challenge to their profitability. Despite these challenges, officials from banks across minority groups were optimistic about the financial outlook for their institutions. When asked in our survey to rate their financial outlook compared to those of the past 3 to 5 years, 65 percent said it would be much or slightly better; 21 percent thought it would be about the same, and 11 percent thought it would be slightly or much worse, while 3 percent did not know. Officials from minority banks said that their institutions had advantages in serving minority communities. For example, officials from an Asian-American bank said that the staff’s ability to communicate in the customers’ primary language provided a competitive advantage. Our report found that FDIC—which supervises 109 of 195 minority banks—had developed the most extensive efforts to support minority banks among the banking regulators (see fig. 5). FDIC had also taken the lead in coordinating regulators’ efforts in support of minority banks, including leading a group of all the banking regulators that meets semiannually to discuss individual agency initiatives, training and outreach events, and each agency’s list of minority banks. OTS had developed a variety of support programs, including developing a minority bank policy statement and staffing support structure. OCC had also taken steps to support minority banks, such as developing a policy statement. OCC and the Federal Reserve had also hosted events for some minority banks. The following highlights some key support activities discussed in our October 2006 report. Policy Statements. FDIC, OTS, and OCC all have policy statements that outline the agencies’ efforts for minority banks. 
They discuss how the regulators identify minority banks, participate in minority bank events, provide technical assistance, and work toward preserving the character of minority banks during the resolution process. OCC officials told us that they developed their policy statement in 2001 after an interagency meeting of the federal banking regulators on minority bank issues. Both FDIC and OTS issued policy statements in 2002. Staffing Structure. FDIC has a national coordinator in Washington, D.C. and coordinators in each regional office from its Division of Supervision and Consumer Protection to implement the agency’s minority bank program. Among other responsibilities, the national coordinator regularly contacts minority bank trade associations about participation in events and other issues, coordinates with other agencies, and compiles quarterly reports for the FDIC chairman based on regional coordinators’ reports on their minority bank activities. Similarly, OTS has a national coordinator in its headquarters and supervisory and community affairs staff in each region who maintain contact with the minority banks that OTS regulates. While OCC and the Federal Reserve did not have similar staffing structures, officials from these agencies had contacted minority banks among their responsibilities. Minority Bank Events and Training. FDIC has taken the lead role in sponsoring, hosting, and coordinating events in support of minority banks. For example, in August 2006 FDIC sponsored a national conference for minority banks in which representatives from OTS, OCC, and the Federal Reserve participated. FDIC also has sponsored the Minority Bankers Roundtable (MBR) series, which agency officials told us was designed to provide insight into the regulatory relationship between minority banks and FDIC and explore opportunities for partnerships between FDIC and these banks. In 2005, FDIC held six roundtables around the country for minority banks supervised by all of the regulators. 
To varying degrees, OTS, OCC, and the Federal Reserve also have held events to support minority banks, such as events for Native American institutions. Technical Assistance. All of the federal banking regulators told us that they provided their minority banks with technical assistance if requested, but only FDIC and OTS have specific procedures for offering this assistance. More specifically, FDIC and OTS officials told us that they proactively seek to make minority banks aware of such assistance through established outreach procedures outside of their customary examination and supervision processes. FDIC also has a policy that requires its regional coordinators to ensure that examination case managers contact minority banks from 90 to 120 days after an examination to offer technical assistance in any problem areas that were identified during the examination. This policy is unique to minority banks. OCC and the Federal Reserve provide technical assistance to all of their banks, but had not established outreach procedures for all their minority banks outside of the customary examination and supervision processes. However, OCC officials told us that they were in the process of developing an outreach plan for all minority banks regulated by the agency. Federal Reserve officials told us that Federal Reserve districts conduct informal outreach to their minority banks and consult with other districts on minority bank issues as needed. Policies to Preserve the Minority Character of Troubled Banks. FDIC has developed policies for failing banks that are consistent with FIRREA’s requirement that the agency work to preserve the minority character of minority banks in cases of mergers and acquisitions. For example, FDIC maintains a list of qualified minority banks or minority investors that may be asked to bid on the assets of troubled minority banks that are expected to fail. 
However, FDIC is required to accept the bids on failing banks that pose the lowest expected cost to the Deposit Insurance Fund. As a result, all bidders, including minority bidders, are subject to competition. OTS and OCC have developed written policies that describe how the agencies will work with FDIC to identify qualified minority banks or investors to acquire minority banks that are failing. While the Federal Reserve does not have a similar written policy, agency officials say that they also work with FDIC to identify qualified minority banks or investors. All four agencies also said that they try to help troubled minority banks improve their financial condition before it deteriorates to the point that a resolution through FDIC becomes necessary. For example, agencies may provide technical assistance in such situations or try to identify other minority banks willing to acquire or merge with the troubled institutions. While FDIC was proactive in assessing its support efforts for minority banks, none of the regulators routinely and comprehensively surveyed their minority banks on all issues affecting the institutions, nor have the regulators established outcome-oriented performance measures. Evaluating the effectiveness of federal programs is vitally important to manage programs successfully and improve program results. To this end, in 1993 Congress enacted the Government Performance and Results Act, which instituted a governmentwide requirement that agencies report on their results in achieving their agency and program goals. As part of its assessment methods, FDIC conducted roundtables and surveyed minority banks on aspects of its minority bank efforts. For example, in 2005, FDIC requested feedback on its efforts from institutions that attended the agency’s six MBRs (which approximately one-third of minority banks attended). 
The agency also sent a survey letter to all minority banks to seek their feedback on several proposals to better serve such institutions, but only 24 minority banks responded. The proposals included holding another national minority bank conference, instituting a partnership program with universities, and developing a minority bank museum exhibition. FDIC officials said that they used the information gathered from the MBRs and the survey to develop recommendations for improving programs and developing new initiatives. While FDIC had taken steps to assess the effectiveness of its minority bank support efforts, we identified some limitations in its approach. For example, in FDIC’s surveys of minority banks, the agency did not solicit feedback on key aspects of its support efforts, such as the provision of technical assistance. Moreover, FDIC has not established outcome-oriented performance measures to gauge the effectiveness of its various support efforts. None of the other regulators had surveyed minority banks recently on support efforts or developed performance measures. By not taking such steps, we concluded that the regulators were not well positioned to assess their support efforts or identify areas for improvement. Further, the regulators could not take corrective action as necessary to provide better support efforts to minority banks. Minority bank officials we surveyed identified potential limitations in the regulators’ efforts to support them and related regulatory issues, such as examiners’ understanding of issues affecting minority banks, which would likely be of significance to agency managers and warrant follow-up analysis. Some 36 percent of survey respondents described their regulators’ efforts as very good or good, 26 percent described them as fair, and 13 percent described the efforts as poor or very poor (see fig. 6). A relatively large percentage—25 percent—responded “do not know” to this question. 
Banks’ responses varied by regulator, with 45 percent of banks regulated by FDIC giving very good or good responses, compared with about 25 percent of banks regulated by other agencies. However, more than half of FDIC-regulated banks and about three-quarters of the other minority banks responded that their regulator’s efforts were fair, poor, or very poor or responded with a “do not know.” In particular, banks regulated by OTS gave the highest percentage of poor or very poor marks, while banks regulated by the Federal Reserve most often provided fair marks. Nearly half of minority banks reported that they attended FDIC roundtables and conferences designed for minority banks, and about half of the 65 respondents that attended these events found them to be extremely or very useful (see fig. 7). Almost a third found them to be moderately useful, and 17 percent found them to be slightly or not at all useful. One participant commented that the information was useful, as was the opportunity to meet the regulators. Many banks also commented that the events provided a good opportunity to network and share ideas with other minority banks. While FDIC and OTS emphasized technical services as key components of their efforts to support minority banks, less than 30 percent of the institutions they regulate reported using such assistance within the last 3 years (see fig. 8). Minority banks regulated by OCC and the Federal Reserve reported similarly low usage of technical assistance services. However, of the few banks that used technical assistance—41—the majority rated the assistance provided as extremely or very useful. Further, although small minority banks and African-American banks of all sizes have consistently faced financial challenges and might benefit from certain types of assistance, the banks also reported low rates of usage of the agencies’ technical assistance. 
Our survey did not address why relatively few minority banks use the technical assistance offered, and banking regulators cannot compel the banks they supervise to use it; nevertheless, many such institutions may be missing opportunities to learn how to correct problems that limit their operational and financial performance. More than 80 percent of the minority banks we surveyed responded that their regulators did a very good or good job of administering examinations, and almost 90 percent felt that they had very good or good relationships with their regulator. However, as in our 1993 report, some minority bank officials said in both survey responses and interviews that examiners did not always understand the challenges the banks faced in providing services in their particular communities. Twenty-one percent of survey respondents mentioned this issue when asked for suggestions about how regulators could improve their efforts to support minority banks, and several minority banks that we interviewed elaborated on this topic. The bank officials said that examiners tended to treat minority banks like any other bank when they conducted examinations and thought such comparisons were not appropriate. For example, some bank officials whose institutions serve immigrant communities said that their customers tended to do business in cash and carried a significant amount of cash because banking services were not widely available or trusted in the customers’ home countries. Bank officials said that examiners sometimes commented negatively on the practice of customers doing business in cash or placed the bank under increased scrutiny relative to the Bank Secrecy Act’s requirements for cash transactions. 
While the bank officials said that they did not expect preferential treatment in the examination process, several suggested that examiners undergo additional training so that they could better understand minority banks and the communities that these institutions served. FDIC has conducted such training for its examiners. In 2004, FDIC invited the president of a minority bank to speak to about 500 FDIC examiners on the uniqueness of minority banks and the examination process. FDIC officials later reported that the examiners found the discussion helpful. Many survey respondents also said that a CRA provision that was designed to assist their institutions was not effectively achieving this goal. The provision allows bank regulators conducting CRA examinations to give consideration to banks that assist minority banks through capital investment, loan participation, and other ventures that help meet the credit needs of local communities. Despite this provision, only 18 percent of survey respondents said that CRA had—to a very great or great extent—encouraged other institutions to invest in or form partnerships with their institutions, while more than half said that CRA encouraged such activities to some, little, or no extent (see fig. 9). Some minority bankers attributed their view that the CRA provision has not been effective, in part, to a lack of clarity in interagency guidance on the act’s implementation. They said that the interagency guidance should be clarified to assure banks that they will receive CRA consideration in making investments in minority banks. Our 2006 report recommended that the bank regulators regularly review the effectiveness of their minority bank support efforts and related regulatory activities and, as appropriate, make changes necessary to better serve such institutions. 
In conducting such reviews, we recommended that the regulators consider conducting periodic surveys of minority banks or developing outcome-oriented performance measures for their support efforts. In conducting such reviews, we also suggested that the regulators focus on the overall views of minority banks about support efforts, the usage and effectiveness of technical assistance (particularly assistance provided to small minority and African-American banks), and the level of training provided to agency examiners on minority banks and their operating environments. Over the past year, bank regulatory officials we contacted identified several steps that they have initiated to assess the effectiveness of their minority bank support efforts or to enhance such support efforts. They include the following actions: A Federal Reserve official told us that the agency has established a working group that is developing a pilot training program for minority banks and new banks. The official said that three training modules have been drafted for different phases of a bank’s life, including starting a bank, operating a bank during its first 5 years of existence, and bank expansion. The official said that the program will be piloted throughout the U.S. beginning in early November 2007. Throughout the course of developing, drafting, and piloting the program, Federal Reserve officials said they have consulted, and will continue to consult, with minority bankers to obtain feedback on the effort. An OCC official said that the agency recently sent a survey to minority banks on its education, outreach, and technical assistance efforts that should be completed by the end of October. OCC also plans to follow up this survey with a series of focus groups. 
In addition, the official said OCC just completed an internal survey of certain officials involved in supervising minority institutions, and plans to review the results of the two surveys and focus groups to improve its minority bank support efforts. FDIC officials told us that the agency has developed a survey to obtain feedback on the agency’s minority bank support efforts. They estimate that the survey will be sent out to all minority institutions (not just those minority banks FDIC supervises) in mid-December 2007. An OTS official told us that the agency will send out a survey to the minority banks the agency supervises on its efforts in the next couple of weeks and that it has also conducted a series of roundtables with minority banks in the past year. The federal banking agencies have also taken some steps to address other issues raised in our report. For example, Federal Reserve and FDIC officials told us that the agencies will provide additional training on minority bank issues to their examiners. In addition, in July 2007 the federal banking agencies published a CRA Interagency Notice that requested comments on nine new “Questions and Answers” about community reinvestment. One question covers how majority banks may engage in and receive positive CRA consideration for activities conducted with minority institutions. An OCC official said that the comments on the proposed “Q and As” are under review. While the regulators’ recent efforts to assess and enhance their minority bank support efforts and other activities are encouraging, it is too soon to assess their effectiveness. For example, the Federal Reserve’s pilot training program for minority and new banks is not scheduled to begin until later this year. Further, the other regulators’ efforts to survey minority banks on support efforts generally also are at an early stage. 
We encourage agency officials to ensure that they collect and analyze relevant data and take steps to enhance their minority bank support efforts as warranted. Mr. Chairman, this concludes my prepared statement. I would be happy to address any questions that you or subcommittee members may have. For further information about this testimony, please contact George A. Scott on (202) 512-7215 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions include Wesley M. Phillips, Assistant Director; Allison Abrams; Kevin Averyt; and Barbara Roesmann. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Minority banks can play an important role in serving the financial needs of historically underserved communities and growing populations of minorities. For this reason, the Financial Institutions, Reform, Recovery, and Enforcement Act of 1989 (FIRREA) established goals that the Federal Deposit Insurance Corporation (FDIC) and the Office of Thrift Supervision (OTS) must work toward to preserve and promote such institutions (support efforts). While not required to do so by FIRREA, the Board of Governors of the Federal Reserve System (Federal Reserve) and Office of the Comptroller of the Currency (OCC) have established some minority bank support efforts. 
This testimony, based on a 2006 Government Accountability Office (GAO) report, discusses the profitability of minority banks, regulators' support and assessment efforts, and the views of minority banks on the regulators' efforts as identified through responses from a survey of 149 such institutions. GAO reported in 2006 that the profitability of most large minority banks (assets greater than $100 million) was nearly equal to that of their peers (similarly sized banks) in 2005 and earlier years, according to FDIC data. However, many small minority banks and African-American banks of all sizes were less profitable than their peers. GAO's analysis and other studies identified some possible explanations for these differences, including relatively higher loan loss reserves and operating expenses and competition from larger banks. Bank regulators had adopted differing approaches to supporting minority banks, but no agency had regularly and comprehensively assessed the effectiveness of its efforts. FDIC--which supervises over half of all minority banks--had the most comprehensive support efforts and leads interagency efforts. OTS focused on providing technical assistance to minority banks. While not required to do so by FIRREA, OCC and the Federal Reserve had taken some steps to support minority banks. Although FDIC had recently sought to assess the effectiveness of its support efforts through various methods, none of the regulators comprehensively surveyed minority banks or had developed performance measures. Consequently, the regulators were not well positioned to assess their support efforts. GAO's survey of minority banks identified potential limitations in the regulators' support efforts that would likely be of significance to agency managers and warrant follow-up analysis. 
Only about one-third of survey respondents rated their regulators' efforts for minority banks as very good or good, while 26 percent rated the efforts as fair, 13 percent as poor or very poor, and 25 percent responded "don't know". Banks regulated by FDIC were more positive about their agency's efforts than banks regulated by other agencies. However, only about half of the FDIC-regulated banks and about a quarter of the banks regulated by other agencies rated their agency's efforts as very good or good. Although regulators may have emphasized the provision of technical assistance to minority banks, fewer than 30 percent of such institutions had used such agency services within the previous 3 years and therefore may be missing opportunities to address problems that limit their operations or financial performance. |
Mr. Chairman and Members of the Committee: I am pleased to be here today to discuss our observations on the Department of Justice’s August draft of its strategic plan. The Government Performance and Results Act of 1993 (the Results Act) requires that all executive branch agencies submit their plans to Congress and the Office of Management and Budget (OMB) by September 30, 1997. My statement focuses on Justice’s August draft strategic plan and builds on our July comments regarding Justice’s February draft plan. Specifically, my statement will focus on the August plan’s compliance with the Act’s requirements and on the extent to which it covered crosscutting program activities, management challenges, and Justice’s capacity to provide reliable performance information. In summary, Justice’s February draft of its strategic plan was incomplete in that of the six elements required by the Act, three—the relationship between long-term goals/objectives and the annual performance plans, the key factors external to Justice that could affect Justice’s ability to meet its goals, and a program evaluation component—were not specifically identified in the draft plan. The remaining three elements—the mission statement, goals and objectives, and strategies to achieve the goals and objectives—were discussed. The August plan includes two of the three missing elements but does not include a required discussion of the third element—how the long-term goals and objectives are tied to Justice’s annual performance plans. In addition, the revised plan would better meet the purposes of the Act if it provided more complete coverage of crosscutting programs, management challenges, and performance information. In the 1990s, Congress put in place a statutory framework to address long-standing weaknesses in federal government operations, improve federal management practices, and provide greater accountability for achieving results. 
This framework included as its essential elements financial management reform legislation, information technology reform legislation, and the Results Act. In enacting this framework, Congress sought to create a more focused, results-oriented management and decisionmaking process within both Congress and the executive branch. These laws seek to improve federal management by responding to a need for accurate, reliable information for congressional and executive branch decisionmaking. This information has been badly lacking in the past, as much of our work has demonstrated. Implemented together, these laws provided a powerful framework for developing fully integrated information about agencies’ missions and strategic priorities, data to show whether or not the goals are achieved, the relationship of information technology investment to the achievement of those goals, and accurate and audited financial information about the costs of achieving mission results. The Results Act focuses on clarifying missions, setting goals, and measuring performance toward achieving those goals. It emphasizes managing for results and pinpointing opportunities for improved performance and increased accountability. Congress intended for the Act to improve the effectiveness of federal programs by fundamentally shifting the focus of management and decisionmaking away from a preoccupation with tasks and services to a broader focus on results of federal programs. Among other things, the Act requires that agencies’ strategic plans describe how program evaluations were used to establish and revise strategic goals and include a schedule for future program evaluations. 
Justice’s strategic plan is organized around what Justice has identified as its seven core functions: (1) investigation and prosecution of criminal offenses; (2) assistance to state and local governments; (3) legal representation, enforcement of federal laws, and defense of federal government interests; (4) immigration; (5) detention and incarceration; (6) protection of the federal judiciary and improvement of the justice system; and (7) management. Justice’s February draft of its strategic plan was incomplete and did not provide Congress with critical information for its consultations with Justice. Justice’s August version added two of the three required elements that were missing in the February plan. As a result, the August plan includes, to some degree, a discussion on five of the six required elements—a mission statement, goals and objectives, key external factors, a program evaluation component, and strategies to achieve the goals and objectives. The August plan does not include a required discussion of a sixth element—the relationship between Justice’s long-term goals/objectives and its annual performance plans. The plan presents the following mission statement: “Our mission at the United States Department of Justice is to enforce the law and defend the interests of the U.S. according to the law, provide Federal leadership in preventing and controlling crime, seek just punishment for those guilty of unlawful behavior, administer and enforce the Nation’s immigration laws fairly and effectively and ensure fair and impartial administration of justice for all Americans.” Justice’s mission statement covers six of the seven core functions that Justice identified but does not specify the detention and incarceration function, which is one of Justice’s largest budget items. The plan does incorporate the detention and incarceration function in the discussion of goals and objectives and in its strategies to achieve those goals and objectives. 
Justice officials said that it was their intent to cover the detention and incarceration function by the phrases “seek just punishment . . .” and “ensure fair and impartial administration of justice . . .” While we agree that mission statements may vary in the extent to which they specify particular activities, we believe that it would be helpful to explicitly include the detention and incarceration function in this case. Our belief is based on Justice’s decision to specify all of the other major functions in its mission statement and our concern that the Department’s stakeholders may not interpret the phrases cited by Justice officials as indicating that the detention and incarceration component is part of its mission. Justice’s goals and objectives cover its major functions and operations and are logically related to its mission. However, they are not as results oriented as they could be and some focus on activities and processes. For example, one set of results-oriented goals involves reducing violent, organized, and gang-related crime; drug-related crime; espionage and terrorism; and white collar crime. However, goals in other areas are more process oriented, such as “Represent the United States in all civil matters for which the Department of Justice has jurisdiction,” “Promote the participation of victims and witnesses throughout each stage of criminal and juvenile justice proceedings at the Federal, State, and local levels,” and “Make effective use of information technology.” Another concern we have with some of the goals is that they are not always expressed in as measurable a form as intended by OMB guidance. For example, two of Justice’s goals in the legal representation, enforcement of federal laws, and defense of U.S. interests core function are to protect the civil rights of all Americans and safeguard America’s environment and natural resources. It is not clear from the August plan how Justice will measure its progress in achieving these goals. 
The Results Act and OMB Circular A-11 indicate that agency strategic plans should describe the processes the agencies will use to achieve their goals and objectives. Our review of Justice’s strategic plan, specifically the strategies and performance indicators, identified areas where the plan did not fully meet the Act’s requirements and OMB Circular A-11 guidance. For example, it is unclear how Justice will determine the extent to which its programs and activities have contributed to changes in violent crime, availability and abuse of illegal drugs, espionage and terrorism, and white collar crime. Similarly, in its immigration core function, Justice has a goal to maximize deterrence to unlawful migration by reducing the incentives of unauthorized employment and assistance. It is likewise unclear how Justice will be able to determine the effect of its efforts to deter unlawful migration, as differentiated from the effect of changes in the economic and political conditions in countries from which illegal aliens originated. The plan does not address either issue. Some of Justice’s performance indicators are more output than outcome related. For example, one cited strategy for achieving the goal of ensuring border integrity is to prevent illegal entry by increasing the strength of the Border Patrol. One of the performance indicators Justice is proposing as a measure of how well the strategy is working is the percentage of time that Border Patrol agents devote to actual border control operations. While this measure may indicate whether agents are spending more time controlling the border, it is not clear how it will help Justice assess its progress in deterring unlawful migration. The Act requires that agencies’ plans discuss the types of resources (e.g., human skills, capital, and information technology) that will be needed to achieve the strategic and performance goals and OMB guidance suggests that agencies’ plans discuss any significant changes to be made in resource levels. Justice’s plan does not include either discussion. 
This information could be beneficial to Justice and Congress in agreeing on the goals, evaluating Justice’s progress in achieving the goals, and making resource decisions during the budget process. In its August plan, Justice added a required discussion on key external factors that could affect its plan outcomes. Justice discusses eight key external factors that could significantly affect achievement of its long-term goals. These factors include emergencies and other unpredictable events (e.g., the bombing of the Alfred P. Murrah building), changing statutory responsibilities, changing technology, and developments overseas. According to Justice, isolating the particular effects of law enforcement activity from these eight factors that affect outcomes and over which Justice has little control is extremely difficult. This component of the plan would be more helpful to decisionmakers if it included a discussion of alternatives that could reduce the potential impact of these external factors. In its August plan, Justice added a required discussion on the role program evaluation is to play in its strategic planning efforts. Justice recognizes that it has done little in the way of formal evaluations of Justice programs and states that it plans to examine its evaluation approach to better align evaluations with strategic planning efforts. The August plan identifies ongoing evaluations being performed by Justice’s components. OMB guidance suggests that this component of the plan include a general discussion of how evaluations were used to establish and revise strategic goals, and identify future planned evaluations and their general scope and time frames. Justice’s August plan does neither. Under the Results Act, Justice’s long-term strategic goals are to be linked to its annual performance plans and the day-to-day activities of its managers and staff. This linkage is to provide a basis for judging whether an agency is making progress toward achieving its long-term goals. 
However, Justice’s August plan does not provide such linkages. In its August plan, Justice pointed out that its fiscal year 1999 annual performance planning and budget formulation activities are to be closely linked and that both are to be driven by the goals of the strategic plan. It also said that the linkages would become more apparent as the fiscal year 1999 annual performance plan and budget request are issued. The plan also does not adequately address crosscutting program activities, such as how Justice and the Department of the Treasury, which have similar responsibilities concerning the seizure and forfeiture of assets used in connection with illegal activities (e.g., money laundering), will coordinate and integrate their operations; how INS will work with the Bureau of Prisons and state prison officials to identify criminal aliens; and how INS and the Customs Service, which both inspect arriving passengers at ports of entry to determine whether they are carrying contraband and are authorized to enter the country, will coordinate their resources. Along these lines, certain program areas within Justice have similar or complementary functions that are not addressed or could be better discussed in the strategic plan. For example, both the Bureau of Prisons and INS detain individuals, but the plan does not address the interrelationship of their similar functions or prescribe comparable measures for inputs and outcomes. As a second example, the plan does not fully recognize the linkage among Justice’s investigative, prosecutorial, and incarceration responsibilities. One purpose of the Results Act is to improve the management of federal agencies. Therefore, it is particularly important that agencies develop strategies that address management challenges that threaten their ability to achieve both long-term strategic goals and this purpose of the Act. 
Over the years, we as well as others, including the Justice Inspector General and the National Performance Review (NPR), have addressed many management challenges that Justice faces in carrying out its mission. In addition, recent audits under the Chief Financial Officers Act of 1990 (CFO Act), expanded by the Government Management Reform Act, have revealed internal control and accounting problems. Justice’s February draft strategic plan was silent on these issues, but the August plan contains a new section on “Issues and Challenges in Achieving Our Goals.” This new section discusses Justice’s process for managing its information technology investments, steps taken to provide security over its information systems, and its strategy to ensure that computer systems accommodate dates beyond the year 2000. However, neither this new section nor the “Management” core function addresses some of the specific management problems that have been identified over the years and the status of Justice’s efforts to address them. In its August draft plan, Justice also added a discussion on “accountability,” which points out that Justice has an internal control process that systematically identifies management weaknesses and vulnerabilities and specifies corrective actions. This section also recognizes the role of Justice’s Inspector General. However, the plan would be more helpful if it included a discussion of corrective actions Justice has planned for internally and externally identified management weaknesses, as well as how it plans to monitor the implementation of such actions. In addition, the plan does not address how Justice will correct significant problems identified during the Inspector General’s fiscal year 1996 financial statement audits, such as inadequate safeguarding and accounting for physical assets and weaknesses in the internal controls over data processing operations. 
To efficiently and effectively operate, manage, and oversee its diverse array of law enforcement-related responsibilities, Justice needs reliable data on its results and those of other law enforcement-related organizations. Further, Justice will need to rely on a variety of external data sources (e.g., state and local law enforcement agencies) to assess the impact of its plan. These data are needed so that Justice can effectively measure its progress and monitor, record, account for, summarize, and analyze crime-related data. Justice’s August strategic plan contains little discussion about its capacity to provide performance information for assessing its progress toward its goals and objectives over the next 5 years. Such a discussion could include Justice’s strategies for (1) developing complete and reliable budget, accounting, and performance data to support decisionmaking and (2) integrating the planning, reporting, and decisionmaking processes. These strategies could assist Justice in producing results-oriented reports on its financial condition and operating performance. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions. 
| GAO discussed the Department of Justice's August 1997 draft strategic plan developed in compliance with the Government Performance and Results Act of 1993, focusing on the plan's compliance with the Act's requirements and on the extent to which it covered crosscutting program activities, management challenges, and Justice's capacity to provide reliable performance information. GAO noted that: (1) Justice's plan discusses, to some degree, five of the six required elements--mission statement, goals and objectives, key external factors, a program evaluation component, and strategies to achieve the goals and objectives; (2) the plan does not include a required discussion on the relationship between Justice's long-term goals/objectives and its annual performance plans; (3) the draft plan could better address how Justice plans to: (a) coordinate with other federal, state, and local agencies that perform similar law enforcement functions, such as the Defense and State Departments regarding counter-terrorism; (b) address the many management challenges it faces in carrying out its mission, such as internal control and accounting problems; and (c) increase its capacity to provide performance information for assessing its progress in meeting the goals and objectives over the next 5 years. |
NRC’s implementation of a risk-informed, performance-based regulatory approach for commercial nuclear power plants is complex and will require many years to fully implement. It requires basic changes to the regulations and NRC’s processes to ensure the safe operation of these plants. NRC faces a number of challenges to develop and to implement this process. For example, because of the complexity of this change, the agency needs a strategy to guide its development and implementation. We recommended such a strategy in March 1999. We suggested that a clearly defined strategy would help guide the regulatory transformation if it described the regulatory activities NRC planned to change to a risk-informed approach, the actions needed to accomplish this transformation, and the schedule and resources needed to make these changes. NRC initially agreed that it needed a comprehensive strategy, but it has not developed one. As one NRC Commissioner said in March 2000, “we really are . . . inventing this as we go along given how much things are changing, it’s very hard to plan even 4 months from now, let alone years from now.” NRC did develop the Risk-Informed Regulation Implementation Plan, which includes guidelines to identify, set priorities for, and implement risk-informed changes to regulatory processes. The plan also identifies specific tasks and projected milestones. The Risk-Informed Regulation Implementation Plan is not as comprehensive as it needs to be, because it does not identify performance measures, the items that are critical to achieving its objectives, activities that cut across its major offices, resources, or the relationships among the more than 40 separate activities (25 of which pertain to nuclear plants). For example, risk-informing NRC’s regulations will be a formidable task because they are interrelated. Amending one regulation can potentially affect other regulations governing other aspects of nuclear plant operations. 
NRC found this to be the case when it identified over 20 regulations that would need to be made consistent as it developed a risk-informed approach for one regulation. NRC expects that its efforts to change its regulations applicable to nuclear power plants to focus more on relative risk will take 5 to 8 years. NRC has compounded the complexity of moving to a new regulatory approach by deciding that compliance with such an approach will be voluntary. As a result, NRC will be regulating with two different systems—one for those utilities that choose to comply with a risk-informed approach and another for those that choose to stay with the existing regulatory approach. It is not clear how this dual system will be implemented. One part of the new risk-informed approach that has been implemented is a new safety oversight process for nuclear power plants. It was implemented in April 2000, and since then, NRC’s challenge has been to demonstrate that the new approach meets its goal of maintaining the same level of safety as the old approach, while being more predictable and consistent. The nuclear industry, states, public interest groups, and NRC staff have raised questions about various aspects of the process. For example, the industry has expressed concern about some of the performance indicators selected. Some NRC staff are concerned that the process does not track all inspection issues and that NRC will not have the information available should the public later demand accountability from the agency. Furthermore, it is very difficult under the new process to assess those activities that cut across all aspects of plant operations—problem identification and resolution, human performance, and safety conscious work environment. In June 2001, NRC staff expect to report to the Commission on the first year of implementation of the new process and recommend changes, where warranted. 
NRC is facing a number of difficulties inherent in applying a risk-informed regulatory approach for nuclear material licensees. The sheer number of licensees—almost 21,000—and the diversity of the activities they conduct—converting uranium, decommissioning nuclear plants, transporting radioactive materials, and using radioactive material for industrial, medical, or academic purposes—increase the complexity of developing a risk-informed approach that would adequately cover all types of licensees. For example, the diversity of licensees results in varying levels of analytical sophistication; different experience in using risk-informed methods, such as risk assessments and other methods; and uneven knowledge about the analytical methods that would be useful to them. Because material licensees will be using different risk-informed methods, NRC has grouped them by the type of material used and the regulatory requirements for that material. For example, licensees that manufacture casks to store spent reactor fuel could be required to use formal analytical methods, such as a risk assessment. Other licensees, such as those that use nuclear material in industrial and medical applications, would not be expected to conduct risk assessments. In these cases, NRC staff said that they would use other methods to determine those aspects of the licensees’ operations that have significant risk, using an approach that considers the hazards (type, form, and quantity of material) and the barriers or physical and administrative controls that prevent or reduce exposure to these hazards. Another challenge associated with applying a risk-informed approach to material licensees is how NRC will implement a new risk-informed safety and safeguards oversight process for fuel cycle facilities. Unlike commercial nuclear power plants, which have a number of design similarities, most of the 10 facilities that prepare fuel for nuclear reactors perform separate and unique functions. 
For example, one facility converts uranium to a gas for use in the enrichment process, two facilities enrich or increase the amount of uranium-235 in the gas, and five facilities fabricate the uranium into fuel for commercial nuclear power plants. These facilities possess large quantities of materials that are potentially hazardous (i.e., explosive, radioactive, toxic, and/or combustible) to workers. The facilities’ diverse activities make it particularly challenging for NRC to design a “one size fits all” safety oversight process and to develop indicators and thresholds of performance. In its recently proposed new risk-informed safety oversight process for material licensees, NRC has yet to resolve such issues as the structure of the problem identification, resolution, and corrective action program; the mechanics of the risk-significance determination process; and the regulatory responses that NRC would take when changes in performance occur. NRC had planned to pilot test the new fuel cycle facility safety oversight process in fiscal year 2001, but staff told us that this schedule could slip. NRC also faces challenges in redefining its role in a changing regulatory environment. As the number of agreement states increases beyond the existing 32, NRC must continue to ensure the adequacy and consistency of the states’ programs as well as its own effectiveness and efficiency in overseeing licensees that are not regulated by the agreement states. NRC has been working with the Conference of Radiation Control Program Directors (primarily state officials) and the Organization of Agreement States to address these challenges. However, NRC has yet to address the following questions: (1) Would NRC continue to need staff in all four of its regional offices as the number of agreement states increases? (2) What are the appropriate number, type, and skills for headquarters staff? and (3) What should NRC’s role be in the future? 
Later this month, an NRC/state working group expects to provide the Commission with its recommended options for the materials program of the future. NRC wants to be in a position to plan for needed changes because in 2003, it anticipates that 35 states will have agreements with NRC and that the states will oversee more than 85 percent of all material licensees. Another challenge NRC faces is to demonstrate that it is meeting one of its performance goals under the Government Performance and Results Act—increasing public confidence in NRC as an effective regulator. There are three reasons why this will be difficult. First, to ensure its independence, NRC cannot promote nuclear power, and it must walk a fine line when communicating with the public. Second, NRC has not defined the “public” that it wants to target in achieving this goal. Third, NRC has not established a baseline to measure the “increase” in its performance goal. In March 2000, the Commission rejected a staff proposal to conduct a survey to establish a baseline. Instead, in October 2000, NRC began an 18-month pilot effort to use feedback forms at the conclusion of public meetings. Twice a year, NRC expects to evaluate the information received on the forms to enhance its public outreach efforts. The feedback forms that NRC currently plans to use will provide information on the extent to which the public was aware of the meeting and the clarity, completeness, and thoroughness of the information provided by NRC at the meetings. Over time, the information from the forms may show that the public better understands the issues of concern or interest for a particular plant. It is not clear, however, how this information will show that public confidence in NRC as a regulator has increased. This performance measure is particularly important to bolster public confidence as the industry decides whether to submit a license application for one or more new nuclear power plants. 
The public has a long history with the traditional regulatory approach and may not fully understand the reasons for implementing a risk-informed approach and the relationship of that approach to maintaining plant safety. In a highly technical and complex industry, NRC is facing the loss of a significant percentage of its senior managers and technical staff. For example, in fiscal year 2001, about 16 percent of NRC staff are eligible to retire, and by the end of fiscal year 2005, about 33 percent will be eligible. The problem is more acute at the individual office level. For example, within the Office of Nuclear Reactor Regulation, about 42 percent of the technical staff and 77 percent of senior executive service staff are eligible for retirement. During this period of potentially very high attrition, NRC will need to rely on that staff to address the nuclear industry’s increasing demands to extend the operating licenses of existing plants and transfer the ownership of others. Likewise, in the Office of Nuclear Regulatory Research, 49 percent of the staff are eligible to retire at the same time that the nuclear industry is considering building new plants. Since that Office plays a key role in reviewing any new plants, if that Office loses some of its highly skilled, well-recognized research specialists to retirement, NRC will be challenged to make decisions about new plants in a timely way, particularly if the plant is an untested design. In its fiscal year 2000 performance plan, NRC identified the need to maintain core competencies and staff as an issue that could affect its ability to achieve its performance goals. NRC noted that maintaining the correct balance of knowledge, skills, and abilities is critical to accomplishing its mission and is affected by various factors. 
These factors include the tight labor market for experienced professionals, the workload as projected by the nuclear industry to transfer and extend the licenses of existing plants, and the declining university enrollment in nuclear engineering studies and other fields related to nuclear safety. In October 2000, NRC’s Chairman requested the staff to develop a plan to assess the scientific, engineering, and technical core competencies that NRC needs and propose specific strategies to ensure that the agency maintains that competency. The Chairman noted that maintaining technical competency may be the biggest challenge confronting NRC. In January 2001, NRC staff provided a suggested action plan for maintaining core competencies to the Commission. The staff proposed to begin the 5-year effort in February 2001 at an estimated cost of $2.4 million, including the costs to purchase software that will be used to identify the knowledge and skills needed by NRC. To assess how existing human capital approaches support an agency’s mission, goals, and other organizational needs, we developed a human capital framework, which identified a number of elements and underlying values that are common to high-performing organizations. NRC’s 5-year plan appears to generally include the human capital elements that we suggested. In this regard, NRC has taken the initiative and identified options to attract new employees with critical skills, developed training programs to meet its changing needs, and identified legislative options to help resolve its aging staff issue. The options include allowing NRC to rehire retired staff without jeopardizing their pension payments and to provide salaries comparable to those paid in the private sector. In addition, for nuclear reactor and nuclear material safety, NRC expects to implement an intern program in fiscal year 2002 to attract and retain individuals with scientific, engineering, and other technical competencies. 
It has established a tuition assistance program, relocation bonuses, and other inducements to encourage qualified individuals not only to accept but also to continue their employment with the agency. NRC staff say that the agency is doing the best that it can with the tools available to hire and retain staff. Continued oversight of NRC’s multiyear effort is needed to ensure that it is being properly implemented and is effective in achieving its goals. Mr. Chairman and Members of the Subcommittee, this concludes our statement. We would be pleased to respond to any questions you may have.
Food aid comprises all food-supported interventions by foreign donors to individuals or institutions within a country. It has helped save millions of lives and improve the nutritional status of the most vulnerable groups, including women and children, in developing countries. Food aid is one element of a broader global strategy to enhance food security by reducing poverty and improving the availability of, access to, and use of food in low-income, less developed countries. Food aid is utilized as both a humanitarian response to address acute hunger in emergencies and a development-focused response to address chronic hunger. Large-scale conflicts, poverty, weather calamities, and severe health-related problems are among the underlying causes of both acute and chronic hunger. Countries provide food aid through either in-kind donations or cash donations. In-kind food aid is food procured and delivered to vulnerable populations, while cash donations are given to implementing organizations to purchase food in local, regional, or global markets. U.S. food aid programs are all in-kind, and no cash donations are allowed under current legislation. However, the administration has recently proposed legislation to allow up to 25 percent of appropriated food aid funds to purchase commodities in locations closer to where they are needed. Other food aid donors have also recently moved from providing primarily in-kind aid to more or all cash donations for local procurement. Despite ongoing debates as to which form of assistance is more effective and efficient, the largest international food aid organization, the UN WFP, continues to accept both. The United States is both the largest overall provider and the largest in-kind provider of food aid to WFP, supplying about 43 percent of WFP’s total contributions in 2006 (see fig. 1) and 70 percent of WFP’s in-kind contributions in 2005. Other major donors of in-kind food aid in 2005 included China, the Republic of Korea, Japan, and Canada. 
In fiscal year 2006, the United States delivered food aid through its largest program to over 50 countries, with about 80 percent of its funding allocations for in-kind food donations going to Africa, 12 percent to Asia and the Near East, 7 percent to Latin America, and 1 percent to Eurasia (see fig. 2). Of the 80 percent of the food aid funding going to Africa, 30 percent went to Sudan, 27 percent to the Horn of Africa, 18 percent to southern Africa, 14 percent to West Africa, and 11 percent to Central Africa. Food aid is used for emergency and nonemergency purposes. Program design and implementation decisions for both emergency and nonemergency situations are informed by assessments that help determine the nature and scale of humanitarian crises and the type and scope of assistance needed. These assessments inform the selection of geographic areas to be targeted as well as criteria for the selection of intended recipients. The majority of U.S. emergency food aid resources are distributed to affected communities and households that require food assistance to survive an emergency and begin the process of recovery. Emergency needs assessments include analyses of various factors, among them the effects of the crisis on vulnerable populations, strategies used by these populations to deal with the crisis, and the outcome in terms of food insecurity. They are usually carried out as a joint effort by several organizations, including FAO, WFP, and NGOs, in response to a request from the government of an affected country. In addition to collecting primary data, assessors may use information from other sources, such as population estimates and agricultural data from recipient governments. Assessors may also rely on pre-crisis vulnerability assessments and information generated by early warning systems, such as the USAID- funded Famine Early Warning System Network and the FAO-funded Global International Early Warning System. In nonemergency situations, U.S. 
commodities may be provided to address chronic hunger. In addition, U.S. law allows U.S. commodities to be sold—i.e., monetized—in developing countries to generate cash for development activities that address causes and symptoms of chronic food insecurity. For example, food may be provided in exchange for labor in poor communities to build agricultural infrastructure, or cash from monetization may be used to provide basic health services, nutrition education, and agricultural training. Assessments conducted during nonemergency situations help to identify vulnerable populations and the need for food aid interventions. Over the last several years, funding for nonemergency U.S. food aid programs has declined. For example, in fiscal year 2001, the United States directed approximately $1.2 billion of funding for international food aid programs to nonemergencies. In contrast, in fiscal year 2006, the United States directed approximately $698 million for international food aid programs to nonemergencies (see fig. 3). U.S. food aid is funded under four program authorities and delivered through six programs administered by USAID and USDA; the programs serve a range of objectives, including humanitarian goals, economic assistance, foreign policy, market development, and international trade. (For a description of each of these programs, see app. II.) The largest program, P.L. 480 Title II, is managed by USAID and represents approximately 74 percent of total in-kind food aid allocations over the past 4 years, mostly to fund emergency programs (see fig. 4). In addition, P.L. 480, as amended, authorizes USAID to preposition food aid both domestically and abroad with a cap on storage expenses for foreign prepositioning sites of $2 million per fiscal year. U.S. food aid programs also have multiple legislative and regulatory mandates that affect their operations. One mandate that governs U.S. 
food aid transportation is cargo preference, which is designed to support a U.S.- flag commercial fleet for national defense purposes. Cargo preference requires that 75 percent of the gross tonnage of all government-generated cargo be transported on U.S.-flag vessels. A second transportation mandate, known as the Great Lakes Set-Aside, requires that up to 25 percent of Title II bagged food aid tonnage be allocated to Great Lakes ports each month. Other mandates require that a minimum of 2.5 million metric tons of food aid be provided through Title II programs and that of this amount, a subminimum of 1.825 million metric tons be provided for nonemergency programs. (For a summary of congressional mandates for P.L. 480, see app. II.) Multiple U.S. government agencies coordinate U.S. food aid programs. USDA and USAID share in the administration of all U.S. food aid programs. USDA’s KCCO manages the product standards, purchase, and delivery of all food aid commodities, while other branches of USDA—such as the Animal and Plant Health Inspection Service (APHIS) and the Federal Grain Inspection Service (FGIS)—conduct quality reviews and certification of food aid products. DOT/MARAD is also involved in supporting the ocean transport of food aid on U.S. vessels. Finally, the U.S. Department of State works to advance U.S. food aid as part of its international humanitarian and multilateral assistance initiatives. U.S. food aid programs also involve many stakeholders, including donors, implementing organizations (also known as cooperating sponsors), agricultural commodity groups, and the maritime industry. U.S. agencies channel U.S. food aid contributions through organizations such as WFP, NGOs, and recipient country governments that serve as implementing partners. The level of contributions that each implementing partner receives varies for each food aid program. For example, between 2001 and 2006, WFP received the majority of U.S. 
Title II emergency food aid resources—approximately 78 percent—while NGOs received 94 percent of nonemergency Title II resources. Recipient country governments received considerable amounts of funding for USDA food aid programs. For example, the governments received 43 percent of funding for the Food for Progress program, while NGOs received 55 percent. Stakeholders use various forums to discuss and coordinate U.S. food aid programs. The principal interagency forums are the Food Assistance Policy Council and the Food Aid Consultative Group. Led by USDA’s Under Secretary for Farm and Foreign Agricultural Services, the Food Assistance Policy Council includes representatives from USDA, USAID, and other key government agencies. The council oversees the Bill Emerson Humanitarian Trust, an emergency food reserve. The Food Aid Consultative Group, which includes various working groups, is led by USAID’s Office of Food for Peace. As stipulated by law, the Food Aid Consultative Group includes representatives from USAID, USDA, NGOs, and agricultural commodity groups. It meets at least twice a year and addresses issues concerning the effectiveness of the regulations and procedures that govern food assistance programs. Multiple challenges reduce the efficiency of U.S. food aid programs, including logistical constraints that impede food aid delivery and reduce the amount and quality of food provided as well as inefficiencies inherent in the current practice of using food aid to generate cash resources to fund development projects. While in some cases agencies have tried to expedite food aid delivery, most food aid program expenditures are for logistics, and the delivery of food from vendor to village is generally too time-consuming to be responsive in emergencies. 
Factors that increase logistical costs and time frames include uncertain funding and inadequate planning, ocean transportation contracting practices that disproportionately increase risks for ocean carriers (who then factor those risks into freight rates), legal requirements, and inadequate coordination to systematically track and respond to food delivery problems, such as food spoilage or contamination. While U.S. agencies are pursuing initiatives to improve food aid logistics—such as prepositioning food commodities—their long-term cost-effectiveness has not yet been measured. In addition, the current practice of selling commodities as a means to generate resources for development projects—monetization—is an inherently inefficient use of food aid. Monetization entails not only the costs of procuring, shipping, and handling food, but also the costs of marketing and selling it in recipient countries. Furthermore, the time and expertise needed to market and sell food abroad requires NGOs to divert resources away from their core missions. In addition, U.S. agencies do not collect or maintain an electronic database on monetization revenues and the lack of such data impedes the agencies’ ability to fully monitor the degree to which revenues can cover the costs related to monetization. Transportation costs represent a significant share of food aid expenditures. For the largest U.S. food aid program (Title II), approximately 65 percent of expenditures are for transportation to the U.S. port for export, ocean transportation, in-country delivery, associated cargo handling costs, and administration. According to USAID, these noncommodity expenditures have been rising in part due to the increasing number of emergencies and the expensive nature of logistics in such situations. For all food aid programs, rising transportation and business costs have contributed to a 52 percent decline in average tonnage delivered over the last 5 years. 
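Two beneficiary figures appear later in this section: every $10 per metric ton reduction in freight rates could feed almost 850,000 more people during an average hungry season, and roughly $2 million could feed more than 57,000 additional beneficiaries. The following back-of-envelope sketch shows how those figures relate; the annual tonnage value is an assumption for illustration, not a number from this report.

```python
# Cross-check of the report's beneficiary arithmetic. The implied cost per
# person is derived from the report's pairing of ~$2 million with more than
# 57,000 beneficiaries; ANNUAL_TONNAGE_MT is an assumed figure, not a
# value stated in the report.

ANNUAL_TONNAGE_MT = 3_000_000  # assumed annual U.S. food aid tonnage

# $2 million feeds >57,000 people for a hungry season -> about $35/person.
implied_cost_per_person = 2_000_000 / 57_000

def extra_beneficiaries(rate_cut_per_mt):
    """People fed for one hungry season with the savings from a freight-rate cut."""
    savings = rate_cut_per_mt * ANNUAL_TONNAGE_MT
    return savings / implied_cost_per_person

# Under the assumed tonnage, a $10/mt cut frees about $30 million, or
# roughly 850,000 additional beneficiaries, consistent with the report.
```

The point of the sketch is only that the report's two figures are mutually consistent at a plausible program scale, not that these inputs are the ones GAO used.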
To examine procurement costs (expenditures on commodities and ocean transportation) for all U.S. food aid programs, we obtained KCCO procurement data for fiscal years 2002 through 2006. KCCO data also suggest that ocean transportation has been accounting for a larger share of procurement costs, with average freight rates rising from $123 per metric ton in fiscal year 2002 to $171 per metric ton in fiscal year 2006 (see fig. 5). Further, U.S. food aid ocean transportation costs are relatively expensive compared with those of some other donors. WFP transports both U.S. and non-U.S. food aid worldwide at reported ocean freight costs averaging around $100 per metric ton—representing just over 20 percent of its total procurement costs. At current U.S. food aid budget levels, every $10 per metric ton reduction in freight rates could feed almost 850,000 more people during an average hungry season. Delivering U.S. food aid from vendor to village is also a time-consuming task, requiring on average 4 to 6 months. Food aid purchasing processes and sample time frames are illustrated in figure 6. While KCCO purchases food aid on a monthly basis, it allows implementing partner orders to accumulate for 1 month prior to purchase in order to buy in scale. KCCO then purchases the commodities and receives transportation offers leading to awards of transportation contracts over the following month. Commodity vendors bag the food and ship it to a U.S. port for export during the next 1 to 2 months. After an additional 40 to 50 days for ocean transportation to Africa, for example, the food arrives at an overseas port, where it is trucked or railroaded to the final distribution location over the next few weeks. While agencies have in some cases tried to expedite food aid delivery, the entire logistics process often lacks the timeliness required to meet humanitarian needs in emergencies and may at times result in food spoilage. Additionally, the largest tonnage of U.S. 
food aid is purchased during August and September. Average tonnage purchased during the fourth quarter of the last 5 fiscal years has exceeded that purchased during the second and third quarters by more than 40 percent. Given a 6-month delivery window, these food commodities do not arrive in country in most cases until the end of the peak hungry season (from October through January in southern Africa, for example). Food aid logistics are costly and time-consuming for a variety of reasons. First, uncertain funding processes for emergencies can result in bunching of food aid purchases, which increases food and transportation costs and lengthens delivery time frames. Many experts, officials, and stakeholders emphasized the need for improved logistical planning. Second, ocean transportation contracting practices—such as freight and payment terms, claims processes, and time penalties—further increase ocean freight rates and contribute to delivery delays. Third, legal requirements such as cargo preference can increase delivery costs. Although DOT reimburses food aid agencies for certain transportation expenditures, the sufficiency of reimbursement levels varies and officials disagree on whether the levels are sufficient to cover the additional costs of such requirements. Fourth, when food delivery problems arise, such as food spoilage or contamination, U.S. agencies and stakeholders lack adequately coordinated mechanisms to systematically track and respond to complaints. Uncertain funding processes, combined with reactive and insufficiently planned procurement, increase food aid delivery costs and time frames. Food aid emergencies are increasingly common and now account for 70 percent of USAID program expenditures. To respond to sudden-onset emergencies—such as Afghanistan in 2002; Iraq in 2003; Sudan, Eritrea, and Ethiopia in 2005; and Sudan and the Horn of Africa in 2006—U.S. 
agencies largely rely on supplemental appropriations and the Bill Emerson Humanitarian Trust (BEHT) to augment annual appropriations by up to a quarter of their budget. Figure 7, for example, illustrates that USAID supplemental appropriations and other funding in addition to its annual appropriations have varied from lows of $270 million in fiscal year 2002 and $350 million in fiscal year 2006 to over $600 million annually in fiscal years 2003 and 2005. Agency officials and implementing partners told us that the uncertainty of whether, when, and at what levels supplemental appropriations would be forthcoming hampers their ability to plan both emergency and nonemergency food aid programs on a consistent, long-term basis and to purchase food at the best price. Although USAID and USDA instituted multiyear planning approaches in recent years, uncertain supplemental funding has caused them to adjust or redirect funds from prior commitments, according to agency officials. Agencies and implementing organizations also face uncertainty about the availability of BEHT funds. As of January 2007, BEHT held about $107.2 million in cash and around 915,350 metric tons of wheat valued at $133.9 million—a grain balance that could support two major emergencies based on an existing authority to release up to 500,000 metric tons per fiscal year and another 500,000 metric tons of commodities that could have been, but were not, released from previous fiscal years. Although the Secretary of Agriculture and the USAID Administrator have agreed that the $341 million combined value of commodity and cash currently held in BEHT is more than adequate to cover expected usage over the current authorization period, the authorization is scheduled to expire on September 30, 2007. Resources have been drawn from BEHT on 12 occasions since 1984 (see fig. 8). For example, in fiscal year 2005, $377 million from the trust was used to procure 700,000 metric tons of commodities for Ethiopia, Eritrea, and Sudan. 
However, experts and stakeholders with whom we met noted that the trust lacks an effective replenishment mechanism—withdrawals from BEHT must be reimbursed by the procuring agency or by direct appropriations for reimbursement, and legislation establishing BEHT capped the annual amount of reimbursement from P.L. 480 at $20 million. Inadequately planned food and transportation procurement reflects the uncertainty of food aid funding. As previously discussed, KCCO purchases the largest share of food aid tonnage during the last quarter of each fiscal year. This bunching of procurement occurs in part because USDA requires 6 months to approve programs and/or because funds for both USDA and USAID programs may not be received until the middle of a fiscal year (after OMB has approved budget apportionments for the agencies) or through a supplemental appropriation. USAID officials stated that they have reduced procurement bunching through improved cash flow management. Although USAID has had more stable monthly purchases in fiscal years 2004 and 2005, total food aid procurement has not been consistent enough to avoid the higher prices associated with bunching. Higher food and transportation prices result from procurement bunching as suppliers try to smooth earnings by charging higher prices during their peak seasons and as food aid contracts must compete with seasonally high commercial demand. According to KCCO data for fiscal years 2002 through 2006, average commodity and transportation prices were each $12 to $14 per metric ton higher in the fourth quarter than in the first quarter of each year. Procurement bunching also stresses KCCO operations and can result in costly and time-consuming congestion for ports, railways, and trucking companies. While agencies face challenges to improving procurement planning given the uncertain nature of supplemental funding in particular, stakeholders and experts emphasized the importance of such efforts. 
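The quarterly premiums reported above give a rough sense of what bunching costs. In the sketch below, only the $12 to $14 per metric ton premium range comes from the KCCO data cited in this section; the fourth-quarter tonnage is an assumed figure for illustration.

```python
# Illustrative cost of procurement bunching. The premiums use the midpoint
# of the reported $12-$14 per metric ton fourth-quarter increase for
# commodities and for transportation; Q4_TONNAGE_MT is an assumption, not
# a figure from the report.

COMMODITY_PREMIUM_PER_MT = 13.0  # $/mt, midpoint of reported $12-$14 range
TRANSPORT_PREMIUM_PER_MT = 13.0  # $/mt, midpoint of reported $12-$14 range
Q4_TONNAGE_MT = 1_000_000        # assumed fourth-quarter purchase tonnage

extra_cost = (COMMODITY_PREMIUM_PER_MT + TRANSPORT_PREMIUM_PER_MT) * Q4_TONNAGE_MT
# About $26 million in seasonal-premium costs under these assumptions.
```

Even at this hypothetical scale, smoothing purchases across quarters would free tens of millions of dollars, which is why carriers and stakeholders repeatedly raised bunching in interviews.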
For example, 11 of the 14 ocean carriers we interviewed stated that reduced procurement bunching could greatly reduce transportation costs. When asked about bunching, agency officials, stakeholders, and experts suggested the following potential improvements: Improved communication and coordination. KCCO and WFP representatives suggested that USAID and USDA improve coordination of purchases to reduce bunching. KCCO has also established a web-based system for agencies and implementing organizations to enter up to several years’ worth of commodity requests. However, implementing organizations are currently only entering purchases for the next month. Additionally, since the statute that established the Food Aid Consultative Group does not specify including transportation stakeholders, DOT officials and ocean carriers strongly recommended establishing a formal mechanism for improving coordination and transportation planning. Increased flexibility in procurement schedules. USAID expressed interest in an additional time slot each month for food aid purchases. Several ocean carriers expressed interest in shipping food according to cargo availability rather than through preset shipping windows that begin 4 weeks and 6 weeks after each monthly purchase. Although KCCO has established shipping windows to avoid port congestion, DOT representatives believe that carriers should be able to manage their own schedules within required delivery time frames. Increased use of historical analysis. DOT representatives, experts, and stakeholders emphasized that USAID and USDA should increase their use of historical analysis and forecasting to improve procurement. USAID has examined historical trends to devise budget proposals prepared 2 years in advance, and it is now beginning to use this analysis to improve timing of procurement. 
However, neither USAID nor USDA has used historical analysis to establish more efficient transportation practices, such as the long-term agreements commonly used by DOD. For example, WFP is now using forecasting to improve purchasing patterns through advanced financing but is unable to use this financing for U.S. food aid programs due to legal and administrative constraints. Ocean transportation contracting practices are a second factor contributing to higher food aid costs. DOT officials, experts, and ocean carriers emphasized that commercial transportation contracts include shared risk between buyers, sellers, and ocean carriers. In food aid transportation contracts, risks are disproportionately placed on ocean carriers, discouraging participation and resulting in expensive freight rates. Examples of costly contracting practices include the following. Noncommercial and nonstandardized freight terms. Food aid contracts often define freight terms differently than commercial contracts and place increased liability on ocean carriers. For example, many food aid contracts hold ocean carriers responsible for logistical problems such as improperly filled containers that may occur at the load port before they arrive. Many food aid contracts also hold ocean carriers responsible for logistical problems, such as truck delays or improper port documentation, that may occur at the discharge port after they arrive. One carrier reported financial losses of around $1 million for an instance where, to be able to deliver food aid to a port in Madagascar, the carrier was required to wait almost 60 days for a vessel already at port to finish unloading and to assist the government in repairing port discharging equipment. Further, several carriers reported that food aid contracts are not sufficiently standardized. 
Although USDA and USAID created a standard contract for nonbulk shipments, contracts for bulk shipments (which accounted for 63 percent of total food aid tonnage in fiscal year 2006) have not yet been standardized. To account for risks that are unknown or outside their control, carriers told us that they charge higher freight rates. Impractical time requirements. Food aid contracts may include impractical time requirements, but agencies disagree on how frequently this occurs. Although USAID officials reviewed contract time requirements and described them as reasonable, they also indicated that transportation delays often result from poor carrier performance and the diminishing number of ocean carriers participating in food aid programs. Several implementing organizations also complained about inadequate carrier performance. WFP representatives, for example, provided several examples of ocean shipments in 2005 and 2006 that were more than 20 days late. While acknowledging that transportation delays occur, DOT officials indicated that these delays often result from problems at a discharge port on the vessel’s previous food aid voyage. DOT officials also stated that although contract time requirements are being made more reasonable, some contracts still include requirements that are impossible for carriers to meet. For example, one carrier complained about a contract that required the same delivery date for four different ports. When carriers do not meet time requirements, they must often pay costly penalties. Carriers reported that they review contracts in advance and, where time requirements are deemed implausible, factor the anticipated penalty into the freight rate. While agencies do not systematically collect data on time requirements and penalties associated with food aid contracts, DOT officials examined a subset of contracts from December 2005 to September 2006 and estimated that 13 percent of them included impractical time requirements. 
Assuming that the anticipated penalties specified in the contracts analyzed were included in freight rates, food aid costs may have increased by almost $2 million (monies that could have been used to provide food to more than 57,000 additional beneficiaries during a typical hungry season). Lengthy claims processes. Lengthy processes for resolving transportation disputes discourage both carriers and implementing organizations from filing claims. According to KCCO officials, obtaining needed documentation for a claim can require several years, and disputed claims must be resolved by the Department of Justice. USAID’s Inspector General reported that inadequate and irregular review of claims by USAID and USDA has also contributed to delayed resolution. Currently, KCCO has over $6 million in open claims, some of which were filed prior to fiscal year 2001. For ocean carriers, the process is burdensome and encourages them to factor potential losses into freight rates rather than pursue claims. Incentives for most implementing organizations are even weaker given that monies recovered from claims reimburse the overall food aid budget rather than the organization that experienced the loss. According to KCCO and WFP officials, transportation claims are filed for less than 2 percent of cargo. However, several experts and implementing organizations suggested that actual losses are likely higher. In 2003, KCCO proposed a new administrative appeals process for ocean freight claims that would establish a hearing officer within USDA and a 285-day time frame. While DOT and some carriers agreed that a faster process was needed, DOT officials suggested that the claims review process should include hearing officers outside of USDA to ensure independent findings. To date, KCCO’s proposed process has not been implemented. Lengthy payment time frames and burdensome administration. Payment of food aid contracts is slow and paperwork is insufficiently streamlined. 
When carriers are not paid for several months, they incur large interest costs that are factored into freight rates. While a new electronic payment system has enabled USDA to provide freight payments within a few weeks, several ocean carriers complained that USAID often requires 2 to 4 months to provide payment, though USAID officials dispute this claim. USAID officials also asserted that the electronic payment system used by USDA is too expensive, and they are considering other payment options. In addition to payment issues, a few carriers suggested that paperwork in general needs streamlining and modernization. The 2002 Farm Bill required both USDA and USAID to pursue streamlining initiatives that the agencies are implementing. KCCO officials indicated that they are updating food aid information technology systems (to be in place in fiscal year 2009). In structured interviews, ocean carriers confirmed the cost impact of food aid transportation contracting practices. Figure 9 shows that depending upon the practice, between 9 (64 percent) and 14 (100 percent) of the carriers reported increased costs, with “liabilities outside the carriers’ control” as the most significant factor. To quantify the impact, two carriers estimated that nonstandardized freight terms increase costs by about 5 percent (about $8 per metric ton), while another carrier suggested that slow payment increases costs by about 10 percent (about $15 per metric ton). Figure 9 also shows that a large percentage of carriers strongly recommended actions to address contracting practices. Legal requirements governing food aid procurement are a third factor that can increase delivery costs and time frames, with program impacts dependent on the sufficiency of associated reimbursements. In awarding contracts, KCCO must meet various legal requirements, such as cargo preference and the Great Lakes Set-Aside. Each requirement may result in higher commodity and freight costs. 
Cargo preference laws, for example, require 75 percent of food aid to be shipped on U.S.-flag carriers, which are generally more expensive than foreign-flag carriers by an amount known as the ocean freight differential (OFD). The total annual value of this cost differential between U.S.- and foreign-flag carriers averaged $134 million from fiscal years 2001 to 2005 (see fig. 10). DOT reimbursements have varied from $126 million in fiscal year 2002 to $153 million in fiscal year 2005. However, USAID officials expressed concern that the OFD calculations do not fully account for the costs of cargo preference or the uncertainties regarding its application. For example, several U.S. agency and port officials believe that cargo preference regulations discourage foreign-flag participation in the program due to the small percentage of cargo that can be shipped on foreign-flag vessels. OFD reimbursements do not include shipments for which a foreign-flag vessel has not submitted a bid or for the additional costs of shipping on U.S.-flag vessels that are older than 25 years (about half of the vessels). USAID officials estimated that for Title II programs, the actual cost of cargo preference in fiscal year 2003 exceeded the total OFD cost by about $50 million due to these factors. DOT officials estimated these additional costs for Title II at about $34 million in fiscal year 2004 and about $56 million in fiscal year 2005. Finally, USAID and DOT officials have not yet agreed on whether cargo preference applies to shipments from prepositioning sites. U.S. agencies and stakeholders do not coordinate adequately to respond to food and delivery problems when they arise. USAID and USDA lack a shared, coordinated system to systematically track and respond to food quality complaints. Food quality concerns have been long-standing issues for both food aid agencies and the U.S. Congress. 
In 2003, for example, USAID’s Inspector General reported that some Ethiopian warehouses were in poor condition, with rodent droppings near torn bags of corn soy blend (CSB), rainwater seepage, pigeons flying into one warehouse, and holes in the roof of another. Implementing organizations we spoke with also frequently complained about receiving heavily infested and contaminated cargo. For example, in Durban, South Africa, in October 2006, we saw 1,925 metric tons of heavily infested cornmeal that arrived late in port after being erroneously shipped to other countries first. As shown in figure 11, we found live and dead insects in bags of cornmeal, along with their nests. NGOs noted that some of the food had been in containers for as long as 78 days. This food could have fed over 37,000 people during a typical hungry season. When food arrives heavily infested, NGOs hire a surveyor to (1) determine how much is salvageable for human consumption or for use as animal feed and (2) destroy what is deemed unfit. U.S. agencies and food aid stakeholders face a variety of coordination challenges in addressing such food delivery problems, including the following: KCCO, USDA, and USAID have disparate quality complaint tracking mechanisms that monitor different levels of information. As a result, they are unable to determine the extent of and trends in food quality problems. In addition, because implementing organizations track food quality concerns differently, if at all, it is difficult for them to coordinate to share concerns with each other and with U.S. government agencies. For example, since WFP—which accounts for approximately 60 percent of all U.S. food aid shipments—independently handles its own claims, KCCO officials are unable to track the quality of food aid delivery programwide.
Agencies and stakeholders have suggested that food quality tracking and coordination could be improved if USAID and USDA shared the same database and created an integrated food quality complaint reporting system. Agency country offices are often unclear about their roles in tracking food quality, creating gaps in monitoring and reporting. For example, USAID found that some missions do not clearly understand their responsibilities to independently verify claims stemming from food spoilage and often rely on the implementing organization to research the circumstances surrounding losses. One USAID country office also noted that rather than tracking all food quality problems reported, it only recorded and tracked commodity losses for which an official claim had been filed. Further, in 2004, USAID’s Inspector General found that USAID country offices were not always adequately following up on commodity loss claims to ensure that they were reviewed and resolved in a timely manner. To improve food quality monitoring, agencies and stakeholders have suggested updating regulations to include separate guidance for complaints, as well as developing a secure Web site for all agencies and their country offices to use to track both complaints and claims. When food quality issues arise, there is no clear and coordinated process to resolve problems. For example, WFP officials stated that they experienced coordination issues with USAID in 2003 when they received 4,200 metric tons of maize from USAID in Angola and found a large quantity to be wet and moldy. Although USAID officials maintain that their response was timely, WFP officials stated that USAID did not provide timely guidance on how WFP would be reimbursed for testing and destruction of cargo that was not fit for consumption and how USAID would replace the quantity lost. 
WFP officials claim that WFP lost over $640,000 in this case, including testing and destruction costs and the value of the commodity, and no replacement cargo was provided by USAID. Although KCCO established a hotline to provide assistance on food quality complaints, KCCO officials stated that it was discontinued because USDA and USAID officials wanted to receive complaints directly, rather than from KCCO. Agencies and stakeholders have suggested that providing a standard questionnaire to implementing organizations would ensure more consistent reporting on food quality issues. To improve timeliness in food aid delivery, USAID has been prepositioning commodities in two locations and KCCO is implementing a new transportation bid process. Prepositioning enabled USAID to respond more rapidly to the 2004-2005 Asian tsunami emergency than would have been possible otherwise. KCCO’s bid process is also expected to reduce delivery time frames and ocean freight rates. However, the long-term cost- effectiveness of both initiatives has not yet been measured. USAID has prepositioned food aid on a limited basis to improve timeliness in delivery. USAID has used warehouses in Lake Charles (Louisiana) since 2002 and Dubai (United Arab Emirates) since 2004 to stock commodities in preparation for food aid emergencies, and it is now adding a third site in Djibouti, East Africa. USAID has used prepositioned food to respond to recent emergencies in Lebanon, Somalia, and Southeast Asia, among other areas. Prepositioning is beneficial because it allows USAID to bypass lengthy procurement processes and to reduce transportation time frames. USAID officials told us that diverting food aid cargo to the site of an emergency before it reaches a prepositioning warehouse further reduces response time and eliminates storage costs. 
When the 2004 Asian tsunami struck, for example, USAID quickly provided 7,000 metric tons of food to victims by diverting the carrier at sea, before it reached the Dubai warehouse. According to USAID officials, prepositioning warehouses also offer the opportunity to improve logistics when USAID is able to begin the procurement process before an emergency occurs or if it is able to implement long-term agreements with ocean carriers for tonnage levels that are more certain. Despite its potential for improved timeliness, USAID has not studied the long-term cost-effectiveness of prepositioning. Table 1 shows that over fiscal years 2005 and 2006, USAID purchased about 200,000 metric tons of processed food for prepositioning (around 3 percent of total food aid tonnage), diverted about 36,000 metric tons en route, and incurred contract costs of about $1.5 million for food that reached the warehouse (averaging around $10 per metric ton). In addition to contract costs, ocean carriers generally charge higher freight rates for prepositioned cargo to account for additional cargo loading or unloading, additional days at port, and additional risk of damage associated with cargo that has undergone extra handling. USAID officials have suggested that average freight rates for prepositioned cargo could be $20 per metric ton higher. In addition to the costs of prepositioning, agencies face several challenges to their effective management of this program, including the following: Food aid experts and stakeholders expressed mixed views on the appropriateness of current prepositioning locations. Only 5 of the 14 ocean carriers we interviewed rated existing sites positively, and most indicated interest in alternative sites. KCCO officials and experts also expressed concern with the quality of the Lake Charles warehouse and the lack of ocean carriers providing service to that location. For example, many carriers must move cargo by truck from Lake Charles to Houston before shipping it. 
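The per-ton contract cost reported from table 1 can be roughly reproduced from the totals above. This is a hedged sketch: the tonnage that reached the warehouses is inferred by subtracting diverted cargo from purchases, an assumption the report does not state explicitly:

```python
# Rough reconstruction of the "around $10 per metric ton" average contract
# cost for prepositioned food over fiscal years 2005 and 2006.
purchased_mt = 200_000        # processed food purchased for prepositioning
diverted_mt = 36_000          # diverted en route, never reached a warehouse
contract_cost = 1_500_000     # contract costs for food that reached the warehouse

# Assumption: food reaching the warehouse = purchases minus diversions.
reached_warehouse_mt = purchased_mt - diverted_mt   # 164,000 mt (inferred)
cost_per_mt = contract_cost / reached_warehouse_mt  # ~$9.15/mt, i.e. "around $10"

# USAID officials' suggested freight premium for prepositioned cargo.
freight_premium_per_mt = 20
total_extra_per_mt = cost_per_mt + freight_premium_per_mt  # ~$29/mt combined
print(round(cost_per_mt, 2), round(total_extra_per_mt, 2))
```

Under this assumption, the combined contract and freight premium for prepositioned cargo would be on the order of $30 per metric ton.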
Relative to other ports, shipping out of the Lake Charles prepositioning site can add as much as 21 days for delivery. Inadequate inventory management increases the risk of cargo infestation. KCCO and port officials suggested that USAID had not consistently shipped older cargo out of the warehouses first. USAID officials emphasized that inventory management has been improving but that limited monitoring and evaluation funds constrain their oversight capacity. For example, the current USAID official responsible for overseeing the Lake Charles prepositioning stock was able to visit the site only once in fiscal year 2006—at his own expense. Agencies have had difficulties ensuring phytosanitary certification for prepositioned food because they do not know the country of final destination when they request phytosanitary certification from APHIS. According to USDA, since prepositioned food is not imported directly from a U.S. port, it requires either a U.S.-reissued phytosanitary certificate or a foreign-issued phytosanitary certificate for re-export. USDA officials told us they do not think it is appropriate to reissue these certificates—once a food aid shipment leaves the United States, they cannot make any statements about the phytosanitary status of the commodities, which may not meet the entry requirements of the destination country. USDA officials are also concerned that USAID will store commodities for a considerable period of time during which their status may change, thus making their certificate invalid. Although USDA and USAID officials are willing to allow foreign government officials to issue these certificates, U.S. inspection officials remain concerned that the foreign officials might not have adequate resources for inspection or be willing to certify these commodities. Without phytosanitary certificates, food aid shipments could be rejected, turned away, or destroyed by recipient country governments.
Certain regulations applicable to food aid create challenges for improving supply logistics. For example, food aid bags must include various markings reflecting contract information, when the commodity should be consumed, and whether the commodity is for sale or direct distribution. Marking requirements vary by country (some require markings in local languages), making it difficult for USAID to divert cargo. Also, due to the small quantity of total food aid tonnage (around 3 percent) allocated for the prepositioning program, USAID is unable to use the program to consistently purchase large quantities of food aid earlier in the fiscal year. In addition to prepositioning, KCCO is implementing a new transportation bid process to reduce procurement time frames and increase competition between ocean carriers. In the prior two-step system, during a first procurement round, commodity vendors bid on contracts and ocean carriers indicated potential freight rates. Carriers provided actual rate bids during a second procurement round once the location of the commodity vendor had been determined. In the new one-step system, ocean carriers will bid at the same time as commodity vendors. KCCO expects the new system to cut 2 weeks from the procurement process and provide potential annual savings of around $25 million. KCCO expects this new bid process to also reduce cargo handling costs as cargo loading becomes more consolidated. When asked about the new system, several carriers reported uncertainty about its future impact and expressed concern that USDA’s testing of the system had not been sufficiently transparent. Despite efforts to improve the efficiency of the delivery of U.S. food aid, the current use of food aid as a means to raise cash to fund development projects—a practice known as monetization—is inherently inefficient.
Besides procurement and shipping costs, NGOs involved in monetization programs often incur additional costs for marketing food commodities in recipient countries. Furthermore, NGOs must maintain the expertise necessary to sell and market food aid abroad, which diverts resources from their core missions. The permissible use of monetization revenues has expanded beyond its original intent over the years. Although monetization was initially established to fund expenses related to direct food aid delivery for humanitarian purposes, it now funds projects ranging from rural financing to health services. Additionally, U.S. agencies do not collect data electronically, and the lack of such data impedes their ability to monitor the extent to which monetization revenues can cover the costs. Monetizing food to fund development projects is an inherently inefficient use of food aid. Monetization requires food to be procured, shipped, and eventually sold—incurring costs at each step in the process. Furthermore, although bulk products comprise a larger proportion of monetized food aid, they have higher transportation costs relative to their market price in recipient countries than nonbulk (processed) products. For example, the ratio of transportation cost to market price for bulk wheat is more than three times that of vegetable oil. In addition to shipping and handling costs, the process of generating cash from selling food is inefficient because it also requires NGOs to maintain the capacity necessary to sell and market food aid, diverting them from their core missions. In its 2001 report to Congress on Food Aid Monetization, USDA stated that the increasing involvement of NGOs in implementing food aid programs has required these organizations to seek expertise in all facets of commodity sales and cope with price, exchange rate, and other uncertainties, which has affected the way in which they operate. 
Noting that NGOs have differing missions and backgrounds and vary in size and scope of operations, the USDA report stated that some NGOs view the monetization process as “inconvenient but necessary to generate program development funds.” However, some NGOs would prefer to end their involvement in monetization. For example, CARE, one of the major NGOs engaged in the practice, decided to transition out of it by 2009 partly because “monetization requires intensive management and is fraught with risks. Procurement, shipping, commodity management, and commercial transactions are management intensive and costly. Experience has shown that these transactions are also fraught with legal and financial risks.” Some participants at the GAO roundtable on food aid stated that they recognize that monetization is not an efficient way to raise development money, but they pointed out that it is the only available resource to supplement food aid and enhance food security and other development projects. The permissible use of monetization revenues and the minimum level of monetization allowed by the law have expanded, contributing to an increasing use of monetization as a means to generate cash for development projects. While monetization was initially established to pay for administrative costs related to direct food distribution, monetization revenues now fund development activities beyond food distribution that aim to improve food security. Examples include the following. Title II monetization revenues can be used to implement income- generating, community development, health, nutrition, cooperative development, agricultural, and other development activities. Revenues can also be invested, and any interest earned on such investments may be used for the same purposes. 
Food for Progress monetization revenues can be used for private sector agricultural development through improved agricultural techniques, marketing systems, farmer education, and cooperative development; enhanced food processing capacity; introduction of new foods; or agricultural-related business growth. Monetization has also been used on rare occasions to achieve objectives that may be beyond the scope of direct food delivery. USAID officials informed us of a case in which monetization was intentionally used to help increase access to food for the urban poor in Zimbabwe. The program involved subsidized sales of sorghum meal in poor areas of a few selected cities. The main goal was not to generate revenue but to provide affordable staple foods to households in urban areas where conventional food aid distribution programs were not practical or appropriate. The monetization rate for Title II nonemergency food aid has far exceeded the minimum requirement of 15 percent, reaching close to 70 percent in 2001 but declining to about 50 percent in 2005. This decline is due to both increasing demand for emergency food aid and OMB’s 2002 recommendation to decrease monetization, according to USAID officials. OMB pointed out that monetization can impede U.S. commercial exports, lower market prices, induce black market activity, and thwart market development for U.S. farm products. OMB also raised questions about the economic efficiency of the practice. Furthermore, in 2002 The President’s Management Agenda suggested that directly feeding the hungry, rather than providing food for development, should be the primary goal of U.S. food aid programs. Figure 12 shows the average share of nonemergency food aid funding that different programs used for monetization from fiscal years 2001 through 2006. U.S. agencies do not electronically collect data on monetization revenues. Without such data, the agencies’ ability to adequately monitor the degree to which revenues cover costs is impeded.
USAID used to require that monetization revenues cover at least 80 percent of costs associated with delivering food to recipient countries, but this requirement no longer exists. Neither agency was able to provide us with data on the revenues generated through monetization. The agencies told us that the information should be in the results reports, which are in individual hard copies and not available in any electronic database. We have expressed similar concerns about the limited oversight of monetization revenues in our 2002 review of the McGovern-Dole Food for Education program. USAID officials told us that they believe NGOs have incentives to generate the maximum amount of resources possible from monetization and, therefore, the officials are not concerned about monitoring revenue data. However, some NGOs may not have sufficient expertise in commodity trading to ensure that they are selling food at the best possible price. In addition, due to insufficient market expertise or delivery delays, monetization revenues can also be reduced when NGOs sell the commodity at a time when market supplies have grown. For example, selling Title I- and Title II-funded wheat simultaneously in Mozambique in 2002 flooded the market and decreased food prices, resulting in reduced monetization revenues. A number of challenges reduce the effectiveness of food aid in alleviating hunger. Since food aid is limited, it is important that donors and implementers use it effectively by ensuring that it reaches the most vulnerable populations and does not cause negative market impact. However, a number of factors limit efforts to develop reliable estimates of food needs and respond to crises in a timely manner. 
These include challenging operating environments in recipient countries, insufficient coordination among stakeholders and use of noncomparable assessment methods, difficulties in identifying vulnerable groups (such as chronic versus transitory food-insecure populations) and understanding the causes of food insecurity, and resource constraints that adversely affect the quality of assessments and quantity of food and other assistance. Consequently, estimates of food needs have differed significantly and, in some cases, have resulted in delays in appropriately responding to crises with sufficient food and complementary assistance. Furthermore, some impediments to improving the nutrition quality of U.S. food aid, including the lack of interagency coordination to update food aid products and specifications, may prevent the most nutritious or appropriate food from reaching intended recipients. Despite these concerns, USAID and USDA do not sufficiently monitor food aid programs, particularly in recipient countries, as they have limited staff and competing priorities and face legal restrictions on the use of food aid resources. U.S. food aid assists only about 11 percent of the estimated hungry population worldwide. In light of the significant need for food aid, it is critical that this assistance be used effectively by ensuring that the right food reaches the right people at the right time. Generally, the most food-insecure populations include poor households with elderly people, young children (especially those under 5 years of age), pregnant and lactating women, and the chronically sick (e.g., people with HIV/AIDS). To provide food to these vulnerable populations, agencies and stakeholders target food aid resources. Targeting involves assessments of needs, program planning to reach vulnerable households with adequate food, implementing the distribution of food, and monitoring these programs. (Figure 13 illustrates these elements of the targeting process.)
The timing of food delivery is a key factor that impacts targeting effectiveness. Timely provision of food aid will not only save lives during an emergency, but also help to avert crises that may result from increasing vulnerability. To focus on the vulnerability of food-insecure populations, USAID discussed the concept of development relief in its Food Security Strategic Plan for 2006-2010, whereby programs dealing with emergencies would also address the underlying causes of emergencies and development programs would help vulnerable people improve their ability to prevent and cope with future emergencies. Enhancements to early warning systems, such as the USAID-funded Famine Early Warning System Network, and efforts to better understand the livelihoods of vulnerable populations have contributed to improved information on the needs of vulnerable populations, according to officials from implementing organizations and USAID. In addition to ensuring effective use of food aid resources, accurate targeting can reduce the potential adverse impact of food aid on recipient country markets. (See app. III for more information on the impact of food aid on local markets.) When food aid is distributed during a food shortfall to people who would not otherwise be able to purchase food, markets may remain unaffected. In the case of food shortfalls, food aid may actually serve to bring supply back to levels that would have occurred in the absence of the shortage and help limit price increases. However, when food aid is sent in response to a food shortfall but arrives while food is readily available—such as after the hungry season—and is distributed to people who can otherwise purchase food, it increases total food supplies above normal market levels. Additionally, in such cases, the food aid may decrease market prices and the incomes of food producers in recipient countries.
These low prices could decrease agricultural investments and reduce the return on labor allocated to agriculture. While food aid may lower prices, it may also increase income for recipients. For example, according to one study, distribution of food aid to households in northern Ethiopia during the hungry season actually increased household purchasing power and contributed to increased agricultural productivity. Various factors limit the ability of U.S. agencies to ensure that food aid is directed to the most vulnerable populations. First, challenging operating environments, characterized by poor infrastructure and concerns about physical safety and security, have limited access to vulnerable groups and caused delays in providing food aid. Inadequate recipient government participation and human resource constraints also contribute to insufficient assistance to vulnerable people. Second, weak in-country coordination among key stakeholders and the use of noncomparable methods in assessing food needs have resulted in significantly different estimates and delays in donor assistance. Additionally, assessments have not been used sufficiently to inform food aid programs. Third, difficulties in identifying vulnerable populations and understanding the causes of their food insecurity contributed to the lack of timely and appropriate response in some instances. For example, it has been challenging for implementing organizations to determine the causes of chronic food insecurity and provide appropriate assistance. Fourth, resource constraints have affected the quality and timeliness of assessments as well as the quantity of food and other related assistance provided to vulnerable populations. Difficult operating environments characterized by poor infrastructure and threats to physical safety, as well as the limited participation and capacity of recipient governments, have impeded access to and the timely delivery of food aid to the most vulnerable populations.
In 2003, we reported on the southern Africa food crisis, noting that long-standing weaknesses in transportation infrastructure across the region hampered timely delivery of food aid where it was needed. Access to intended recipients in villages was further hindered during the rainy seasons when many village roads became impassable. Due to concerns about physical safety and security, the timely provision of food aid to recipients has been especially difficult in regions experiencing war and conflict. We recently reported that frequent violence, continued conflict, and an increase in attacks on humanitarian staff in the Darfur region of Sudan limited the ability of implementing organizations to access parts of the region and provide food and other assistance to vulnerable populations, such as internally displaced persons. As a result, approximately 460,000 people in northern Darfur were cut off from emergency food aid in July 2006, and 355,000 people still did not receive food aid in August 2006, according to UN sources. Limited recipient government participation has contributed to insufficient coverage of vulnerable populations. In late 2006, while donors were providing assistance to support the food needs of Zambians, the government continued to hold large quantities of its food stocks—approximately 350,000 metric tons—in its emergency reserve, according to Zambian officials. Even in cases where recipient governments are participating, lack of human resources and financial capacity can limit overall efforts to target vulnerable populations. For example, while the governments of Ethiopia and Kenya are involved in coordinating the food aid efforts of donors and implementers, several implementing organizations expressed concerns about the governments’ human resource capacity at the district and village level to effectively contribute to planning and implementing food aid programs.
According to a number of USAID-approved proposals for Ethiopia, a lack of government staffing and skills combined with high turnover rates posed a significant challenge to implementing food aid projects. USAID officials acknowledged these concerns and noted that the government of Ethiopia is addressing these deficiencies by providing training to staff at all levels of the government. Additionally, all Title II-funded NGOs in Ethiopia have received resources for capacity building and training as part of their agreements with USAID. Insufficient coordination among key stakeholders and the use of noncomparable methods has resulted in disparate assessments of food needs and numbers of recipients, although some efforts are under way to improve coordination. Officials of various implementing organizations we interviewed in Ethiopia, Kenya, Zambia, and South Africa identified lack of coordination on assessments, especially with recipient governments, as one of the key challenges to accurately assessing the needs of vulnerable populations. According to an NGO official in Zambia, the Zambian government and NGOs conducted two parallel but separate assessments in 2005 that resulted in significantly different estimates. This discrepancy led to a 6-month delay in declaring an emergency while the difference in assessment results was resolved. Some recipient governments have increased their efforts to ensure coordination on assessments between stakeholders; however, estimates of food needs have sometimes differed significantly because the stakeholders use different methods and estimating assumptions. For example, although the Ethiopian government's Disaster Prevention and Preparedness Agency coordinates with donors and implementing organizations in conducting assessments of food needs, their assessments varied significantly in 2004. 
Specifically, WFP estimated that 1.8 million people would need food assistance, while the government of Ethiopia estimated that 700,000 fewer people (1.1 million) would need assistance. Donors we interviewed in Ethiopia stated that the host government has tended to lower food need estimates based on its view of what donors are likely to fund. They noted that an earlier assessment in 2006, which was led by the government but involved other stakeholders, underestimated the number of potential beneficiaries by 1 million people. This significant underestimation created a humanitarian crisis, according to a senior UN official, and more emergency food was eventually requested. Implementing organizations have had to resort to measures, such as reducing ration size or shortening the duration of assistance, to provide food aid to a larger than estimated number of vulnerable households. Various implementing organizations have attributed a proliferation of assessment methods and approaches to a lack of coordination that can result in different estimates and delay donor response, especially during emergencies. Although USAID and NGOs have noted that multiple assessment methods and approaches are required to respond to different circumstances, noncomparable methods have resulted in disparate food need estimates. Donors and implementing organizations do not agree on definitions and common approaches to conducting assessments; according to USAID officials, this has resulted in inconsistent estimates that prevent timely donor responses, especially during emergencies. WFP’s Strengthening Emergency Needs Assessment Capacity (SENAC) initiative, launched in 2004, is aimed at addressing some of these concerns by developing better methods and guidance for assessments conducted during emergencies. However, USAID and other officials have expressed concerns about the limited involvement of NGOs in the SENAC process and its implementation in selected countries. 
Moreover, there is a lack of coordination among various NGOs, which tend to assess food needs differently, according to U.S. government officials. Some GAO roundtable participants stated that peer learning and information-sharing among implementing organizations had been further hampered by the dissolution in 2004 of Food Aid Management (FAM), a USAID-funded NGO that facilitated information sharing and development of food aid standards. Additionally, assessments have not been used sufficiently to inform food aid programs. According to WFP and NGO officials, estimates resulting from needs assessments have not, in many cases, driven donor response to impending or existing crises. Other factors—such as donors’ foreign policy objectives or media attention to a crisis—tend to determine the timing and level of donor assistance, according to these officials. However, donors and GAO roundtable participants have expressed concerns about the independence of assessors, because organizations such as WFP and NGOs generally conduct assessments and also implement programs based on their results. According to GAO roundtable participants, NGOs generally conduct assessments and propose projects in areas where they are already operating, which may introduce geographical gaps in the delivery of assistance and prevent food aid from reaching the most vulnerable areas. According to a USAID-funded study on Title II development food aid programs in 2002, although program assessments had advanced considerably and proposals described critical country-level food security problems, quantitative data collection and analysis at the local level were deficient. Our review of USAID- and USDA-approved proposals indicates that some proposed programs were based on assessments that identified specific criteria to target food aid, whereas other proposals justified programs based on general statements of need. 
For example, while proposals for a nationwide safety net program in Ethiopia generally identified districts based on high levels of chronic vulnerability, proposals for some other countries did not include adequate assessment information on the extent or severity of needs in areas proposed for food aid programs. Accurately identifying various types of vulnerable populations and the causes of their vulnerability has been difficult due to the complexity of factors—such as poverty, environmental degradation, and disease—that contribute to food insecurity. According to WFP officials in southern Africa, identifying people with HIV/AIDS who need food aid has been very difficult because the social stigma associated with the disease may discourage intended recipients from getting tested for it. It is also difficult to assess whether deterioration in health is due to hunger or the disease itself. Insufficient understanding of the causes of malnutrition and chronic food insecurity, as well as the role of local markets, has in some cases resulted in inaccurate assessment of and response to crises. According to WFP and USAID, assessments have focused too narrowly on food availability (such as food production in vulnerable countries) and not enough on factors that determine access to food (such as food prices in local markets) and effective use of food (such as health and sanitation practices). The 2005 food crisis in Niger, where about 1.8 million people received food aid, illustrated such a limitation in focus. According to WFP’s evaluation, donors as well as implementers focused too narrowly on food production and deficits and analyzed the causes of malnutrition insufficiently. As a result, the cause of the crisis was misdiagnosed as lack of food availability, when in fact it was caused by factors affecting the effective use of food, such as health and sanitation problems and poor water quality, according to a USAID analysis. 
Donors did not respond until May 2005, 3 months after the crisis reached emergency proportions in February 2005. Moreover, insufficient understanding of the causes of the crisis initially led to a disagreement between the recipient government and WFP on how to respond to the situation. As a result, the request for aid was revised seven times in the next 3 months, from May to August, and recipients finally received food in August 2005. Difficulties in the targeting process related to determining eligibility of recipients and appropriate food distribution activities have also been exacerbated because implementers have not developed or optimally used best practices and institutional knowledge. According to USAID officials in Kenya, there has been very limited analysis of which targeting approaches and activities are more appropriate to provide food aid in certain situations and how long these should be used. (See app. IV for food distribution activities to target different vulnerable groups.) According to a WFP evaluation of its targeting practices during emergency and relief operations, a more systematic analysis of WFP’s experience in targeting recipients is necessary to resolve recurring issues and improve this practice. Furthermore, WFP’s targeting approaches tend to depend on individual staff experience rather than organizationwide experience, according to the review. In part, this is because WFP had not yet developed a consolidated policy and comprehensive guidance material on targeting. Despite these limitations, there is some evidence that with experience, accuracy in providing food to intended recipients has generally improved at the country and program level. 
For example, according to several implementing organization officials in Ethiopia, during the first year of implementing a nationwide food and cash assistance program, targeting the most vulnerable populations was challenging because implementers did not adequately understand the eligibility criteria for recipients and selected better-off people in many cases. In the second year, however, targeting improved as program goals were more clearly communicated to implementers, who applied the recipient selection criteria more accurately. Limitations on the amount and use of resources have adversely affected the quality and timing of assessments, particularly for Title II-funded programs. According to USAID, NGO, and WFP officials we interviewed in the field, lack of sufficient resources is one of the main constraints to conducting accurate and reliable assessments. The U.S. agencies provide very limited or no resources to conduct assessments prior to the implementing organizations’ submission of proposals requesting food aid. This is because requests for cash for materials or activities related to U.S. food aid funding, such as assessments, must accompany requests for food commodities. Since cash is in effect tied to requests for commodities, the U.S. government cannot provide assistance for activities such as needs assessments that may enhance the use of food aid but may not require commodities at the same time. Due to such constraints, U.S. agencies have not provided financial assistance for WFP’s major initiative to improve needs assessments, although they have provided technical assistance. According to WFP officials we spoke with in South Africa, this lack of adequate financial support for assessments diminishes U.S. influence and input on how assessments are conducted. USAID officials stated that they would like to fund assessments using P.L. 480 Title II resources, but they are unable to do so because of legal restrictions related to such use of these funds. 
In addition to their impact on assessments, resource constraints have also limited the quantity of food and other complementary assistance that is provided to intended recipients. In 2003, we reported that due to the lack of adequate donor funding in Afghanistan, food rations to refugees and internally displaced persons were reduced to a third of the original planned amount, and program implementation was delayed by up to 10 weeks in some cases. During our fieldwork, we found instances where insufficient complementary assistance to meet basic needs in addition to food has also limited the benefits of food aid to recipients. For example, people with HIV/AIDS receiving food aid in Wukuru, Ethiopia, informed us that they sold part of their food rations to pay for other basic necessities because they lacked other assistance or income. Similarly, Somali and Sudanese refugees in Kenya sold approximately 4 percent of their food rations to buy basic items (such as fuel, cooking utensils, and clothes) or supplementary foods, according to a 2004 food consumption survey by WFP and the UN High Commissioner for Refugees. These refugees suffered from poor nutrition as a result of insufficient food consumption and other factors, such as poor hygiene. Some impediments to improving nutritional quality further reduce the effectiveness of food aid. Although U.S. agencies have made efforts to improve the nutritional quality of food aid, the appropriate nutritional value of the food and the readiness of U.S. agencies to address nutrition-related quality issues remain uncertain. Further, existing interagency food aid working groups have not resolved coordination problems on nutrition issues. Moreover, USAID and USDA do not have a central interagency mechanism to update food aid products and their specifications. 
As a result, vulnerable populations may not be receiving the most nutritious or appropriate food from the agencies, and disputes may occur when either agency attempts to update the products. Although U.S. agencies have made efforts to improve the nutritional quality of food aid, challenges remain with nutrition quality control mechanisms and interagency coordination on these issues. Past micronutrient assessments of U.S. food aid have also found that commodities are produced containing low and inconsistent levels of micronutrients, and gaps exist in nutrition quality control procedures. According to the World Health Organization, deficiencies in iron, vitamin A, and zinc rank among the top 10 leading causes of death from disease in developing countries, and micronutrient fortification of food aid is considered one of the most cost-effective approaches to addressing widespread deficiencies. Despite efforts to update food aid nutritional quality control mechanisms, the quality of U.S. food aid and U.S. agencies’ readiness to address quality issues remain uncertain. USDA attempted to improve its quality control procedures by introducing a Total Quality Systems Audit (TQSA) program to verify a supplier’s capability of producing products that meet program requirements. The TQSA program is responsible for examining commodity suppliers’ quality control mechanisms, such as management processes and procedures for food aid production, to ensure that they are operating according to U.S. food aid standards. However, the TQSA program is not responsible for overseeing the nutritional quality of the product itself. It was only recently given more funding in this area in response to a 2005 incident involving corn-soy blend (CSB) food aid that was found to be overfortified with iron. 
Because food with iron overfortification can be toxic when consumed by vulnerable groups in large quantities, USAID and USDA suspended distribution of 1,100 metric tons of CSB food aid donations, while WFP suspended distribution of 16,000 tons of U.S.-donated CSB to Ethiopia. It was not until after this incident that the TQSA program was provided with funding to test CSB fortification, but it was given only enough resources to cover the costs of sampling and testing CSB and no other processed commodities. USDA has recently requested additional funding to develop quality sampling and testing protocols for each blended or processed food aid product, but this proposal has yet to be approved. USDA officials have stated that they are still struggling to verify the nutritional quality of U.S. food aid. Insufficient coordination also limits agencies’ abilities to improve the nutritional quality of food aid commodities. First, existing food aid commodity working groups have not resolved interagency coordination problems. While U.S. government agencies have begun to jointly discuss ways to improve nutrition issues in the FACG’s Commodity Working Group, the group has yet to implement any of its suggested improvements. And while interagency forums such as the Commodity Working Group exist, coordination problems still occur. For example, USAID approached USDA officials to collaborate on exploring ways to deliver fortified and enriched food aid commodities to beneficiaries at a competitive cost. USDA’s Agricultural Research Service declined, citing its mission to address problems for U.S. agriculture and food supply and its lack of authority to study nutritional needs in other countries. Second, USAID and USDA do not have a central interagency mechanism to update products and their specifications. 
As a result, food aid recipients may not be receiving the most nutritious or appropriate food from the agencies, and disputes may occur when either agency attempts to update the products. Examples include the following: Although USDA has taken some steps to improve its food aid product specifications, there is still no central system in place to ensure that the product specifications are consistently updated. USDA recently made fortification improvements and updated the specifications to comply with Federal Acquisition Regulations and also requested resources to review the specifications. However, commodity suppliers complain that food aid product specifications are not as clear and consistent as in the commercial sector and that some requirements for food aid commodities are outdated and no longer necessary. One commodity supplier questioned the need for a current requirement of 50 ash for all USDA food aid flour purchases, noting that other countries have different ash specifications or none at all. KCCO officials have stated that most of the food aid products in use today were first developed in the 1960s and that they do not have a system in place to evaluate and update them. Therefore, KCCO officials may not be using the most cost-effective products to address food aid nutrition needs. One commodity supplier noted that products should be updated every 5 to 6 years and that it would be more cost-effective for the U.S. government to update products as technology develops. U.S. government agencies are currently attempting to discuss recipients’ nutritional needs in the Commodity Working Group and have started to explore the introduction of new food aid products that address health issues related to HIV/AIDS in young children and nutritional deficiencies in young mothers. USDA has also recently requested resources to conduct a long-term study on the present composition and use of food aid commodities. 
However, the agencies have yet to (1) agree on what products to update and (2) implement a central system to ensure that such updates are put into practice when they do reach an agreement. USDA and USAID disagree on a proposed update to product specifications. USDA reviewed micronutrient fortification and enrichment of Title II commodities in 1994 and recommended that tricalcium phosphate (TCP) be reduced by 25 percent. According to USDA, this reduction would result in an annual savings of over $1.5 million, which would increase funds available for Title II program commodities without compromising their nutritional value. However, USAID did not agree with the recommended reduction and chose not to reduce TCP in any Title II commodities due to its concern about the effect of the reduction on malnourished food aid recipients. The agencies have disagreed about the nutritional effect of TCP reductions since 2004 and have yet to reach an agreement. Although USAID and USDA require implementing organizations to regularly monitor and report on the use of food aid, these agencies have undertaken limited field-level monitoring of food aid programs. Agency inspectors general have reported that monitoring has not been regular and systematic, that in some cases intended recipients have not received food aid, or that the number of recipients could not be verified. Our audit work also indicates that monitoring has been insufficient due to various factors including limited staff, competing priorities, and legal restrictions on the use of food aid resources. USAID and USDA require NGOs and WFP to regularly monitor food aid programs. USAID Title II guidance for multiyear programs requires implementing organizations to provide a monitoring plan, which includes information such as the percentage of the target population reached and midterm and final evaluations of program impact. 
USDA requires implementing organizations to report semiannually on commodity logistics and the use of food. According to WFP’s agreement with the U.S. government, WFP field staff should undertake periodic monitoring at food distribution sites to ensure that commodities are distributed according to an agreed-upon plan. Additionally, WFP is to provide annual reports for each of its U.S.-funded programs. In addition to monitoring by implementing organizations, agency monitoring is important to ensure that targeting of food aid is adjusted to changes in conditions as they occur and to modify programs to improve their effectiveness, according to USAID officials. However, various USAID and USDA Inspectors General reports have cited problems with agencies’ monitoring of programs. For example, according to various USAID Inspector General reports on nonemergency programs in 2003, food aid was generally delivered to intended recipients, but USAID officials did not conduct regular and systematic monitoring. One assessment of direct distribution programs in Madagascar, for example, noted that as a result of insufficient and ad hoc site visits, USAID officials were unable to detect an NGO reallocation of significant quantities of food aid to a different district; combined with the late arrival of U.S. food aid, this resulted in severe shortages of food aid for recipients in a USAID-approved district. The Inspector General’s assessment of food aid programs in Ghana stated that the USAID mission’s annual report included data, such as the number of recipients, that were directly reported by implementing organizations without any procedures to review the completeness and accuracy of this information over a 3-year period. As a result, the Inspector General concluded, the mission had no assurance as to the quality and accuracy of this data. Limited staff and other demands in USAID missions and regional offices have constrained their field-level monitoring of food aid programs. 
In fiscal year 2006, although USAID had some non-Title II-funded staff assigned to monitoring, it had only 23 Title II-funded USAID staff assigned to missions and regional offices in 10 countries to monitor programs costing about $1.7 billion in 55 countries. For example, USAID’s Zambia mission had only one Title II-funded foreign national and one U.S. national staff member to oversee $4.6 million in U.S. food aid funding in fiscal year 2006. Moreover, the U.S. national staff member spent only about one-third of his time on food aid activities and two-thirds on the President’s Emergency Plan for AIDS Relief program. USAID regional offices’ monitoring of food aid programs has also been limited. These offices oversee programs in multiple countries, especially where USAID missions lack human resource capacity. For example, USAID’s East Africa regional office, which is located in Kenya, is responsible for oversight in 13 countries in East and Central Africa, of which 6 had limited or no capacity to monitor food aid activities, according to USAID officials. This regional office, rather than USAID’s Kenya mission, provided monitoring staff to oversee about $100 million in U.S. food aid to Kenya in fiscal year 2006. While officials from the regional office reported that their program officers monitor food aid programs, an implementing organization official we interviewed told us that USAID officials had visited the project site only three times in 1 year. USAID officials told us that they may be responsible for multiple project sites in a given country and may monitor selected sites based on factors such as severity of need and level of funding. Monitoring food aid programs in the Democratic Republic of Congo (DRC) from the USAID regional office had been difficult due to poor transportation and communication infrastructure, according to USAID officials. Therefore, USAID decided to station one full-time employee in the capital of the DRC to monitor U.S. 
food aid programs that cost about $51 million in fiscal year 2006. Field-level monitoring is also constrained by limited resources and restrictions on their use. Title II resources provide only part of the funding for USAID’s food aid monitoring activities, and there are legal restrictions on the use of these funds for nonemergency programs. Other funds, such as those from the agency’s overall operations expense and development assistance accounts, are also to be used for food aid activities, such as monitoring. However, these additional resources are limited due to competing priorities, and their use is based on agencywide allocation decisions, according to USAID officials. As a result, resources available to hire food aid monitors are limited. For example, about five U.S. national and five foreign national staff are responsible for monitoring all food aid programs in seven countries in southern Africa, according to a USAID food aid regional coordinator. Moreover, because its operations expense budget is limited and Title II funding allows food monitors only for emergency programs, USAID relies significantly on personal services contractors (PSC)—both U.S. national and foreign national hires—to monitor and manage food aid programs in the field. For example, while PSCs can use emergency food aid project funds for travel, USAID’s General Schedule staff cannot. Restrictions on the use of Title II resources for monitoring nonemergency programs further reduce USAID’s monitoring of these programs. USDA administers a smaller proportion of food aid programs than USAID and its field-level monitoring of food aid programs is more limited. In March 2006, USDA’s Inspector General reported that USDA’s Foreign Agricultural Service (FAS) had not implemented a number of recommendations made in a March 1999 report on NGO monitoring. Furthermore, several NGOs informed us that the quality of USDA oversight from Washington, D.C., is generally more limited than USAID’s. 
USDA has fewer overseas staff, and they are usually focused on monitoring agricultural trade issues and foreign market development. For example, the agency assigns a field attaché—with multiple responsibilities in addition to food aid monitoring—to the U.S. mission in some countries. However, FAS officials informed us that in response to past USDA Inspector General and GAO recommendations, a new monitoring and evaluation unit was recently established with an increased staffing level to monitor the semiannual reports, conduct site visits, and evaluate programs. Without adequate monitoring from U.S. agencies, food aid programs may not effectively direct limited food aid resources to those populations most in need. As a result, agencies may not be accomplishing their goal of getting the right food to the right people at the right time. U.S. international food aid programs have helped hundreds of millions of people around the world survive and recover from crises since the Agricultural Trade Development and Assistance Act (P.L. 480) was signed into law in 1954. Nevertheless, in an environment of increasing emergencies, tight budget constraints, and rising transportation and business costs, U.S. agencies must explore ways to optimize the delivery and use of food aid. U.S. agencies have taken some measures to enhance their ability to respond to emergencies and streamline the myriad processes involved in delivering food aid. However, opportunities for further improvement in such areas as logistical planning and transportation contracting remain. Inadequate coordination among food aid stakeholders has hampered ongoing efforts to address some of these logistical challenges. Furthermore, inefficiencies inherent in current monetization practices best illustrate the complex challenges that face U.S. food aid programs today. 
In addition, the lack of comparable and reliable needs assessments, insufficient complementary assistance, and impediments to improving the nutritional quality of food aid commodities raise questions about the effectiveness of the use of food aid. Finally, U.S. agencies’ lack of sufficient monitoring leaves U.S. food aid programs vulnerable to wasting increasingly limited resources, not putting them to their most effective use, or not reaching the most vulnerable populations on a timely basis. To improve the efficiency of U.S. food aid—in terms of its amount, timeliness, and quality—we recommend that the Administrator of USAID and the Secretaries of Agriculture and Transportation take the following five actions: improve food aid logistical planning through cost-benefit analysis of (1) supply-management options, such as long-term transportation agreements, and (2) prepositioning, including consideration of alternative methods, such as those used by WFP; work together and with stakeholders to modernize ocean transportation and contracting practices to include, to the extent possible, commercial principles of shared risks, streamlined administration, and expedited payment and claims resolution; seek to minimize the cost impact of cargo preference regulations on food aid transportation expenditures by updating implementation and reimbursement methodologies to account for new supply practices, such as prepositioning, and potential costs associated with older vessels or limited foreign-flag participation; establish a coordinated system for tracking and resolving food quality complaints; and develop an information collection system to track monetization transactions. 
To improve the effective use of food aid, we recommend that the Administrator of USAID and the Secretary of Agriculture take the following four actions: enhance the reliability and use of needs assessments for new and existing food aid programs through better coordination among implementing organizations, make assessments a priority in informing funding decisions, and more effectively build on lessons from past targeting experiences; determine ways to provide adequate nonfood resources in situations where there is sufficient evidence that such assistance will enhance the effectiveness of food aid; develop a coordinated interagency mechanism to update food aid specifications and products to improve food quality and nutritional standards; and improve monitoring of food aid programs to ensure proper management and implementation. DOT, USAID, and USDA—the three U.S. agencies to whom we direct our recommendations— provided comments on a draft of our report. We have reprinted their comments in appendixes V, VI, and VII, respectively, along with our responses to specific points. These agencies—along with DOD, State, FAO, and WFP—also provided technical comments and updated information, which we have incorporated throughout this report as appropriate. DOT stated that it strongly supports the transportation initiatives highlighted in the draft report and that full and effective implementation of these initiatives—in particular, modernizing transportation and contracting practices and updating reimbursement methodologies—offers the potential to reduce costs for ocean transportation. DOT commented that legal requirements (such as cargo preference) that increase delivery costs are not borne by food aid programs and have minimal impact on the amount of food available for distribution. 
While we recognize that DOT reimbursements have improved, the impact of cargo preference on the amount of food aid tonnage provided depends on the sufficiency of reimbursements to cover cargo preference costs. Our analysis shows that compared with the estimated costs of cargo preference, the level of DOT reimbursements varied—falling short in fiscal years 2001 through 2004 when taking into account the costs included in the current reimbursement formula and the additional costs associated with older vessels and shipments where there was no foreign-flag vessel bid. USAID’s comments suggest that we did not adequately address some of the challenges facing U.S. food aid programs or take into account the considerable improvements USAID has made in a number of areas, such as transportation and contracting practices. USAID raised two key overarching points: (1) the crucial relationship between emergencies and development and the need to address the linkages between chronic and acute vulnerabilities discussed in the new USAID Food Security Strategic Plan for 2006-2010 and (2) the need for additional analysis of the magnitude and perspective of the recommendations in relation to program size and the number of beneficiaries reached. While we recognize the important linkages between emergencies and development programs, these issues primarily relate to food security, which was not a research objective of this study. However, we used the strategic plan to provide contextual information, particularly in our discussion of the effective use of food aid. We also provided information throughout this report to indicate the potential magnitude and impact of savings from efficiency improvements in food aid delivery. USDA took issue with a number of our findings and conclusions and expressed two overarching concerns. 
First, USDA believes that we did not fully articulate the challenges inherent in achieving an ideal first world performance when implementing programs in difficult third world environments and that critical nutritional needs are routinely met in a timely manner. Second, USDA believes that we lacked hard analysis to support many of the weaknesses that we identified and suggested that our conclusions are based upon anecdotal incidents reported by various constituencies with their own interests and viewpoints. We recognize the difficult operating environments in developing countries and agencies’ efforts to provide U.S. food aid on a timely basis with minimal commodity losses. However, during our fieldwork in three recipient countries, many implementing organizations we met with complained about the lack of timeliness in food aid delivery, particularly to meet emergency needs. The example of the Ethiopian grain reserve illustrates how local food aid stakeholders adapted ways to provide food aid in a timely manner even when U.S. shipments were late. As described in our scope and methodology (app. I), this report is based on a rigorous and systematic review of multiple sources of evidence, including procurement and budget data, site visits, previous audits, agency studies, economic literature, and testimonial evidence collected in both structured and unstructured formats. To ensure accuracy and independence in our findings, we assessed the reliability of data we used for our analysis and compared information from stakeholders who have different points of view and are involved in different stages of food aid programs. We discussed our preliminary findings with a roundtable of food aid experts and practitioners. We reviewed and incorporated, where appropriate, agency oral, technical, and official comments. We use anecdotal examples in our report to illustrate findings that are based on our broader work. 
We are sending copies of this report to interested members of Congress, the Administrator of USAID and the Secretaries of Agriculture, State, and Transportation. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Our objectives were to examine some key challenges to the (1) efficiency of U.S. food aid programs and (2) effective use of U.S. food aid. To examine key challenges to the efficiency of the delivery of U.S. food aid programs, we analyzed (1) food aid procurement and ocean transportation data provided by the Kansas City Commodity Office (KCCO) and (2) total food aid budget and monetization cost data provided by the U.S. Agency for International Development (USAID), the U.S. Department of Agriculture (USDA), and the World Food Program (WFP). We did not assess the reliability of the data that we used for background purposes or that WFP reported for transportation costs. We examined the KCCO data for their reliability and appropriateness for our purposes through electronic testing of the data, verification of the data against other sources, and interviews with agency officials that manage the data. We found the data to be sufficiently reliable to represent trends in food aid tonnage, required time frames for delivery, and commodity versus noncommodity costs. We also conducted structured interviews of the 14 U.S.- and foreign-flag ocean carriers that transport over 80 percent of U.S. food aid tonnage. 
While information from these interviews may not be generalized to all ocean carriers, we supplemented the structured interviews with information from several other ocean carriers, shipping agents, and transportation experts. To examine key challenges to the sale of food to generate cash (monetization), we reviewed monetization data from USAID and USDA for all food aid programs to determine the commodity and noncommodity (such as shipping and other transportation) costs. We tested the data for internal consistency, interviewed USAID and USDA officials to clarify data definitions, and corroborated our classification of bulk commodities with them. We determined that the data were sufficiently reliable to represent the level, cost breakdown, and bulk versus nonbulk breakdown of monetization. We were not able to determine to what extent the costs of monetization are recovered through sales proceeds because neither USAID nor USDA systematically collects the data, which we point out as a finding in this report. We reviewed program authorities and regulations to determine their impact on food aid transportation; the nature of food aid transportation contracts; and the allowable use of monetization proceeds, 202(e) funding, and Internal Transportation, Storage, and Handling (ITSH) costs. To examine key challenges to the effectiveness of the use of food aid, we reviewed numerous U.S. government documents, including all USDA-approved proposals and approximately half of all USAID-approved proposals from fiscal years 2002 through 2006 for food aid programs each agency administers in the countries we visited. We reviewed several WFP internal evaluations, including those related to needs assessments and targeting, and some external studies, such as those conducted by the Washington, D.C.-based International Food Policy Research Institute. We also incorporated information from our past audits as appropriate. 
Additionally, we interviewed officials from WFP, nongovernmental organizations (NGOs), recipient governments, the U.S. government, and food aid recipients in the field and obtained relevant documentation from them. To assess food quality and nutrition issues, we conducted interviews with and reviewed reports by commodity suppliers, trade associations, and officials from NGOs, WFP, KCCO, USAID, and the Animal and Plant Health Inspection Service (APHIS). We also reviewed U.S. agency food aid product specifications, rules and regulations, commodity complaint logs, and quality control guidelines; USAID audit reports; and internal agency correspondence and draft documents concerning food quality and nutrition issues. To assess U.S. agencies' monitoring of food aid programs, we reviewed agencies' inspectors general reports, guidance for implementing organizations, and staffing data. Lastly, we reviewed economic literature on the impact of food aid on local markets and recent reports, studies, and papers issued on U.S. and international food aid programs. In Washington, D.C., we interviewed officials from USAID; USDA; the Departments of State (State) and Defense (DOD); the Department of Transportation Maritime Administration (DOT/MARAD); and the Office of Management and Budget (OMB). We also met with a number of officials representing NGOs, including 8 of the top 10 recipients of Title II food aid between fiscal years 2002 and 2005, that serve as implementing partners to USAID and USDA in carrying out U.S. food aid programs overseas; freight forwarding companies; and agricultural commodity groups. In Rome, we met with officials from the U.S. Mission to the United Nations (UN) Food and Agriculture Agencies, the WFP headquarters, and the Food and Agriculture Organization. We also conducted fieldwork in three countries that are recipients of food aid—Ethiopia, Kenya, and Zambia—and met with officials from over 40 organizations representing U.S.
missions, implementing organizations, and relevant host government agencies. We visited a port in Texas from which food is shipped; two food destination ports in South Africa and Kenya; and two sites in Louisiana and Dubai where U.S. food may be stocked prior to shipment to destination ports. Finally, in January 2007, we convened a roundtable of experts and practitioners—including 15 representatives from academia, think tanks, implementing organizations, the maritime industry, and agricultural commodity groups—to further delineate, based on our initial work, some key challenges to the efficient delivery and effective use of U.S. food aid and to explore options for improvement. We took the roundtable participants’ views into account as we finalized our analysis of these challenges and options. We conducted our work between May 2006 and March 2007 in accordance with generally accepted government auditing standards. The United States has principally employed six programs to deliver food aid: Public Law (P.L.) 480 Titles I, II, and III; Food for Progress; the McGovern-Dole Food for Education and Child Nutrition; and Section 416(b). Table 2 provides a summary of these food aid programs by program authority. In addition to these programs, resources for U.S. food aid can be provided through other sources, which include the following: The International Disaster and Famine Assistance Fund, which provides funding for famine prevention and relief, as well as mitigation of the effects of famine by addressing its root causes. Over the past 3 years, USAID has programmed $73.8 million in famine prevention funds. Most of the funds have been programmed in the Horn of Africa, where USAID officials told us that famine is now endemic. According to USAID officials, experience to date demonstrates that these funds have the advantage of enabling USAID to combine emergency responses with development approaches to address the threat of famine. 
Approaches need to be innovative and catalytic while providing flexibility in assisting famine-prone countries or regions. Famine prevention assistance funds should generally be programmed for no more than 1 year and seek to achieve significant and measurable results during that time period. Funding decisions are made jointly by USAID's regional bureaus and its Bureau for Democracy, Conflict, and Humanitarian Assistance and are subject to OMB concurrence and congressional consultations. In fiscal year 2006, USAID programmed $19.8 million to address the chronic failure of the pastoralist livelihood system in the Mandera Triangle—a large, arid region encompassing parts of Ethiopia, Somalia, and Kenya that was the epicenter of that year's hunger crisis in the Horn of Africa. In fiscal year 2005, USAID received $34.2 million in famine prevention funds for activities in Ethiopia and six Great Lakes countries in Africa. The activities in Ethiopia enabled USAID to intervene early enough in the 2005 drought cycle to protect the livelihoods—as well as the lives—of pastoralist populations in the Somali region, which were not yet protected by Ethiopia's Productive Safety Net program. In fiscal year 2004, the USAID mission in Ethiopia received $19.8 million in famine prevention funds to enhance and diversify the livelihoods of the chronically food insecure. State's Bureau of Population, Refugees, and Migration (PRM), which provides limited amounts of cash to WFP to purchase food locally and globally to remedy shortages caused by breaks in the refugee feeding pipeline. In these situations, PRM generally provides about 1 month's worth of refugee feeding needs and will not usually provide funds unless USAID's resources have been exhausted. Funding from year to year varies. In fiscal year 2006, PRM's cash assistance to WFP to fund operations in 14 countries totaled about $15 million, including $1.45 million for humanitarian air service.
In addition, PRM also funds food aid and food security programs for Burmese refugees in Thailand. In fiscal year 2006, PRM provided $7 million in emergency supplemental funds to the Thailand-Burma Border Consortium, most of which supported food-related programs. PRM officials told us that they coordinate efforts with USAID as needed. Table 3 lists congressional mandates for the P.L. 480 food aid programs and the targets for fiscal year 2006. The impact of food aid on local markets can be assessed by analyzing its impact on supply and demand and on expectations of market participants regarding future market stability. A number of factors affect the impact of food aid on the markets of recipient countries. In general, in-kind food aid affects recipient markets by increasing supply. In the case of food shortfalls, food aid may actually serve to bring supply back to what the levels would have been in the absence of the shortage and would not be thought to cause a distortion. Under these circumstances, food aid would help stop the rise in prices caused by the shortage-induced decreased supply. To the extent that food aid prevents major losses in physical and human capital, it may help assure growth in subsequent periods. In addition, if food aid is distributed free of charge to people who are desperately poor and have no purchasing power, the transaction can be “off line” to the market—not leading to changes in market prices. To the extent that food aid increases supply beyond what it would have been in the absence of shortage, it can have a potentially adverse effect on the market. These effects would include downward pressure on prices. The extent of this decrease would depend on (1) the amount of food aid relative to the total volume handled in the market and (2) the sensitivity of demand to changes in the quantities supplied to the market (price elasticity of demand). 
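The supply-and-elasticity relationship described above can be illustrated with a simple back-of-the-envelope sketch. The function name, the 10 percent aid share, and the elasticity values below are hypothetical illustrations, not figures from this report, and the linear approximation holds only for small changes.

```python
def approx_price_change_pct(aid_share: float, demand_elasticity: float) -> float:
    """Approximate the percentage change in local market price when food aid
    expands supply, using the relation %dP = %dQ / price elasticity of demand.
    aid_share: food aid volume as a fraction of total market volume.
    demand_elasticity: price elasticity of demand (a negative number)."""
    pct_supply_increase = aid_share * 100
    return pct_supply_increase / demand_elasticity

# Hypothetical case: aid equal to 10 percent of market volume with
# inelastic demand (elasticity of -0.5) implies a sharp price decline:
print(approx_price_change_pct(0.10, -0.5))   # about -20 (percent)

# The same aid volume where demand is more elastic (-2.0) depresses
# prices far less:
print(approx_price_change_pct(0.10, -2.0))   # about -5 (percent)
```

Consistent with the discussion above, the magnitude of the price decline grows with the volume of aid relative to the market and shrinks as demand becomes more sensitive to price.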
Declines in market prices provide disincentives to local production and could also affect the allocation of inputs to production by reducing the value of labor—for example, causing households to reallocate labor away from agricultural production. The impact of food aid could extend to other sectors of the market by affecting the prices for substitute and complementary foods. The general characteristics of the recipient market—such as the extent to which the local market is integrated into broader national, regional, and global markets—can also influence the impact of food aid. Market integration measures the degree to which changes in market conditions in one market affect those in other markets (separated by time or space). It is typically the result of traders moving products across markets when it makes economic sense to do so—when the price differential between those markets exceeds the cost of moving the product. If markets are well integrated, injecting aid in one area can strongly affect market conditions in related areas. In well-integrated markets, food aid shocks are short term and dissipate quickly. In poorly functioning markets, the impact of food aid could be longer term, and price movements can be dramatic. In addition, the increase in supply due to food aid may result in less need for commercial sales or imports. Adverse market impacts resulting from food aid can be alleviated through the timing and targeting of food aid delivery. For example, timing the delivery of food aid to occur when it is needed, such as in the "hungry season," would alleviate adverse market effects by bringing market supply to the levels that would have prevailed in the absence of supply shortfalls. In this case, food aid might be effective in capping what might otherwise be a very sharp spike in prices.
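The market-integration condition noted above (traders move product only when the price differential between markets exceeds the cost of moving it) can be sketched as a simple arbitrage check; the maize prices and transfer costs below are hypothetical values chosen for illustration.

```python
def trade_is_profitable(price_source: float, price_destination: float,
                        transfer_cost: float) -> bool:
    """Traders move product from a low-price market to a high-price one
    only when the price gap exceeds the cost of moving it."""
    return (price_destination - price_source) > transfer_cost

# Hypothetical maize prices per metric ton in two markets:
# with a $30/ton transfer cost, a $50/ton price gap makes trade worthwhile...
print(trade_is_profitable(150.0, 200.0, 30.0))   # True
# ...but a $60/ton transfer cost leaves the same two markets isolated:
print(trade_is_profitable(150.0, 200.0, 60.0))   # False
```

When the condition holds, arbitrage transmits a food aid supply shock from one market to related markets; when it fails, the receiving market absorbs the full price impact locally, consistent with the distinction drawn above between well-integrated and poorly functioning markets.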
In addition, it would reduce the longer-term effects of the food shortages by alleviating the need for recipients to liquidate high-return assets, such as livestock and tools, or incur high levels of debt to meet short-term requirements for food, thus reducing their future capacity to produce. Conversely, food aid that arrives at harvest time, when prices are already falling due to increased supply, can plunge prices below what it costs farmers to produce and distribute the commodity, thereby discouraging them from future production. Targeting food aid by making sure it goes to the people who need it the most and excluding those who can obtain the food in other ways is also important. This assures that the supply arrives where the demand is greatest. In addition, according to some of the studies we reviewed and economic principles, the very poor tend to spend a greater proportion of income on food (high income elasticity) and are responsive to prices when income is available (high price elasticity of demand). When food aid is targeted to this group, the combined price and income effects lead to proportionately more purchases of food, checking overall price declines. The actual impact of food aid on markets is an empirical question. Studies have been inconclusive regarding disincentives and other effects of food aid. In the case of emergency food aid distributions, there is less evidence of negative effects than for nonemergency aid, the effects of which tend to persist over longer time periods. Figure 14 describes the food distribution activities used to target different groups of food aid recipients. The following is GAO's comment on the U.S. Department of Transportation's (DOT) letter dated March 29, 2007. 1. We recognize that processing of DOT reimbursements has improved.
However, the impact of cargo preference on the amount of food aid tonnage provided depends on the sufficiency of reimbursements to cover cargo preference costs—both those that are included in the reimbursement calculation and those associated with shipments where no foreign-flag vessel has submitted a bid and where the vessel's age is 25 years or older. Figure 10 in our report illustrates how DOT reimbursements compare with the estimated costs of cargo preference (ocean freight differential (OFD) costs) included in the reimbursement calculation. As shown in the figure, DOT reimbursements fell short of OFD costs in fiscal years 2001 through 2003 and exceeded OFD costs in fiscal years 2004 and 2005. Including the estimated additional costs for Title II programs only that were associated with older vessels and shipments where there was no foreign-flag vessel bid (about $50 million in fiscal year 2003, about $34 million in fiscal year 2004, and about $56 million in fiscal year 2005), DOT reimbursements would have exceeded total cargo preference costs in fiscal year 2005 only. Finally, while we acknowledge that DOT revised the reimbursement formula in 2004 to provide more timely payments, the current methodology has not been updated to include these additional costs of cargo preference or to promote new supply practices, such as prepositioning. The following are GAO's comments on the U.S. Agency for International Development's letter dated March 29, 2007. 1. We incorporated contextual information from USAID's Food Security Strategic Plan for 2006–2010 in the background and in the discussion on the effectiveness of the use of food aid. We also added a direct reference in the text to the strategic plan. While we recognize the importance of the linkages between emergencies and development programs, these issues primarily relate to food security, which was not a research objective of this study. 2. We added information from the specific study cited.
While this study mentioned that proposals had improved in identifying and describing critical country-level food security problems, it also noted that quantitative data collection and analysis at the local level were deficient. Additionally, according to this study, USAID’s policy guidance has been insufficient, and there has been friction between USAID and implementing organizations regarding the transparency and timeliness of the program management by the Office of Food for Peace. 3. We have provided available information throughout this report to indicate the potential magnitude and impact of savings from improving the efficiency of food aid delivery. In our view, even a savings of less than 2 percent of the fiscal year 2006 program funding could have a significant impact by enabling the United States to feed almost 850,000 additional people for 90 days. 4. We have included additional information regarding the selection process for prepositioning warehouses. 5. We recognize that uncertainties in funding processes, combined with reactive and insufficiently planned procurement, increase food aid delivery costs and time frames. Further, we noted that difficult operating environments contribute to various challenges that impede the effective use of food aid. Despite these constraints, we noted that enhancements, such as better planning and improved coordination in conducting assessments, can improve the efficiency and effectiveness of U.S. food aid programs. 6. We reference the standard booking note that USAID and USDA created with input from the booking note committee. We have included additional information regarding members of this committee. However, in structured interviews, all 14 ocean carriers indicated that further improvements are needed to standardize freight terms and to further include, to the extent possible, commercial principles for the allocation of risk. 7. 
More timely payment of food aid contracts is not a competitiveness issue and would reduce costs for both U.S.- and foreign-flag carriers. DOD and DOT officials have also reported that long-term transportation agreements have produced savings for DOD and could provide savings for food aid programs. As DOD is also subject to cargo preference regulations, legal requirements governing food aid may not necessarily prevent the agencies from achieving savings with long-term transportation agreements. To determine potential savings, we are recommending that USAID, USDA, and DOT work together to conduct further cost-benefit analyses of supply-management options. 8. We recognize that USAID asked DOD several years ago to calculate the cost for a sample set of shipments using long-term transportation agreements managed by DOD, and that this analysis indicated a lack of potential savings. However, as discussed in this report, DOD and DOT officials subsequently found that the analysis contained flaws and both agencies recommend that a new analysis be conducted. For example, DOT officials indicated that cost savings could be realized if USAID were to manage its own contracts, and they have offered to assist USAID in doing so. Regarding USAID’s use of multiple port discharge options, we have included additional language in our report to reflect this information. 9. While food quality issues may be discussed in the Food Aid Consultative Group, there is still no shared, coordinated system in place that USDA, KCCO, and USAID can use to track and respond to complaints. Additionally, while we acknowledge that USAID has developed the Quarterly Web-Interfaced Commodity Reporting (QWICR) system to assist in tracking food aid commodities, this system is currently utilized only by some Food for Peace programs and NGOs in Africa and is not shared with USDA and KCCO. 
We also point out the need for better monitoring and tracking of monetization transactions, including tracking of revenues generated by monetization. At this point, it is not clear whether QWICR will be able to accommodate this need for both USAID and USDA. 10. We note that USAID recognizes that the quality and formulation of food aid products are crucial for undernourished populations and that the Director of the Office of Food for Peace highlighted the need to improve the quality of food aid commodities in his statement before the Senate Committee on Agriculture, Nutrition, and Forestry on March 21, 2007. We also note that USAID, along with USDA, plans to do an in-depth review of the types and quality of food products used in U.S. food aid programs and will continue its efforts to review existing contract specifications and improve commodity sampling and testing. However, these planned reviews and improvements have not yet been implemented. 11. USAID recognizes that enhancing assessments is a priority. Our recommendation to improve needs assessments was also endorsed by the Director of USAID’s Office of Food for Peace in his statement before the Senate Committee on Agriculture, Nutrition, and Forestry on March 21, 2007. 12. Based on USAID’s technical comments, we have added a footnote stating that implementing organizations are required to monitor food aid programs according to OMB Circular A-110 as well as USAID regulations (22 C.F.R. 226.51). While noting the implementing organizations’ monitoring responsibilities, we maintain that U.S. agencies still need to adequately monitor programs to ensure independence and provide assurance that food aid resources are used optimally. In its official comments, USAID states that it has over 65 staff in the field and over 30 staff in Washington, D.C., to monitor and oversee food aid programs. 
However, as noted in our report, there are only 23 Title II-funded staff in the field, and non-Title II funded staff often have other responsibilities in addition to monitoring food aid programs. Further, the Director of the Office of Food for Peace, in his statement before the Senate Committee on Agriculture, Nutrition, and Forestry on March 21, 2007, supported our recommendation on the need for increased monitoring. 13. We agree that it is important to carefully review the monetization proposals in order to minimize the disruption to local production and markets. However, even when the proposals satisfy all the criteria USAID considers, monetization is still an inherently inefficient practice because converting food to cash in order to fund development projects is costly. The following are GAO’s comments on the U.S. Department of Agriculture’s letter dated March 30, 2007. 1. We recognize (1) the challenges of providing food aid in developing countries and (2) agency efforts to provide U.S. food aid on a timely basis with minimal commodity losses. However, multiple implementing organizations we met with expressed concern regarding the lack of timeliness in food aid delivery, particularly to meet emergency needs. The Ethiopian grain reserve example illustrates how food aid stakeholders have adapted strategies to provide food aid in a timely manner even when U.S. shipments are late. Although commodity losses for non-WFP programs are reported at less than 1 percent, KCCO is unable to determine the extent of commodity losses for WFP programs, which account for approximately 60 percent of U.S. food aid shipments. Additionally, various factors suggest that actual commodity losses may exceed those reported in the data. 2. We provide a detailed description of our scope and methodology in appendix I. 
Each of our report findings and recommendations is based on a rigorous and systematic review of multiple sources of evidence, including procurement and budget data, site visits, previous audits, agency studies, economic literature, and testimonial evidence collected in both structured and unstructured formats. To ensure accuracy and independence in our findings, we assessed the reliability of data used for our analysis and compared information from stakeholders who have different points of view and are involved in different stages of food aid programs. We discussed our preliminary findings with a roundtable of food aid experts and practitioners. We reviewed and incorporated, where appropriate, agency oral, technical, and official comments. We include anecdotal examples in our report to illustrate findings that are based on our broader work. 3. While it is likely that the risks of transporting packaged cargo are higher than those for bulk cargo, all of our transportation recommendations are intended to improve the delivery of both types of food aid. Improving food aid logistical planning could decrease procurement bunching (and the higher prices that result) for both packaged and bulk food shipments. Modernizing transportation contracting practices, including standardizing bulk cargo contracts and improving claims processes, could likewise decrease ocean freight rates for both bulk and packaged shipments. Finally, since cargo preference regulations apply to shipments of both bulk and packaged cargoes and food quality complaints may occur for all food aid shipments, our remaining two recommendations to improve the efficiency of delivery are aimed at the entire food aid program. 4. KCCO officials told us that USDA needs to improve procurement planning in order to reduce the continued bunching of purchases that stresses its operations and those of its food suppliers. 
KCCO data and a recent KCCO study confirmed that bunching of procurement has occurred through fiscal year 2006—findings that were corroborated by a broad representation of other food aid stakeholders and experts we interviewed. 5. To determine the length of time required to provide U.S. food aid, we examined the delivery process from vendor to village. Our analysis of transportation contracting practices refers to ocean transportation contracts only, and we have added language in the report to reflect this scope. We did not systematically examine transportation contracts for foreign inland cargo since U.S. agencies do not collect uniform contract data for these shipments. KCCO does not include these costs when determining lowest-cost providers for food aid delivery, and DOT cargo preference reimbursement methodologies pertain to ocean transportation only. 6. We have added language to the report to reflect that USDA ships bulk cargoes using contract terms that incorporate more shared risk. However, contracts for bulk shipments have not yet been standardized, and the standard booking note used by both USAID and USDA for packaged cargoes defines freight terms differently than commercial contracts. Other areas where USDA transportation contracting practices differ from commercial practices include lengthy claims processes and insufficiently streamlined administration and paperwork. 7. We have added language to the report to indicate that the net cost impact of shifting risk from ocean carriers to other food aid stakeholders, such as commodity suppliers and implementing organizations, has not been studied. However, savings could arise through aligning the fiduciary responsibility for food delivery risks with those stakeholders that could better assess and manage those risks. Under the current approach, ocean carriers are held responsible for certain food delivery risks that they have no direct ability to manage.
Ocean carriers generally insure themselves against these risks by increasing their freight rates for all deliveries. Moreover, by realigning the cost of risk to those who manage it during each step of the process, food aid stakeholders would have additional incentives to make sure the process goes right. 8. Figure 10 in our report compares DOT reimbursements with the estimated costs of cargo preference. DOT reimbursements include the incremental ocean freight rate differential and the additional costs of ocean transportation exceeding 20 percent of the total cost of food aid commodities and ocean freight (Sections 901d(a) and 901d(b) of the Merchant Marine Act). As shown in the figure, DOT reimbursements fell short of OFD costs in fiscal years 2001 through 2003 and exceeded OFD costs in fiscal years 2004 and 2005. However, the estimated OFD costs in figure 10 do not include costs associated with shipments where no foreign-flag vessel submitted a bid and where the vessel's age was 25 years or older. USAID and DOT officials separately estimated the additional costs associated with these two factors for past Title II shipments. Agency estimates amounted to about $50 million in fiscal year 2003, about $34 million in fiscal year 2004, and about $56 million in fiscal year 2005. Including these additional estimated costs, DOT reimbursements would have exceeded total cargo preference costs in fiscal year 2005 only. 9. While we acknowledge that USAID and USDA do have some means of sharing information on quality problems and that commodity- and storage-specific initiatives like the Containerization Aid Product Improvement Team are helpful in addressing quality issues, both agencies still do not have a shared, coordinated system to track and respond systematically to food quality complaints for all of their commodities.
And as stated in comment 1, agency officials are unable to track the quality of food aid for approximately 60 percent of food aid shipments, and commodity losses may exceed those reported in the data. We also acknowledge that USDA has a rapid response team, but KCCO officials have told us that the team is limited in its ability to respond to all of the complaints on food quality that it receives. USDA officials have also stated that food quality inspection officials like USDA's Federal Grain Inspection Service do not have responsibilities overseas and are limited to inspecting only some food aid commodities and that while those officials can be hired to conduct overseas inspections, it would be expensive to do so. 10. Limitations in the availability and use of nonfood resources to conduct credible assessments and to use these assessments to inform program proposals apply both to USAID- and USDA-administered programs. However, we specifically note in response to agency comments that some limitations, such as legal restrictions on the use of funding, apply specifically to Title II-funded programs. As indicated by USDA, the McGovern-Dole Food for Education and Child Nutrition program has a cash component of 13 percent, which is higher than the upper limit of 10 percent cash allowed as 202(e) funding to implementing organizations for USAID Title II-funded programs. However, Food for Education accounts for only 4 percent of U.S. food aid funding; therefore, our overall finding about limited complementary nonfood resources still applies broadly to U.S. food aid programs. Additionally, the majority of Food for Progress commodities are monetized rather than used for direct distribution to beneficiaries, as shown in figure 12 in our report. Therefore, the need for nonfood resources to enhance the effectiveness of the use of food aid is less relevant in the case of Food for Progress.
Moreover, as we note in our report, the use of monetization to generate funds for development projects is an inefficient use of food aid resources in general. In addition to the person named above, Phillip J. Thomas (Assistant Director), Carol Bray, Ming Chen, Debbie Chung, Martin De Alteriis, Leah DeWolf, Mark Dowling, Etana Finkler, Kristy Kennedy, Joy Labez, Kendall Schaefer, and Mona Sehgal made key contributions to this report. Darfur Crisis: Progress in Aid and Peace Monitoring Threatened by Ongoing Violence and Operational Challenges, GAO-07-9. Washington, D.C.: Nov. 9, 2006. Darfur Crisis: Death Estimates Demonstrate Severity of Crisis, but Their Accuracy and Credibility Could Be Enhanced, GAO-07-24. Washington, D.C.: Nov. 9, 2006. Maritime Security Fleet: Many Factors Determine Impact of Potential Limits on Food Aid Shipments, GAO-04-1065. Washington, D.C.: Sept. 13, 2004. Foreign Assistance: Lack of Strategic Focus and Obstacles to Agricultural Recovery Threaten Afghanistan’s Stability, GAO-03-607. Washington, D.C.: June 30, 2003. Foreign Assistance: Sustained Efforts Needed to Help Southern Africa Recover from Food Crisis, GAO-03-644. Washington, D.C.: June 25, 2003. Food Aid: Experience of U.S. Programs Suggest Opportunities for Improvement, GAO-02-801T. Washington, D.C.: June 4, 2002. Foreign Assistance: Global Food for Education Initiative Faces Challenges for Successful Implementation, GAO-02-328. Washington, D.C.: Feb. 28, 2002. Foreign Assistance: U.S. Food Aid Program to Russia Had Weak Internal Controls, GAO/NSIAD/AIMD-00-329. Washington, D.C.: Sept. 29, 2000. Foreign Assistance: U.S. Bilateral Food Assistance to North Korea Had Mixed Results, GAO/NSIAD-00-175. Washington, D.C.: June 15, 2000. Foreign Assistance: Donation of U.S. Planting Seed to Russia in 1999 Had Weaknesses, GAO/NSIAD-00-91. Washington, D.C.: Mar. 9, 2000. Foreign Assistance: North Korean Constraints Limit Food Aid Monitoring, GAO/T-NSIAD-00-47. Washington, D.C.: Oct. 27, 1999. 
Foreign Assistance: North Korea Restricts Food Aid Monitoring, GAO-NSIAD-00-35. Washington, D.C.: Oct. 8, 1999. Food Security: Preparations for the 1996 World Food Summit, GAO/NSIAD-97-44. Washington, D.C.: Nov. 1996. Food Security: Factors That Could Affect Progress Toward Meeting World Food Summit Goals, GAO/NSIAD-99-15. Washington, D.C.: Mar. 1999. International Relations: Food Security in Africa, GAO/T-NSIAD-96-217. Washington, D.C.: Jul. 31, 1996. Food Aid: Competing Goals and Requirements Hinder Title I Program Results, GAO/GGD-95-68. Washington, D.C.: June 26, 1995. Foreign Aid: Actions Taken to Improve Food Aid Management, GAO/NSIAD-95-74. Washington, D.C.: Mar. 23, 1995. Maritime Industry: Cargo Preference Laws Estimated Costs and Effects, GAO/RCED-95-34. Washington, D.C.: Nov. 30, 1994. Private Voluntary Organizations' Role in Distributing Food Aid, GAO/NSIAD-95-35. Washington, D.C.: Nov. 23, 1994. Cargo Preference Requirements: Objectives Not Met When Applied to Food Aid Programs, GAO/GGD-94-215. Washington, D.C.: Sept. 29, 1994. Public Law 480 Title I: Economic and Market Development Objectives Not Met, GAO/T-GGD-94-191. Washington, D.C.: Aug. 3, 1994. Multilateral Assistance: Accountability for U.S. Contributions to the World Food Program, GAO/T-NSIAD-94-174. Washington, D.C.: May 5, 1994. Foreign Assistance: Inadequate Accountability for U.S. Donations to the World Food Program, GAO/NSIAD-94-29. Washington, D.C.: Jan. 28, 1994. Foreign Assistance: U.S. Participation in FAO's Technical Cooperation Program, GAO/NSIAD-94-32. Washington, D.C.: Jan. 11, 1994. Food Aid: Management Improvements Are Needed to Achieve Program Objectives, GAO/NSIAD-93-168. Washington, D.C.: July 23, 1993. Cargo Preference Requirements: Their Impact on U.S. Food Aid Programs and the U.S. Merchant Marine, GAO/NSIAD-90-174. Washington, D.C.: June 19, 1990. Status Report on GAO's Reviews on P.L. 480 Food Aid Programs, GAO/T-NSIAD-90-23. Washington, D.C.: Mar. 21, 1990.
The United States is the largest global food aid donor, accounting for over half of all food aid supplies to alleviate hunger and support development. Since 2002, Congress has appropriated an average of $2 billion per year for U.S. food aid programs, which delivered an average of 4 million metric tons of food commodities per year. Despite growing demand for food aid, rising business and transportation costs have contributed to a 52 percent decline in average tonnage delivered over the last 5 years. These costs represent 65 percent of total emergency food aid, highlighting the need to maximize its efficiency and effectiveness. Based on analysis of agency documents, interviews with experts and practitioners, and fieldwork, this report examines some key challenges to the (1) efficiency of U.S. food aid programs and (2) effective use of U.S. food aid. Multiple challenges hinder the efficiency of U.S. food aid programs by reducing the amount, timeliness, and quality of food provided. Specific factors that cause inefficiencies include (1) funding and planning processes that increase delivery costs and lengthen time frames; (2) ocean transportation and contracting practices that create high levels of risk for ocean carriers, resulting in increased rates; (3) legal requirements that result in awarding of food aid contracts to more expensive service providers; and (4) inadequate coordination between U.S. agencies and food aid stakeholders to track and respond to food and delivery problems. U.S. agencies have taken some steps to address timeliness concerns. The U.S. Agency for International Development (USAID) has been stocking or prepositioning food commodities domestically and abroad, and the U.S. Department of Agriculture (USDA) has implemented a new transportation bid process, but the long-term cost effectiveness of these initiatives has not yet been measured. In addition, the current practice of using food aid to generate cash for development projects--monetization--is an inherently inefficient use of resources. Furthermore, since U.S. agencies do not collect monetization revenue data electronically, they are unable to adequately monitor the degree to which revenues cover costs. Numerous challenges limit the effective use of U.S. food aid. Factors contributing to limitations in targeting the most vulnerable populations include (1) challenging operating environments in recipient countries; (2) insufficient coordination among key stakeholders, resulting in disparate estimates of food needs; (3) difficulty in identifying vulnerable groups and causes of their food insecurity; and (4) resource constraints on conducting reliable assessments and providing food and other assistance. Further, some impediments to improving the nutritional quality of U.S. food aid may reduce the benefits of food aid to recipients. Finally, U.S. agencies do not adequately monitor food aid programs due to limited staff, competing priorities, and restrictions on the use of food aid resources. As a result, these programs are vulnerable to not getting the right food to the right people at the right time.
The federal real property environment has many stakeholders and involves a vast and diverse portfolio of assets that are used for a wide variety of missions. Real property is generally defined as facilities; land; and anything constructed on, growing on, or attached to land. The U.S. government’s fiscal year 2002 financial statements show an acquisition cost of more than $335 billion for real property assets held by the federal government on September 30, 2002. In terms of facilities, the latest available governmentwide data from GSA indicated that as of September 30, 2002, the federal government owned and leased approximately 3.4 billion square feet of building floor area worldwide. The Department of Defense (DOD), U.S. Postal Service (USPS), GSA, and the Department of Veterans Affairs (VA) hold the majority of the owned facility space. Federal real property managers operate in a complex and dynamic environment. Numerous laws and regulations govern the acquisition, management, and disposal of federal real property. The Federal Property and Administrative Services Act of 1949, as amended (Property Act), and the Public Buildings Act of 1959, as amended, are the laws that generally apply to real property held by federal agencies; and GSA is responsible for the acts’ implementation. Agencies are subject to these acts, unless they are specifically exempted from them, and some agencies may also have their own statutory authority related to real property. Agencies must also comply with numerous other laws related to real property. Despite significant changes in the size and mission needs of the federal government in recent years, the federal portfolio of real property assets in many ways still largely reflects the business model and technological environment of the 1950s and faces serious security challenges. 
In the last decade alone, the federal government has reduced its workforce by several hundred thousand personnel, and several federal agencies have had major mission changes. With these personnel reductions and mission changes, the need for existing space, including general-purpose office space, has declined overall and created a need for different kinds of space. At the same time, technological advances have changed workplace needs, and many of the older buildings are not configured to accommodate new technologies. The advent of electronic government is starting to change how the public interacts with the federal government. These changes will have significant implications for the type and location of property needed in the 21st century. Furthermore, changes in the overall domestic security environment have presented an additional range of challenges to real property management that must be addressed. One reason the government has many unneeded assets is that some of the major real property-holding agencies have undergone significant mission shifts that have affected their real property needs. For example, after the Cold War, DOD’s force structure was reduced by 36 percent. Despite four rounds of base closures, DOD projects that it still has considerably more property than it needs. The National Defense Authorization Act for Fiscal Year 2002, which became law in December 2001, gave DOD the authority for another round of base realignments and military installation closures in 2005. Various factors may significantly reduce the need for real property held by USPS. These factors include new technologies, additional delivery options, and the opportunity for greater use of partnerships and retail co-location arrangements. A July 2003 Presidential Commission report on USPS stated, among other things, that USPS had vacant and underutilized facilities that had little, if any, value to the modern-day delivery of the nation’s mail.
According to testimony by the Co-Chair of the Commission, rightsizing of the postal network would be crucial to USPS’s transformation into a modern, 21st century institution. In the mid-1990s, VA began shifting its role from being a traditional hospital-based provider of medical services to an integrated delivery system that emphasizes a full continuum of care with a significant shift from inpatient to outpatient services. Subsequently, VA has struggled to reduce its large inventory of buildings, many of which are underutilized or vacant. Although the Department of Energy (DOE) is no longer producing new nuclear weapons, it still maintains a facilities infrastructure largely designed for this purpose. The magnitude of the problem with underutilized or excess federal property puts the government at significant risk for wasting taxpayers’ money and missed opportunities. First, underutilized or excess property is costly to maintain. DOD estimates that it is spending $3 billion to $4 billion each year maintaining facilities that are not needed. In July 1999, we reported that vacant VA space was costing as much as $35 million to maintain each year. Costs associated with excess DOE facilities, primarily for security and maintenance, exceed $70 million annually. It is likely that other agencies that continue to hold excess or underutilized property are also incurring significant costs for staff time spent managing the properties and on maintenance, utilities, security, and other building needs. Second, in addition to day-to-day operational costs, holding these properties has opportunity costs for the government, because these buildings and land could be put to more cost-beneficial uses, exchanged for other needed property, or sold to generate revenue for the government. Finally, continuing to hold property that is unneeded does not present a positive image of the federal government in local communities. 
Instead, it presents an image of waste and inefficiency that erodes taxpayers’ confidence in government. It also can have a negative impact on local economies if the property is occupying a valuable location and is not used for other purposes, sold, redeveloped, or used in a public-private partnership. Appendix I discusses some examples of vacant, highly visible properties that are in the federal inventory—the former main VA hospital building at the Milwaukee, Wisconsin, health facility campus; St. Elizabeths Hospital in Washington, D.C.; and the former main post office building in downtown Chicago, Illinois. These examples demonstrate the range of challenges agencies face in disposing of unneeded property. Restoration, repair, and maintenance backlogs in federal facilities are significant and reflect the federal government’s ineffective stewardship over its valuable and historic portfolio of real property assets. The state of deterioration is alarming because of the magnitude of the repair backlog—current estimates show that tens of billions of dollars will be needed to restore these assets and make them fully functional. This problem has accelerated in recent years because much of the federal portfolio was constructed over 50 years ago, and these assets are reaching the end of their useful lives. As with the problems related to underutilized or excess property, the challenges of addressing facility deterioration are also prevalent at major real property-holding agencies. For example: Over the last decade, DOD reports that it has been faced with the major challenge of adequately maintaining its facilities to meet its mission requirements. Although DOD no longer reports data on backlog of repairs and maintenance, it reported in 2001 that the cost of bringing its facilities to a minimally acceptable condition was estimated at $62 billion; the cost of correcting all deficiencies was estimated at $164 billion.
The Department of the Interior (Interior) has a significant deferred maintenance backlog that the Interior Inspector General (IG) estimated in April 2002 to be as much as $8 billion to $11 billion. This backlog has affected numerous national treasures, such as Ellis Island, Yellowstone National Park, and Mount Rushmore, just to name a few. GSA has struggled over the years to meet the repair and alteration requirements identified at its buildings. In March 2000, we reported that GSA data showed that over half of GSA’s approximately 1,700 buildings needed repairs estimated to cost about $4 billion. More recently, in August 2002, we reported that this estimated backlog of identified repair and alteration needs was up to $5.7 billion. Other agencies with repair backlogs that we highlighted in our high-risk report include the Department of State (State), DOE, the Smithsonian Institution, and USPS. Since issuing our high-risk report, we have updated our assessment of facility conditions at DOD and State. In February 2003, we reported that although the amount of money the active forces have spent on facility maintenance had increased recently, DOD and service officials said that these amounts had not been sufficient to halt the deterioration of facilities. Too little funding to adequately maintain facilities is also aggravated by DOD’s acknowledged retention of facilities in excess of its needs. Furthermore, the information that the services have on facility conditions is not consistent, making it difficult for Congress, DOD, and the services to direct funds to facilities where they are most needed and to accurately gauge facility conditions. And, although DOD has a strategic plan for facilities, it lacks comprehensive information on the specific actions, time frames, responsibilities, and funding needed to reach its goals. In May 2003, we also reported on a similar problem with National Guard and Reserve facilities. 
In March 2003, we reported that many of the primary office buildings at overseas embassies and consulates were in poor condition. In 2002, State estimated that its repair backlog was $736 million. In addition, the primary office buildings at more than half of the posts do not meet certain fire/life safety standards. State officials stated that maintenance costs would increase over time because of the age of many of the buildings, and overcrowding has become a problem at several posts. Our work over the years has shown that the deterioration problem leads to increased operational costs, has health and safety implications that are worrisome, and can compromise agency missions. In addition, we have reported that the ultimate cost of completing delayed repairs and alterations may escalate because of inflation and increases in the severity of the problems caused by the delays. As discussed above, the overall cost could also be affected by government realignment. That is, to the extent that unneeded property is also in need of repair, disposing of such property could reduce the repair backlog. Another negative effect, which is not readily apparent but nonetheless significant, is the effect that deteriorating facilities have on employee recruitment, retention, and productivity. This human capital element is troublesome because the government is often at a disadvantage in its ability to compete in the job market in terms of the salaries agencies are able to offer. Poor physical work environments exacerbate this problem and can have a negative impact on potential employees’ decisions to take federal positions. Furthermore, research has shown that quality work environments make employees more productive and improve morale. Finally, as with excess or underutilized property, deteriorated property presents a negative image of the federal government to the public. 
This is particularly true when many of the assets the public uses and visits the most—such as national parks and museums—are deteriorated and in generally poor condition. Compounding the problems with excess and deteriorated property is the lack of reliable and useful real property data that are needed for strategic decisionmaking. GSA’s worldwide inventory database and related reports are the only central sources of descriptive data on the makeup of the real property inventory, such as property address, square footage, acquisition date, and property type. However, in April 2002, we reported that the worldwide inventory contained data that were unreliable and of limited usefulness. GSA agreed with our findings and has revamped this database and produced a new report on the federal inventory, as of September 30, 2002. We have not evaluated GSA’s revamped database and related report. In addition to problems with the worldwide inventory, real property data contained in the financial statements of the U.S. government have been problematic. In April 2003, we reported that—for the sixth consecutive year—we were unable to express an opinion on the U.S. government’s consolidated financial statements for fiscal year 2002. We have reported that because the government lacked complete and reliable information to support asset holdings—including real property—it could not satisfactorily determine that all assets were included in the financial statements, verify that certain reported assets actually existed, or substantiate the amounts at which they were valued. Aside from the problematic financial data, some of the major real property-holding agencies—including DOD, State, GSA, and Interior—have faced challenges in developing quality management data on their real property assets. The problems at these agencies are discussed in more detail in our high-risk report. 
As a general rule, building ownership options through construction or purchase are the least expensive ways to meet agencies’ long-term and recurring requirements for space. Lease-purchases—under which payments are spread out over time and ownership of the asset is eventually transferred to the government—are generally more expensive than purchase or construction but are generally less costly than using ordinary operating leases to meet long-term space needs. However, over the last decade, we have reported that GSA—as the central leasing agent for most agencies—relies heavily on operating leases to meet new long-term needs because it lacks funds to pursue ownership. In 1999, we reported that for nine major operating lease acquisitions that GSA had proposed, construction would have been the least-cost option in eight cases and would have saved an estimated $126 million. Lease-purchase would have saved an estimated $107 million, compared with operating leases but would have cost $19 million more than construction. A prime example of this problem was the Patent and Trademark Office’s long-term requirements in northern Virginia, where the cost of meeting this need with an operating lease was estimated to be $48 million more than construction and $38 million more than lease-purchase. In August 2001, we also reported that GSA reduced the term of a proposed 20-year lease for the Department of Transportation headquarters building to 15 years so that it could meet the definition of an operating lease. GSA’s fiscal year 1999 prospectus for constructing a new facility for this need showed the cost of construction was estimated to be $190 million less than an operating lease. Operating leases have become an attractive option in part because they generally look cheaper in any given year.
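The pattern described above—ownership costing less over the full term of a space requirement while a lease looks cheaper in any single budget year—can be sketched with a small calculation. The figures below (a 15-year requirement, a 5 percent discount rate, a $190 million construction cost, and a $22 million annual lease payment) are assumed for illustration only and are not taken from the GAO examples.

```python
# Hypothetical sketch: a 15-year space need met by construction
# versus an operating lease. All dollar figures are assumed.

def present_value(payment, rate, years):
    """Present value of a level annual payment stream, discounted at `rate`."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

YEARS, RATE = 15, 0.05       # assumed requirement length and discount rate
construction_cost = 190.0    # $M, paid (and budgeted) up front
annual_lease = 22.0          # $M per year under an operating lease

lease_cost_pv = present_value(annual_lease, RATE, YEARS)

# Over the full term, ownership is cheaper in present-value terms...
assert lease_cost_pv > construction_cost

# ...but in any one budget year the operating lease "looks cheaper":
first_year_outlay = {"construction": construction_cost,
                     "operating_lease": annual_lease}
print(f"15-year lease cost (PV): ${lease_cost_pv:.0f}M "
      f"vs construction ${construction_cost:.0f}M")
print(f"Year-1 budget impact: {first_year_outlay}")
```

Under these assumed numbers the lease stream is worth well over $200 million in present-value terms, yet only the $22 million annual payment shows up in a given year's budget—the asymmetry the scoring discussion that follows explains.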
Pursuant to the scoring rules adopted as a result of the Budget Enforcement Act of 1990, the budget authority to meet the government’s real property needs is to be scored—meaning recorded in the budget—in an amount equal to the government’s total legal commitment. For example, for lease-purchase arrangements, the net present value of the government’s legal obligations over the life of the lease contract is to be scored in the budget in the first year. For construction or purchase, the budget authority for the estimated legal obligation related to the construction costs or purchase price is to be scored in the first year. However, for many of the government’s operating leases—including GSA leases, which, according to GSA, account for over 70 percent of the government’s leasing expenditures and are self-insured in the event of cancellation—only the budget authority to cover the government’s commitment for an annual lease payment is required to be scored in the budget. Given this, although operating leases are generally more costly over time, compared with other options, they add much less to a single year’s appropriation total than these other arrangements, making an operating lease a more attractive option from an annual budget perspective, particularly when funds for ownership are not available. Although the policy requirement for full “up-front funding” permits disclosure of the full costs to which the government is being committed, the budget scorekeeping rules allow costly operating leases to “look cheaper” in the short term and have encouraged an overreliance on them for satisfying long-term space needs. Decisionmakers have struggled with this matter since the scoring rules were established and the tendency for agencies to choose operating leases instead of ownership became apparent. We have suggested the alternative of scoring all operating leases up-front on the basis of the underlying time requirement for the space so that all options are treated equally.
Although this could be a viable alternative, there would be implementation challenges if this were pursued, including the need to evaluate the validity of agencies’ stated space requirements. Another option—which was recommended by the President’s Commission to Study Capital Budgeting in 1999 and discussed by GAO—would be to allow agencies to establish capital acquisition funds to pursue ownership where it is advantageous, from an economic perspective. To date, none of these options have been implemented, and debate continues among decisionmakers about what should be done. Finding a solution for this problem has been difficult; however, change is needed because the current practice of relying on costly leasing to meet long-term space needs results in excessive costs to taxpayers and does not reflect a sensible or economically rational approach to capital asset management. Terrorism is a major threat to federally owned and leased real property assets, the civil servants and military personnel who work in them, and the public who visits them. This was evidenced by the 1995 Oklahoma City bombing; the 1998 embassy bombings in Africa; the September 11, 2001, attacks on the World Trade Center and Pentagon; and the anthrax attacks in the fall of 2001. Since the Oklahoma City bombing, the federal government has spent billions of dollars on security upgrades within the country and overseas. A study of federal facilities done by the Justice Department in 1995 resulted in minimum-security standards and an evaluation of security conditions in the government’s facilities. In October 1995, the President signed Executive Order 12977, which established an Interagency Security Committee (ISC) to enhance the quality and effectiveness of security in nonmilitary federal facilities. Since the attacks on the World Trade Center and the Pentagon, the focus on security in federal buildings has been heightened considerably. 
Real property-holding agencies are employing such measures as searching vehicles that enter federal facilities, restricting parking, and installing concrete barricades. As the government’s security efforts intensify, the government will be faced with important questions regarding the level of security needed to adequately protect federal facilities and how the security community should proceed. Furthermore, the 1995 Justice study placed an emphasis on increasing security where large numbers of personnel are located. However, a risk-based approach—which GSA is using for the federal buildings it controls—appears to be more desirable in light of this new round of threats. In September 2001, we reported that DOD uses a risk-based approach to reduce installation vulnerabilities, but this approach was applied primarily to installations with 300 or more personnel assigned on a daily basis. We recommended that DOD improve this approach by ensuring all critical military facilities receive a periodic vulnerability assessment conducted by their higher headquarters regardless of the number of personnel assigned. DOD concurred and began taking action. Since 1996, we have produced more than 60 reports and testimonies on the federal government’s efforts to combat terrorism. Several of these reports have recommended that the federal government use risk management as an important element in developing a national strategy. We have also reported extensively on the security problems and challenges at individual real property-holding agencies. Our high-risk report identifies the problems and challenges faced by State, DOD, Interior, GSA, USPS, and ISC. More recently, we testified on security conditions of overseas diplomatic facilities. We found that State has done much over the last 4 years to improve physical security at overseas posts by, for example, constructing perimeter walls, anti-ram barriers, and access controls at many facilities. 
However, even with these improvements, most office facilities do not meet security standards. As a result, thousands of U.S. government employees may be more vulnerable to terrorist attacks. Furthermore, our work has shown that agency coordination is critical to addressing security challenges. In our February 2003 report on threats to selected agencies’ critical computer and physical infrastructures, selected agencies identified challenges, including coordinating security efforts with GSA. GSA may often be responsible for protecting facilities that house these critical assets. We recommended that steps be taken to complete the identification and analysis of their critical assets and their dependencies, including setting milestones, developing plans to address vulnerabilities, and monitoring progress. In addition to the clear challenges agencies will continue to face in securing real property assets, the security issue has an impact on the other problems that we have discussed. To the extent that more funding will be needed to increase security, funding availability for repair and restoration, preparing excess property for disposal, and improving real property data systems may be further constrained. Furthermore, real property managers will have to dedicate significant staff time and other human capital resources to security issues and thus may have less time to manage other problems. Another broader effect is the impact that increased security will have on the public’s access to government offices and other assets. Debate arose in the months after September 11, 2001, and continues to this day on the challenge of providing the proper balance between public access and security. In November 2002, legislation was enacted establishing the Department of Homeland Security (DHS). 
The Federal Protective Service, which was part of GSA and which was responsible for protecting federal agencies under GSA’s jurisdiction, was among those agencies whose functions and personnel were transferred to DHS. Accordingly, DHS became responsible for protecting buildings, grounds, and property owned, occupied, or secured by the federal government that are under GSA’s jurisdiction. In addition, the act provided DHS with authority to protect the buildings, grounds, and property of any other agency whose functions were transferred to DHS under the act. In September 2002, we reported on the implications that the creation of DHS would have on ISC. We concluded that the need to address ISC’s lack of progress in fulfilling its responsibilities should be taken into account in establishing this new department. Although the federal government faces significant, long-standing problems in the real property area, it is important to give Congress, OMB, GSA, and the major real property-holding agencies credit for proposing several reform efforts and other initiatives in recent years. Legislative proposals in the 108th Congress (H.R. 2548 and H.R. 2573) are aimed at enhancing real property management. H.R. 2548 would provide GSA with enhanced asset management tools, including the use of public-private partnerships for itself and other landholding agencies. This bill also provides incentives for better property management, such as allowing agencies to retain funds generated from the property to pay expenses associated with the property and fund other capital needs. In addition, the bill contains provisions aimed at improving real property data, establishing senior real property managers at agencies, developing asset management principles, and identifying specific conditions under which GSA can enter into real property partnerships with the private sector. H.R. 
2573 would provide GSA with the authority to enter into public-private partnerships for itself and other landholding agencies. In July 2001, we reported that public-private partnership authority could be an important management tool to address problems in deteriorating federal buildings, but further study of this tool was needed. Appendix II summarizes this report and discusses two examples of public-private partnership opportunities. In August 2003, we also reported on other methods agencies are using to finance federal capital in addition to public-private partnerships, such as incremental funding, real property swaps, and outleases. Another initiative in the National Defense Authorization Act for fiscal year 2002 gave DOD the authority for another round of base realignment and military installation closures in 2005. DOD officials testified that these actions could result in recurring annual net savings of about $3 billion. Despite these and other initiatives agencies have undertaken and the sincerity with which the federal real property community has embraced the need for reform, the problems have persisted and have been exacerbated by several factors that will require high-level attention from Congress and the administration. These factors include competing stakeholder interests in real property decisions; various legal and budget-related disincentives to businesslike outcomes; the need for improved capital planning; and the lack of a strategic, governmentwide focus on federal real property issues. More specifically: Competing Stakeholder Interests - In addition to Congress, OMB, and the real property-holding agencies themselves, several other stakeholders also have an interest in how the federal government carries out its real property acquisition, management, and disposal practices.
These include foreign and local governments; business interests in the communities where the assets are located; private sector construction and leasing firms; historic preservation organizations; various advocacy groups; and the public in general, which often views the facilities as the physical face of the federal government in local communities. As a result of competing stakeholder interests, decisions about real property often do not reflect the most cost-effective or efficient alternative that is in the interests of the agency or the government as a whole but instead reflect other priorities. Legal and Budgetary Disincentives - The complex legal and budgetary environment in which real property managers operate has a significant impact on real property decisionmaking and often does not lead to economically rational and businesslike outcomes. For example, we have reported that public-private partnerships might be a viable option for redeveloping obsolete federal property when they provide the best economic value for the government, compared with other options, such as federal financing through appropriations or sale of the property. However, most agencies are precluded from entering into such arrangements. Resource limitations, in general, often prevent agencies from addressing real property needs from a strategic portfolio perspective. When available funds for capital investment are limited, Congress must weigh the need for new, modern facilities with the need for renovation, maintenance, and disposal of existing facilities, the latter of which often gets deferred. In the disposal area, a range of laws intended to address other objectives—such as laws related to historic preservation and environmental remediation—makes it challenging for agencies to dispose of unneeded property.
Need for Improved Capital Planning - Over the years, we have reported that prudent capital planning can help agencies to make the most of limited resources, and failure to make timely and effective capital acquisitions can result in increased long-term costs. GAO, Congress, and OMB have identified the need to improve federal decisionmaking regarding capital investment. Our Executive Guide, OMB’s Capital Programming Guide, and its revisions to Circular A-11 have attempted to provide guidance to agencies for making capital investment decisions. However, agencies are not required to use the guidance. Furthermore, agencies have not always developed overall goals and strategies for implementing capital investment decisions, nor has the federal government generally planned or budgeted for capital assets over the long term. Lack of a Strategic, Governmentwide Focus on Real Property Issues - Historically, there has not been a strategic, governmentwide focus on real property issues among decisionmakers. Although some efforts in recent years have attempted to address real property issues with some limited success, the problems have persisted and will continue to grow in magnitude unless they are adequately addressed from a governmentwide standpoint. Resolving the long-standing problems will require high-level attention and effective leadership by Congress and the administration and a governmentwide, strategic focus on real property issues. A strategic focus on real property would be rooted in having the appropriate incentives in place; ensuring transparency in the government’s actions; and fostering a higher level of accountability to stakeholders, including taxpayers. Also, it is important that key stakeholders develop an effective system to measure results. Having quality data would be critical to evaluate the progress of various reforms as they evolve. 
The magnitude of real property-related problems and the complexity of the underlying factors that cause them to persist put the federal government at significant risk in this area. Real property problems related to unneeded property and the need for realignment, deteriorating conditions, unreliable data, costly space, and security concerns have multibillion-dollar cost implications and can seriously jeopardize mission accomplishment. Because of the breadth and complexity of the issues involved, the long-standing nature of the problems, and the intense debate about potential solutions that will likely ensue, current structures and processes may not be adequate to address the problems. Given this, we concluded in our high-risk report that a comprehensive and integrated transformation strategy for federal real property is needed, and an independent commission or governmentwide task force may be needed to develop this strategy. Such a strategy, based on input from agencies, the private sector, and other interested groups, could
- comprehensively address these long-standing problems with specific proposals on how best to realign the federal infrastructure and dispose of unneeded property, taking into account mission requirements, changes in technology, security needs, costs, and how the government conducts business in the 21st century;
- address the significant repair and restoration needs of the federal infrastructure;
- ensure that reliable governmentwide and agency-specific real property data—both financial and program related—are available for informed decisionmaking;
- resolve the problem of heavy reliance on costly leasing; and
- consider the impact that the threat of terrorism will have on real property needs and challenges, including how to balance public access with safety.
To be effective in addressing these problems, it would be important for the strategy to focus on
- minimizing the negative effects associated with competing stakeholder interests in real property decisionmaking;
- providing agencies with appropriate tools and incentives that will facilitate businesslike decisions—for example, consideration should be given to what financing options should be available; how disposal proceeds should be handled; what process would permit comparisons between rehabilitation/renovation and replacement and among construction, purchase, lease-purchase, and operating lease; and how public-private partnerships should be evaluated;
- addressing federal human capital issues related to real property by recognizing that real property conditions affect the federal government’s ability to attract and retain high-performing individuals and the productivity and morale of employees;
- improving real property capital planning in the federal government by helping agencies to better integrate agency mission considerations into the capital decisionmaking process, make businesslike decisions when evaluating and selecting capital assets, evaluate and select capital assets by using an investment approach, evaluate results on an ongoing basis, and develop long-term capital plans; and
- ensuring credible, rational, long-term budget planning for facility sustainment, modernization, or recapitalization.
The transformation strategy should also reflect the lessons learned and leading practices of organizations in the public and private sectors that have attempted to reform their real property practices. Over the past decade, leading organizations in both the public and private sectors have been recognizing the impact that real property decisions have on their overall success. Better managing real property assets in the current environment calls for a significant departure from the traditional way of doing business.
Solutions should not only correct the long-standing problems we have identified but also be responsive to and supportive of agencies’ changing missions, security concerns, and technological needs in the 21st century. If actions resulting from the transformation strategy comprehensively address the problems and are effectively implemented, agencies will be better positioned to recover asset values, reduce operating costs, improve facility conditions, enhance safety and security, recruit and retain employees, and achieve mission effectiveness. In addition to developing a transformation strategy, it is critical that all the key stakeholders in government—Congress, OMB, and real property-holding agencies—continue to work diligently on the efforts planned and already under way that are intended to promote better real property capital decisionmaking, such as enacting reform legislation, assessing infrastructure and human capital needs, and examining viable funding options. Congress and the administration could work together to develop and enact reform legislation to give real property-holding agencies the tools they need to achieve better outcomes, foster a more businesslike real property environment, and provide for greater accountability for real property stewardship. These tools could include, where appropriate, the ability to retain a portion of the proceeds from disposal and the use of public-private partnerships in cases where they represent the best economic value to the government. Congress and the administration could also elevate the importance of real property in policy debates and recognize the impact that real property decisions have on agencies’ missions. Solving the problems in this area will undeniably require a reconsideration of funding priorities at a time when budget constraints will be pervasive.
However, experimenting with creative financing tools where they provide the best economic value for the government and allocating sufficient funding will likely result in long-term benefits. Without effective incentives and tools; top management accountability, leadership, and commitment; adequate funding; full transparency with regard to the government’s real property activities; and an effective system to measure results, long-standing real property problems will continue and likely worsen. However, the overall risk to the government and taxpayers could be substantially reduced if an effective transformation strategy is developed and successfully implemented, reforms are made, and property-holding agencies effectively implement current and planned initiatives. Since our high-risk report was issued, OMB has informed us that it is taking steps to address the federal government’s problems in the real property area. Specifically, it has formed a team within OMB to determine how to approach the resolution of these long-standing issues. To assist OMB with its efforts, we have agreed to meet regularly to discuss progress and have provided OMB with specific suggestions on the types of actions and results that could be helpful in justifying the removal of real property from the high-risk list. Madam Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information on this testimony, please contact Bernard L. Ungar on (202) 512-2834 or at [email protected]. Key contributions to this testimony were made by Kevin Bailey, Christine Bonham, Casey Brown, John Brummett, Maria Edelstein, Anne Kidd, Mark Little, Susan Michal-Smith, David Sausville, and Gerald Stankosky. Three examples of vacant, highly visible federal properties are the former main Department of Veterans Affairs (VA) hospital building in Milwaukee, Wisconsin; St.
Elizabeths Hospital in Washington, D.C.; and the former main post office building in downtown Chicago, Illinois. A VA-owned building at a health care facility campus in Milwaukee, Wisconsin, is an example of a long-held vacant federal property. This 134,000 square foot building, which is shown in figure 1, has been vacant for about 14 years. The building had been used as the campus’s main hospital but was vacated in 1989 primarily because a new main hospital was built on the campus. VA officials told us that in June 1999, a consulting firm—Economic Research Associates—issued a study in which it identified various options for VA to consider in trying to enhance the use of various vacant and underutilized buildings on the Milwaukee campus, including the former main hospital building. On the basis of the study’s results, VA officials have told us that a substantial investment of capital would in all likelihood be needed to convert this building for alternate use. For example, to convert the building for use as housing for the elderly, the study estimated that about $8.4 million to $9.3 million would be needed. VA officials also mentioned that various organizations, such as the Salvation Army and the Knights of Columbus, expressed some interest in leasing the building; but thus far, VA has not received any firm offers from these organizations. VA officials told us that in fiscal year 2001, VA incurred about $348,000 in maintenance costs for this building, which included such expenses as utilities, pest management, and security. Also, the officials said that VA currently has no alternate use or disposal plans for this building. However, VA officials have told us that updated information on the planned disposal of its vacant and underutilized property would in all likelihood be available after the Secretary of Veterans Affairs approves the results of the Capital Asset Realignment for Enhanced Services process, expected after December 2003. The west campus of St.
Elizabeths, which has 61 mostly vacant buildings containing about 1.2 million square feet of space on 182 acres, is held by the Department of Health and Human Services (HHS). During the Civil War, the hospital was used to house soldiers recuperating from amputations, and the property contains a civil war cemetery. In 1990, the property—which contains magnificent vistas of the rivers and the city—was designated a national historic landmark. This is the same designation given to the White House, the U.S. Capitol building, and other buildings that have historic significance. HHS has not needed the property for many years. In April 2001, we reported that the property had significantly deteriorated and had environmental and historic preservation issues that would need to be addressed in order for the property to be disposed of or transferred to another federal agency. In the last year, the General Services Administration (GSA), the District of Columbia (the District), HHS, and various public interest groups have been working to resolve the situation at St. Elizabeths. In May 2002, the Urban Land Institute formed an advisory panel that reported on several options for redeveloping the site. The panel recommended that the federal government transfer the west campus to the District and that the District should identify a master developer for the site. The panel further recommended that the master developer consider redeveloping the site into four campus areas without changing the character of the surrounding neighborhoods and without displacing existing residents. The panel recommended preserving the historic buildings through adaptive use and sensitive addition of new buildings. In addition to the panel, an executive steering committee and a working group, each consisting of representatives from the District, HHS, GSA, and public interest groups, have been established and HHS and GSA have proceeded with a number of actions to prepare the property for disposal.
These include preparing the property for “mothballing,” which is work done to minimize further deterioration of the property while the disposal process proceeds; determining the extent of environmental remediation needed; and conducting community outreach. Figure 2 shows the vacant, boarded-up Center Building, which opened in 1855 and served as the main hospital building. The former Chicago main post office building is a 2.5 million square foot facility that was vacated when it was replaced with a new facility in 1997. The U.S. Postal Service (USPS) is incurring about $2 million in annual holding costs for the property. According to USPS, the property was listed for sale and publicly offered. About five offers were received and the property was placed under contract of sale for $17 million. According to USPS, completion of the sale has been delayed due to the weakness of the Chicago real estate market and the lack of an agreement between the developer and the city of Chicago that would abate real estate taxes on a portion of the redevelopment cost for a number of years. According to USPS, this has created a “chicken and egg” situation for the developer. Potential tenants are unwilling to commit to the project unless they are sure it will go ahead. The city appears unwilling to grant the tax abatement until the users of the building are known. USPS is hopeful that the city will begin to address the issue. In addition to the holding costs USPS is incurring, a deteriorating façade will add additional repair costs to USPS’s annual budget. Furthermore, deterioration of the system that funnels train exhaust up through eight shafts to the roof of the building is a problem that will have to be addressed. The estimated cost of repair is about $10 million and is a condition of the sale. According to USPS, another factor, which bears on the cost of redevelopment, is that the State Historic Preservation Office wants to impose requirements on the redevelopment of the building.
Currently, according to USPS, these requirements will add millions of dollars to the redevelopment costs, and the buyer and USPS are reviewing them. USPS said that this project is challenging because of the large amount of space that needs to be developed. According to USPS, a breakthrough in current market conditions will have to be achieved, together with an agreement with the city, before this project can move forward. Figure 3 shows downtown Chicago with the vacant post office building highlighted. Under a public-private partnership, a contractual arrangement is formed between public and private sector partners that can include a variety of activities that involve the private sector in the development, financing, ownership, and operation of a public facility or service. In the case of real property, the federal government typically would contribute the property and a private sector entity contributes financial capital and borrowing ability to redevelop or renovate the property. Public-private partnerships can be a viable option for redeveloping obsolete federal property if they provide the best economic value for the government, compared with other options, such as federal financing through appropriations or sale of the property. However, most agencies are precluded from entering into such arrangements. The Department of Defense (DOD), Department of Veterans Affairs (VA), and U.S. Postal Service (USPS), however, have this authority. Proposed real property reform legislation in the 108th Congress (H.R. 2548 and H.R. 2573) is aimed at enhancing real property management. H.R. 2548 would provide GSA with enhanced asset management tools, including the use of public-private partnerships for itself and other landholding agencies. This bill also provides incentives for better property management, such as allowing agencies to retain funds generated through the use of the management tools to pay expenses associated with the property and fund other capital needs. H.R. 
2573 would provide GSA with the authority to enter into public-private partnerships for itself and other landholding agencies. Public-private partnerships need to be carefully evaluated to determine whether they offer the best economic value for the government, compared with other available options. In July 2001, we reported that 8 of 10 GSA properties were strong to moderate candidates for a partnership because there were potential benefits for both the private sector and the government. The potential internal rates of return (IRR) for the private partner ranged from 13.7 to 17.7 percent. It should be noted that we did not calculate the IRR for the government if the government had financed the entire project. This comparison would need to be made to determine which financing option offers the best economic value for the government. Furthermore, public-private partnerships will not necessarily work or be the best option available to address the problems in all federal properties. Two examples of properties that were strong candidates for a partnership were the Internal Revenue Service (IRS) Service Center in Andover, Massachusetts, and an office building in Portland, Oregon, that houses the Immigration and Naturalization Service known as the 511 Building. Since we profiled these properties in 2001, GSA officials said that they have been unable to pursue public-private partnerships for these properties because GSA continues to lack authority to enter into such arrangements. In August 2003, we also reported on other methods agencies are using to finance federal capital in addition to public-private partnerships, such as incremental funding, real property swaps, and outleases. The Andover Service Center was a strong candidate for a partnership in terms of strong federal demand, moderate private sector interest in development, and strong nonfederal demand for use of the property.
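As a rough illustration of the comparison the report says would still need to be made: the internal rate of return is the discount rate at which a project's cash flows have zero net present value, and it can be found numerically. The sketch below is a generic bisection solver; the cash-flow figures are hypothetical and are not drawn from the GAO analysis.

```python
def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal rate of return: the discount rate r at which the net
    present value (NPV) of the cash flows is zero, found by bisection.

    cash_flows[0] is the up-front outlay (negative); later entries are
    the net returns in each subsequent year."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    # Assumes NPV crosses zero exactly once on [lo, hi], which holds for
    # a conventional project: one outlay followed by positive returns.
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical project: a $100M outlay returning $30M per year for 5 years.
rate = irr([-100, 30, 30, 30, 30, 30])  # roughly 15 percent
```

Computing the IRR of a fully government-financed project this way, alongside the government's share of a partnership's returns, is the side-by-side analysis needed to judge which financing option offers the best economic value.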
The property is a 375,000 square foot, single-story, highly secured building on 37 acres that is in need of capital repairs. At the time of our review, IRS was leasing about 336,000 square feet in additional space in the area. GSA and IRS would like to consolidate IRS’s operations, and the property would be desirable for the city of Andover and local developers to develop. The redevelopment strategy involved a partnership to develop a small office park consisting of six 5-acre pads. Under this plan, the project could progress as follows:
- Year 1: Build a new 4-story, 700,000 square foot IRS facility and parking structure for current and expiring IRS leases; the complex would be at the rear of the site to allow for security and a phased development of the rest of the site.
- Year 2: IRS moves into the new facility and the old building is demolished; the partnership constructs another 250,000 square foot federal office building for non-IRS expiring leases.
- Years 3 and 4: Partnership constructs two more 250,000 square foot federal office buildings for compatible agency and private sector occupancy.
The analysis of this strategy projected a 14.4 percent lifetime IRR for the private partner and a 9.4 percent lifetime IRR for the government. Figure 4 is an aerial view of the IRS Service Center in Andover, Massachusetts. The 511 building was also a strong candidate for a partnership in terms of strong federal demand, strong private sector interest in development, and moderate nonfederal demand for use of the property. The 511 building is an historic, 6-floor building in a desirable location between downtown Portland and the trendy “Pearl District” that housed offices of the Immigration and Naturalization Service. The property includes a parking lot that was sought by the city for a pedestrian mall. The redevelopment strategy included renovating the existing historic office building to include storage use in the basement and retail or restaurant on the first floor.
In addition, the strategy included acquiring an additional site for construction of a 240,000 square foot, federal office building across the street. This strategy projected a 15.7 percent lifetime IRR for the private partner and a 12.7 percent lifetime IRR for the government. Figure 5 shows the 511 building (building in center of the picture). If the federal government were to completely finance the Andover and Portland projects, it would not have to share returns with a private sector partner. However, we did not determine what the returns would be in such a situation and how the returns would compare with the returns under a partnership arrangement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The federal government faces longstanding problems with excess and underutilized real property, deteriorating facilities, unreliable real property data, and costly space. These problems have multibillion-dollar cost implications and can seriously jeopardize agencies' missions. In addition, federal agencies face many challenges securing real property due to the threat of terrorism. This testimony discusses long-standing, complex problems in the federal real property area and what actions are needed to address them. Government data show that over 30 agencies control hundreds of thousands of real property assets worldwide, including facilities and land, which are worth hundreds of billions of dollars. Unfortunately, much of this vast, valuable portfolio reflects an infrastructure based on the business model and technological environment of the 1950s. 
Many of the assets are no longer effectively aligned with, or responsive to, agencies' changing missions and are therefore no longer needed. Further, many assets are in an alarming state of deterioration; agencies have estimated that restoration and repair needs are in the tens of billions of dollars. Compounding these problems are the lack of reliable governmentwide data for strategic asset management, a heavy reliance on costly leasing instead of ownership to meet new space needs, and the cost and challenge of protecting these assets against potential terrorism. Given the persistence of these problems and related obstacles, we designated federal real property as a new high-risk area in January 2003. Resolving these problems will require high-level attention and effective leadership by both Congress and the administration. Also, current structures and processes may not be adequate to address the problems. Thus, as we have reported, there is a need for a comprehensive, integrated transformation strategy for real property that will focus on some of the underlying causes that contribute to these problems, such as competing stakeholder interests in real property decisions, various legal and budget-related disincentives to businesslike outcomes, inadequate capital planning, and the lack of governmentwide focus on real property issues. It is equally important that Congress and the administration work together to develop and enact needed reform legislation to give real property-holding agencies incentives and tools they need to achieve better outcomes. This would also foster a more businesslike real property environment and provide for greater accountability. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Army has divided nonstandard equipment into two broad categories:
- Nontactical nonstandard equipment, which consists primarily of durable goods that are used to provide services for soldiers as well as foreign governments. This equipment includes but is not limited to fire trucks and ambulances, as well as equipment used for laundry and food service. Most of this equipment has been acquired through the Logistics Civil Augmentation Program (LOGCAP) and is managed and sustained by contractors under the LOGCAP contract (hereinafter referred to as contractor-managed, government-owned property).
- Tactical nonstandard equipment, which is commercially acquired or nondevelopmental equipment that is rapidly acquired and fielded outside the normal Planning, Programming, Budgeting, and Execution System and acquisition processes, in order to bridge capability gaps and meet urgent warfighter needs.
According to Army documents, as of March 2011, 36.5 percent of all Army equipment in Iraq was contractor-managed, government-owned property, with a value of approximately $2.5 billion. Furthermore, as of March 2011 an additional 10.7 percent of Army equipment in Iraq, valued at approximately $1.6 billion, was categorized as nonstandard equipment. According to Army officials, all equipment—standard and nonstandard—must be out of Iraq by December 31, 2011. We have reported on issues related to nonstandard equipment in Iraq in the past. In September 2008 we identified several issues that could affect the development of plans for reposturing U.S. forces from Iraq. One of those issues was that DOD, CENTCOM, and the military services had not clearly established roles and responsibilities for managing and executing the retrograde of standard and nonstandard equipment from Iraq. We also noted that data systems used during the retrograde process were incompatible, and although a fix for the data system incompatibility had been identified, it had not been implemented.
As a result, we recommended that the Secretary of Defense, in consultation with CENTCOM and the military departments, take steps to clarify the chain of command over logistical operations in support of the retrograde effort. We also recommended that the Secretary of Defense, in consultation with the military departments, correct the incompatibility weaknesses in the various data systems used to maintain visibility over equipment and materiel while they are in transit. DOD partially concurred with our first recommendation, and took steps to clarify the chain of command over logistical operations in support of the retrograde effort. DOD fully concurred with our second recommendation, stating that it was actively assessing various data systems used to maintain visibility over equipment and materiel while in transit. Finally, though we made no recommendations on this issue, we noted that maintaining accountability for and managing the disposition of contractor-managed, government-owned property may present challenges to reposturing in Iraq. In February 2009, in testimony before the Committee on Armed Services of the House of Representatives, we addressed factors that DOD should consider as the United States refines its strategy for Iraq and plans to draw down forces. We then included a section on managing the redeployment of U.S. forces and equipment from Iraq in our March 2009 report on key issues for congressional oversight. In November 2009, in a statement before the Commission on Wartime Contracting in Iraq and Afghanistan, we presented some preliminary observations on DOD’s planning for the drawdown of U.S. forces from Iraq, and in April 2010 issued a report that highlighted actions needed to facilitate the efficient drawdown of U.S. forces and equipment from Iraq.
In our April 2010 report, we noted that DOD had created new organizations to oversee, synchronize, and ensure unity of effort during the drawdown from Iraq, and had established goals and metrics for measuring progress. We also noted that, partly in response to our September 2008 report recommendations, representatives from the Secretary of Defense’s Lean Six Sigma office conducted six reviews to optimize theater logistics, one of which focused on the process for retrograding equipment from Iraq, including disposition instructions. Results from the Lean Six Sigma study influenced the development of a new data system—the Theater Provided Equipment Planner—which is intended to automate the issuance of disposition instructions for theater provided equipment. Complementing the Theater Provided Equipment Planner database was a second database—the Materiel Enterprise Non-Standard Equipment database—which catalogued all types of nonstandard equipment in Iraq in order to provide automated disposition. However, we also noted that officials in Iraq and Kuwait stated that, of all categories of equipment, they had the least visibility over contractor-managed, government-owned property, and that U.S. Army Central Command officials said they had low confidence in the accountability and visibility of nonstandard equipment. While these reports, testimonies, and statements focused primarily on plans, procedures, and processes within the CENTCOM area of responsibility, especially in Iraq and Kuwait, this report’s focus will be specifically on nonstandard equipment and MRAPs, and primarily on the plans, processes, and procedures that affect its disposition once it leaves the CENTCOM area of responsibility. MRAPs were first fielded in Iraq in May 2006 by the Marine Corps for use in western Iraq. A year later, the Secretary of Defense affirmed the MRAP program as DOD’s most important acquisition program. 
As of July 2011, DOD’s acquisition objective was 27,744 MRAPs; according to DOD officials, funding appropriated through fiscal year 2011 is sufficient to cover 27,740. The vast majority of these MRAPs were allocated to the Army for use in Iraq and, increasingly, in Afghanistan. According to Joint Program MRAP statistics, as of February 2011, MRAPs had been involved in approximately 3,000 improvised explosive device events, and have saved thousands of lives. We have also reported on MRAPs in the past. In October 2009, we reported positively on the quick action taken by the Secretary of Defense to declare the MRAP program DOD’s highest priority. However, we also noted as key challenges that long-term sustainment costs for MRAPs had not yet been projected and budgeted and that the services were still deciding how to incorporate MRAPs into their organizational structures. In November 2009, in a statement before the Commission on Wartime Contracting in Iraq and Afghanistan, we noted that although the Army had not yet finalized servicewide requirements for its MRAPs, it had designated Red River Army Depot as the depot that would repair MRAPs, and had issued a message directing the shipment of 200 MRAPs from Kuwait to Red River Army Depot as part of an MRAP Reset Repair Pilot Program. However, we also noted that as of October 2009, there were approximately 800 MRAPs in Kuwait awaiting transportation to the United States. In April 2010 we noted that the Army’s strategy for incorporating MRAPs into its ground vehicle fleet was still pending final approval. As part of the Iraqi drawdown effort, excess nonstandard equipment that is no longer needed in Iraq is either redistributed in the CENTCOM theater, disposed of, provided to other nations through foreign military sales, or packaged for retrograde to a variety of Defense Logistics Agency Distribution Depots or Sierra Army Depot in the United States. 
According to Army Materiel Command, the majority of the excess nontactical nonstandard equipment is sent to Sierra Army Depot. According to officials at Sierra Army Depot, as of April 2011 the depot had received a total of 22,507 pieces of nontactical nonstandard equipment worth over $114.9 million, and still had on hand approximately 13,200 items worth more than $75 million. Smaller items, which are stored in a warehouse, include desktop computers, computer monitors, printers, laptop computers, handheld palm computers, distress beacons, night vision goggles, rifle scopes, laser sights, radios, and radio frequency amplifiers. Larger items, which are stored outside, include all-terrain vehicles, generators, tractors, fire suppression systems, large refrigerators, and light sets. Once the items are received at Sierra Army Depot, they are removed from their containers, inventoried, evaluated for serviceability, catalogued, and placed in the appropriate location in the warehouse or, if they are larger items, in the appropriate outside storage location. Once catalogued, the items are simultaneously recorded in Sierra Army Depot’s property book for accountability. According to guidance issued by Headquarters, Department of the Army, Army Materiel Command is to provide Army Commands, Army Service Component Commands, and Army Direct Reporting Units access to the inventory of nontactical nonstandard equipment stored at depots such as Sierra Army Depot through the Materiel Enterprise Non-Standard Equipment database; the guidance also discusses use of the depot property book to view available nonstandard equipment. Using these means to view what is on hand at Sierra Army Depot, units can request items from Army Materiel Command, which will then process the request and coordinate its shipment to the requesting unit.
In January 2011, Army Materiel Command introduced another means by which units can requisition nontactical nonstandard equipment from Army Materiel Command. Called the “virtual mall,” this tool uses the Materiel Enterprise Non-Standard Equipment database as a means by which units can both view items at Sierra and other Army depots and request them for their use. According to Sierra Army Depot records, as of April 2011 it had shipped more than 7,600 individual pieces of nontactical nonstandard equipment to various Army organizations. The total value for these items exceeded $29 million. According to Sierra Army Depot officials, its single largest customer in terms of number of items shipped is U.S. Army Installation and Management Command (a Direct Reporting Unit), which, as of April 2011, had received almost 1,800 items of nontactical nonstandard equipment from the depot, including computers, computer monitors, radios, “jaws of life,” cameras, generators, metal detectors, and binoculars. All equipment shipped from Sierra Army Depot is in “as is” condition. Receiving units are responsible for shipping costs and for any sustainment funding. As shown in table 1 above, Army units are not the only organizations that can requisition excess nontactical nonstandard equipment. If an item of nontactical nonstandard equipment has not already been requisitioned by Army or other federal agencies, such as the Department of State, local and state governments may seek to acquire it through the National Association of State Agencies for Surplus Property (NASASP), which accesses it through the General Services Administration (GSA). United States Forces-Iraq makes its excess nontactical nonstandard equipment lists available to GSA and NASASP, which in turn share these lists with state and local governments. Moreover, DOD has facilitated and partially funded the placement of a GSA/NASASP liaison in Kuwait. 
This liaison enables state and local governments to make informed decisions about available nontactical nonstandard equipment and coordinates its cleaning, customs clearance, movement, and movement tracking. The only costs incurred by state and local governments for equipment they decide to accept are transportation costs, and DOD has offered GSA/NASASP access to the Defense Transportation System, which provides door-to-door delivery, pricing at the DOD rate, and seamless customs processing. Finally, GSA and NASASP officials are periodically invited to Sierra Army Depot to screen excess nontactical nonstandard equipment on site that they did not have an opportunity to screen in theater. According to Army documents, as of January 2011 local and state governments had claimed 20 items valued at over $398,000 from Iraq, and, as of April 2011, an additional 256 items valued at almost $6 million from Sierra Army Depot. These items include generators, forklifts, tool kits, bulldozers, light sets, and concrete mixers. As with Army units, excess nontactical nonstandard equipment is shipped in “as is” condition. Moreover, according to Army officials, some excess items, like generators, do not meet U.S. specifications and therefore require modification. Although Sierra Army Depot has been receiving nontactical nonstandard equipment from Iraq since November 2009, until recently the Army had no guidance as to how long that equipment should be stored before being either redistributed or disposed of. According to Army Materiel Command officials, the potential usefulness of much of the equipment stored at Sierra Army Depot will be lost if items just sit on the shelves.
Moreover, Sierra Army Depot records indicate that, as of April 2011, 59 percent of the nontactical nonstandard equipment received at the depot since November 2009 was still in storage there, while approximately 34 percent was shipped to Army organizations for reuse—$18.7 million to Army installations and bases throughout the world, $6.9 million to the Sierra Army Depot, and $4.2 million to the U.S. Army Installation and Management Command. Of the remaining 7 percent, approximately $6 million was donated to state and local governments and $3.2 million was transferred to disposal. On April 27, 2011, Headquarters, Department of the Army, disseminated a message that updated its processes and procedures for the requisitioning of excess nonstandard equipment stored at selected Army Materiel Command depots. According to this message, the intent is to extend the use of that equipment where appropriate. The message also discusses the use of the “virtual mall” under the Materiel Enterprise Non-Standard Equipment database and Sierra Army Depot’s property book for units to view equipment. It also states that the intent is that once an item is unserviceable or no longer operational, it can be disposed of through local Defense Logistics Agency Disposition Services. Moreover, the April 2011 message calls for the establishment of an executive forum to review and determine the final disposition of excess nonstandard equipment stored at Sierra Army Depot for more than 180 days that has not been identified for reuse. According to this message, this semiannual review is intended to enable the Army’s effort to apply due diligence in the final disposition of nonstandard equipment. In a follow-up to its April 27 message, Headquarters, Department of the Army, issued another message on June 2, 2011, that outlines the makeup of the executive forum, which met for the first time on June 18, 2011.
Finally, although neither message states this explicitly, according to a senior official, once a decision is made by the executive forum to dispose of nontactical nonstandard equipment that has been at Sierra Army Depot for more than 180 days, similar instructions will be included in the Materiel Enterprise Non-Standard Equipment database to prevent items that have been determined not to have future value or serviceability from being shipped back to the United States. In this way unnecessary transportation costs will be avoided. According to Army documents, in 2004, the Vice Chief of Staff of the Army directed U.S. Army Training and Doctrine Command’s Army Capabilities and Integration Center to identify promising capabilities in use in the CENTCOM theater that, based on their performance, should quickly become enduring programs of record or acquisition programs. Originally called Spiral to the Army, this effort eventually evolved into the Army’s Capabilities Development for Rapid Transition (CDRT) process. The CDRT process enables the Army to identify capabilities, most of which involve tactical nonstandard equipment that has been rapidly fielded, that are performing well in the CENTCOM theater and then to assess whether the capability should be retained in the Army’s current and future force. Developed by the Army Capabilities and Integration Center and the Army G-3/5/7, the CDRT process involves the periodic nomination and evaluation of tactical nonstandard equipment in use in the CENTCOM theater by a CDRT community of interest. This community includes representatives from the Office of the Secretary of Defense, the Joint Staff, various combatant commands, Army commands, Army service component commands, and various Army centers, such as the Army’s armor center, infantry center, and signal center. At present, the CDRT community of interest convenes quarterly to evaluate nominated capabilities.
To qualify as a candidate for consideration in the CDRT process, a piece of tactical nonstandard equipment must first be nominated and, in addition, must have been in use for at least 120 days and have undergone an operational assessment, among other qualifications. Once candidates are identified, a list is compiled by the Army Capabilities and Integration Center and the Army G-3/5/7 and then sent to the CDRT community of interest for assessment. Assessment of each item of equipment is performed through a scoring system based on survey responses from operational Army units. Based on the assessment, each piece of equipment is placed in one of three categories: Acquisition Program Candidate/Enduring, Sustain, or Terminate. Tactical nonstandard equipment placed in the “enduring” category is theater-proven equipment assessed as providing a capability applicable to the entire Army and to the future force; as such, it may become eligible to compete for funding in the Army’s base budget. Tactical nonstandard equipment placed in the “sustain” category is equipment assessed as filling a current operational need in the CENTCOM theater but as not applicable to the entire Army, not useful to the future force, or not yet recommended as an enduring capability. Sustain category tactical nonstandard equipment is resourced through overseas contingency operations funding, and is not programmed into the Army’s base budget. Finally, tactical nonstandard equipment placed in the “terminate” category is equipment deemed ineffective, obsolete, unable to fulfill its intended function, or without further utility beyond current use. Army policy states that tactical nonstandard equipment in this category is not to be allocated Department of the Army funding, although individual units may continue to sustain the equipment with unit funds.
Through the CDRT process, the Army has been able to accelerate the normal process by which requirements and needs are developed, as outlined in the Joint Capabilities Integration and Development System. That is because tactical nonstandard equipment placed in the enduring category as a result of the CDRT process enters the Joint Capabilities Integration and Development System at a more advanced developmental stage, as opposed to entering the system from the start. Accordingly, the Army views the CDRT process as a key means for determining the future disposition of rapidly fielded capabilities. Although one of the tenets of the CDRT process is to assess rapidly developed capabilities fielded to deployed units and move those proven in combat to enduring status as quickly as possible, a significant majority of the tactical nonstandard equipment evaluated to date has been categorized as sustain category equipment to be used only in the CENTCOM theater and paid for with overseas contingency operations funds. As of January 2011, the CDRT community of interest had met 10 times and considered 497 capabilities, of which 13 were nonmaterial capabilities. As a result, 30 material and 10 nonmaterial capabilities were selected as enduring, and an additional 13 capabilities were merged into other programs. An example of an enduring category material capability involving tactical nonstandard equipment is the Boomerang Gunshot Detector, which is an antisniper detection system that detects gunfire and alerts soldiers to the shooter’s location. A further 116 material capabilities were terminated. An example of a capability that was terminated because the CDRT community of interest considered it obsolete is the Cupola Protective Ensemble, which is protective clothing worn over body armor to protect troops from the blast effects of improvised explosive devices.
The remaining 328 capabilities, including for example the Combined Information Data Network Exchange, were placed in the sustain category. According to Army officials, this piece of tactical nonstandard equipment was placed in the sustain category because, although it works well in the CENTCOM theater, it would not be applicable elsewhere, as it is a database with intelligence information specific to that theater. Capabilities that are designated as sustain category items may be reviewed during future CDRT iterations to see if that decision is still valid, and selected excess equipment placed in this category and no longer required in theater is being warehoused by Army Materiel Command until called upon in the future. Army officials have also stated, however, that the majority of capabilities considered by the CDRT community of interest are placed in the sustain category because the Army has yet to make definitive and difficult decisions about whether it wants to keep them and cannot afford to sustain this equipment without overseas contingency operations appropriations. As we have previously recommended, DOD should shift certain contingency costs into the annual base budget to allow for prioritization and trade-offs among DOD’s needs and to enhance visibility in defense spending. The department concurred with this recommendation. The effectiveness of the Army’s CDRT process is also inhibited by the lack of a system to track, monitor, and manage this equipment, which, in turn, may be attributed to the absence of a single focal point with the appropriate authority to oversee the fielding and disposition of tactical nonstandard equipment. As stated above, to qualify as a candidate for consideration in the CDRT process, a piece of tactical nonstandard equipment must first be nominated. 
But without a system or entity responsible for tracking, monitoring, and managing all items of tactical nonstandard equipment in its inventory, some capabilities in the CENTCOM theater may not be nominated and, therefore, never considered by the CDRT community of interest. According to federal best practices reported in GAO’s Standards for Internal Control in the Federal Government, management is responsible for developing detailed policies, procedures, and practices to help program managers achieve desired results through effective stewardship of public resources. To this end, in March 2011 we reported that DOD lacks visibility over the full range of its urgent needs efforts—one of the methods through which tactical nonstandard equipment is obtained and fielded—including tracking the solutions developed in response to those needs. Additionally, we found that DOD does not have a senior-level focal point to lead the department’s efforts to fulfill validated urgent needs requirements. Accordingly, we recommended that DOD designate a focal point to lead the department’s urgent needs efforts and that DOD and its components, like the Army, develop processes and requirements to ensure tools and mechanisms are used to track, monitor, and manage the status of urgent needs. DOD concurred with our recommendation and stated that it would develop baseline policies that would guide the services’ own processes in tracking urgent needs and that the Director of the Joint Rapid Acquisition Cell would serve as the DOD focal point. In April 2010 the Vice Chief of Staff of the Army issued a memorandum calling for the development of a rapid acquisition/rapid equipping common operating picture and collaboration tool, as a means to increase the efficiency and transparency of Army urgent needs processes.
As of April 2011, however, Army officials stated that the system directed by the Vice Chief of Staff had yet to be deployed due to a lack of agreement over information sharing and over who would be responsible for the system. Army officials have repeatedly stressed that they do not have visibility over the entire universe of tactical nonstandard equipment in the CENTCOM theater and that the CDRT process considers only those capabilities that have been nominated. In the absence of a common operating picture and a single focal point responsible for tracking, monitoring, and managing Army tactical nonstandard equipment, it is therefore possible that a piece of nonstandard equipment exists in the CENTCOM theater that is more effective, less expensive, or both, than a comparable piece of equipment that has been considered by the CDRT community of interest. Moreover, without visibility over the universe of tactical nonstandard equipment, the Army cannot project reset and sustainment costs for this equipment or ensure that equipment is funded only to the extent needed to meet a continuing requirement. The Army has recently transitioned MRAPs from nonstandard to standard items of equipment and published detailed disposition plans outlining how the vehicles will be integrated into the Army’s force structure. These plans are outlined in the document Final Report, Army Capabilities Integration Center, Mine Resistant Ambush Protected Study II (final report), which was released on June 22, 2011. This final report followed an August 2010 U.S. Army Training and Doctrine Command study to determine the best means to integrate MRAPs into the overall Army force structure. The August 2010 study presented Army leaders with two courses of action. Although there were several similarities between the two—for instance, each called for the placement of approximately 1,700 MRAPs in training sets—there were also some substantial differences.
Specifically, the first course of action called for the placement of the majority of the Army’s MRAPs, more than 10,600, into prepositioned stocks. The second course of action allocated almost 4,000 fewer MRAPs to prepositioned stocks, and placed more with Army units. The August 2010 study recommended adoption of the first course of action because, according to Army officials, it offered the most balanced distribution of MRAPs among prepositioned stocks, training sets, reserve sets, and unit sets. Furthermore, the August 2010 study stated that other benefits that would accrue from the first course of action include reduced installation infrastructure effects and lower military construction costs, lower operations and maintenance costs, and lower life-cycle costs. For example, the study estimated that over a 25-year period, the first course of action would accrue $2.093 billion in life-cycle costs, while the second course of action would accrue $2.548 billion in life-cycle costs (these costs do not include onetime costs, discussed below, for upgrading and standardizing MRAPs that are returned to the United States). According to Army officials, the savings would result from having more MRAPs in prepositioned stocks, which, in turn, require less maintenance. Finally, according to Army Training and Doctrine Command officials, the first course of action provided the Army better operational flexibility, because MRAPs would already be positioned in forward areas and would not have to be transported from the United States, while the approach would still maintain sufficient numbers of MRAPs for training. On December 16, 2010, U.S. Army Training and Doctrine Command presented the results of its August 2010 study to the Army Requirements and Resourcing Board, for decision. 
On April 20, 2011, Headquarters, Department of the Army, published an order to provide guidance to develop an execution plan for the retrograde, reset, and restationing of the MRAP fleet, with an end state being an MRAP fleet that is properly allocated and globally positioned to support the full range of Army operations. The order did not give any specifics regarding the allocation of MRAPs across the Army ground vehicle fleet, however. According to Army officials, these specifics would be provided by the final report, which was released on June 22, 2011. According to the final report, MRAPs will be allocated as shown in table 2. Although the specific allocation of MRAPs varies slightly from that recommended in the August 2010 study (for example, the course of action recommended in the August 2010 study allocated 970 MRAPs to reserve stocks instead of the 746 adopted by the final report), the reasons given in the final report for allocating the MRAPs across the fleet were essentially the same as proposed in the August 2010 study: to provide a balanced distribution of MRAPs between units and prepositioned stocks, to provide strategic depth and operational flexibility by placing the bulk of the MRAPs in prepositioned stocks, and to provide a pool of reserve stock MRAPs that could be used to sustain prepositioned stock sets and maintain unit MRAP readiness. In addition, as had the August 2010 study, the final report highlighted the expected life-cycle costs for MRAPs based on the chosen allocation. This figure, $2.086 billion over 25 years, is slightly lower than the figure estimated in the August 2010 study. Though both the August 2010 study and the final report state the estimated life-cycle costs for MRAPs over 25 years, neither estimate fully follows recommendations in DOD’s instruction on economic analysis and decisionmaking, Office of Management and Budget (OMB) guidance for conducting cost-benefit analyses, and GAO’s Cost Estimating and Assessment Guide. 
For example, all three sets of guidance recommend that costs be calculated in or adjusted to present value terms, yet both the August 2010 study and the final report present costs in constant fiscal year 2011 dollars. While constant dollars allow for the comparison of costs across years by controlling for inflation, present value analysis is also recommended when aggregating costs to account for the time value of money. Because neither document performs a present value analysis, the timing of when the costs are expected to occur is not taken into account. According to DOD’s instruction for economic analysis and decisionmaking, “accounting for the time value of money is crucial to the conduct of an economic analysis.” Moreover, the August 2010 study and the final report present life-cycle costs in aggregate, yet OMB guidance regarding underlying assumptions suggests that key data and results, such as year-by-year estimates of benefits and costs, should be reported to promote independent analysis and review. DOD guidance suggests that the results of economic analysis, including all calculations and sources of data, should be documented down to the most basic inputs to provide an auditable and stand-alone document, and the GAO guide says that it is necessary to determine when expenditures will be made. Without a year-by-year breakout of the costs, decision makers have no insight into the pattern of expenditures, a perspective that could be important for future asset management and budgetary decisions. In addition, a year-by-year breakout of estimated costs would facilitate independent analysis and review. Complicating the issue surrounding life-cycle costs for MRAPs is that neither the August 2010 study nor the final report indicates that the “known” life-cycle costs, as they are labeled, are not, in fact, the total life-cycle costs.
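The present value point can be illustrated with a short calculation. The figures below are purely hypothetical (illustrative cost streams and an illustrative discount rate, not the Army’s actual MRAP estimates): two alternatives with identical constant-dollar totals can nonetheless differ once each year’s cost is discounted to present value, which is why the timing of expenditures matters.

```python
# Illustrative sketch only: hypothetical cost streams and discount rate,
# not the Army's actual MRAP life-cycle figures.

def present_value(costs, rate):
    """Discount a year-by-year cost stream (index 0 = current year) to present value."""
    return sum(cost / (1 + rate) ** year for year, cost in enumerate(costs))

# Two alternatives with the same constant-dollar total: A front-loads
# spending, B defers it (amounts in millions of constant dollars).
alt_a = [400, 300, 200, 100]
alt_b = [100, 200, 300, 400]
rate = 0.027  # hypothetical real discount rate

print(sum(alt_a), sum(alt_b))          # constant-dollar totals are identical
print(present_value(alt_a, rate))      # front-loaded stream: higher present value
print(present_value(alt_b, rate))      # deferred stream: lower present value
```

In constant dollars the two alternatives look identical (1,000 each), but the deferred stream has a lower present value (roughly 948 versus 974 at this rate); capturing that difference is the point of the present value recommendation in the DOD, OMB, and GAO guidance. The year-by-year streams used as inputs here are also the kind of breakout that OMB guidance recommends reporting to support independent review.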
According to Army officials, the costs depicted in both documents are differential costs, meaning that the only life-cycle costs that were used in the decision-making matrix were costs that would differ between the two courses of action. Conversely, costs associated with elements of each course of action that were the same were not included. For example, both courses of action delineated in the August 2010 study allocated 2,818 MRAPs to certain types of units (truck companies for convoy protection, for instance). Army officials added that costs associated with these MRAPs were not included in the decision matrices depicted in either the August 2010 study or the final report, and nowhere in either report is this indicated. According to Army officials, the Army does not yet know the true total MRAP life-cycle costs, although the Army’s MRAP program management office is leading an effort to complete such an estimate no later than fiscal year 2015. Nevertheless, the fact that neither document states that the life-cycle costs presented in each are not total costs may be misleading for decision makers. It also raises the question of the extent to which the Army considered the affordability of either alternative; the associated trade-offs in the sustainment of its current fleet of tactical and combat equipment; or offsets in future modernization procurement that might be necessary in its base budget to sustain the additional 18,259 vehicles, of which 4,727 will be assigned to units. Finally, although Army officials provided us with a copy of a sensitivity analysis, which all three sets of guidance recommend, neither the August 2010 study nor the final report indicates that a sensitivity or uncertainty analysis was done. According to DOD documents, as a joint program, MRAPs have been allocated, through July 2011, $44.1 billion in overseas contingency operations funding.
The military departments consequently have not had to fully account for long-term budgetary aspects and will eventually face substantial operational support costs in their annual base budgets. Army officials have likewise expressed concern about the loss of overseas contingency operations funding for MRAPs once the vehicles become part of the Army’s enduring force structure. Specifically, they are concerned about the Army’s ability to fund operations and maintenance costs for MRAPs within the Army base budget and the funding trade-offs that might have to be made with other major acquisition programs. On May 25, 2010, the Under Secretary of Defense (Comptroller) issued budget submission guidance to the DOD components stating that costs for non-war-related upgrades or conversions, home station training costs, and the storage of MRAPs not active in combat operations must be included in base budget estimates for fiscal years 2012 to 2016, thereby compelling the services to begin planning for funding MRAPs. Specific upgrades include increased armor protection, enhanced suspensions, and the standardization and consolidation of the many MRAP variants. In response, the Army has allocated $142.9 million in its fiscal year 2012 base budget submission for the upgrade of 224 MRAPs at Red River Army Depot and, all told, has planned to budget for the upgrade of 3,616 MRAPs for fiscal years 2012 through 2016, at a cost of $1.6 billion. However, the Army has not allocated funding for home station training or MRAP storage over the same period. According to the Army’s Tactical Wheeled Vehicle Strategy, one of the references used to inform the final report, it is important that the Office of the Secretary of Defense and the executive and legislative branches are kept informed of the Army’s needs to support its given missions and of any risks it foresees, so that thoughtful funding decisions can be made. 
In addition, this strategy states that the availability of adequate funding poses significant risks and that, if funding is lower than forecasted, the Army will be required to make difficult trade-offs that would, in turn, create increased operational risks. Moreover, in its April 20, 2011, order, Headquarters, Department of the Army, noted that one of the objectives of the order was to direct Planning, Programming, Budgeting, and Execution to ensure necessary action to identify and validate requirements used to inform future programming development. However, given the limitations of the cost estimates in both the August 2010 MRAP study and the final report on MRAPs, and the fact that the total cost estimates for the Army MRAP program are not yet complete, it is difficult to see how Planning, Programming, Budgeting, and Execution can be accomplished. Although the Army has plans and processes for the disposition of its nontactical and tactical nonstandard equipment, challenges remain that, if left unresolved, could affect plans for the eventual drawdown of U.S. forces from Iraq as well as Afghanistan. Specifically, without greater oversight over the universe of tactical nonstandard equipment currently being employed in Iraq and without a single focal point responsible for maintaining oversight of this equipment, there is a potential that some tactical nonstandard equipment that has been effective will be overlooked, and the Army could forfeit opportunities for cost-saving efficiency and for ensuring that servicemembers are provided the most effective combat system.
Yet the Army cannot afford to sustain this equipment without overseas contingency operations funds, and continuing to fund these items in this manner places a strain on the Army budget that is not transparent. Finally, future costs associated with MRAPs will remain uncertain without a thorough analysis of those costs based on DOD, OMB, and GAO best practices and the completion of a true total cost estimate. Moreover, without the disclosure of the complete set of costs associated with MRAPs, the Army, the Office of the Secretary of Defense, and congressional decision makers will be unable to ascertain the long-term budgetary effects of the program, which is critical information in a time when competing programs are vying for finite and increasingly constrained funding. To facilitate the Army’s ability to efficiently evaluate, integrate, and provide for the disposition of its nonstandard equipment being retrograded from Iraq, and supply DOD decision makers and Congress with accurate estimates of the future costs of these systems, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following three actions: finalize decisions about the future status of tactical nonstandard equipment, fund those items deemed as enduring capabilities in the Army base budget if applicable, and provide Congress with its plans for and estimates on future funding for or costs associated with any equipment the Army will continue to use in theater that will not become enduring capabilities; designate a senior-level focal point within the Department of the Army with the appropriate authority and resources to manage the service’s effort in overseeing the disposition of its tactical nonstandard equipment to include the implementation of a servicewide means to track, monitor, and manage this equipment; and undertake a thorough total life-cycle cost estimate for integrating MRAPs into its ground vehicle fleet in accordance with DOD, OMB, and GAO guidance and 
include costs for training, upgrades, standardization, and military construction and use this estimate to assess the affordability of its current plans and make adjustments to those plans if warranted; and provide the total life-cycle cost for integrating MRAPs into its ground vehicle fleet to Congress. In written comments on a draft of this report, DOD partially concurred with our first recommendation, did not concur with our second recommendation, and concurred with our third recommendation. These comments are included in appendix II. In addition, DOD provided technical comments that were incorporated, as appropriate. In response to our first recommendation that the Secretary of Defense direct the Secretary of the Army to finalize decisions about the future status of tactical nonstandard equipment, fund those items deemed as enduring capabilities in the Army base budget if applicable, and provide Congress with its plans for and estimates on future funding for or costs associated with any equipment the Army will continue to use in theater that will not become enduring capabilities, DOD partially concurred. In its response, DOD stated that the Capabilities Development for Rapid Transition (CDRT) process identifies enduring capabilities as Army Program Candidates and that the CDRT meets quarterly and provides recommendations to the DOD Joint Capabilities Development System, the Army Requirements Oversight Council, or the Joint Requirements Oversight Council depending on the acquisition strategy. DOD also stated that program managers and appropriate Army personnel then compete selected programs in the Program Operating Memoranda Joint Capabilities Assessment to secure funding and for inclusion in the President’s Budget Submission. Finally, DOD stated that the Army will provide the recommended report regarding any equipment the Army will continue to sustain in theater after Army forces return from Iraq. 
We support DOD’s rendering of a report to Congress outlining the equipment that it will continue to sustain in theater with overseas contingency operations funds. We also recognize that the CDRT process has resulted in a recommendation that certain equipment become programs of record and, as such, compete for funding in the Army’s base budget. However, as we reported, of the 484 material capabilities considered by the CDRT process as of January 2011, only 30, including Armored Security Vehicles and One-System Remote Video Terminals, have received such a recommendation while 328 material capabilities considered by CDRT were still being maintained by overseas contingency operations funds. Army officials familiar with the CDRT process have stated that the Army has yet to make definitive and difficult decisions about the majority of the material capabilities considered by CDRT and it cannot afford to sustain this equipment without overseas contingency operations funds. However, in order for the department to plan for and Congress to be informed of the future cost effect of sustaining new items of equipment after the end of overseas contingency operations funding, we continue to believe that the Army should eliminate this unknown by finalizing decisions about the future status of its tactical nonstandard equipment. DOD did not concur with our recommendation that the Secretary of Defense direct the Secretary of the Army to designate a senior-level focal point within the Department of the Army with the appropriate authority and resources to manage the service’s effort in overseeing the disposition of its tactical nonstandard equipment to include the implementation of a servicewide means to track, monitor, and manage this equipment. In its response, DOD stated that our recommendation does not account for the complexity covering requirements determination and approval, combat development, materiel development, management, and sustainment. 
In addition, DOD’s response stated that the Army uses the same processes for managing nonstandard equipment as it does to manage standard equipment and highlighted the responsibilities of the Army G-3/5/7, G-8, G-4, and Assistant Secretary of the Army for Acquisition, Logistics, and Technology with regard to nonstandard equipment. Moreover, in its response DOD maintained that the Army has visibility of the nonstandard equipment in theater and has undertaken extensive efforts to ensure all nonstandard equipment is brought to record and accounted for, and that the Army staff and the Life Cycle Management Commands review nonstandard equipment on a recurring basis to determine its disposition. In summation, DOD’s position is that the Army does not believe it advisable to treat tactical nonstandard equipment differently from nontactical nonstandard equipment or standard equipment. However, as the report points out, the Army already does treat tactical nonstandard equipment differently from nontactical nonstandard equipment and standard equipment, a fact underscored by the existence of the CDRT process, which is applicable only to tactical nonstandard equipment and not to any other types of equipment. In addition, Army officials repeatedly stressed to us that they do not have visibility over the universe of tactical nonstandard equipment in the CENTCOM theater. Army officials also told us that, despite an April 2010 memorandum from the Vice Chief of Staff of the Army calling for the development of a common operating picture and collaboration tool as a means to increase efficiency and transparency of Army urgent needs processes by which tactical nonstandard equipment is acquired, as of April 2011 one had yet to be fielded due to a lack of agreement over information sharing and over who would be responsible for the system. 
Moreover, in March 2011, DOD concurred with our recommendation that the department appoint a senior-level focal point to lead its urgent needs efforts and that its components, like the Army, develop processes and requirements to ensure tools and mechanisms are used to track, monitor, and manage the status of urgent needs. On the basis of the above, we continue to believe that like DOD, the Army should designate a senior-level focal point with the appropriate authority and resources to manage the service’s efforts in overseeing the disposition of its tactical nonstandard equipment to include the implementation of a servicewide means to track, monitor, and manage this equipment. DOD concurred with our third recommendation that the Secretary of Defense direct the Secretary of the Army to undertake a thorough total life-cycle cost estimate for integrating MRAPs into its ground vehicle fleet in accordance with DOD, OMB, and GAO guidance and include costs for training, upgrades, standardization, and military construction; that the Army use this estimate to assess the affordability of its current plans and make adjustments to those plans if warranted; and that the Army provide the total life-cycle cost for integrating MRAPs into its ground vehicle fleet to Congress. DOD commented that the Army staff, in conjunction with the Joint Program Office, is now conducting a Sustainment Readiness Review that addresses issues of total life-cycle costs for MRAPs, and that it will continue to refine its estimates to determine total life-cycle costs, which will inform future budget decisions as the Army continues to reset its force. We believe that if the Army’s total life-cycle cost estimate is conducted in accordance with DOD, OMB, and GAO guidance and used to develop an affordable plan for integrating MRAPs into its vehicle fleet as well as to provide Congress with a total life-cycle cost of its plan, its actions will be responsive to our recommendations. 
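As an illustration of what the recommended total life-cycle cost estimate involves, the sketch below rolls up the named cost categories (training, upgrades, standardization, and military construction, plus sustainment) over a multiyear horizon and discounts them to present value in the style that OMB guidance directs. Every dollar figure, the year count, and the discount rate are invented for illustration; none are Army estimates.

```python
# Hypothetical life-cycle cost roll-up of the kind the recommendation calls
# for. All inputs below are invented; a real estimate would draw them from
# program data and span the fleet's full service life.

DISCOUNT_RATE = 0.027  # illustrative real discount rate, not an official figure

def present_value(cost, year):
    """Discount a cost incurred in a given future year back to today."""
    return cost / (1 + DISCOUNT_RATE) ** year

def life_cycle_cost(annual_costs_by_category):
    """Sum the discounted costs across every category and every year.

    annual_costs_by_category: {category: [cost in year 0, year 1, ...]}
    """
    return sum(present_value(cost, year)
               for costs in annual_costs_by_category.values()
               for year, cost in enumerate(costs))

mrap_costs = {  # $ millions over a 3-year window, invented for illustration
    "training":              [50, 40, 40],
    "upgrades":              [0, 120, 90],
    "standardization":       [30, 30, 0],
    "military construction": [200, 100, 0],
    "sustainment":           [80, 80, 80],
}
print(round(life_cycle_cost(mrap_costs)))
```

A real estimate would extend the horizon across the fleet's full service life and document the source of each input, but the roll-up arithmetic is the same.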
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the extent to which the Army has plans and processes for the disposition of nontactical nonstandard equipment no longer needed in Iraq, we reviewed and analyzed relevant documents, including various Army messages that address the procedures for requisitioning retrograded nonstandard equipment from Iraq. In addition, we interviewed Army officials at several relevant organizations throughout the chain of command. We also reviewed Army Materiel Command briefings regarding the Materiel Enterprise Non-Standard Equipment database and Virtual Mall demonstrations and spoke with officials involved with the National Association of State Agencies for Surplus Property program. Furthermore, we conducted a site visit to Sierra Army Depot, where the vast bulk of the Army’s nontactical nonstandard equipment is shipped once it leaves Iraq, to view procedures and processes there for the evaluation, disposition, storage, and integration of nontactical nonstandard equipment. We also drew from our body of previously issued work related to nonstandard equipment to include various Iraq drawdown-related issues to identify areas where the Department of Defense (DOD) could make improvements in executing and managing the retrograde of standard and nonstandard equipment from Iraq. 
To determine the extent to which the Army has plans and processes for the disposition of tactical nonstandard equipment no longer needed in Iraq, we reviewed and analyzed relevant documents, including Army plans, messages, guidance, regulations, and briefings that addressed the subject. We also reviewed Army Audit Agency reports that specifically address the Capabilities Development for Rapid Transition process as well as the sustainment of tactical nonstandard equipment. In addition, we interviewed Army officials at several relevant organizations throughout the chain of command and made a site visit to Fort Monroe, Virginia, where we interviewed officials from U.S. Army Training and Doctrine Command and from the Army Capabilities and Integration Center, both of which play leading roles in determining the ultimate disposition of tactical nonstandard equipment. We also interviewed officials from the Joint Improvised Explosive Device Defeat Organization to discuss the interface between that organization and the Army’s processes for integrating tactical nonstandard equipment into its inventory. Finally, we drew from our body of previously issued work examining DOD’s urgent needs processes and the need for DOD to obtain visibility over these efforts. To determine the extent to which the Army has plans and processes for the disposition of Mine Resistant Ambush Protected vehicles (MRAP) no longer needed in Iraq, we reviewed and analyzed relevant documents, including Army plans, messages, guidance, and briefings that addressed the subject. In particular, we reviewed the Army’s MRAP disposition plans included in the Final Report, Army Capabilities and Integration Center, Mine Resistant Ambush Protected Study II, and also considered in our analysis the Army’s Tactical Wheeled Vehicle Strategy. 
We also analyzed Army cost estimates for integrating MRAPs into its ground vehicle fleet and compared these estimates with DOD’s instruction for economic analysis, the Office of Management and Budget’s guidance for conducting cost-benefit analyses, and GAO’s Cost Estimating and Assessment Guide. We interviewed relevant officials with direct knowledge of the Army’s future plans for its MRAPs throughout the chain of command to include officials from the Army’s budget office and Red River Army Depot, where MRAPs will be shipped once they are no longer needed in Iraq or Afghanistan. Moreover, we made a site visit to Fort Monroe, Virginia, where we interviewed officials from U.S. Army Training and Doctrine Command and from the Army Capabilities and Integration Center, both of which were tasked to complete the MRAP Study II Final Report; and since the MRAP program is currently a joint program under U.S. Marine Corps lead, we also interviewed officials from the MRAP Joint Program Office. Finally, we also drew from our body of previously issued work regarding MRAPs to include the rapid acquisition of these vehicles as well as the challenges the services have faced with incorporating MRAPs into their organizational structures. In addition to the contact named above, individuals who made key contributions to this report include Larry Junek, Assistant Director; Nick Benne; Stephen Donahue; Guy LoFaro; Emily Norman; Charles Perdue; Carol Petersen; Michael Shaughnessy; Maria Storts; and Cheryl Weissman. | As of March 2011, the Army had over $4 billion worth of nonstandard equipment in Iraq—that is, equipment not included on units' standard list of authorized equipment. Concurrently, the Department of Defense (DOD) has acquired over $44 billion worth of Mine Resistant Ambush Protected vehicles (MRAP), most of which have been allocated to the Army. This equipment must be withdrawn from Iraq by December 31, 2011. 
GAO examined the extent to which the Army has plans and processes for the disposition of (1) nontactical nonstandard equipment; (2) tactical nonstandard equipment; and (3) MRAPs that are no longer needed in Iraq. In performing this review, GAO analyzed relevant documents, interviewed Army officials, and visited Sierra Army Depot, where most nontactical nonstandard equipment is shipped once it leaves Iraq. The Army has plans and processes for the disposition of nontactical nonstandard equipment (e.g., durable goods that are used to provide services for soldiers), and recently created a policy regarding the length of storage time. Excess nontactical nonstandard equipment is either redistributed in the U.S. Central Command theater, disposed of, provided to other nations through foreign military sales or other means, or shipped to depots in the United States. In April 2011, the Army issued two messages that updated its procedures for requisitioning excess nonstandard equipment stored at Sierra Army Depot and created a forum to determine its final disposition instructions. The intent was also to extend use of this equipment by making it available to Army units; when an item is deemed not operational, to dispose of it in theater; and to enter these instructions in a disposition database so they will no longer be shipped back to the United States. The Army would then avoid unnecessary transportation costs. The Army has not made disposition decisions for most of its tactical nonstandard equipment (i.e., commercially acquired or non-developmental equipment rapidly acquired and fielded outside the normal budgeting and acquisition process), and its disposition process is impaired by a lack of visibility over this equipment and the absence of a focal point to manage this equipment. The Capabilities Development for Rapid Transition process enables the Army to assess tactical nonstandard equipment already in use in the U.S. 
Central Command theater and determine whether it should be retained for the Army's current and future force and subsequently funded in the Army's base budget. However, the decision about most of the equipment considered by the process is to continue to fund it with overseas contingency operations funds. In addition, the Army has no system to track, monitor, and manage its inventory of tactical nonstandard equipment and has no single focal point to oversee this equipment. Best practices as cited in GAO's Standards for Internal Control in the Federal Government call for effective stewardship of resources by developing detailed policies, procedures, and practices. Although the Army has plans for the disposition of its MRAP fleet, its cost estimates are incomplete and do not follow cost-estimating best practices. The Army conducted a study to effectively guide its integration of MRAPs into its force structure. The selected option placed the majority of MRAPs in prepositioned stocks. However, this study did not incorporate analyses of future costs based on Department of Defense, Office of Management and Budget, and GAO cost-estimating guidance providing best practices; nor did it delineate total costs for sustainment of its MRAP fleet or when those costs would be incurred. Without such information, decision makers lack the perspective necessary to make asset-management and budgetary decisions. Although Army officials stated that they are working toward providing an estimate of future MRAP costs, this has not yet been completed. GAO recommends that the Secretary of Defense direct Army authorities to (1) finalize decisions about the future status of tactical nonstandard equipment; (2) designate a focal point to oversee this equipment; and (3) undertake a thorough life-cycle cost estimate for its MRAPs. DOD concurred with our third recommendation, partially concurred with our first, and did not concur with the second. 
Given DOD's lack of visibility over tactical nonstandard equipment, GAO continues to believe a focal point is needed. |
MPOs, representing local governments and working in coordination with state departments of transportation and major providers of transportation services, have responsibility for the regional transportation planning processes in urbanized areas. (See figure 2 for a summary of these processes.) A core function of MPOs is to establish and manage a fair and impartial setting for effective transportation decision making in an urbanized area. To receive federal transportation funding, any project in an urbanized area must emerge from the relevant MPO and state department of transportation planning process. MPOs, which generally have a governing policy board consisting of local elected officials and appropriate state and public transportation officials, facilitate decision making on regional transportation issues including major capital investment projects and priorities. MPOs also generally have a technical advisory committee (including engineers, planners, and other local staff); a citizens’ advisory committee; and additional committees, such as a bicycle and pedestrian committee or a freight advisory committee. MPO staff assist the MPO board by preparing documents, fostering interagency coordination, facilitating public input and feedback, and managing the planning process. Staff may also provide committees with technical assessments and evaluations of proposed transportation initiatives. Created to carry out a federally mandated transportation planning process, MPOs’ core membership is spelled out in law, but the organizational structure and staff arrangements were designed to be determined by agreement between local officials and the state. The size of the populations represented by individual MPOs varies. For instance, about 52 percent of the 381 MPOs represent populations of fewer than 200,000 people; 36 percent of MPOs represent populations of 200,000 to 999,999 people; and 11 percent of MPOs represent populations of 1 million or more people. 
However, the largest MPOs—those representing more than 1 million people—represent about 49 percent of the country. (See figure 1 for a summary of MPO sizes.) All MPOs have the same basic planning requirements. Specifically, all MPOs are required to produce the following: long-range (20-year) transportation plans; short-range (4-year) Transportation Improvement Programs; annual statements of planning priorities and activities (generally called a Unified Planning Work Program or UPWP); and public participation plans. Transportation improvement programs (TIP), based on the long-range plan, should be designed to achieve the area’s transportation goals using spending, operating, management, and financial tools. The area’s transportation goals are determined by the MPO’s policy board, including representatives from relevant jurisdictions and transportation operators, through interactions between stakeholders and the public for the purpose of identifying visions for the community’s future. This process allows the region as a whole to determine how it should allocate its limited transportation resources among the various capital and operating needs of the area, based on local and regional priorities. Both the TIP and the long-range plan must be fiscally constrained—that is, the total estimated cost of the planned transportation improvements cannot exceed anticipated levels of funding. MPOs must develop these plans and programs in cooperation with their state department of transportation as well as local transit operators, land-use entities, and environmental resource agencies. Where they exist in their region, MPOs also consult with tribal governments, airports, Amtrak, or freight rail interests during the planning process. (See figure 2 for a summary of the role of the MPO, state, and federal government in developing the long-range plan and TIP.) Beyond the requirements common to all MPOs, some MPOs have additional planning requirements. 
For example, MPOs serving urbanized areas with populations of over 200,000 people, which are referred to as transportation management areas (TMA), are required to develop a Congestion Management Process (CMP) that identifies actions and strategies to reduce congestion. In addition, MPOs containing areas that do not conform to federal air quality standards (i.e., nonattainment areas) or areas that have recently come into conformance with the standards (i.e., maintenance areas) are required to ensure that planned transportation improvements will not cause new air quality violations, worsen existing violations, or delay timely attainment of the standards. To ensure that such plans will not negatively affect regional air quality, MPOs must conduct what is termed “conformity analysis” for proposed transportation improvements. To create these transportation plans and programs, MPOs consider a variety of factors, including local travel forecasts and federal considerations. For example, MPOs forecast future travel with the assistance of computerized travel-demand models. These models provide information on how urban growth and proposed facility and operational investments will affect the operation of the transportation system. Such models are complex and require as inputs extensive current information on roadway and transit system characteristics and operations, as well as current and forecast demographic information. Creating and operating the models requires a high degree of technical training and expertise. Additionally, when developing these plans and programs, MPOs must consider specific statutorily defined planning factors. 
These factors require that the metropolitan planning process provide for consideration of projects and strategies that will support the economic vitality of the metropolitan area, especially by enabling global competitiveness, productivity, and efficiency; increase the safety of the transportation system for motorized and nonmotorized users; increase the security of the transportation system for motorized and nonmotorized users; increase the accessibility and mobility of people and freight; protect and enhance the environment, promote energy conservation, improve the quality of life, and promote consistency between transportation improvements and state and local planned growth and economic development patterns; enhance the integration and connectivity of the transportation system, across and between modes, for people and freight; promote efficient system management and operation; and emphasize the preservation of the existing transportation system. To carry out this regional planning process, 1.25 percent of federal-aid highway funding from the Interstate Maintenance, National Highway System, Bridge, Surface Transportation Program, and Congestion Mitigation and Air Quality (CMAQ) programs is apportioned to the states as metropolitan planning funds. Federal legislation has maintained, and periodically increased, the funding for MPO activities over time. (See figure 3.) These federal funds are distributed to states based on population. Generally states then provide each of their MPOs with baseline funding and distribute any remaining balance according to a formula. While the states can use a range of factors in their formulas, such as congestion levels, they are required to take population into account. Federal planning dollars must also be matched by state and local governments. 
Specifically, state and local governments must provide at least 20 percent of metropolitan planning funds, although some state and local governments have to provide more than 20 percent in funding to perform all of their necessary planning activities. Federal and state governments oversee this regional planning process. At the federal level, FTA and FHWA work together to perform federal certification reviews—certifying that each TMA has carried out its planning according to the applicable federal statutes. More specifically, the certification review requires that the federal government assess TMAs every 4 years to determine how well they are working with the transportation-related organizations, local governments, public transportation operators, and citizens in their area, as well as with the state departments of transportation, to meet the many statutory and regulatory requirements applicable to the planning process. Additionally, the certification review assesses the quality of the required planning documents. The certification review includes a desk review of the MPO’s plans and a site visit, among other things. Additionally, all MPOs, including both TMAs and non-TMAs, must also self-certify that their planning process meets the federal requirements. States also participate in the regional planning process by, for example, reviewing and approving the MPO’s TIP. If the state approves the TIP, the state must incorporate the TIP, without change, into the statewide transportation improvement program (STIP). If the state does not approve the TIP, the MPO and the projects included in the TIP are not eligible for federal funding. This requirement compels states to coordinate with MPOs and vice versa. The staff size and structure of MPOs vary significantly. Some MPOs are supported by one or two staff, while a few have over 100 full or part-time staff. Most MPOs have a relatively small staff, with a median of four full-time staff per MPO, based on our survey. 
(See table 1 for a summary of the number of staff by size of MPO.) The type and structure of the organizations housing MPOs also vary across the country. The structure of an MPO is determined by agreement between relevant local governments and the state, and therefore the extent to which these local governments or other regional organizations support MPO activities varies. These organizations can support MPOs by housing staff within their organization, which can include providing the personnel and facilities necessary for MPO activities. Some MPOs are housed and staffed by a local jurisdiction (such as a city or county government) within its boundaries, others by a regional planning council, and still others operate independently. According to our survey respondents, 71 percent of MPOs are a part of agencies such as regional councils and city, county, or state governments. Eighteen percent of MPOs report that they operate independently. Beyond their staff and structure, MPOs also vary in terms of their funding sources and amounts. Federal planning funds—FHWA PL funds and FTA Section 5303 funds—generally make up a large portion of the MPO budget for conducting necessary studies and developing transportation plans, programs, and other documents. According to our survey respondents, about 80 percent of MPOs receive a majority of their planning funds from these federal sources. The amount of matching funds provided by state and local sources also varies considerably by MPO. For example, officials from one state department of transportation we spoke to said that the small MPOs receive considerably more than the required 20 percent of state and local matching funds for transportation planning. Officials from another state told us that although they only receive the required 20 percent match, they also provide technical support to some MPOs. 
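The funding mechanics described above (the 1.25 percent planning set-aside, a population-driven state distribution with a baseline amount, and a state/local match of at least 20 percent) can be sketched as follows. The baseline figure, dollar amounts, and MPO populations are hypothetical, and actual state distribution formulas vary.

```python
# Illustrative sketch of the metropolitan planning (PL) funding flow:
# 1.25 percent of the covered federal-aid highway programs is set aside,
# states distribute it to MPOs largely by population, and state/local
# sources must cover at least 20 percent of total planning costs.
# All dollar figures, the baseline amount, and populations are invented.

PL_SET_ASIDE = 0.0125          # 1.25% of covered federal-aid highway funds
MIN_STATE_LOCAL_MATCH = 0.20   # minimum state/local share of planning costs

def apportion_pl_funds(covered_highway_funds, mpo_populations, baseline=50_000):
    """Give each MPO a baseline amount, then split the remainder by population."""
    pool = covered_highway_funds * PL_SET_ASIDE
    remainder = pool - baseline * len(mpo_populations)
    total_pop = sum(mpo_populations.values())
    return {mpo: baseline + remainder * pop / total_pop
            for mpo, pop in mpo_populations.items()}

def minimum_match(federal_pl_award):
    """Federal PL funds cover at most 80% of planning costs, so back out
    the smallest state/local contribution consistent with that cap."""
    total_program = federal_pl_award / (1 - MIN_STATE_LOCAL_MATCH)
    return total_program - federal_pl_award

awards = apportion_pl_funds(40_000_000, {"Metro A": 1_200_000, "Metro B": 300_000})
print({mpo: round(amount) for mpo, amount in awards.items()})
print(round(minimum_match(awards["Metro A"])))
```

Because federal funds may cover at most 80 percent of the work program, the minimum match works out to one quarter of the federal award.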
In addition to federal planning funds and the required state and local match, some MPOs receive and use other funds, such as dedicated local taxes and transit fare box revenue. Finally, according to FTA, while most federal transit funds designated for urban areas are apportioned directly from FTA to the transit operator, some funds are apportioned to MPOs, which then allocate those funds themselves. The technical capacity of MPOs to develop travel demand forecasts—a crucial component of the long-range plans—also varies. Some MPOs— about 45 percent of all our survey respondents—use their own models to develop most, if not all, of their forecasts, while 51 percent rely on consultants or their state department of transportation to conduct their modeling. Small MPOs are less likely to conduct their own travel demand forecasts, with only 30 percent reporting that they have their own modeling, according to our survey. Further, the federal government gives local transportation planning agencies, including MPOs, the flexibility to choose their own transportation models without being subject to minimum standards or guidelines. As a result, the type of model used by MPOs also varies. Of the MPOs that reported in our survey that they use a model to conduct their travel demand forecasts, a large majority said that they use a four-step model, which uses survey and other data to estimate future trips and assign those trips to different modes. Seven survey respondents indicated that they use activity-based models, which are tied more closely to household and traveler characteristics and behavior and therefore should, in concept, permit MPOs to address policy questions that cannot be treated with the conventional four-step models. 
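The conventional four-step process referred to above (trip generation, trip distribution, mode choice, and route assignment) can be sketched in miniature as follows. The zone data, trip rates, gravity-model decay, and logit utilities are all invented for illustration; production MPO models use survey-calibrated parameters and network-based assignment.

```python
# Minimal sketch of a four-step travel demand model for a hypothetical
# two-zone region. Every parameter below is invented for illustration.
import math

def trip_generation(households, trips_per_household=3.0):
    """Step 1: estimate trips produced in each zone from demographics."""
    return {zone: hh * trips_per_household for zone, hh in households.items()}

def trip_distribution(productions, attractions, travel_times, beta=0.1):
    """Step 2: gravity model -- split each origin's trips among destinations,
    weighted by attractiveness and a travel-time decay."""
    trips = {}
    for origin, produced in productions.items():
        weights = {dest: attr * math.exp(-beta * travel_times[(origin, dest)])
                   for dest, attr in attractions.items()}
        total = sum(weights.values())
        for dest, w in weights.items():
            trips[(origin, dest)] = produced * w / total
    return trips

def mode_choice(od_trips, auto_utility=-0.5, transit_utility=-1.0):
    """Step 3: binary logit split of each origin-destination flow."""
    share_auto = math.exp(auto_utility) / (math.exp(auto_utility) + math.exp(transit_utility))
    auto = {od: t * share_auto for od, t in od_trips.items()}
    transit = {od: t * (1.0 - share_auto) for od, t in od_trips.items()}
    return auto, transit

def assign_trips(auto_trips):
    """Step 4 (all-or-nothing): load each flow onto its single direct link."""
    return dict(auto_trips)

households = {"A": 1000, "B": 500}
attractions = {"A": 800, "B": 1200}
times = {("A", "A"): 5, ("A", "B"): 15, ("B", "A"): 15, ("B", "B"): 5}

produced = trip_generation(households)
distributed = trip_distribution(produced, attractions, times)
auto, transit = mode_choice(distributed)
link_volumes = assign_trips(auto)
```

Each step hands its output to the next, which is why these forecasts depend so heavily on current demographic, roadway, and transit inputs.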
For example, four-step models are not suited to estimating the emissions effects of small transportation projects or linking these effects to air quality; more advanced modeling techniques, such as activity-based models, are needed to estimate such effects. The Transportation Research Board (TRB) also noted that although the four-step process is common, there are considerable variations in the completeness and complexity of the models and data employed. Further, they reported that MPOs vary significantly in the number of staff devoted to travel forecasting. Through our survey and interviews, we also found that many MPOs have additional responsibilities that are not federally required, many of which extend beyond transportation planning. For some MPOs, these additional responsibilities and activities are required by their state, while other MPOs have taken on these responsibilities over time, based on regional needs. Land-use planning. According to our survey respondents, many MPOs conduct all or a portion of their region’s land-use planning, and for some this is a state requirement. Specifically, 70 percent of MPOs have some land-use planning responsibilities, with the larger MPOs generally reporting that they have more of these planning responsibilities than small MPOs. Eleven percent of survey respondents specifically said that their land-use responsibilities are required by their state. In practice, some MPOs integrate land-use planning into their transportation planning process by considering potential land-use scenarios along with proposed projects. Some MPOs have also led public processes to develop an integrated transportation and land-use “vision” for a region and to evaluate future transportation and land-use scenarios. Similarly, for a number of MPOs, various forms of land-use models are now part of the process for analyzing the growth of the region and studying the land-use impacts of alternative transportation investment programs. 
Generally, though, MPOs do not have authority to make land-use decisions. Rather, local jurisdictions typically have the authority to make such zoning and other decisions. Project selection. By determining which projects are to be included in TIPs, all MPOs have a role in determining which projects will ultimately be funded. However, only certain MPOs have the authority to select—from a list of projects in an approved TIP—which projects are to be implemented in the most immediate time frame, using federal funds available to a metropolitan planning area. In areas designated as TMAs, the MPO, in consultation with the state and public transportation operators, selects from an approved TIP all projects that are to be implemented using funding under Title 23 or under Chapter 53 of Title 49 of the U.S. Code (excluding projects on the National Highway System and projects funded under the Bridge, Interstate Maintenance, and Federal Lands Highway programs). Furthermore, MPOs in air quality nonattainment areas also have the ability to use CMAQ funds. Additionally, in California, regional organizations have project selection authority for 75 percent of their region’s portion of the state’s TIP funds (which includes both federal and state highway money). Project implementation. Some MPOs also have the responsibility for implementing transportation projects. Generally, MPOs do not take the lead in implementing transportation projects; rather, they play a coordinating role in planning and programming funds for projects and operations. Usually, local jurisdictions, transit operators, or state governments take the lead in implementing projects. However, 37 percent of survey respondents—representing MPOs of all sizes—said that they implement projects. For example, one large MPO we spoke with utilizes its local, state, and federal funds to implement projects by leveraging this money with regional partners to construct large-scale transportation projects. 
Toward this end, the MPO established a program aimed at quickly reducing congestion in particular areas. This initiative uses small-scale projects, such as traffic signal optimization, for congested corridors—which can be implemented within 2 years and are largely funded and carried out by the MPO. Transit operations. Sixteen percent of MPOs responded in our survey that they have some responsibility for operating all or a portion of their regional transit system. For example, one western MPO is both the transit authority—providing mass transit that connects throughout the region—and the transportation-planning agency for the greater metropolitan area. Another MPO noted in our survey that rather than operating the transit system, it serves as the planning staff for both the region’s MPO and the transit agency. Environmental planning. Twenty-one percent of MPOs responding to our survey said that they conduct air quality or emissions analysis, beyond the federally required conformity process. Further, 32 percent of MPOs responding to our survey said that they conduct additional environmental or water quality planning. For example, one state we visited requires its MPOs to consider how their long-range transportation plan increases water and energy conservation and efficiency. MPOs we surveyed and interviewed cited several funding challenges that impact their ability to conduct transportation planning. About 85 percent of all MPOs responding to our survey cited the lack of transportation planning funding as a challenge to transportation planning. MPOs we surveyed and interviewed also cited challenges related to the lack of flexibility of transportation planning funds. Specifically, about half of all MPOs responding to our survey cited the lack of flexibility of funding as a challenge. While FTA allows planning funds to be used for a broad range of planning activities, FHWA is more prescriptive in how planning funds can be spent.
For example, FHWA guidance precludes using planning funds for projects’ environmental analyses that clearly extend beyond transportation planning. Furthermore, officials at a few MPOs we spoke with stated that it is unclear which activities can be undertaken with planning funds, particularly FHWA planning funds, and that this ambiguity inhibits comprehensive planning by preventing them from using transportation planning dollars for other purposes where necessary. DOT officials we spoke with agreed that the eligibility for FHWA planning funds is fairly narrow, but noted that Surface Transportation Program funds can be used for metropolitan planning and are more flexible. MPOs also cited a few other funding-related challenges. First, many MPOs reported having difficulty securing local matching funds for federal transportation planning dollars. About 66 percent of survey respondents overall cited this as a challenge. For example, one MPO we spoke with has been unable to utilize all of the federal planning funds it has been allocated because the MPO cannot meet its local matching requirements. As a result, the MPO has not been able to hire needed staff. Second, MPOs also had mixed opinions regarding the fiscal constraint requirement—that MPOs develop plans that correspond to reliable revenue projections. About 84 percent of survey respondents cited the fiscal constraint requirement as a challenge. One MPO official told us that this is a challenge because the MPO has to submit its TIP without full knowledge of the state’s available funding; this makes creating a realistic fiscally constrained TIP difficult. A previous GAO report found similar concerns. In particular, for MPOs in some urban areas, financially constraining the transportation improvement program meant abandoning proposed projects because of a lack of projected revenue.
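At its core, the fiscal constraint requirement discussed above is a feasibility check: the costs programmed in a TIP cannot exceed the revenues reasonably expected to be available. A minimal sketch, using entirely hypothetical project names and dollar figures, of how an MPO might screen a prioritized candidate list against a revenue projection:

```python
def fiscally_constrain(projects, projected_revenue):
    """Walk a candidate list in priority order, funding each project that
    still fits within remaining projected revenue; the rest are deferred.
    `projects` is a list of (name, cost) pairs, highest priority first."""
    funded, deferred = [], []
    remaining = projected_revenue
    for name, cost in projects:
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
        else:
            deferred.append(name)
    return funded, deferred

# Hypothetical TIP: $50M of projected revenue against $65M of candidates.
funded, deferred = fiscally_constrain(
    [("Bridge rehab", 20e6), ("Transit signal priority", 5e6),
     ("Corridor widening", 30e6), ("Bike network", 10e6)],
    projected_revenue=50e6)
```

The difficulty MPO officials describe is not the arithmetic but the inputs: when the state’s available funding is unknown, `projected_revenue` is a guess, and the funded/deferred split is only as realistic as that projection.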
Although developing a fiscally constrained plan can be difficult, we have also previously reported that the fiscal constraint requirement has been largely beneficial to the planning process because it has led MPOs to obtain more reliable revenue projections from state departments of transportation and transit agencies and to exclude those projects that could not be financed within budget constraints. Third, beyond funding challenges related to planning, officials at a few small MPOs we spoke with stated that their region had insufficient funding to keep pace with the transportation projects needed. In fact, at one small MPO, an official estimated that the region received about 10 percent of the funding needed to construct necessary projects. This lack of funding could potentially limit the effectiveness of MPO planning because fewer projects from the TIP can be implemented. MPOs also cited staffing constraints, to a lesser extent, as a challenge that impacts their ability to conduct transportation planning. Some MPOs stated that staffing affects their ability to fulfill their planning requirements. For example, one small MPO told us that with only one or two staff members, it is very difficult to satisfy all the federal requirements for MPOs such as creating and updating the TIP and long-range plan and holding public meetings. MPOs also mentioned a lack of trained staff as a challenge to transportation planning. About half of the survey respondents cited lack of trained staff as a challenge in carrying out the federal requirements for transportation planning. Lack of trained staff is a particular challenge for small MPOs, according to our survey. For example, officials from several MPOs stated that retaining staff trained to conduct the travel forecasting is difficult because there are few people with the expertise to conduct such technical analyses and consulting firms can often pay modelers a higher salary than an MPO.
In addition, officials from one MPO told us that the challenge of having limited staff resources is compounded by requirements to ensure public participation, noting that much of their time is spent carrying out the public participation requirements for the planning process relative to other activities. Concerns about meeting the public participation requirements were consistent across most of the MPOs we surveyed. In particular, 79 percent of survey respondents stated that they have difficulty obtaining the public participation needed to meet their transportation planning requirements. A few MPOs we interviewed stated that it was difficult to generate public participation in the planning process, in part because few people actually understand what an MPO is or what it does. Most MPOs function as part of another planning or governing body, such as a council of governments. According to a few MPOs we interviewed, this arrangement can address staffing and funding limitations by allowing an MPO to cut costs by sharing resources such as a space in which to operate and, in some cases, facilitates coordination between the MPO and other planners or transportation stakeholders. However, this arrangement can also create some challenges. In particular, a few MPOs housed within city governments or other entities connected with a specific jurisdiction said that this arrangement causes them to be viewed as less impartial than MPOs that are stand-alone entities, and that these perceptions can affect their consensus-building efforts. Additionally, 71 percent of small MPO survey respondents cited competing priorities between transportation planning and other tasks related to the council of governments as a challenge. MPOs we surveyed and interviewed also cited the lack of authority as a challenge to effective transportation planning.
About 80 percent of all MPOs responding to our survey indicated that the lack of authority to implement the plans they develop is a challenge. The majority of MPOs that responded to our survey do not implement any of the projects contained in the plans that they create. Rather, they rely on other agents such as cities, counties, and state departments of transportation to carry out their plans. Similarly, although many survey respondents reported that they conduct land-use planning for their region, MPOs generally lack the authority to make land-use decisions. Instead, this authority generally rests with state and local jurisdictions. As a result, MPOs indicated that they have difficulty anticipating and integrating land-use decisions into their transportation planning. For example, in one region we visited, local jurisdictions are often reluctant to make land-use planning decisions in line with the MPO’s regional transportation plan. An official stated that this occurs in part because local jurisdictions have a difficult time making land-use decisions that benefit the region as a whole as opposed to their individual community. If land-use decisions do not correspond with an MPO’s plans, the MPO’s proposed transportation improvements may not be as effective. Our past work has documented that integrating land-use and transportation investments—including accurately modeling future land-use changes—is important but challenging. MPOs we interviewed also cited their lack of authority in determining which projects will be implemented as a challenge. Although MPOs help determine which projects are eligible for funding and which ones have priority through the development of the TIP, whether a project will be funded and the amount of funds made available for the project are determined by federal, state, and local policymakers.
Moreover, according to our survey, the availability of funding and public support are more important drivers of transportation investment decisions than the analysis conducted by MPOs. This is consistent with our previous work regarding transportation decision making, which indicated that even when economic analyses are performed, the results are not necessarily the most important factor considered in terms of which projects to fund; rather, a number of factors, such as public support or the availability of funding, drive transportation investment decisions. Although MPOs in the survey cited lack of authority as a challenge, the MPOs we interviewed had mixed opinions regarding the extent to which they felt being granted additional authority would improve transportation planning. Some of the MPOs we spoke with emphasized that having project implementation and land-use decision-making authority would improve transportation planning. For example, one large MPO told us that although they have developed a close working relationship over the years with transit operators and other transportation stakeholders to make their planning processes successful, they need land-use authority to more comprehensively address critical transportation issues. Another MPO we interviewed, however, suggested that giving MPOs project implementation or land-use authority may not improve transportation planning. Specifically, one MPO official stated that such additional authorities may actually hamper MPOs’ ability to conduct transportation planning, since some of their current ability to generate consensus results from the fact that they do not have a stake in building or operating transportation projects. MPOs also face technical challenges, in part because the travel demand modeling required to forecast future growth and needs has become more complicated.
MPOs today face a much broader and more complex set of requirements and needs in their travel modeling than they did in the 1960s and 1970s, when the primary concern was evaluating highway and transit system capacity expansions. New requirements—such as determining motor vehicle emissions and changes in land use—have created additional data needs to account for the increasing complexity of the transportation system. For example, about half of our survey respondents indicated that their MPOs include a nonattainment or maintenance area and, thus, are required to conduct air quality conformity analyses. An even larger percentage of medium- and large-sized MPOs—66 percent and 76 percent, respectively—indicated that they have such areas within their MPO boundaries. As planning organizations, much of the value of MPOs lies in their ability to forecast and analyze an increasingly complex and growing set of transportation needs. If MPOs’ technical capabilities cannot account for the increasing complexities facing regional transportation systems, MPOs’ contributions to transportation planning may be compromised, which could lead to planning failures and poor investment decisions. Although some MPOs are taking steps to meet the challenges presented by the increasing complexity of the transportation system, MPOs still face modeling challenges. About half of MPOs report that they face challenges related to their limited modeling capacity. Some MPOs have had success updating their travel forecasting techniques to accommodate new requirements. For example, officials at one MPO told us the transit agency in their region is developing a travel demand model specifically for transit, though it has not yet been incorporated into the MPO’s travel models. Some MPOs we interviewed, however, told us that they lack the resources to improve their modeling capabilities. 
In fact, MPO officials expressed concern in interviews that current models, including the four-step models most MPOs use, do not necessarily produce forecasts that can adequately account for the increasing complexities of transportation planning, such as predicting future land-use patterns and transit’s effect on travel behavior. TRB also found similar challenges—that is, it found inherent weaknesses in current models that are generally unable to address new policy concerns raised by the growing complexity of the transportation system. TRB notes that when the detail required to address a transportation issue increases, the complexity of the analytical techniques should increase as well. For example, a small metropolitan area experiencing minimal growth, with little transit and no air quality problems, will likely be able to use a simple model to determine the area’s needs. A large, growing region with extensive transit service and air quality concerns, by contrast, requires far more sophisticated techniques. Thus, no single approach is appropriate for all MPOs. Although modeling presents challenges, according to our survey, the most predominant technical challenge was related to acquiring quality data to use in planning models. Over 70 percent of survey respondents cited data limitations as a challenge. Data reflecting current travel patterns in a metropolitan area are important because models that are supplied with inaccurate or out-of-date data may produce inadequate forecasts that contribute to poor planning. In addition, having robust data to support proposed transportation plans helps to keep planning more objective and lends credibility to the plans developed by MPOs. However, conducting a household travel survey—a survey of random households in a metropolitan area that gathers trip-related data, such as mode of transportation, duration, distance and purpose of trip—to collect updated data is both expensive and time-consuming.
For example, officials at one large MPO we interviewed stated that they need to update their household survey but are having difficulty finding the estimated $1.5 million needed to do so. As we mentioned earlier, funding shortages and the lack of staff trained with such technical expertise make increasing technical capacity a challenge for many MPOs, particularly small ones. TRB’s study also found that many MPOs had inadequate data to support their modeling processes. The federal certification review is an important mechanism that FTA and FHWA use to oversee the MPO planning process. Although all MPOs are required to self-certify that they have met the federal transportation planning requirements, SAFETEA-LU also requires DOT to certify the metropolitan planning process of the 155 TMAs every 4 years. To conduct a certification review, FTA and FHWA assemble a team which typically consists of FTA and FHWA field staff, but may also include FHWA or FTA headquarters community planners, EPA officials, other subject matter experts, or experts from DOT’s Volpe National Transportation Systems Center. FHWA division office personnel generally take the lead in these reviews, which typically take 6 to 9 months and include (1) an initial desk review, which includes verifying compliance with basic regulatory requirements, among other things; (2) an evaluation of the MPO’s written response to a series of questions; (3) a 2- to 4-day site visit during which the team gathers additional information; and (4) a meeting to inform the public about planning requirements and provide an opportunity for the public to express concerns about how the process is meeting the needs of the area. After the site visit, the team prepares a final report including review findings and recommendations, which incorporates public comments on the planning process.
Consistent with federal law, the federal certification review is process-oriented and conducted without regard to transportation planning outcomes. Specifically, through certification reviews, DOT ensures that the metropolitan planning process of an MPO serving a TMA is carried out in accordance with applicable provisions of federal law—for example, by ascertaining whether or not the MPO has adhered to its public participation plan. Oversight also provides a mechanism through which the federal government can ensure that its funds are being used to achieve its intended goals. The current process-oriented approach toward certification generally focuses on procedural requirements as opposed to performance. FTA and FHWA can withhold apportioned federal highway and transit funds if they determine an MPO is in noncompliance with federal requirements. However, FTA and FHWA officials were unaware of any instance in which an MPO was not certified due to noncompliance during the last 10 years. Furthermore, FTA and FHWA officials noted that the process is meant to be collaborative in nature. Therefore, a finding of noncompliance is as much of a failure on the part of DOT as the MPO, according to a DOT official. Because the federal certification is focused on compliance, not outcomes, it is difficult to determine whether federal oversight is improving transportation planning. GAO has previously recommended to DOT, as well as to Congress, adopting performance measures and goals for programs, which can aid in evaluating and measuring the success of those programs and lead to better decisions about transportation investments. The procedural focus of the federal certification, and the fact that, according to DOT officials’ knowledge, no MPO has failed to be certified as a result of a certification review, also make it difficult to use the certification results as a performance indicator for MPOs.
According to FHWA and FTA officials, certification reviews examine the quality of the MPO planning process by, for example, identifying corrective actions where there is noncompliance with statute or regulations and recommendations for areas needing improvement. Corrective actions are set with milestone dates to rectify the noncompliance and require a status report and re-evaluation of the process. Commendations for the use of noteworthy practices are also identified. However, FTA and FHWA do not assess the progress of the MPO in achieving the goals outlined in the plans. According to FTA and FHWA officials, states may, but are not required to, monitor the progress of MPOs in meeting their goals. Furthermore, an FHWA official noted that the elements that are reviewed through certification serve as proxies for good planning—for example, the resulting plans will be better if the MPO is regularly soliciting and incorporating public input. Most MPOs we interviewed generally view the federal certification reviews as pro forma in nature and place a greater value on informal assistance from the federal government. Officials in one state said that the most important oversight is the “give and take” between agencies on the various transportation plans they create. This informal interaction allows the oversight agencies to identify issues prior to the formal reviews. Likewise, many federal officials with whom we spoke view informal interactions— such as regular meetings, technical assistance, and review of air quality conformity analyses—as an important aspect of oversight. One FHWA division official we interviewed stated that the benefit of ongoing communication is that problems are identified as they arise and can be addressed well before the certification review or self certification is conducted. MPOs also reported that the assistance provided by their states is more important than the federal certification reviews. 
Although the level of participation of states in the planning process varies, MPOs reported in our survey that state department of transportation officials generally play a greater oversight role than DOT for certain activities. For example, around 80 percent of survey respondents reported that state department of transportation officials are involved in MPO boards and committees, while over 55 percent and 70 percent reported similar participation from federal officials on MPO boards and committees, respectively. This may be due, in part, to the limited number of staff at FHWA and FTA. With the pending expiration of the current surface transportation authorizing legislation, MPO, government, and industry officials have developed various formal and informal proposals to improve or change the current transportation planning process. We reviewed proposals from AMPO, AASHTO, APTA, the Brookings Institution, the previous and current DOT administrations, and the June 2009 House Transportation and Infrastructure Committee blueprint for the surface transportation reauthorization. We also discussed suggestions for improving transportation planning with MPO, federal, and state officials. In reviewing these proposals or suggestions, we identified several recurring changes, or options, that could address some of the resource, authority, and technical challenges facing MPOs. Most of the options have both advantages and disadvantages, and implementing any of the options will require policy trade-offs. Creating an expanded or clarified definition of eligibility for the use of transportation planning funds could allow MPOs to utilize planning funds in ways that best meet the needs of the area. Most of the MPOs we surveyed and many of the MPOs we interviewed suggested that having additional flexibility regarding the types of activities that are eligible to be completed using planning funds would improve the planning process. 
Currently, FHWA guidance precludes using planning funds for projects’ environmental analyses that “clearly extends beyond transportation planning.” As we mentioned previously, officials at a few MPOs we spoke with stated that they are unclear about what environmental activities are eligible under that definition, which makes it difficult to conduct comprehensive transportation planning. According to many of the MPOs we interviewed and 90 percent of the MPOs responding to our survey, creating more flexibility in how the planning funds can be spent would improve the effectiveness of the planning process and allow MPOs to be more efficient by prioritizing their limited resources to the most critical planning activities. However, providing such flexibility in federal transportation funds could result in less transparency and accountability. In particular, when funds can be flexed across different activities, there is less ability to assess the impact of particular funding streams—such as transportation planning funds—on the achievement of key goals. A number of the proposals for improving the MPO planning process include creating further variation—in addition to the TMA and non-TMA distinction—in MPOs’ planning requirements and authority to account for the wide variation in capacity of MPOs across the country. For example, creating additional variations in MPOs’ planning requirements could include the development of abbreviated planning requirements for MPOs. SAFETEA-LU allows that the Secretary of Transportation may permit MPOs that are not designated as TMAs or are not in nonattainment for ozone or carbon monoxide to develop abbreviated metropolitan transportation plans or TIPs. In so doing, the Secretary must take into account the complexity of transportation problems in the area. MPOs in small metropolitan areas—where transportation needs are often less complex—could benefit from abbreviated planning requirements. 
To date, no MPOs have applied for the abbreviated planning requirements, according to DOT officials. Other proposals suggest that MPOs that have exhibited increased capacity—e.g., those that are conducting additional activities beyond the current planning requirements—could be allowed additional implementation authority to oversee the development of certain projects. Likewise, an MPO could be granted expanded authority to plan and fund a metropolitan area’s transportation projects—focusing available transportation funds on projects that will benefit a region the most, regardless of mode. A large majority of the survey respondents—79 percent—stated that additional project implementation authority would improve effectiveness of the MPO planning process. However, granting additional authorities to MPOs or reducing the requirements could result in some additional challenges for MPOs and DOT. Additional federal and state oversight may be needed for (1) MPOs that take on new, traditionally non-MPO responsibilities, such as project implementation or (2) MPOs that reduce their planning requirements in order to ensure that the abbreviated process adequately accounts for the transportation needs of the area. Additionally, over half of the survey respondents reported that they do not have the capacity to undertake additional project implementation authorities, despite the fact that a large majority of MPOs stated increased implementation authority would improve the effectiveness of their planning process. Other proposals include changing the legal definition of MPOs to realign the MPO planning process with current capacity and planning needs. In particular, one option calls for an increase in the population threshold for mandatory MPO creation. Requiring the formation of MPOs at a larger population threshold could ease the burden of the previously mentioned resource constraints affecting small MPOs, including funding and staffing shortages. 
Specifically, one of the state departments of transportation we interviewed—one that contains more rural areas—noted that the current population threshold of 50,000 can create a situation in which a relatively small, rural area with less complex transportation needs is given MPO responsibilities. In these situations, MPOs may have difficulty funding an adequate number of positions—or filling them with qualified individuals—to do the work needed to meet federal and state requirements. Raising the population threshold could increase the likelihood that MPO efforts are limited to urban areas with more advanced transportation needs. However, about 73 percent of survey respondents from small MPOs reported that raising the threshold would not be an appropriate way to improve the planning process. An official from a small MPO we interviewed noted that any reduction in responsibilities for small MPOs must be a contextual decision based on the complexity of the transportation needs in the area, such as proximity to a large metropolitan area that is expected to grow in the future. With regard to technical constraints, improving technical capabilities across MPOs will likely require additional investment in modeling, data gathering, or both. As noted previously, current models are not well suited to representing travelers’ responses to the complex range of policy concerns, such as freight movement and motor vehicle emissions. Of particular concern is that many MPOs have inadequate data to support their modeling processes, even for traditional travel demand forecasts. Eighty-seven percent of MPOs surveyed said that greater federal support for transportation research and data would improve their effectiveness.
Moreover, many of the MPOs we interviewed agreed that federal government investment in modeling and data gathering is necessary to ensure greater reliability in travel demand forecasting across MPOs and to help account for the increasing complexity of transportation forecasting and data needs in urban areas. Furthermore, without such an investment, policymakers may lack the information needed to make informed decisions on investments related to the transportation system. Toward this end, TRB’s Special Report 288 recommended the development and implementation of new modeling approaches to travel demand forecasting that are better suited to providing reliable information. These new modeling approaches include such applications as multimodal investment analyses, environmental assessments, evaluations of a wide range of policy alternatives, and meeting federal and state regulatory requirements. TRB also made various recommendations for improvements, including increasing DOT support and funding for incremental improvements to models in settings appropriate for their use, and the continued development, demonstration, and implementation of advanced modeling approaches. Additionally, TRB encouraged DOT collaboration with MPOs and states to examine data collection needs, including data requirements for validating current travel forecasting models and meeting regulatory requirements. Most recently, in July 2009, when DOT announced its principles for an 18-month extension of federal highway, transit, and highway and trucking safety programs, it called for an investment of $300 million to build state and MPO planning capacity for the collection and analysis of data on transportation goals. Additionally, DOT’s 18-month extension proposal suggests an investment of $10 million to build MPOs’ informational and analytic capacity to refine assessment tools at the federal level, among other things. 
Currently, there are no explicit performance thresholds, such as reducing congestion or improving highway safety, built into the federal planning requirements for MPOs. MPOs and industry representatives we interviewed recognized the value of making the planning process more performance-based, noting that focusing on outcomes could improve transportation investment decision making. In addition, DOT’s recently released principles for an 18-month extension of certain federal surface transportation programs also call for stronger requirements for tracking and reporting on the projected and actual outcomes of transportation investments that use federal dollars. Using performance measures could help hold MPOs accountable for carrying out a 3-C transportation planning process that encourages and promotes a safe and efficient surface transportation system. According to our survey, most MPOs already report using performance measures to some extent to assess results achieved. However, MPOs generally reported using output-based measures, such as compliance with state and federal transportation planning rules, rather than outcome-based measures, such as improved safety. Further, some DOT officials we spoke with maintained that the wide variety of needs and capacities among regions would make it difficult to establish national performance measures. To overcome the challenge of creating such measures for all MPOs, some officials said that broader performance goals could be established at the national level, while more specific measures and targets could be left for states and regions to establish. Establishing outcome-based measures for all MPOs would also require DOT to expand its oversight so that it can assess the progress of MPOs in achieving specific results, rather than focusing on compliance with existing statutes and rules.
However, a few MPOs and the DOT officials we spoke to noted that it would not be appropriate to hold MPOs accountable for specific outcomes because they do not have the authority to implement their plans. Indeed, it is often up to local jurisdictions and the state to carry out MPO plans, and they do not always have the same priorities and goals as the MPOs. Some MPO stakeholders we spoke to noted that reconciling the needs of the region with the priorities of individual jurisdictions is a significant challenge. Nevertheless, other officials we spoke to noted that the purpose of MPOs is to establish a consensus on a region’s long-term transportation goals and that it would be appropriate to link those goals with specific outcomes. Our survey shows a pattern of variations and challenges that could increasingly compromise the quality of regional transportation planning, potentially allowing transportation problems—such as increasing congestion—to inhibit economic activity in the United States. For example: MPOs’ roles and responsibilities are not commensurate with their requirements. Under the current system, a small MPO with a simple transportation mission and limited technical capacity is generally accountable to the same planning and program requirements and oversight as a large MPO with a complex, multimodal transportation system, raising questions as to whether the federal government is appropriately targeting its oversight resources. SAFETEA-LU allows MPOs to seek permission to use a more abbreviated planning process. MPOs may not be universally aware of this option since, to date, no MPOs have utilized it. The quality of MPOs’ computerized travel demand models and the data used to support the process is often insufficient or unreliable. As planning organizations, one of the important functions of MPOs is the ability to forecast and analyze an increasingly complex and growing set of environmental, transportation, and social trends. 
Thus if MPOs are not able to keep pace with the increasing complexity of this task, their contribution to transportation planning may be compromised. However, on a cautionary note, effective forecasting requires both quality computer models and accurate data, such that investing in one without improving the other may waste resources. DOT’s July 2009 18-month extension proposal calls for additional resources for the collection and analysis of data on transportation goals to help build transportation planning capacity. Adopting TRB’s modeling and data gathering recommendations is an example of how the additional resources could be invested. Finally, because the oversight mechanisms for MPOs are focused on process, rather than outcomes, it is unclear what impact regional transportation planning is having on transportation outcomes. Despite over 30 years of a federally mandated and funded transportation planning process and billions spent on roads, bridges, and transit projects, there is not enough information for policymakers to determine whether the planning process is addressing critical transportation challenges facing the United States. However, shifting to a more performance-based oversight approach will require legislative changes. Addressing these variations and challenges is particularly important given some proposed reforms that would increase the ability of metropolitan and local governments to access additional federal transportation funds. The upcoming reauthorization of federal surface transportation programs provides Congress and DOT an opportunity to address these challenges and enhance regional transportation planning. For example, Congress and DOT could examine what is being invested in the federal oversight process, what the return for this investment is, and how it may be improved. 
Congress should consider making MPO transportation planning more performance-based—for example, by identifying specific transportation outcomes for transportation planning and charging the U.S. Department of Transportation with assessing MPOs’ progress in achieving these outcomes in the certification review process. To improve the transportation planning process, we are recommending that the Secretary of Transportation take the following two actions: 1. Direct the Administrators of the Federal Highway Administration and the Federal Transit Administration to establish guidelines for MPOs to apply for, and implement, the abbreviated planning clause for small MPOs, and share these guidelines with existing MPOs. 2. Develop a strategy to improve data gathering and modeling efforts among MPOs, including establishing a timeline for implementing the modeling and data recommendations for the federal government in the Transportation Research Board’s Special Report 288. We provided a draft of this report to DOT for review and comment. DOT agreed to consider the report’s recommendations. DOT also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of Transportation. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To identify and assess the characteristics and responsibilities of metropolitan planning organizations (MPO) we reviewed current and previous federal statutes and regulations governing MPOs. We also reviewed relevant academic, industry association, GAO, and U.S. 
Department of Transportation (DOT) research and publications to understand MPOs’ transportation planning responsibilities, the ways MPOs vary, and the challenges MPOs face in carrying out their responsibilities. Additionally, we interviewed representatives from industry associations, as well as MPO, Federal Transit Administration (FTA), Federal Highway Administration (FHWA), and DOT officials to clarify MPO planning responsibilities, identify transportation planning challenges, and assess how DOT provides oversight for MPOs and the extent to which this improves transportation planning. To further examine the role of state departments of transportation in metropolitan planning and assess the potential impact of various changes to MPOs, we contacted 11 additional state departments of transportation by e-mail and received responses from 6. We also attended and observed a DOT on-site certification review in Savannah, Georgia, to further understand the federal oversight of transportation management areas (TMA). To determine the various options for improving regional transportation planning, we reviewed federal surface transportation program reauthorization proposals from the Association of Metropolitan Planning Organizations (AMPO), the American Association of State Highway and Transportation Officials, the American Public Transportation Association, the Brookings Institution, the previous and current DOT administrations, and the current House Transportation and Infrastructure Committee blueprint for reauthorization. We also discussed informal proposals or suggestions for improving the planning process with MPO, federal, and state officials. To gather in-depth information on the roles and responsibilities of MPOs, the extent to which federal oversight improves transportation planning, and possible ways to improve regional transportation planning, we conducted a Web-based survey of all 381 MPOs. This survey was conducted from February 3 to April 1, 2009.
To prepare the questionnaire, we pretested potential questions with MPOs of different sizes and from different FTA regions to ensure that (1) the questions and possible responses were clear and thorough, (2) terminology was used correctly, (3) questions did not place an undue burden on the respondents, (4) the information was feasible to obtain, and (5) the questionnaire was comprehensive and unbiased. On the basis of feedback from the seven pretests we conducted, we made changes to the content and format of some survey questions. The results of our survey can be found at GAO-09-867SP. To identify MPOs to survey, we obtained MPO contact information from DOT and AMPO; any inconsistencies between the two lists were reconciled with phone calls to the relevant MPO. We also contacted all of the MPOs in advance, by e-mail, to ensure that we had identified the correct respondents and to request their completion of the questionnaire. After the survey had been available for 2 weeks, and again after 4 and 6 weeks, we used e-mail and telephone calls to contact MPOs who had not completed their questionnaires. Using these procedures, we obtained an 86 percent response rate. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or were analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For instance, a survey specialist designed the questionnaire in collaboration with GAO staff who have subject-matter expertise. 
Further, the draft questionnaire was pretested with a number of MPOs to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second, independent analyst checked all computer programs. Finally, nonresponding MPOs were distributed among different states and sizes of MPOs in a way that did not show evidence of bias. To gather additional information on the roles and responsibilities of MPOs, the extent to which federal oversight improves transportation planning, and possible ways to improve regional transportation planning, we conducted case studies in eight metropolitan areas. Each case study involved interviews with the designated MPO for that metropolitan area, as well as the state department of transportation, transit operators, and other relevant regional organizations. We selected MPOs to visit and examine based on the following criteria: population (based on whether or not the MPO is in a designated TMA); location (based on the FTA region); air quality (based on whether the MPO is located in an air quality nonattainment area); structure of the MPO (based on whether the MPO is an independent agency or housed within another organization or jurisdiction); and recommendations from internal stakeholders, experts, associations, and federal DOT officials we consulted. Although using these criteria allowed us, in our view, to obtain information from a diverse mix of MPOs, the findings from our case studies cannot be generalized to all MPOs because they were selected as part of a nonprobability sample. Table 2 lists the region and relevant MPOs where we conducted case studies. In addition to the contact named above, A. Nicole Clowers, Acting Director; Kyle Browning; F. Chase Cook; Kathleen Gilhooly; Cathy Hurley; Stu Kaufman; Sara Ann Moessbaeur; Josh Ormond; Stephanie Purcell; Amy Rosewarne; Jay Smale; and Susan Zimmerman made key contributions to this report.
| Metropolitan planning organizations (MPO) are responsible for transportation planning in metropolitan areas; however, little is known about what has been achieved by the planning efforts. This congressionally requested report describes (1) the characteristics and responsibilities of MPOs, (2) the challenges that MPOs face in carrying out their responsibilities, (3) how the U.S. Department of Transportation (DOT) provides oversight for MPOs and the extent to which this improves transportation planning, and (4) the options that have been proposed to enhance transportation planning. To address these objectives, GAO surveyed all 381 MPOs (with an 86 percent response rate), conducted case studies of eight metropolitan areas, and surveyed program managers. MPOs vary greatly in terms of capacity and responsibilities. Some MPOs are supported by one or two staff, while others have over 100 staff. While half of MPOs represent populations of less than 200,000, some represent millions. MPOs are typically housed within a regional planning council or a city or county government agency, but also may operate as independent agencies. Most MPOs receive the majority of their planning funds from federal sources, but also receive funds from other sources such as states or localities. The technical capacity of MPOs also varies significantly, both in terms of the type of model used to develop travel demand forecasts and the number of staff available to perform such forecasts. Some MPOs have acquired additional responsibilities, such as project implementation, beyond federal requirements. MPOs cited many challenges in our survey and interviews, primarily related to funding and staffing, authority, and technical capacity. About 85 percent of all MPOs responding to our survey cited the lack of transportation planning funding as a challenge to transportation planning.
About half of our survey respondents stated that the lack of flexibility for using federal planning funds inhibits them from conducting comprehensive transportation planning. Staffing constraints, such as a limited number of staff and a lack of trained staff, also impact MPOs' ability to conduct transportation planning. Finally, according to our survey and interviews, some MPOs lack the technical capacity and data necessary to conduct the type of complex transportation modeling required to meet their planning needs. DOT's Federal Transit Administration (FTA) and Federal Highway Administration (FHWA) work together to oversee MPOs, but given the process-oriented approach of the oversight, it is difficult to determine whether their oversight is improving transportation planning. MPOs representing more than 200,000 in population are subject to federal certification reviews. The certification reviews focus on procedural compliance with planning requirements, not transportation outcomes. MPOs generally view this federal process as pro forma in nature and place a greater value on informal assistance provided by both federal and state governments. Several proposals have been developed by government and industry associations that could address some of the resource, authority, and technical challenges facing MPOs. For example, (1) allowing the use of transportation planning funds for more activities could better meet the needs of some metropolitan areas; (2) varying MPOs' planning requirements and authority or changing the legal definition of MPOs could address varying capacity and planning needs; (3) increasing federal investment in modeling and data gathering could improve the technical capability of MPOs and bring a greater degree of reliability and consistency across MPOs to travel demand forecasting; and (4) making the planning process more performance-based could allow FTA and FHWA to better assess MPOs' progress in achieving specific results. |
When the WTC buildings collapsed on September 11, 2001, an estimated 250,000 to 400,000 people were immediately exposed to a noxious mixture of dust, debris, smoke, and potentially toxic contaminants in the air and on the ground, such as pulverized concrete, fibrous glass, particulate matter, and asbestos. Those affected included people residing, working, or attending school in the vicinity of the WTC and thousands of emergency response workers. Also affected were the estimated 40,000 responders who were involved in some capacity in the days, weeks, and months that followed, including personnel from many government agencies and private organizations as well as other workers and volunteers. A wide variety of physical and mental health effects have been observed and reported among people who were involved in rescue, recovery, and cleanup operations and among those who lived and worked in the vicinity of the WTC. Physical health effects included injuries and respiratory conditions, such as sinusitis, asthma, and a new syndrome called WTC cough, which consists of persistent coughing accompanied by severe respiratory symptoms. Almost all firefighters who responded to the attack experienced respiratory effects, including WTC cough, and hundreds had to end their firefighting careers due to WTC-related respiratory illnesses. The most commonly reported mental health effects among responders and others were symptoms associated with posttraumatic stress disorder—an often debilitating disorder that can develop after a person experiences or witnesses a traumatic event, and which may not develop for months or years after the event. Behavioral effects such as alcohol and tobacco use and difficulty coping with daily responsibilities were also reported. Several federally funded programs monitor the health of people who were exposed to the WTC attack and its aftermath. 
The monitoring programs vary in such aspects as eligibility requirements, methods used for collecting information about people’s health, and approaches for offering referrals. Of the four programs that offer medical examinations to WTC responders, the only one that is open to federal workers who responded to the disaster in an official capacity is the one implemented by HHS. (See table 1.) None of the monitoring programs receives federal funds to provide clinical treatment for health problems that are identified. The majority of federal funding for these monitoring programs was provided by DHS’s Federal Emergency Management Agency (FEMA), as part of the approximately $8.8 billion in federal assistance that the Congress appropriated to FEMA for response and recovery activities after the WTC disaster. One fiscal year 2003 appropriation specifically authorized FEMA to use a portion of its WTC-related funding for screening and long-term monitoring of emergency services and rescue and recovery personnel. Generally, however, FEMA may fund only short-term care after a disaster, such as emergency medical services, and not ongoing clinical treatment. FEMA entered into interagency agreements with HHS to fund most of these health monitoring programs. HHS is the designated lead agency for the public health and medical support function under the National Response Plan and is responsible for coordinating the medical resources of all federal departments and agencies. HHS’s Office of Public Health Emergency Preparedness (OPHEP) coordinates and directs HHS’s emergency preparedness and response program. Three federally funded programs implemented by state and local governments or private organizations—the FDNY WTC Medical Monitoring Program, WTC Medical Monitoring Program (worker and volunteer program), and New York State responder screening program— have made progress in monitoring the physical and mental health of people affected by the WTC attack. 
Federal employees who responded to the WTC disaster in an official capacity were not eligible for these programs because it was expected that another program would be developed for them. The New York State program stopped providing examinations in November 2003, and state workers are now eligible for initial or continued monitoring through the worker and volunteer program. In general, the state program has not informed state responders that they are eligible for the worker and volunteer program. All three programs and the WTC Health Registry have collected information that could contribute to better understanding of the health consequences of the attack and improve health care for affected individuals. Officials from the FDNY, worker and volunteer, and WTC Health Registry programs are concerned that federal funding for their programs could end before sufficient monitoring occurs to identify all long-term health problems related to the WTC disaster. Three federally funded programs implemented by state and local governments or private organizations have provided medical examinations to identify physical and mental health problems after the WTC attack. (See table 2.) Two of these programs—the FDNY WTC Medical Monitoring Program and the worker and volunteer program—are tracking the health of WTC rescue, recovery, and cleanup workers and volunteers over time. The third program, the New York State responder screening program, offered one-time screening examinations to state employees, including National Guard personnel, who participated in WTC rescue, recovery, and cleanup work. Federal employees who responded to the WTC disaster in an official capacity were not eligible for any of these programs because it was expected that another program would be developed for them. 
The FDNY program completed initial screening for over 15,000 firefighters and emergency medical service personnel, and the worker and volunteer program completed initial screening for over 14,000 other responders. In both programs, screenings include physical examinations, pulmonary function tests, blood and urine analysis, a chest X-ray, and questionnaires on exposures and mental health issues. Both programs have begun to conduct follow-up examinations of participants and continue to accept new enrollees who desire initial screening. Current plans are to conduct a total of three follow-up examinations for each participant by 2009. As part of their federally funded activities, both programs provide referrals for participants who require treatment. FDNY employees and retirees can obtain treatment and counseling services from the FDNY Bureau of Health Services and the FDNY Counseling Services Unit, or they can use their health insurance to obtain treatment and counseling services elsewhere. The worker and volunteer program also provides referrals for its participants, including referrals to programs funded by the American Red Cross and other nonprofit organizations. The New York State program screened about 1,700 of the estimated 9,800 state workers and National Guard personnel who responded to the WTC disaster. Officials sent letters to these responders to inform them about the program and their eligibility for it. For each participant, the screening included a health and exposure questionnaire and physical and pulmonary examinations. Participants who required further evaluation or treatment after screening were told to follow up with their personal physician or a specialist. The program stopped screening participants in November 2003, in part because the number of responders requesting examinations was dwindling, and no follow-up examinations are planned. 
In February 2005, worker and volunteer program officials began to allow New York State responders to participate in that monitoring program. The officials determined that the worker and volunteer program would have sufficient funding to accommodate state workers who want to join the program. The state program has not notified the approximately 1,700 workers it has screened that they are now eligible for continued monitoring from the worker and volunteer program. Program officials relayed this development only to those state responders who inquired about screening or monitoring examinations following the decision to permit state responders to participate in the worker and volunteer program. Worker and volunteer program officials told us that, through August 2005, no state workers who responded to the WTC disaster in an official capacity had received examinations from the worker and volunteer program. According to worker and volunteer program officials, any state worker screened by the state program would need a new baseline examination through the worker and volunteer program because the screening data collected by the state program differ from the data collected in the worker and volunteer program. For example, the worker and volunteer program offers a breathing test not provided by the state program. In addition to providing medical examinations, these three programs—the FDNY program, the worker and volunteer program, and the New York State program—have collected information for use in scientific research to better understand the health consequences of the WTC attack and other disasters. A fourth program, the WTC Health Registry, includes health and exposure information obtained through interviews with participants; it is designed to track participants’ health for 20 years and to provide data on the long-term health consequences of the disaster (see table 2). 
Physicians who evaluate and treat WTC responders told us they expect that research on health effects from the disaster will not only help researchers understand the health consequences, but also provide information on appropriate treatment options for affected individuals. Both the FDNY program and the worker and volunteer program have been the basis for published research articles on the health of WTC responders. For example, the FDNY program reported on the injuries and illnesses experienced by firefighters and emergency medical service workers after responding to the attack. In addition, the worker and volunteer program published information on the physical and mental health of responders in 2004. Officials from both programs plan to publish additional findings as they track participants’ health over time. Although the New York State program has stopped offering examinations, program officials are continuing to analyze data from the program with plans for eventual publication. The WTC Health Registry program has collected health information through interviews with responders, people living or attending school in the vicinity of the WTC site, and people working or present in the vicinity on September 11, 2001. The registry completed enrollment and conducted interviews with over 71,000 participants by November 2004. Officials updated contact information for all participants in 2005, and they plan to conduct a follow-up health survey of participants in early 2006. Registry officials would like to conduct subsequent follow-up surveys periodically through about 2023—20 years after the program began in 2003—but have not yet secured funding for long-term monitoring. The registry is designed to provide a basis for research to evaluate the long-term health consequences of the disaster. It includes contact information for people affected by the WTC attack, information on individuals’ experiences and exposures during the disaster, and information on their health. 
In November 2004, registry officials published preliminary results on the health status of registry participants, and officials expect to submit several research papers for publication within the next year. In addition, in May 2005, registry officials published guidelines for allowing registry information to be used in scientific research, and they have since approved three proposals for external research projects that use registry information. These proposals include two studies of building evacuations and a study of psychological responses to terrorism. Officials from the FDNY, worker and volunteer, and WTC Health Registry programs are concerned that current federal funding arrangements for programs designed to track participants’ health over time may be too short to allow for identification of all the health effects that may eventually develop. ATSDR plans to fund the WTC Health Registry through April 2008, and NIOSH plans to fund the FDNY program and the worker and volunteer program through mid-2009. ATSDR’s 5-year cooperative agreement with the New York City Department of Health and Mental Hygiene to support the WTC Health Registry went into effect April 30, 2003, and extends through April 29, 2008. Similarly, NIOSH awarded 5-year grants in July 2004 to continue the FDNY and worker and volunteer programs, which had begun in 2001 and 2002, respectively. Health experts involved in these monitoring programs, however, cite the need for long- term monitoring of affected groups because some possible health effects, such as cancer, may not appear until decades after a person has been exposed to a harmful agent. They also told us that monitoring is important for identifying and assessing the occurrence of newly identified conditions, such as WTC cough, and chronic conditions, such as asthma. 
HHS’s OPHEP established the WTC Federal Responder Screening Program to provide medical screening examinations for an estimated 10,000 federal workers who responded to the WTC disaster in an official capacity and were not eligible for any other medical monitoring program. OPHEP did not develop a comprehensive list of federal responders who were eligible for the program. The program began in June 2003—about a year later than other monitoring programs—and completed screenings for 394 workers. No examinations have occurred since March 2004, because officials placed the program on hold, temporarily suspending new examinations. The program is still on hold, and OPHEP officials are taking actions intended to lead to restarting the program. We identified two federal agencies that established screening programs for their own personnel who responded to the disaster. HHS’s WTC Federal Responder Screening Program was established to provide free voluntary medical screening examinations for an estimated 10,000 federal workers whom their agencies sent to respond to the WTC disaster from September 11, 2001, through September 10, 2002, and who were not eligible for any other monitoring program. FEMA provided $3.74 million through an interagency agreement with HHS’s OPHEP for the purpose of developing and implementing the program. OPHEP entered into an agreement with HHS’s FOH to schedule and conduct the screening examinations. The launching of the federal responder screening program lagged behind the implementation of other federally funded monitoring programs for WTC responders. For example, the medical screening program for New York State employees and the worker and volunteer program started conducting screening examinations in May 2002 and July 2002, respectively. However, OPHEP did not launch its program until June 2003. (Figure 1 highlights key actions in developing and implementing the program.) 
OPHEP did not develop a plan for identifying all federal agencies and their personnel that responded to the WTC disaster or for contacting all federal personnel eligible for the screening program. Although OPHEP and FEMA developed a partial list of federal responders—consisting primarily of HHS and FEMA personnel—OPHEP did not have a comprehensive list of agencies and personnel, and so could not inform all eligible federal responders about the WTC screening program. The program’s principal action to communicate with the federal responders was to place program information and registration forms on FEMA’s National Disaster Medical System (NDMS) Web site. The screening program had operated for about 6 months when OPHEP officials decided in January 2004 to place it on hold by temporarily suspending examinations. FOH officials told us that they completed 394 screening examinations from June 2003 through March 2004, with most completed by the end of September 2003. According to FOH, a total of $177,967 was spent on examinations. As of September 7, 2005, the program remained on hold, with 37 people on the waiting list for examinations, and OPHEP has not set a date for resuming the examination process. OPHEP officials told us that three operational issues contributed to the decision to suspend the program. First, OPHEP could not inform all eligible federal responders about the program because it lacked a comprehensive list of eligible federal responders. Second, there were concerns about what actions FOH clinicians could take when screening examinations identified problems. Based on the examinations that had been completed before the program was placed on hold, FOH clinicians determined that many participants needed additional diagnostic testing and follow-up care, primarily in the areas of respiratory functioning and mental health. 
However, under the existing interagency agreement there was no provision for providing follow-up care and no direction for clinicians on how to handle the provision of further diagnostic tests, treatment, or referrals. FOH officials told us that they were concerned about continuing to provide screening examinations without the ability to provide participants with additional needed services. Third, although the screening program had been established to provide examinations to all federal responders regardless of their current federal employment status, HHS officials told us that the department determined that FOH does not have the authority to provide examinations to people who are no longer in federal service. OPHEP officials told us in September 2005 that they were exploring avenues for providing examinations to federal responders who were no longer federal employees. OPHEP has begun to take action to prepare for offering examinations again. In April 2005, program officials enlisted the assistance of ATSDR—which had successfully developed the WTC Health Registry—to help develop the needed lists of federal agencies and personnel for the federal responder program. OPHEP executed an agreement that allocated to ATSDR about $491,000 of the program’s remaining FEMA funding. Under this agreement, which is scheduled to run through April 2006, ATSDR is working with the contractor it used to develop the WTC Health Registry to develop a new registration Web site, develop and implement a comprehensive recruitment and enrollment plan for current and former federal workers, and establish a database containing the names of federal responders. On September 1, 2005, OPHEP sent a letter to 51 federal agencies requesting them to provide ATSDR’s contractor with contact information on the employees they sent to respond to the WTC disaster. 
In July 2005, OPHEP and FOH executed a new agreement so that when the program begins examining responders again, FOH clinicians will be able to make referrals for follow-up care. For example, they will be able to refer participants with mental health symptoms to an FOH employee assistance program for a telephone assessment. If appropriate, the participant will be referred to an employee assistance program counselor for up to six in-person sessions. If the assessment indicates that longer treatment is necessary, the participant instead will be advised to use health insurance to obtain care or to contact a local Department of Labor Office of Workers’ Compensation to file a claim, receive further evaluation, and possibly obtain compensation for mental health services. The new agreement between OPHEP and FOH also will allow FOH clinicians to order additional clinical tests, such as special pulmonary and breathing tests. We identified two federal agencies that established medical screening programs to assess the health of the personnel they had sent to respond to the WTC disaster. One agency, the Army, established two screening programs—one specifically for Army Corps of Engineers personnel and one that also included other Army responders. The Army Corps of Engineers established a voluntary program to assess the health of 356 employees it had sent to respond to the disaster. The program, initiated in November 2001, consists of sending employees an initial medical screening questionnaire covering physical health issues. If questionnaire results indicate symptoms or concerns that need further evaluation, the employee is offered a medical examination. As of August 2004, 92 Corps of Engineers employees had participated in the program, with 40 receiving follow-up examinations. The Army’s Center for Health Promotion and Preventive Medicine initiated a program—the World Trade Center Support Health Assessment Survey—in January 2002. 
It was designed as a voluntary medical screening for Army military and civilian personnel, including contractors. From January 2002 through September 2003, questionnaires were sent to 256 employees. According to DOD, 162 employees completed and returned their questionnaires. In addition, the U.S. Marshals Service, within the Department of Justice, modified an existing agreement with FOH in 2003 for FOH to screen approximately 200 U.S. Marshals Service employees assigned to the WTC or Pentagon recovery sites. The one-time assessment includes a screening questionnaire and a medical examination. FOH officials said that as of August 2005, 88 of the 200 U.S. Marshals Service employees had requested and obtained examinations. Officials involved in the WTC health monitoring programs implemented by state and local governments or private organizations—including officials from the federal administering agencies—derived lessons from their experiences that could help officials design such programs in the future. They include the need to quickly identify and contact people affected by a disaster, the value of a centrally coordinated approach for assessing individuals’ health, the importance of monitoring both physical and mental health, and the need to plan for providing referrals for treatment when screening examinations identify health problems. Officials involved in the monitoring programs emphasized the importance of quickly identifying and contacting people affected by a disaster. They said that potential monitoring program participants can become more difficult to locate as time passes. In addition, potential participants’ ability to recall the events of a disaster may decrease over time, making it more difficult to collect accurate information about their experiences and health. However, the time it takes to design, fund, approve, and implement monitoring programs can lead to delays in contacting the people who were affected. 
For example, the WTC Health Registry received funding in July 2002 but did not begin collecting data until September 2003—2 years after the disaster. From July 2002 through September 2003, the program’s activities included developing the registry protocol, testing the questionnaire, and obtaining approval from institutional review boards and the federal Office of Management and Budget. This delayed the collection of information from participants. To prevent similar delays during the response to future disasters, ATSDR officials are developing a questionnaire, known as the Rapid Response Registry, to allow officials to identify and locate potentially affected individuals immediately after a disaster and collect basic preliminary information, such as their current contact information and their location during the disaster. ATSDR officials expect that using this instrument would reduce delays in collecting time-sensitive information while officials take the time necessary to develop a monitoring program for disaster-related health effects. Furthermore, officials told us that health monitoring for future disasters could benefit from additional centrally coordinated planning. Such planning could facilitate the collection of compatible data among monitoring efforts, to the extent that this is appropriate. Collecting compatible data could allow information from different programs to be integrated and contribute to improved data analysis and more useful research. In addition, centrally coordinated planning could help officials determine whether separate programs are necessary to serve different groups of people. For example, worker and volunteer program officials indicated that it might have been possible for that program to serve federal workers who responded to the disaster in an official capacity, which might have eliminated the need to organize and administer a separate program for them. 
Officials also stated that screening and monitoring programs should be comprehensive, encompassing both physical and mental health evaluations. Worker and volunteer medical monitoring program officials told us that the initial planning for the program had focused primarily on screening participants’ physical health, and that they did not originally budget for extensive mental health screening. Subsequently, they recognized a need for more extensive mental health screening, including greater participation of mental health professionals, but the program’s federal funding was not sufficient to cover such screening. By collaborating with the Mount Sinai School of Medicine Department of Psychiatry, program officials were able to obtain philanthropic funding to develop a more comprehensive mental health questionnaire, provide on-site psychiatric screening, and, when necessary, provide more extensive evaluations. Many participants in the monitoring programs required additional testing or needed treatment for health problems that were identified during screening examinations. Officials told us that finding treatment sources for such participants is an important, but challenging, part of the programs’ responsibility. For example, officials from the worker and volunteer program stated that identifying providers available to treat participants became a major part of their operations, and was especially difficult when participants lacked health insurance. The officials said that planning for future monitoring programs should include a determination of how best to help participants obtain needed treatment. Federally funded programs implemented by state and local governments or private organizations to monitor the health effects of the WTC attack on thousands of people who responded to the disaster have made progress. 
However, the program HHS established to screen the federal employees whose agencies sent them to the WTC after the attack has accomplished little, completing screenings of fewer than 400 of the thousands of federal responders. Moreover, no examinations have occurred for over a year. Because of this program’s limited activity, and because federal workers were excluded from other monitoring programs on the assumption that they could receive screening examinations through the HHS program, many federal responders may not have had an opportunity to identify and seek treatment for health problems related to the WTC disaster. For state responders, the opportunity for continued monitoring could be lost if they are not informed that they are now eligible to participate in the worker and volunteer program. Based on their experiences, officials involved in the monitoring programs have made a number of useful observations that will apply to future terrorist attacks and natural disasters such as Hurricane Katrina. For example, screening for mental as well as physical health problems in New Orleans and along the Gulf Coast will be critical to the recovery of survivors of Hurricane Katrina and the responders to the disaster. The federal, state, and local government officials who are responsible for planning and implementing health monitoring activities in the aftermath of disasters could improve their effectiveness by incorporating the lessons learned from the World Trade Center experience. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Cynthia A. Bascetta at (202) 512-7101 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Helene F. Toiv, Assistant Director; George H. 
Bogart; Alice L. London; Roseanne Price; and William R. Simerl made key contributions to this statement. Through our work, we identified the following agencies that sent employees to respond to the World Trade Center attack of September 11, 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | After the 2001 attack on the World Trade Center (WTC), nearly 3,000 people died and an estimated 250,000 to 400,000 people who lived, worked, or attended school in the vicinity were affected. An estimated 40,000 people who responded to the disaster--including New York City Fire Department (FDNY) personnel and other government and private-sector workers and volunteers--were exposed to numerous physical and mental health hazards. Concerns remain about the long-term health effects of the attack and about the nation's capacity to plan for and respond to both short- and long-term health effects in the event of a future attack or other disaster. Several federally funded programs have monitored the physical and mental health effects of the WTC attack. These monitoring programs include one-time screening programs and programs that also conduct follow-up monitoring. GAO was asked to assess the progress of these programs. GAO examined (1) federally funded programs implemented by state and local government agencies or private institutions, (2) federally administered programs to monitor the health of federal workers who responded to the disaster in an official capacity, and (3) lessons learned from WTC monitoring programs. GAO reviewed program documents and interviewed federal, state, and local officials and others involved in WTC monitoring programs. 
Three federally funded monitoring programs implemented by state and local governments or private organizations after the WTC attack have provided initial medical examinations--and in some cases follow-up examinations--to thousands of affected responders to screen for health problems. For example, the FDNY medical monitoring program completed initial screening for over 15,000 firefighters and emergency medical service personnel, and the worker and volunteer program screened over 14,000 other responders. The New York State responder screening program screened about 1,700 state responders before ending its examinations in 2003. Most state responders have not been informed that they are now eligible to participate in the worker and volunteer program, and New York State responders could miss the opportunity for continued monitoring. These monitoring programs and the WTC Health Registry have collected information that program officials believe researchers could use to help better understand the health consequences of the attack and improve treatment. Program officials expressed concern, however, that current federal funding arrangements for long-term monitoring may be too short to allow for identification of all future health effects. In contrast to the progress made by other federally funded programs, the Department of Health and Human Services' (HHS) program to screen federal workers who were sent by their agencies to respond to the WTC disaster has accomplished little and is on hold. The program--which started about one year later than other WTC monitoring programs--completed screening of 394 of the estimated 10,000 federal workers who responded in an official capacity to the disaster, but HHS officials suspended examinations and the program has not screened anyone since March 2004. 
The program's limited activity and the exclusion of federal workers from other monitoring programs because of the assumption that they could receive screening examinations through the HHS program may have resulted in many federal responders losing the opportunity to identify and seek treatment for their WTC-related health problems. Officials involved in WTC health monitoring programs cited lessons from their experiences that could help others who may be responsible for designing and implementing health monitoring efforts that follow other disasters, such as Hurricane Katrina. These include the need to quickly identify and contact people affected by a disaster; to monitor for mental health effects, as well as physical injuries and illnesses; and to anticipate when designing disaster-related monitoring efforts that there will likely be many people who require referrals for follow-up care and that handling the referral process may require substantial effort. HHS and New York State officials provided comments on the facts contained in this testimony and GAO made changes as appropriate. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
A passport is an official government document that certifies an individual’s identity and citizenship and permits a citizen to travel abroad. According to State, many people who have no overseas travel plans have applied for a passport because it is viewed as the premier citizenship and identity document, which allows the bearer to board an airplane, prove citizenship for employment purposes, apply for federal benefits, and fulfill other needs not related to international travel. Under U.S. law, the Secretary of State has the authority to issue passports, which may be valid for up to 10 years. Only U.S. nationals may obtain a U.S. passport, and evidence of nationality is required with every passport application. The Deputy Assistant Secretary for Passport Services oversees the Passport Services Office, within State’s Bureau of Consular Affairs. Passport Services, the largest component of Consular Affairs, consists of five headquarters offices: Field Operations, Technical Operations, Passport Integrity and Internal Controls Program, Planning and Program Support, and Legal Affairs and Law Enforcement Liaison. In addition to these headquarters offices, State operates 17 passport issuing agencies in Aurora, Colorado; Boston; Charleston, South Carolina; Chicago; Honolulu; Houston; Los Angeles; Miami; New Orleans; New York; Norwalk, Connecticut; Philadelphia; Portsmouth, New Hampshire; San Francisco; Seattle; and two offices in Washington, D.C.—a regional passport agency and a special issuance agency that handles official U.S. government and diplomatic passports. State also opened new passport production facilities for the personalization of passport books in Hot Springs, Arkansas, in March 2007 and in Tucson, Arizona, in May 2008. As of May 2008, State employed more than 3,300 government and contract staff to receive, process, and adjudicate passport applications and print and mail out passport books. 
This number of staff has risen dramatically in recent years to handle the increased number of passport applications. Between October 2006 and May 2008, the number of passport specialists—staff responsible for approving and issuing most U.S. passports—more than doubled, to 1,353. In addition, State’s passport agencies employ roughly 1,500 staff as contractors, who perform nonadjudicative support functions such as data entry, printing, and mailing out passports. Separately, as of May 2008, State also employed about 600 full- and part-time staff at the National Passport Information Center (NPIC), which handles customer service inquiries from the public. Figure 1 summarizes the passport application process, from the submission of an application at an acceptance facility or by mail, through payment processing and basic data entry at lockbox facilities operated by the financial agent, to adjudication and printing at passport agencies around the country. State is authorized to designate acceptance facilities—in addition to its own passport agencies—to provide passport execution services to the American public. The majority of passport applications are submitted by mail or in person at passport application acceptance facilities nationwide. Passport acceptance facilities are located at certain U.S. post offices, courthouses, and other institutions and do not employ State personnel. The passport acceptance agents at these facilities are responsible for, among other things, verifying whether an applicant’s identification document (such as a driver’s license) actually matches that applicant. These agents collect the application package, which includes the passport application, supporting documents, and payment, and send it to State’s centralized lockbox facility. According to State, the number of active acceptance facilities changes frequently as new facilities are added and others are dropped. 
In recent years, State has expanded its network of acceptance facilities to accommodate increasing passport demand. As of June 2008, there were over 9,400 such facilities nationwide, an increase from fewer than 7,000 facilities in March 2005. Passport acceptance agents send application packages to a lockbox facility operated by a Treasury financial agent. The lockbox is responsible for opening and sorting passport application packages, verifying the completeness of the packages, processing payments, and batching the applications. In addition, lockbox staff scan the first page of the passport application, along with the payment check or money order, and apply a processing date to the application. Once data on the application are captured by software using character recognition and confirmed manually by data entry staff, the information is transferred to a server, which passport agencies can access to download into their passport issuance system. The physical passport application, along with supporting documents such as a birth certificate, is also sent via courier to a passport agency. The lockbox generally performs all application processing functions within 24 hours of receipt of the application from an acceptance facility. Once a passport application has been received by one of the passport agencies, it is examined by a passport specialist who determines, through a process called adjudication, whether the applicant should be issued a passport. Adjudication requires the specialist to scrutinize identification and citizenship documents presented by applicants to verify their identity and U.S. citizenship. It also includes the examination of an application to detect potential indicators of passport fraud and the comparison of the applicant’s information against databases that help identify individuals who may not qualify for a U.S. passport. 
When passport applications are submitted by mail or through acceptance facilities, specialists adjudicate the applications at their desks. A relatively small number of passport applications are submitted directly by applicants to one of the passport agencies. Applicants are required to demonstrate imminent travel plans to set an appointment for such services at one of the issuing agency’s public counters. “Counter” adjudication allows specialists to question applicants directly or request further information on matters related to the application, while “desk” adjudication requires contacting the applicants by telephone or mail in such cases. Once an applicant has been determined eligible for a passport by a passport specialist, the passport is personalized with the applicant’s information at the passport agency or one of the centralized printing facilities and then delivered to the applicant. The National Passport Information Center, located in Dover, New Hampshire, and Lansing, Michigan, is State’s centralized customer service center. NPIC is a contractor-operated center that provides information and responds to public inquiries on matters related to passport services. Linked electronically to all passport agencies, NPIC provides an automated telephone appointment service that customers can access nationwide 24 hours a day and an online service for customers to check the status of their applications. A separate telephone number and e-mail address are dedicated for congressional staff inquiries. State has experienced a tremendous increase in the number of passports it processes in recent years. Between 2004 and 2007, the number of passports issued more than doubled to nearly 18.5 million passports (see fig. 2). This rate of increase far surpasses historical trends—a 2005 study on passport operations noted that the number of passports issued in the 30 years between 1974 and 2004 increased just 72 percent. 
Demand for passports is seasonal in nature, with applications usually peaking between January and April, as the public prepares for spring and summer vacations, and then falling off from September through December (see fig. 3). In estimating future demand for passports, State factors in this seasonality. According to State data, about 28 percent of the U.S. population has a passport, with 85.5 million U.S. passports in circulation as of February 2008. Of these, more than 24 million will expire in the next 5 years. State noted that the number of people applying for passport renewal varies depending on the laws and regulations in effect, the economy, and other factors. In addition, people may apply for a passport renewal before their book expires or up to 5 years after it expires. In response to this rapid increase in demand for passports, State’s requested budget for passport activities has increased tenfold since 2002 (see fig. 4). This request is part of State’s Border Security Program, which includes funding for passport operations, systems, and facilities. The Border Security Program is funded through a combination of Machine Readable Visa fees, the Western Hemisphere Travel surcharge, Enhanced Border Security Program fees, and Fraud Prevention fees, as well as through appropriated funds. The majority of this funding comes from the Machine Readable Visa fees, which amounted to nearly $800 million of the Border Security Program budget in fiscal year 2007. State also collects fees for expedited passports and the Passport Security surcharge. The increased demand for passports is primarily the result of WHTI, DHS’s and State’s effort to specify acceptable documents and implement document requirements at 326 air, land, and sea ports of entry. 
When fully implemented, WHTI will require all citizens of the United States and nonimmigrant citizens of Canada, Mexico, and Bermuda to have a passport or other accepted travel document that establishes the bearer’s identity and citizenship to enter or re-enter the United States at all ports of entry when traveling from within the Western Hemisphere. Prior to this legislation, U.S. citizens did not need a passport to enter the United States if they were traveling from within the Western Hemisphere, except from Cuba. DHS is implementing WHTI in two phases: first, for air ports of entry, and second, for land and sea ports of entry (see fig. 5). On January 23, 2007, DHS implemented WHTI document requirements at air ports of entry. On January 31, 2008, DHS began implementing the second phase of WHTI at land and sea ports of entry by ending the routine practice of accepting credible oral declarations as proof of citizenship at such ports. DHS is required by law to implement WHTI document requirements at the land and sea ports of entry on the later of two dates: June 1, 2009, or 3 months after DHS and State certify that certain implementation requirements have been met. During the 2007 surge in passport demand, due to the passport application backlog, certain WHTI requirements were suspended. Specifically, on June 8, 2007, State and DHS announced that U.S. citizens traveling to Canada, Mexico, the Caribbean, and Bermuda who have applied for but not yet received passports could temporarily enter and depart from the United States by air with a government-issued photo identification and Department of State official proof of application for a passport through September 30, 2007. In October 2006, to meet the documentation requirements of WHTI and to facilitate the frequent travel of persons living in border communities, State announced plans to produce a passport card as an alternative travel document for re-entry into the United States by U.S. 
citizens at land and sea ports of entry. The passport card is being developed as a lower-cost means of establishing identity and nationality for American citizens and will be about the size of a credit card. Individuals may apply for either a traditional passport book or a passport card, or both. Applications for the passport card will undergo the same scrutiny and security checks as applications for the traditional passport book, and the card will incorporate security features similar to those found in the passport book. State began accepting applications for the passport card in February 2008 and began producing the card in July 2008. State and other officials have suggested that the availability of the passport card may generate additional demand, as individuals may apply for a card for identification for nontravel purposes, such as voting. State was unprepared for the record number of passport applications it received in 2007 because it underestimated overall demand for passports and did not anticipate the timing of this demand. Consequently, State struggled to process this record number of passports, and wait times rose to record levels. State’s efforts to respond to the demand for passports were complicated by communications challenges, which led to large numbers of applicants being unable to determine the status of their applications. State’s initial estimate for passport demand in fiscal year 2007, 15 million applications, was significantly below its actual receipt of about 18.6 million passport applications, a record high. Because it could not accurately determine the increase in applications, State was unable to provide revised estimates to the lockbox financial agent in enough time for the lockbox to prepare for the increased workload, leading to significant backlogs of passport applications. 
State was largely unprepared for the unprecedented number of passport applications in 2007 because it did not accurately estimate the magnitude or the timing of passport demand. In January 2005, State estimated that it would receive 15 million passport applications in fiscal year 2007—about 44 percent more than it received in fiscal year 2005. However, actual receipts totaled about 18.6 million applications in fiscal year 2007, about 23 percent more than State had originally estimated. According to State officials, planning efforts to respond to increased demand are predicated on demand estimates, highlighting the need for accurate estimates. Limitations in the survey methodology used by State’s contractor responsible for collecting survey data on passport demand contributed to State’s underestimate. State based its estimate partly on a survey of an unrepresentative sample of land border crossers. This survey initially estimated an increase over the baseline demand for passports of more than 4 million applications in fiscal year 2007 due to implementation of the first phase of WHTI. However, our analysis of the survey methodology found several limitations. First, the survey was conducted in July 2005, over a year before the beginning of fiscal year 2007 and roughly 2 years before the peak of the surge in demand. According to contractor officials, many respondents have a limited ability to estimate their likely travel plans that far in advance. Moreover, State officials noted that travel document requirements were changed several times by Congress and by regulation between 2005 and 2007, likely affecting passport demand. Second, the 2005 survey did not estimate total passport demand because it did not collect new data on air and sea travelers. Third, the survey was unable to provide estimates on when the increased demand would occur. To refine its estimate, State adjusted the figures provided by the survey by using monthly application trends from previous years. 
According to these trends, State expected to receive 4.7 million passport applications in the first 3 months of 2007. However, demand for passports in 2007 did not follow previous seasonal trends, and State ultimately received about 5.5 million applications during those first 3 months. According to the then-Assistant Secretary for Consular Affairs, this unprecedented level of demand in a compressed period contributed to State’s inability to respond to demand. State’s efforts to estimate demand for passports were also complicated by several external factors, including preparations for the introduction of the passport card for land border crossers and changes in implementation timelines for WHTI. For example, in its fiscal year 2007 budget request and Bureau Performance Plan for Consular Affairs, submitted to the Office of Management and Budget in January 2005, State anticipated the receipt of 15 million passport applications in 2007 and requested $185 million for passport operations, facilities, and systems to meet this demand. However, due to these changing circumstances, State revised the 2007 estimates in subsequent planning and budget documents, estimating 16.2 million receipts in April 2006 and 17.7 million receipts in March 2007. State’s fluctuating demand estimates also complicated efforts to prepare for the surge in demand at the lockbox operated by the financial agent, which provides passport application data entry and payment processing services. According to lockbox agent documents, between May 2006 and February 2007, State provided lockbox officials with at least five sets of estimates of passport applications for fiscal year 2007. Although the lockbox agent began preparing for an increased workload at the end of 2006, lockbox officials told us that they had difficulty adjusting to these changing estimates, because it takes roughly 60 to 90 days to prepare for increased demand, such as by hiring additional staff and ordering additional scanners.
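The refinement approach described above, distributing an annual estimate across months using historical application shares, can be sketched roughly as follows. The monthly shares below are illustrative assumptions chosen so the first-quarter projection matches the report's 4.7 million figure; they are not State's actual data.

```python
# Hypothetical sketch of the seasonal-adjustment approach described above:
# distribute an annual demand estimate across months using historical
# monthly application shares. All shares are illustrative assumptions.

annual_estimate = 15_000_000  # State's January 2005 estimate for FY2007

# Assumed historical share of annual applications received in each of
# the first three months of the calendar year (illustrative values).
historical_monthly_share = {"Jan": 0.105, "Feb": 0.105, "Mar": 0.103}

# Project first-quarter receipts by applying each month's share.
projected_q1 = sum(annual_estimate * share
                   for share in historical_monthly_share.values())

actual_q1 = 5_500_000  # applications actually received, Jan-Mar 2007
shortfall = actual_q1 - projected_q1

print(f"Projected Jan-Mar receipts: {projected_q1:,.0f}")
print(f"Actual Jan-Mar receipts:    {actual_q1:,.0f}")
print(f"Underestimate:              {shortfall:,.0f}")
```

Under these assumed shares, the projection comes to about 4.7 million applications, roughly 0.8 million short of the 5.5 million State actually received in those months.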
Further, these officials told us they did not expect the volume of applications they eventually did receive. According to State officials, the lockbox agent planned to process 325,000 applications per week, but actual workload peaked at 500,000 applications per week, an increase of over 50 percent. As a result, large numbers of passport applications accumulated at the lockbox facility, and applications took far longer to be processed than the typical 24 hours. In April 2007, according to lockbox data, many applications took as long as 3 weeks to process before being sent to passport agencies for adjudication. The primary issues contributing to this backlog, according to lockbox officials, were incorrect demand estimates from State and insufficient lead time. State issued a record number of passports in fiscal year 2007, but deficiencies in its efforts to prepare for this increased demand contributed to lengthy backlogs and wait times for passport applicants. Reported wait times for routine passport applications peaked at 10 to 12 weeks in the summer of 2007—with hundreds of thousands of applications taking significantly longer—compared to 4 weeks in 2006. According to State data, the department issued a record number of passports in fiscal year 2007—about 18.5 million passports, over 50 percent more than the 12.1 million passports it issued in fiscal year 2006. State officials characterized the increase in passport demand as exponential over the past few years and attributed it mostly to the increased number of applications from Americans complying with the WHTI requirements. As noted earlier, the number of passports issued doubled between 2004 and 2007. In January 2007, State began to notice a sharp increase in passport applications. 
Department officials initially believed this increase was temporary because of their efforts, initiated in December 2006, to publicize new travel document requirements related to the WHTI; however, State reported that the number of applications it received increased from about 1.5 million per month in January and February 2007 to about 1.8 million or more in each of the following 3 months. Additionally, as noted in a 2007 study, passport applications in 2007 did not conform to historical trends, contributing to State’s lack of preparedness. As a result of the increased number of passport applications in the first half of 2007, reported wait times more than doubled, causing applicants to wait 10 to 12 weeks for their passports on average, though many applicants waited significantly longer. According to State data, the average time to process a passport—from the time one of State’s passport agencies receives the application until the time it mails the passport to the applicant—was about 3½ weeks in January 2007, better than State’s 5-week goal for that period. However, by the summer of 2007, processing times had risen to about 8½ weeks, which, according to State officials, led to wait times of between 10 and 12 weeks. Further, data provided by State show that 373,000 applications—or about 12 percent of all routine applications—took over 12 weeks to process during the peak of the surge in July and August 2007. By contrast, average processing times peaked at just over 4 weeks in 2006 and just over 3 weeks in 2005 (see fig. 6). Furthermore, expedited passport applications, which State guaranteed would be processed within 3 business days of receipt, took an average of over 6 days to process in July 2007, leading to reported wait times of 2 to 3 weeks for expedited applications. In addition, there were wide variations in routine application processing times between the different passport agencies during the surge.
According to State’s data, average processing times for individual passport agencies ranged between 13 and 58 days during the peak of the surge in July 2007. State does not have consistent service standards or goals for timeliness of passport processing. During the 2007 surge, many applicants found it difficult to get timely, accurate information from State regarding wait times for passports; as a result, State experienced a record number of customer service inquiries from the public and Congress during the surge, drawing resources away from adjudicating passports and increasing wait times. In addition, State does not systematically measure applicants’ wait times—measuring instead processing time, which does not include the applicant’s total wait time—further contributing to the confusion and frustration of many applicants. State does not provide passport applicants with a committed date of issuance for passports; rather, it publishes current processing times on the department Web site. Over the past year, State has changed the information provided on its Web site from estimated wait time to expected processing time. Because these processing times fluctuate as passport demand changes, applicants do not know for certain when they will receive their passports. For example, at the beginning of the surge, reported wait times were 6 to 8 weeks. By the summer of 2007, however, reported wait times had risen to 10 to 12 weeks before falling to 6 to 8 weeks in September and 4 to 6 weeks in October 2007. According to passport agency staff, however, the times on State’s Web site were not updated frequently enough during the surge, which led to inaccurate information being provided to the public. Further, State has not had consistent internal performance goals for passport timeliness (see table 1). 
While State generally met its goals for passport processing times—which decreased from 25 to 19 days—between 2002 and 2005, the department changed its timeliness goal in 2007 from processing 90 percent of routine applications within 19 days to maintaining an average processing time of 35 days for routine applications. According to State officials, the department relaxed its goals for 2007 and future years due to the large increase in workload and the expectation of future surges in passport demand. However, even with the unprecedented demand for passports in 2007 and State’s lack of preparedness, the department managed to maintain a reported average processing time of 25 days over the course of the year, raising questions about whether State’s 35-day goal is too conservative. During the 2007 surge in passport demand, applicants found it difficult to get information about the status of their applications, leading many to contact several entities for information or to reapply for their passports. Many of the applicants who did not receive their passports within their expected time frame called NPIC—State’s customer service center—overwhelming the center’s capacity and making it difficult for applicants to get through to a customer service representative. Other applicants contacted passport agencies or acceptance facilities directly. However, passport agency staff told us that there was little or no contact between their customer service representatives and the acceptance facilities, leading to applicants receiving inconsistent or inaccurate information regarding wait times. Passport agency staff said that officials in Washington provided processing time estimates to postal facilities that were far below actual processing times. In addition to contacting State and State’s partners, thousands of applicants contacted their Members of Congress for assistance in getting their passports on time, according to State data.
One Senator noted that he increased the number of staff in his office responding to passport inquiries from one to seven during the height of the surge in passport demand. According to State officials, many applicants made inquiries about the status of their passports through multiple channels—through NPIC, passport agencies, State headquarters, or congressional offices—leading to several cases in which multiple staff at State were tasked with searching for the same application. This duplication of effort drew resources away from passport adjudication and further contributed to delays in processing. According to State officials, many applicants who were unable to receive timely, accurate information on the status of their passport applications appeared in person at passport agencies to resubmit their applications— some having driven hundreds of miles and others having taken flights to the nearest passport agency. For example, according to officials at the New York passport agency, whose workload consists primarily of counter applications, the number of in-person applicants nearly doubled at the height of the surge. These officials told us they generally issue 450 to 550 passports on any given day, but during the surge they experienced an extra 400 to 600 daily applicants without appointments, most of whom were resubmitting their applications. Officials at another passport agency added that customers appearing in person at the passport agency stated that they would have made alternative arrangements had they known how long the wait time was going to be. This high number of resubmissions further slowed State’s efforts to reduce passport backlogs during the surge. The inundation of in-person applicants led to long lines and large crowds at many passport agencies during the summer of 2007. 
For example, officials in New York said that customers waited in line outside the building for up to 6 hours before appearing at an appointment window—and then waited even longer to see a passport specialist. According to these officials, this line snaked around the building, and the agency had to work with local law enforcement to control the crowds. Officials in Houston also said that crowd control during the surge was a significant challenge for their agency due to the large numbers of applicants appearing without appointments. The passport processing times that State publishes on its Web site do not measure the total length of time between the applicant’s submission of an application and receipt of a passport. According to State officials, processing times are calculated based on passport aging statistics—that is, roughly the period beginning when the passport agency receives a passport application from the lockbox facility and ending when the passport is mailed to the applicant. Consequently, State’s measure of processing times does not include the time it takes an application to be sent from an acceptance facility to the lockbox, be processed at the lockbox, or be transferred from the lockbox to a passport agency. While this time may be as short as 1 to 2 days during nonpeak periods, during the surge, when hundreds of thousands of passport applications were held at the lockbox facility for as long as 3 weeks, this time was significantly longer. Passport agency officials told us that during the surge, applicants were confused about the times published on State’s Web site, as they were not aware that State did not start measuring processing times until a passport agency received the application from the lockbox facility.
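The gap between State's published processing time and an applicant's actual wait can be illustrated with a simple tally. All stage durations below are assumptions for illustration, anchored loosely to the figures the report cites (lockbox holds of up to 3 weeks at the peak, agency processing of about 8½ weeks).

```python
# Illustrative breakdown of an applicant's total wait during the surge.
# All durations are assumed values for illustration only.
mail_to_lockbox_days = 2        # acceptance facility -> lockbox (assumed)
lockbox_hold_days = 21          # up to ~3 weeks at the peak (per report)
lockbox_to_agency_days = 2      # lockbox -> passport agency (assumed)
published_processing_days = 60  # ~8.5 weeks agency processing (per report)

# The published figure covers only the last stage; the applicant
# experiences the sum of all four.
total_wait_days = (mail_to_lockbox_days + lockbox_hold_days
                   + lockbox_to_agency_days + published_processing_days)

print(f"Published processing time: {published_processing_days} days")
print(f"Applicant's actual wait:   {total_wait_days} days")
```

Under these assumptions the applicant waits 85 days, roughly 12 weeks, even though the published processing time covers only about 60 of them, which is consistent with the confusion the report describes.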
Finally, customers wishing to track the status of their applications are unable to do so until 5 to 7 days after they have submitted their passport application, because applications do not appear in State’s tracking system until the department receives them from the lockbox facility. State increased the capacity of its staffing, facilities, customer service, and lockbox functions during the surge. Passport agencies also developed their own efforts to increase the efficiency and effectiveness of passport operations. State’s actions, combined with seasonal declines in passport applications, decreased wait times to normal levels by October 2007. State estimated the cost of the emergency measures to respond to the surge to be more than $40 million. In reaction to the 2007 surge in passport demand, State took a variety of actions related to staffing to increase its production capacity. State instituted mandatory overtime for all government and contract staff and suspended all noncritical training and travel for passport staff during the surge. State hired additional contract staff for its passport agencies to perform nonadjudication functions. State also issued a directive that contractor staff be used as acceptance agents to free up passport specialist staff to adjudicate passport applications, and called upon department employees—including Foreign Service officers, Presidential Management Fellows, retirees, and others—to supplement the department’s corps of passport specialists by adjudicating passports in Washington and at passport agencies around the United States. State also obtained an exemption from the Office of Personnel Management to the hiring cap for civil service annuitants, so that it could rehire experienced and well-trained retired adjudicators while it continued to recruit and train new passport specialists. 
In addition, the department dispatched teams of passport specialists to high-volume passport agencies to assist with walk-in applicants and process pending passport applications. These teams also provided customer support, including locating and expediting applications of customers with urgent travel needs. Finally, consular officers at nine overseas posts also remotely adjudicated passports, using electronic files. In addition, State took steps to increase the capacity of its facilities to handle the increased workload. State expanded the hours of operation at all of its passport agencies by remaining open in the evenings and on weekends. Several agencies also added a second shift, and State’s two passport processing centers operated 24 hours a day, in three shifts. Public counters at passport agencies were also opened on Saturdays for emergency appointments, which were scheduled through State’s centralized customer service call center. In addition to increasing work hours, State realigned workspace to make more room for adjudication purposes. For example, passport agencies used training and conference rooms to accommodate additional passport specialists. One passport agency borrowed space from another government agency housed in the same building to prescreen applicants. Some passport agencies that had more than one shift instituted desk sharing among staff. In some instances, because of the lack of workstations, adjudication staff also manually adjudicated applications with a pen and paper and entered the application’s approval into State’s information system at a later time. In addition, one passport agency renovated its facility by expanding the fraud office to add desks for more staff.
To further increase the capacity of its customer service function, State extended NPIC’s operating hours and, according to State officials, increased the number of its customer service representatives from 172 full-time and 48 part-time staff in January 2007 to 799 full-time and 94 part-time staff in September 2007. In response to heavy call volume at NPIC during the surge, State installed 18 additional high-capacity lines, each of which carries 24 separate telephone lines, for a total of 432 new lines—25 percent of which were dedicated to congressional inquiries, according to State officials. State also established an e-mail address for congressional inquiries. To supplement NPIC, State also established a temporary phone task force in Washington composed of department employees volunteering to provide information and respond to urgent requests, augmented an existing consular center with about 100 operators working two shifts, and temporarily expanded its presence at a federal information center with 165 operators available to assist callers 7 days a week. State also took emergency measures in coordination with Treasury to bolster the lockbox function in reaction to the surge. First, Treasury coordinated with State to amend the terms of its memorandum of understanding with its financial agent responsible for passport application data entry and payment processing, to increase the agent’s lockbox capacity. Specifically, under the revised memorandum, the financial agent committed to processing up to 3 million applications per month at the lockbox. According to Treasury officials, to increase its processing capacity, the financial agent increased the number of its staff at the lockbox facility from 833 in January 2007 to 994 in September 2007; offered a pay incentive to increase the number of its employees working overtime; and opened an additional lockbox facility—operating 24 hours a day, 7 days a week in three shifts.
In addition, the financial agent implemented some process improvements at the lockbox during the surge, including automating data entry, presorting mail by travel date, and implementing a new batching process to increase the number of applications processed. The financial agent also increased the number of scanners, the capacity of its application server and data storage, and the bandwidth of its network to accommodate the heavy volume of passport applications. In addition to the measures described above, Treasury and State held weekly conference calls with the financial agent to discuss concerns and determine various courses of action to clear the passport application backlog. Treasury and State officials also visited lockbox facilities to review operations and received daily status reports from the financial agent indicating the processing volumes and holdover inventory. In addition to the emergency steps that State took, it also accelerated some planned efforts such as hiring more permanent staff and opening a new passport book printing facility. Although hiring additional permanent staff was already part of CA’s long-term planning to handle an increase in passport demand, State moved up the time frame to respond to the surge, according to State officials. Consequently, State hired an additional 273 staff in the last quarter of fiscal year 2007; however, according to State officials, not all of these staff were on board at the end of the fiscal year because of delays in processing security clearances for new hires. Additionally, State opened a new passport book printing center in March 2007, ahead of its schedule to open in June 2007, to centralize its book printing function and free up space at passport agencies for adjudication. Passport agencies took various actions to meet their specific needs in reaction to the surge.
During our site visits, State officials told us that their passport agencies had undertaken such actions as developing a software program to better track suspense cases; creating a batch tracking system whereby each shelf was numbered and all batches boxed on this shelf were marked with the same number; developing a “locator card” for customers, which was color-coded to indicate different situations—such as customers submitting new applications, inquiring about pending applications, and resubmitting applications—to enable the agency to locate the application file before the customer came into the agency; providing customers with a ticket that provided expedited service if they had to return on another day; and using students for nonadjudication tasks for the summer. In addition, according to State officials, some passport agencies used security guards to prescreen applicants at the entrance to control crowds and improve the efficiency of operations. Finally, other agencies organized teams to handle inquiries from congressional staff and State headquarters staff. In an effort to document and disseminate such initiatives, State compiled a best practices document for passport operations during the surge. These best practices were submitted by passport agencies on a variety of issues, including work flow improvements, counter management, and communication, among others. To provide a forum for feedback for passport agencies and improve passport operations, State also conducted a lessons learned exercise following the surge. State gathered information from passport staff at all levels and compiled a lessons learned document, which was made available on CA’s internal Web site. According to this review, the primary lesson learned from the surge was that the United States passport is increasingly viewed by the American public not only as a travel document, but as an identity document. 
Accordingly, the lessons learned document outlined lessons learned in five main categories—process, communications, technology, human resources, and contracts—to help meet future demand for passports. However, State officials told us that this document was a draft and State has not formally embraced it. The extraordinary measures that State implemented to respond to the surge in passport demand, combined with the normal seasonal decline in passport applications between September and January, helped State reduce wait times by October 2007. According to data provided by State, the department returned to normal passport processing times of 4 to 6 weeks by October 2007. These data show that State has maintained these processing times through July 2008, according to State’s Web site. State estimated the cost of its emergency measures to respond to the 2007 surge in passport demand to be $42.8 million. This amount included $28.5 million for contract-related costs, $7.5 million for overtime pay for staff from CA and other bureaus within State, and $3.1 million spent on travel to passport agencies for temporary duty staff. In addition, State spent $3.2 million on costs associated with buying equipment and furniture. State also spent an additional $466,000 on costs related to telephone services for its call centers, for rentals, and for Office of Personnel Management position announcements for hiring additional passport staff during the surge. These estimates do not include the costs of other measures such as hiring additional staff, which State had already planned but accelerated in order to respond to the 2007 surge. To cover costs incurred due to the surge, State notified Congress in June 2007 of its plans to devote an additional $36.9 million to the Border Security Program. 
According to State officials, this amount included $27.8 million for passport operations, such as $15 million for a passport processing center, additional costs for Foreign Service Institute training for new passport specialists, and salaries for 400 new staff to be hired in fiscal year 2007. In September 2007, State notified Congress of its intent to obligate an additional $96.6 million for its Border Security Program, including $54 million for additional passport books, according to State officials. In December 2007, State sent a revised spending plan for fiscal year 2008 to Congress to increase its resources to enable it to handle processing of 23 million passports. This plan included an additional 700 personnel to meet anticipated passport demand and a new passport adjudication center. The plan also provided for three passport gateway agencies to be established in fiscal year 2008. State has enhanced its capacity for responding to surges in passport demand in the near term, such as by improving its efforts to estimate passport demand. However, State lacks a comprehensive, long-term strategy for improving passport operations. State commissioned a review of its passport operations, completed in 2005, that identified several deficiencies and proposed a number of potential measures to guide modernization efforts; however, State does not have a plan to prioritize and synchronize these efforts. We have reported that an enterprise approach could help agencies develop more efficient processes, and this type of approach could help State improve passport operations and better prepare for future changes in passport demand. State has taken several steps to increase its passport production capacity and improve its ability to respond to near-term increases in passport demand. 
As we have noted, State hired more staff and improved individual components of passport operations, such as centralizing the printing of passport books and upgrading information technology, during and following the 2007 surge in passport demand. Additionally, State developed two shorter-term plans to address a future increase in demand, including an adjudicative capacity plan, which establishes a set of triggers for determining when to add capacity. According to State officials, the department has completed some preparations for future surges in demand, such as opening a second book printing facility in May 2008 and creating a reserve adjudication force. State also expects to open new passport agencies in Dallas and Detroit by the end of 2008 and in Minneapolis by March 2009, according to these officials. However, it faces challenges in completing others. For example, State hired only 84 out of a planned 400 additional staff called for in the first quarter of fiscal year 2008. Similarly, according to officials, State has not yet established an additional mega processing center and is behind schedule in renovating and expanding some of its existing facilities. According to these officials, these plans were developed to expand State’s capacity to issue passports. State has also taken several steps to improve future estimates of passport demand since it underestimated demand in fiscal year 2007. In particular, State’s contractor designed a new passport demand survey to overcome limitations in its 2005 survey, which was not representative of all border crossers and did not include air and sea travelers. The contractor’s 2007 estimate of total demand for passports in 2008 was derived from (1) a land border crosser survey to collect data on the impact of WHTI on passport demand in 2008, and (2) a nationally representative panel survey of 41,000 U.S. 
citizens, which included data on overall passport demand, including for sea and air travel and for nontravel identification purposes. State then applied average monthly application rates from previous years to the contractor’s data to estimate the number of passport applications for each month and to identify peak demand. We found these methodologies sound in terms of survey design, sample selection, contact procedures, follow-up, and analysis of nonrespondents. However, estimating passport demand faces several limitations. State and contractor officials outlined some of these limitations, which include the following.

- It can be difficult for respondents to anticipate travel many months—or years—into the future. For example, the most recent surveys were conducted in May and September of 2007 and were used to estimate travel throughout 2008.
- Survey respondents tend to overstate their prospective travel and thus their likelihood of applying for a passport. While the contractor adjusted for this phenomenon in its 2007 survey, it did so based on assumptions rather than data.
- Some survey respondents did not understand certain regulations and options for passports and border crossings, such as WHTI requirements, suggesting that some of these individuals are unaware of the future need to apply for passports for travel to Canada or Mexico.
- Changes in personal or professional circumstances, or in the economy, can lead to changes in individuals’ international travel plans.
- Changes in regulations can affect passport demand. For example, New York State signed a memorandum of understanding with DHS to issue enhanced driver’s licenses that could be used for land border crossings and could reduce the demand for passports or passport cards obtained solely to meet WHTI requirements.
A 2005 study of passport operations commissioned by State identified several limitations in State’s passport operation, many of which were exposed during the department’s response to the 2007 surge in demand. This study and other plans, as described above, have also proposed numerous improvements to passport operations—many of which were generated by State officials themselves—and the department has begun to implement some of them. However, State does not have a long-term strategy to prioritize and synchronize these improvements to its operation. As we have reported previously, using a business enterprise approach that examines a business operation in its entirety and develops a plan to transition from the current state to a well-defined, long-term goal could help State improve its passport operations in the long term. In 2004, State contracted with an independent consulting firm to study its passport operations, which had not been formally examined for over 25 years. The study, issued in 2005, outlined the current state of passport operations and identified several issues that limited the efficiency and effectiveness of passport operations. Several of these limitations were exposed by State’s response to the 2007 surge in passport demand, and many of them remain unresolved. For example, the study found that State’s practice of manually routing the original paper passport application through the issuance process—including mailing, storage, management, and retrieval of physical batch boxes containing paper applications—slowed the process, extended processing time, and made upgrade requests difficult to handle. Due to the overwhelming number of applications during the surge, a few passport agencies told us that there was no extra space available at their facilities; according to agency officials, this situation led to duplicative efforts.
In addition, the study found that limited information was available to management and that reporting tools, such as Consular Affairs’ Management Information System, could not produce customized reports. Further, the study found that this system could not provide information on the performance of its business partners, such as acceptance facilities or the lockbox, resulting in data being available only for applications that had been received at a passport agency. As a result, during the surge, State was not immediately aware of the growing workload at the lockbox. The study also found limitations in State’s communications, including challenges to communicating among passport agencies, providing feedback to headquarters in Washington, and conducting public outreach. For example, during the surge, State did not effectively disseminate management decisions and communicate changes in internal processes and resources available to field staff, according to State’s lessons learned document. In addition to identifying limitations, the study proposed a guide for State’s modernization efforts, including a framework to put in place for passport services by the year 2020. As part of this guide, the study identified key factors that affected State’s methods for conducting business and the performance of passport operations. For example, the study identified increased demand as one such factor, due to normal trends in passport demand, the impact of WHTI implementation, and the passport’s increasing role as an identification document for everyday transactions. To address these issues, the study suggests that State will have to take steps such as redistributing workload through centralization to meet increasing volumes—State has begun to implement this suggestion by establishing passport printing facilities in Arkansas and Arizona. 
Additionally, the study notes that passport adjudication practices will become an even more important part of combating terrorism and other security concerns in the future, which will require State to utilize technology and external data to improve its risk assessment and fraud detection methods. Finally, the study also suggests that State will face changing customer expectations in the future, requiring more frequent and effective communications and, possibly, changes to service standards. As we previously noted, this issue continues to be a challenge for State. Although the study proposed several initiatives to improve passport operations, State officials told us that the department has not developed a formal plan to implement the initiatives, nor does it have a strategic plan outlining how it intends to improve its entire passport operations. State officials told us that because the department has been largely focused on carrying out its day-to-day operations—especially as it responded to the 2007 surge in passport demand—it has not had time to document its strategic plan. While State has taken a few steps to implement some of the proposed initiatives of the study—such as developing and implementing the e-Passport, opening two passport adjudication centers, and issuing passports remotely—State does not have a systematic strategy to prioritize and synchronize these potential improvements to its passport operations. 
Some of these proposed initiatives that State has not implemented could be useful to State’s current operations, including the following: leveraging electronic work flow management—enabling State to develop flexible, streamlined work streams that improve its ability to monitor and manage passport operations while reducing manual processes for the physical movement and storage of paper applications and supporting documentation—to ensure a more efficient work flow that supports the issuance of increasing numbers of passports every year; providing management visibility over the end-to-end passport issuance process extending across State and partner organizations, to effectively manage the process and enforce performance standards; applying validations and identity checks automatically upon receipt or modification of an application by consistently applying a comprehensive set of business rules, to strengthen an adjudication process that supports the integrity of the passport as a primary identity document; offering an online point of service with expanded functionality as a means for self-service by the public to facilitate a simplified, flexible, and well-communicated application process to enhance service to the passport customer; and conducting a comprehensive workforce analysis to define a sustainable workforce structure and plans through 2020 and enhancing communications within State and its business partners to improve efficiency and promote knowledge sharing. The recent increases in passport demand have made the need for a plan to prioritize the study’s proposed initiatives that State intends to implement more urgent. While the study assumed that State would issue a minimum of 25 million passports by 2020, this time frame has already become outdated, as actual issuances were 18.6 million in fiscal year 2007 and, in July 2007, were estimated by State to reach 30 million as early as 2010. 
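The first two initiatives above (electronic work flow management and end-to-end process visibility) can be illustrated with a minimal sketch: recording a timestamp at each hand-off in the issuance process lets managers compute elapsed time per stage and locate bottlenecks. The stage names and dates below are hypothetical, invented for this example, and are not drawn from State's actual systems.

```python
from datetime import datetime

# Hypothetical processing stages in the end-to-end passport issuance
# process; the actual stage names in State's systems may differ.
STAGES = ["accepted", "lockbox_received", "agency_received",
          "adjudicated", "printed", "mailed"]

def stage_durations(events):
    """Given {stage: timestamp} for one application, return the elapsed
    days between each pair of consecutive completed stages."""
    durations = {}
    for earlier, later in zip(STAGES, STAGES[1:]):
        if earlier in events and later in events:
            delta = events[later] - events[earlier]
            durations[f"{earlier}->{later}"] = delta.days
    return durations

# Example: one application's (hypothetical) hand-off timestamps.
app = {
    "accepted":         datetime(2007, 5, 1),
    "lockbox_received": datetime(2007, 5, 9),
    "agency_received":  datetime(2007, 5, 12),
    "adjudicated":      datetime(2007, 6, 20),
}
print(stage_durations(app))
```

In a real work-flow system these events would be captured automatically as each application changes hands; aggregating per-stage durations across many applications would show, for instance, whether delays are accumulating at the lockbox or during adjudication.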
We have reported that using an enterprise approach to examine and improve the entirety of a business process can help agencies develop more efficient processes. An enterprise approach defines the day-to-day operations and processes needed to meet agency needs, resulting in streamlined operations rather than simply automating old ways of doing business, and effectively implements the disciplined processes necessary to manage the project. A key element of this approach is the concept of operations, which assesses the agency’s current state, describes its envisioned end state, and provides a transition plan to guide the agency from one state to the other. An effective concept of operations would also describe, at a high level, how all of the various elements of an organization’s business systems relate to each other and how information flows among these systems. Further, a concept of operations would serve as a useful tool to explain how all the entities involved in a business system can operate cohesively, rather than in a stovepiped manner—in the case of passport issuance, this tool would include acceptance agents, the lockbox facility, and the various components of passport operations within State. Finally, it would provide a road map that can be used to (1) measure progress and (2) focus future efforts. Using an enterprise approach could provide State with management visibility over the passport issuance process extending across its entire passport operations, thereby improving these operations in the long term. While State has made several improvements to its passport operations, it has yet to develop and implement a comprehensive strategy for passport operations. 
The 2005 study, which included a proposed concept of operations, recognized the need for a comprehensive approach and was designed to analyze the entire passport issuance process—including the applicant, passport agency, acceptance facility, lockbox facility, passport processing center, and passport book printing center. However, according to State officials, State has not adopted the framework for improving passport operations proposed by this study, nor has it developed an alternative strategy for prioritizing and synchronizing its varied efforts to improve these operations. The 2007 surge in passport demand exposed serious deficiencies in State’s passport issuance process. Passport wait times reached record highs, leading to inconvenience and frustration for many thousands of Americans. Once it recognized the magnitude of the problem it was facing, State took extraordinary measures to reduce wait times to normal levels by October 2007. However, these actions were not part of a long-term, comprehensive strategy to improve passport operations. State estimates that demand for passports will continue to grow significantly, making such a strategy an urgent priority. Indeed, a study State commissioned to identify potential improvements to its passport operations was premised upon demand estimates for 2020 that are likely to be surpassed as early as this year. State needs to rethink its entire end-to-end passport issuance process, including each of the entities involved in issuing a passport, and develop a formal strategy for prioritizing and implementing improvements to this process. Doing so would improve State’s ability to respond to customer inquiries and provide accurate information regarding expected wait times by increasing its visibility over a passport application from acceptance to issuance. It would also encourage greater accountability by providing transparency of State’s passport operations to the American public. 
In order to improve the effectiveness and efficiency of passport operations, we recommend that the Secretary of State take the following two actions: Develop a comprehensive, long-term strategy for passport operations using a business enterprise approach to prioritize and synchronize the department’s planned improvements. Specifically, State should fully implement a concept of operations document that describes its desired end state for passport operations and addresses how it intends to transition from the current state to this end state. Begin tracking individual passport applications from the time the customer submits an application at an acceptance facility, in order to maintain better visibility over the passport process and provide better customer service to passport applicants. State provided written comments on a draft of our report, which we have reprinted in appendix II. State concurred with our recommendations; however, it expressed disappointment with our finding that the department lacks a comprehensive strategy to improve its passport operations. Although the department has developed short-term and contingency plans for increasing passport production capacity and responding to future surges in demand, we do not believe these efforts constitute a comprehensive strategic plan. However, we believe the establishment and staffing of the Passport Services Directorate’s Strategic Planning Division is a step in the right direction, and we encourage this office to focus on the modernization efforts discussed in this report. State also disputed our characterization of the 2005 study it commissioned to review existing processes and propose recommendations for improving these processes. We did not intend to suggest that the department fully adopt all of the recommendations in that study and have clarified that point in our findings. 
State and Treasury also provided technical comments and updated information, which we have included throughout this report as appropriate. We are sending copies of this report to the Secretaries of State and the Treasury and will make copies available to others upon request. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In this report, we review (1) the extent to which the Department of State (State) was prepared for the surge in passport demand in 2007 and how State’s readiness affected passport operations, (2) how State increased its passport production capacity in response to the 2007 surge, and (3) State’s readiness for near-term surges in demand and whether State has a comprehensive strategy in place to improve long-term passport operations. To determine the extent to which State was prepared for the surge in passport demand in 2007, how State’s readiness affected passport operations, and how State increased its passport production in response to the surge, we observed passport operations and interviewed U.S. government officials at six passport agencies—Hot Springs, Arkansas; Charleston, South Carolina; Houston; New Orleans; New York; and Washington. We selected these sites based on their workload volume and geographic locations. We visited State’s lockbox facility in New Castle, Delaware, and interviewed officials from the financial agent responsible for providing lockbox functions. We reviewed State’s passport demand estimates for fiscal year 2007 and analyzed the survey methodology supporting these estimates. We also collected and analyzed data on passport receipts and issuances, and staffing. 
In addition, we interviewed officials from State’s Bureau of Consular Affairs, the Department of Homeland Security, the Department of the Treasury’s Financial Management Service, and State contractors responsible for collecting survey data on passport demand. To determine State’s passport processing times during the 2007 surge in demand, we interviewed cognizant officials, analyzed data provided by State, and reviewed public statements by State officials and information on State’s Web site. We determined that these data were sufficiently reliable to illustrate the sharp rise in processing times that occurred in the summer of 2007, and place that rise in the context of yearly and monthly trends from 2005 to 2007. However, we found that the rise in application processing time in the summer of 2007 was likely understated to some degree. This understatement likely occurred because the turnaround time for entering applications into State’s data system increased greatly at some points during 2007, due to the abnormally large volume of applications. To determine the reliability of data on passport issuances from 1997 through 2007, we interviewed cognizant officials and analyzed data provided by State. We determined that the data were sufficiently reliable to illustrate a relatively stable level of demand for passports between 1997 and 2003, followed by a significant increase in passport issuances since 2003. To determine whether State is prepared to more accurately estimate future passport demand and has a comprehensive strategy in place to address such demand, we assessed State’s passport demand study for fiscal year 2008 and beyond, a draft report on lessons learned from the 2007 surge in passport demand, and State’s long-term road map for the future of passport operations. We also reviewed prior GAO reports on enterprise architecture and business systems management. 
In addition, we interviewed Bureau of Consular Affairs officials in Washington and at the regional passport agencies. We conducted this performance audit from August 2007 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 1. State said that the 2005 study it commissioned to review existing passport processes and propose recommendations for improving these processes did not identify or mention deficiencies. We disagree. The version of the study provided to us notes that it includes an “assessment of the deficiencies of the current state” (p. 3) and identifies issues that “limit the efficiency and effectiveness of passport operations” (p. 4). These deficiencies included the reliance on a manual, paper-based work flow, ineffective communications, and inflexible passport systems. 2. We did not intend to suggest that the department should have adopted all of the 2005 study’s recommendations and have made slight modifications to our finding to clarify this point. Our intent was to note that the department has developed a variety of recommendations to improve its passport operations—many of which were developed by staff in Consular Affairs—but still needs a comprehensive strategy to prioritize and synchronize the improvements it intends to undertake. 3. While our report recognizes that State has developed several plans designed to increase passport production capacity, improving the department’s ability to respond to near-term increases in demand, these plans are not the same as a comprehensive strategy for improving passport operations. 
Our recommendation addresses State’s need for such a strategy to guide its modernization efforts, by using a business enterprise approach, and not just to increase capacity. 4. State notes that it has improved its efforts to track the 72 percent of passport applications it receives from U.S. Postal Service acceptance facilities for accountability purposes. From a customer service standpoint, we believe that the department should track all applications from the time of execution in order to give the customer an accurate estimate of when to expect his or her passport. Doing so would help eliminate customer confusion, which contributed to the strain on State’s customer service operation experienced during the 2007 surge. In addition to the person named above, Michael Courts (Assistant Director), Robert Ball, Melissa Pickworth, and Neetha Rao made key contributions to this report. Technical assistance was provided by Carl Barden, Joe Carney, Martin de Alteriis, Chris Martin, and Mary Moutsos. | In 2007, following the implementation of new document requirements for travelers entering the United States from within the Western Hemisphere, the Department of State (State) received a record number of passport applications. In June 2009 further document requirements are scheduled to go into effect and will likely lead to another surge in passport demand. GAO examined (1) the extent to which State was prepared for the surge in passport demand and how its readiness affected passport operations, (2) State's actions to increase passport production capacity in response to the surge, and (3) State's readiness for near-term surges in demand and its strategy to improve passport operations. GAO interviewed officials from State and the Departments of the Treasury and Homeland Security, conducted site visits, and reviewed data on passport processing times and reports on passport operations. 
State was unprepared for the record number of passport applications it received in 2007, leading to significant delays in passport processing. State underestimated the increase in demand and consequently could not give the financial agent that processes passport application payments enough notice to prepare for the increased workload, further adding to delays. As a result, reported wait times reached 10 to 12 weeks in the summer of 2007—more than double the normal wait—with hundreds of thousands of passports taking significantly longer. State had difficulty tracking individual applications and failed to effectively measure or communicate to applicants the total expected wait times, prompting many to re-apply and further straining State's processing capacity. State took a number of emergency measures and accelerated other planned efforts to increase its passport production capacity in 2007. For example, to help adjudicate passports, State established four adjudication task forces and deployed passport specialists to U.S. passport agencies severely affected by the surge. In addition, State accelerated hiring and expansion efforts. As a result of these efforts and the normal seasonal decline in passport applications, wait times returned to normal by October 2007. According to State estimates, these emergency measures cost $42.8 million. Although State has taken steps to improve its ability to respond to near-term surges in passport demand, it lacks a comprehensive strategy to improve long-term passport operations. State previously identified several deficiencies limiting the efficiency and effectiveness of passport operations, such as reliance on a paper-based work flow and ineffective communications, and these deficiencies were exposed by State's response to the surge. 
While State also identified a framework to guide its modernization efforts, it does not have a comprehensive plan to prioritize and synchronize improvements to its passport operations. A comprehensive strategy for making these improvements—for example, using a business enterprise approach—would better equip State to handle a significantly higher workload in the future. |
The AOC’s authority to contract for goods and services is vested by statute in the agency head who has delegated this responsibility to the Chief Administrative Officer (CAO). The CAO has responsibility to, among other things, administer the procurement function on behalf of AOC. AMMD, which falls under the CAO, is authorized to enter into contracts on behalf of the agency. AMMD is the primary office responsible for developing contracting policies and procedures, appointing contracting officers, and awarding and overseeing contracts. Requirements for goods and services are identified by AOC’s operational units, which consist of various jurisdictions and offices that handle the day-to-day operations including the support, maintenance, and operations of buildings as well as construction and renovation projects. While AMMD has the primary responsibility of awarding and administering contracts, AMMD often works with the AOC’s jurisdictions and offices to assist in monitoring the progress of contracts awarded to support AOC’s various projects, such as the restoration of the Capitol Dome. From fiscal years 2011 through 2015, AOC obligated, on average, $326 million annually to procure goods and services. During the 5-year period, as figure 1 shows, the level of contracting actions has generally declined while obligations on contracts and orders varied. There was a substantial increase in obligations between fiscal years 2014 and 2015 when AOC awarded a contract to begin construction for the Cannon building renewal project. The vast majority of AOC’s spending to procure goods and services stems from the agency’s jurisdictions listed below in figure 2. Among the jurisdictions, the Capitol Power Plant and the House Office Buildings collectively accounted for 55 percent of AOC’s fiscal year 2015 contract obligations. As a legislative branch agency, AOC is subject to some but not all of the procurement laws applicable to government agencies. 
For example, both AOC and executive branch agencies are subject to the Buy American Act and the Contracts Disputes Act of 1978. Additionally, in some instances AOC has adopted certain procurement policies and regulations it would not otherwise be subject to. For example, although not subject to the Small Business Act, AOC worked with the Small Business Administration to establish a small business subcontracting and set-aside program to help the AOC more fully utilize small businesses. In addition, AOC has adopted certain characteristics and clauses of the FAR. For example, AOC incorporates FAR clauses related to contract changes, inspections, differing site conditions, availability of funds, and terminations. According to AOC officials, incorporating FAR clauses into AOC contracts offers significant benefits because the contract clauses have been drafted and reviewed by subject matter experts across the government and are familiar to government contractors. According to AOC officials, federal case law is usually available to address any contract interpretation issues. Our previous work has shown that acquisition planning, market research, and competition are key foundations for successful acquisition outcomes. Acquisition planning is the process by which agencies establish requirements and develop a plan to meet those requirements. Generally, project and contracting officials share responsibility for acquisition planning activities intended to produce a comprehensive plan for fulfilling the agency’s needs in a timely manner and at a reasonable cost. Our past work has found that acquisition planning is strengthened by documenting decisions to guide existing acquisitions and capturing important lessons and other considerations that can be used to inform future procurements. Market research is the process of collecting and analyzing data about capabilities in the market that could satisfy an agency’s needs. 
It is a critical step in informing decisions about how best to acquire goods and services. Effective market research can help agencies determine the availability of vendors to satisfy requirements and improve the government’s ability to negotiate fair and reasonable prices. Competition is the cornerstone of a sound acquisition process and a critical tool for achieving the best return on investment for taxpayers. Using full and open competitive procedures to award contracts means that all responsible contractors are permitted to submit offers. The benefits of competition in acquiring goods and services from the private sector are well established. Competitive contracts can help save taxpayer money, conserve scarce resources, improve contractor performance, curb fraud, and promote accountability for results. AOC developed a contracting manual to provide guidance for agency officials responsible for purchasing goods and services. The manual was implemented in April 2014 and includes guidelines on topics similar to those included in the FAR. AOC’s contracting manual outlines procedures and guidance for acquisition planning, market research, and competition. In general, for the 21 contracts and orders we reviewed, AOC officials implemented procedures related to these critical functions, such as documenting justifications for the use of noncompetitive procedures, in a manner consistent with the manual. AOC has identified competition as a key objective, and the agency tracks the number of sole-source awards and percentage of dollars obligated under sole-source awards. However, the agency conducts limited analysis of the factors driving the number of sole-source awards or the level of competition achieved across different product types. Such analysis could help identify where additional management attention may be needed to maximize competition to the fullest extent. 
In 2014, AOC issued a contracting manual that incorporates statutes and federal regulations applicable to the AOC, as well as internal policies, in order to provide uniform policies across the agency and guidance to personnel. The AOC Inspector General had previously found that while AOC had developed procurement policies, orders, and procedures, they were not consolidated in one location, which made them difficult for AOC staff to access. The manual covers topics central to AOC day-to-day contracting functions, such as acquisition planning, market research, and competition, all of which we have previously found to be key aspects of a sound acquisition process. AOC started requiring written acquisition plans in August 2012, approximately 18 months prior to the publication of the contracting manual. Though AOC staff engaged in acquisition planning to inform procurement decisions before August 2012, plans were not consistently documented, according to contracting officers. Further, AMMD officials stated that another reason they started requiring acquisition plans was to help enforce acquisition timeframes agreed upon by the office that needed the acquisition and contracting officers. According to officials, the requiring offices consistently missed important deadlines, oftentimes resulting in lengthy acquisition cycles. As a result, AMMD implemented the requirement for written acquisition plans to help alleviate this problem. AMMD officials believe that requiring written acquisition plans has helped shorten acquisition timeframes. AOC developed a template to assist staff in preparing written acquisition plans which, in turn, helps to ensure key information is considered and captured for each acquisition. AMMD officials are considering options to revise the template staff use to document acquisition plans so that it is more adaptable to the specific circumstances of a procurement. 
As shown in table 1, the AOC manual shares some common acquisition planning principles with the FAR. On all the contracts and orders we reviewed, we found that the AOC conducted acquisition planning. AOC’s practices generally met the agency’s requirements for acquisition planning, including preparing written acquisition plans, addressing the prospects for competition, and involving the appropriate stakeholders in the planning process, among other things. Of the 21 contracts and orders that we reviewed, seven files required written acquisition plans, based on the dollar threshold outlined in the contracting manual as well as the timing of the requirement, and five of those seven files had written acquisition plans. For the remaining two files that required acquisition plans, AMMD officials cited an administrative oversight and a requirement to use a mandatory service provider as the reasons for not preparing a written acquisition plan. In addition, we found that two other files contained written acquisition plans even though they were not required. The contracting officer on one of those projects, the Refrigeration Plant Revitalization project, stated that while not required, a written acquisition plan was completed due to the cost, complexity, and visibility of the project. The AOC’s contracting manual requires that a written plan be completed well in advance of when an acquisition is needed but does not establish timeframes for how far in advance acquisition plans should be completed. AMMD officials noted that the nature and complexity of the acquisition—such as a new or recurring requirement—determines the extent of advance preparation needed to develop the acquisition plan. As a result, AOC did not establish specific timeframes in the contracting manual. As shown in table 2, AOC has implemented market research policies in its manual that share some common principles with the FAR. 
In our review of AOC practices, we found that they generally met the requirements to conduct and document market research activity. We found that AOC employs a number of different ways of conducting market research that reflect what is in the contracting manual. For instance, AOC will often invite vendors to a potential construction worksite before publicizing a solicitation. This helps AOC identify potential qualified vendors and also allows vendors an opportunity to learn more about the requirement and determine if they want to make an offer on a project. We found that AOC held industry days for 5 of the 21 contracts and orders we reviewed for projects such as the Cannon Renewal project, the Dome Restoration project and the replacement of the skylight in the Hart Building, among others. Another example of market research that AOC performed was the use of a “sources sought” notification to determine the capabilities of the marketplace. For the 21 contracts and orders in our sample, we found that market research was documented in different ways. For instance, if a contract had an acquisition plan associated with it, the market research performed for that requirement would be documented in the acquisition plan. Additionally, we found contracting officers would document what research was performed and the results of those searches in memoranda contained in the contract files. AMMD officials stated that they are taking action to improve the quality of market research conducted, which is typically performed by the requiring office. AMMD plans to provide market research training in 2016 to enhance staff’s knowledge of how to conduct and document effective market research. AOC’s market research training is expected to focus on documenting market research, using a standardized template to capture the steps taken, and results of market research efforts. AOC’s contracting manual promotes full and open competition in the procurement process. 
Under full and open competition all responsible suppliers are provided an opportunity to compete for the agency’s contracts. AOC’s manual shares some common competition principles with the FAR as highlighted in table 3. Within our sample of contracts and orders, we found that AOC generally met its competition requirements as provided for in the agency’s contracting manual. Ten of the 21 contracts and orders we reviewed were competed and received more than one offer. In our previous work, we have reported that competitions that yield only one offer in response to a solicitation deprive agencies of the ability to consider alternative solutions in a reasoned and structured manner. All 11 of the non-competed contracts and orders we reviewed were awarded using non-competitive procedures based on exceptions cited in the AOC contracting manual. Specifically: Two contracts for janitorial services were awarded without full and open competition because of statutory provisions requiring that agencies use a list of specified providers for these services. Three task orders were awarded under base contracts that had originally been competed. In these three cases, since the original base contracts were awarded to only one vendor, any task order awarded under the base contracts is not required to be competed. Four contracts were awarded non-competitively because only one supplier was available. For example, when AOC was seeking to award a contract for the audio listening systems used as part of guided tours at the Capitol Visitor Center, AOC evaluated three vendors and determined that it was more cost effective and a better value to the government to maintain and replace the existing brand of listening devices instead of purchasing a new system. One contract was awarded non-competitively to develop continuity of operations plans in case of emergencies. The justification stated that open competition would publicly reveal sensitive information that could pose a security risk. 
As a result, AOC awarded the contract to a firm that had been used previously in order to limit the number of individuals with access to information on security risks and vulnerabilities. One contract to provide construction administration services, such as field observations, was awarded to the company that had designed and prepared all drawings and specifications for the project. The AOC believed that this company had the requisite technical expertise and therefore was in a unique position to provide the necessary evaluations and review of the documents. AOC has taken steps to gauge its effectiveness in implementing the agency’s policy to promote competition in the procurement process; however, currently it conducts limited analysis in this area. AOC leadership considers competition to be a key priority for the agency. The AOC contracting manual also emphasizes the importance of competition and recognizes market research as a means to evaluate competition. Our analysis of AOC procurement data showed that the agency competed approximately 50 percent of its contract obligations for the past 3 fiscal years—compared to 65 percent for the federal government overall. Federal internal control standards call for agencies to establish mechanisms to track and assess performance against their objectives. In addition, our prior work has shown that for policies and processes to be effective, they must be accompanied by controls and incentives to ensure they are translated into practice. The AOC began to collect competition data in fiscal year 2012. AOC has implemented mechanisms to track data on the number of non-competed awards and dollars obligated. In addition, AOC tracks competition levels across its organizational units as well as the agency’s use of allowable exceptions to competition. 
For example, AOC’s data shows that in fiscal year 2015, the primary basis for awarding noncompetitive contracts was the only one responsible source exception to competition—meaning that only one vendor could fulfill the requirement. While this is a good first step to gaining insight into the agency’s competition efforts, additional analyses could provide key information that highlights trends in AOC’s overall competition levels, the factors driving the use of the only one responsible source exception such as the quality of AOC’s market research, the types of goods and services that AOC is most successful in competing, and areas where focused attention may be needed. AOC officials did not dispute the value of further analyzing data about the agency’s competition efforts, but noted they have not previously identified the need to conduct analyses beyond their current efforts. Tracking competition data instills accountability at all levels and ensures that AOC leadership has the information readily available to make decisions rather than rely on ad hoc means. Routinely tracking its procurements at a more granular level—such as competition across goods and services—also would provide AOC leadership with important information to identify successful competition strategies that can be replicated across the agency and help the agency focus its resources to maximize competition. AOC uses various approaches to monitor contractors’ progress and work quality and address contractor performance, but does not have suspension and debarment procedures. AOC, like other agencies, primarily relies on contracting officers and COTRs who use oversight tools such as inspection reports and periodic progress meetings to monitor contracts. 
When AOC identifies contractor performance problems using these tools, AOC has a variety of approaches at its disposal to help address performance issues, such as providing written notice to the contractor highlighting the problem and seeking action to address the performance issue. If a contractor does not take action to improve performance, AOC may then invoke a number of contractual provisions, including the collection of liquidated damages from the contractor. Although AOC has tools and resources at its disposal to manage and correct deficiencies on a contract-by-contract basis, AOC does not have a suspension and debarment process that allows it to exclude an individual or firm from receiving future AOC contracts.

AOC uses a number of oversight tools to monitor contractor performance and protect the government against substandard work from the contractor. AOC's monitoring approaches are generally applicable to all the agency's projects. Depending on the type of project and severity of the deficiency, AOC may employ some or all approaches in any sequence it deems appropriate to seek immediate remedies or damages. As described below, across our sample of contracts and orders, we observed AOC's use of a variety of approaches, including oversight tools, performance communications, and some of the available contractual provisions to monitor and address contractor performance, as shown in figure 3. Tools identified by AOC officials to oversee contracts include onsite representatives, daily progress reports, inspection reports, and progress meetings, as described in table 4. These oversight tools can help AOC identify instances of poor workmanship, safety issues, or timeliness problems, among other things.

Example from our sample: AOC raised schedule concerns with the contractor at a progress meeting and requested a recovery plan. June-August 2014: AOC issued 2 letters of concern due to continued schedule delays and overall project management concerns. January 2015: AOC gave the contractor a negative interim performance rating related to schedule and management areas to emphasize the importance of the situation. The contractor's superintendent was replaced, among other actions, and performance improved significantly, recovering lost time. October 2015: AOC gave the contractor a more favorable interim performance rating in these two areas in recognition of the improvement.

Notice to Comply: If performance issues are not resolved through routine communication, AOC may then issue a notice to comply to the contractor, which formally notifies a contractor that it is not complying with one or more contract provisions. Based on our review, these notices are generally issued by the COTR, lay out the specific performance concern or contract compliance issue, and request corrective action by the contractor within a specified time frame. AOC may issue multiple notices on the same matter before it is fully addressed. The notice to comply does not always indicate a performance problem but could also be issued for noncompliance with administrative contract requirements, such as failure to submit progress reports. The 53 notices to comply that we reviewed from our sample of contracts and orders typically addressed safety, work quality, or administrative contract compliance concerns.

Letter of Concern: If performance issues are not resolved through routine communication or notices to comply, AOC officials said the agency may then issue a letter of concern to a contractor. Based on our review, letters of concern are very similar to notices to comply, as they typically lay out a specific concern and request corrective action within a specified time frame. The main difference between a notice and letter is that letters are issued by the contracting officer instead of the COTR. The 27 AOC letters that we reviewed also addressed many of the same types of issues as notices to comply—safety, work quality, and personnel or schedule concerns.
Contractor Performance Assessments: AOC routinely assesses contractor performance on an interim and final basis in governmentwide contractor performance systems, and the ratings are available to other federal agencies through the Past Performance Information Retrieval System. In completing past performance evaluations, AOC officials rate the contractor on various elements such as the quality of the product or service delivered, schedule timeliness, and cost control. AOC officials said that contractor performance assessments are one of the most valuable methods available to incentivize a contractor to improve performance because a negative assessment could limit the contractor's ability to be awarded future contracts from AOC or other federal agencies. AOC also has a variety of contractual provisions it can invoke if it determines that a contractor has failed to meet some or all of its contractual requirements. For example, certain provisions allow AOC to seek damages from poorly performing contractors. Contract Disputes: The Contract Disputes Act of 1978 outlines the process for resolving disputes between a contractor and the government. AOC policy calls for seeking an amicable resolution before invoking procedures identified in the Contract Disputes Act. When all attempts to settle the dispute amicably fail, AOC must issue a contracting officer's final decision on the matter. All of the contracts we reviewed included the relevant contract clause that sets forth this process for resolving disputes. However, none of the contracts that we reviewed involved a dispute between the contractor and the government that required invoking the processes laid out by the disputes clause. Liquidated Damages: To protect itself from construction delays, the AOC contracting manual requires that all construction contracts valued over $50,000 include a liquidated damages clause.
The liquidated damages clause provides that if the contractor fails to complete the work within the time specified in the contract, the contractor pays the government a daily fixed amount for each day of delay until the work is completed or accepted. According to its guidance, AOC generally determines the daily fixed amount based on the dollar value of the contract. For the 7 construction contracts in our sample that met the applicable threshold for liquidated damages, daily rates ranged from $200 a day to $28,201 a day. However, AOC had not invoked the clause for any of these contracts. Further, Congress recently enacted legislation prohibiting the AOC from using funds made available by the Consolidated Appropriations Act, 2016, to make incentive or award payments to contractors for work on contracts that are behind schedule or over budget, unless certain determinations are made. Termination for Default: When poor contractor performance cannot be corrected through other means, AOC may take additional steps and ultimately terminate the contract for default. AOC would start the process using either a cure notice or a show-cause notice. A cure notice provides the contractor typically at least 10 days to correct the issues identified in the notice or otherwise fulfill the requirements. A show-cause notice notifies the prime contractor that the AOC intends to terminate for default unless the contractor can show cause why they should not be terminated. Typically, a show-cause notice calls the contractor’s attention to the contractual liabilities, if the contract is terminated for default. None of the contracts in our sample resulted in a cure notice or show-cause notice; however, AOC officials said that these have been used in a couple of instances from fiscal years 2013 through 2015. For example, AOC issued a cure notice in 2013 to a contractor due to repeated poor quality control that delayed progress on the project. 
The cure notice followed repeated attempts by AOC to address the issues with the contractor through other methods, including issuing five letters of concern in the 6-month period leading up to the cure notice. AOC currently has no agency-wide process for suspending or debarring individuals or firms that the agency has determined lack the qualities that characterize a responsible contractor. In the absence of such a process, AOC does not have a mechanism that allows it to determine in advance of a particular procurement that an individual or firm lacks present responsibility and therefore should not receive AOC contracts. The FAR and the AOC contracting manual provide that contracts should be awarded only to individuals or firms that are responsible prospective contractors. A responsible contractor is one that has the financing, workforce, equipment, experience and other attributes needed to perform the contract successfully. Similar to executive branch agencies, contracting officers at AOC are required to review these factors prior to the award of any contract. In addition, contracting officers must review the excluded parties list in the governmentwide System for Award Management (SAM), which is maintained by the General Services Administration, to determine whether the contractor in line for an award has been suspended, debarred, or proposed for debarment by any other agency. A suspension temporarily disqualifies a contractor from federal contracting while a debarment excludes a contractor for a fixed period, generally up to 3 years. Although AOC officials must check the list of excluded parties in SAM, and as a matter of policy AOC declines to award contracts to excluded firms or individuals, AOC has no procedure for taking its own suspension or debarment actions or adding firms to the list of excluded parties. Our prior work has found that there are several agencies, like AOC, that lack an effective suspension and debarment process. 
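The pre-award check described above (reviewing the excluded parties list in SAM before making an award) can be sketched as a simple lookup. The vendor names, dates, and data layout below are invented for illustration and are not SAM's actual interface; the only facts carried over from the text are that a suspension is a temporary disqualification and a debarment runs for a fixed period, generally up to 3 years.

```python
from datetime import date

# Invented stand-in for SAM's excluded parties list; a real check would
# query GSA's System for Award Management, not a local table.
EXCLUSIONS = {
    # vendor -> (action, effective date, term in years; None = open-ended suspension)
    "Acme Builders": ("debarment", date(2014, 6, 1), 3),
    "Roadway Corp": ("suspension", date(2016, 1, 15), None),
}

def is_excluded(vendor: str, on: date) -> bool:
    """True if the vendor has an active suspension or debarment on the given date."""
    record = EXCLUSIONS.get(vendor)
    if record is None:
        return False
    _action, start, years = record
    if years is None:  # suspension: temporary disqualification, no fixed end date recorded
        return on >= start
    # debarment: exclusion for a fixed period, generally up to 3 years
    return start <= on < date(start.year + years, start.month, start.day)

# Pre-award check on a candidate vendor:
print(is_excluded("Acme Builders", date(2016, 3, 1)))  # True: debarment still active
print(is_excluded("Acme Builders", date(2018, 3, 1)))  # False: 3-year term has run
```

As a matter of policy, AOC declines to award to an excluded vendor when such a check comes back positive; what AOC lacks is any process of its own for placing a vendor on the list.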
In August 2011, we reported that six executive branch agencies had not taken any suspension or debarment actions within the past 5 years despite spending significant amounts of appropriated funds buying goods and services. By contrast, four other agencies had active suspension and debarment programs, and we identified three factors that these agencies had in common. First, these four agencies had detailed suspension and debarment policies and procedures. Second, they had identified specific staff responsible for the function. And third, they had an active process for referring matters that might lead to a suspension or debarment to the appropriate agency official. Consistent with the findings from our prior work, in a September 2012 management letter, the AOC Inspector General proposed that AOC develop a suspension and debarment process as a means to deal with "unscrupulous or ineffective contractors." According to AOC officials, the agency declined to implement that recommendation, largely because, without being subject to the FAR, AOC believed it could only debar contractors from doing business with AOC, and it was thought that the small number of actions anticipated would likely not justify the cost of developing a new process. However, we do not believe that this is a convincing reason. GAO, which is also a legislative branch agency, established a suspension and debarment process in 2012. For our process, we follow the policies and procedures on debarment and suspension contained in the FAR. Our process identifies new roles and responsibilities for existing offices and officials within the agency. As part of our process, we report, on the list of excluded parties, the names of all contractors we have debarred, suspended, or proposed for debarment.
Although debarment, suspension, or proposed debarment of a contractor taken by GAO would have mandatory application only to GAO, listing a contractor on the excluded parties list provides an indication to other federal agencies that they need to thoroughly assess whether the contractor is sufficiently responsible to be solicited or awarded a contract. In addition, one of the advantages of a suspension and debarment process is that an agency can address issues of contractor responsibility and provide the agency and contractors with a formal process to follow. When we shared our experience with them, officials at AOC did not identify any reasons why a similar approach could not be taken at their agency. With more than half of AOC's budget authority currently being spent on contracting, acquisition clearly plays a central role in achieving AOC's mission. AOC has taken initial steps to establish an efficient and effective acquisition function by issuing the AOC contracting manual. The manual will help promote full and open competition in AOC's procurement process. AOC is taking action to improve the quality of its market research which, in turn, can help enhance competition. The agency only recently started to collect competition data to inform its progress, but AOC is not fully using these data to determine the extent of its overall competition efforts and identify areas where additional focus is needed to ensure the agency is obtaining competition to the maximum extent possible. AOC is using several tools to provide oversight and hold contractors accountable; however, it lacks suspension and debarment processes that could help further protect the federal government's interests. Given the high-profile nature of AOC's mission, including the congressional clients AOC serves and the buildings it is responsible for, such a process would help to ensure that contracts are awarded only to responsible sources.
Implementing policies and procedures for suspension and debarment would build upon AOC's existing accountability framework and would further foster an environment that seeks to hold the entities it deals with accountable. To further enhance the acquisition function, we recommend that the Architect of the Capitol take the following two actions:

- Explore options for developing a more robust analysis of AOC's competition levels, including areas such as the trends in competition over time, the use of market research to enhance competition, and the types of goods and services for which competition could be increased.

- Establish a process for suspensions and debarments that is suitable for the AOC's mission and organizational structure, focusing on policies, staff responsibilities, and a referral process.

We provided a draft of this report to AOC for review and comment. AOC provided written comments on the draft, which are reprinted in appendix II. AOC agreed with our findings, concurred with our recommendations, and noted it is taking steps to implement them. We also received technical comments from AOC, which we incorporated throughout our report as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Architect of the Capitol. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix III.

Our objectives were to assess (1) the extent to which AOC has developed and implemented acquisition policies and processes to guide its contracting function, and (2) the tools used by AOC to monitor and address contractor performance.
To address these objectives, we used data from AOC's financial management system to identify contracts and orders with obligations during fiscal years 2013 through 2015. We selected a non-generalizable sample of 21 contracts and orders during this timeframe to obtain insights into AOC's recent contracting practices. To narrow our focus on which contracts to include in our review, we identified contract actions for AOC's largest and most complex projects, which the AOC defines as any project estimated to cost $50 million or more over the life of the project: the Cannon House Office Building Renewal Project, Cogeneration Plant Project, Capitol Dome Restoration Project, and the Refrigeration Plant Revitalization Project. As table 2 below shows, the sample represents a mix of large and small dollar awards and types of products and services procured to support various projects across AOC. We excluded any transaction related to real estate rental or electric power payments. We assessed the reliability of AOC's financial management system data by (1) reviewing existing information about the data and the system that produced them, and (2) comparing reported data to information from the contract files we sampled. Based on these steps, we determined that the data obtained from AOC's financial management system were sufficiently reliable for the purposes of this review. To examine AOC's policies that guide its acquisition function, we reviewed its contracting policies and procedures and compared them to what is outlined in the Federal Acquisition Regulation (FAR). While the FAR does not apply to the AOC, it reflects practices widely used throughout the executive branch of the federal government. We focused our review on competition, acquisition planning, and market research, as our prior work has shown that these activities are critical to building a strong foundation for successful acquisition outcomes.
We reviewed prior GAO reports to identify generally accepted contract management practices for market research, acquisition planning, and competition. We reviewed market research reports, acquisition plans, justifications and approvals for sole-source awards, solicitations, and independent government cost estimates for the contracts and orders in the sample. We analyzed these documents to determine the extent to which acquisition planning and market research was consistent with AOC's guidance. To supplement information obtained from contract files within our sample, we met with contracting officers and contracting officer technical representatives to confirm our understanding of information in the contract files. We also interviewed officials from the Acquisition and Material Management Division on the policies and procedures that guide the acquisition function. To provide insights about the extent to which AOC competes contracts it awards, we used procurement data from AOC's financial management system for fiscal years 2013 through 2015 to calculate its competition rate. Unlike other federal agencies, AOC does not report its procurement data to the Federal Procurement Data System-Next Generation (FPDS-NG), which is the government's procurement database. To provide a basis of comparison, we calculated the governmentwide competition rate using data from FPDS-NG. For both AOC and governmentwide, we calculated the competition rate as the total dollars obligated annually on competitive contract actions as a percentage of total dollars obligated on all contract actions during fiscal years 2013 through 2015. This includes obligations on new contracts, orders, and modifications of existing contracts. Typically, FPDS-NG codes task and delivery orders from competitive single-award contracts as also being competed.
In contrast, AOC classifies task and delivery orders derived from a competed single-award contract as not competed because the orders are not available for competition, according to an AOC official. We adopted AOC's classification of these orders as not competed. As a result, our determination of AOC's competition rate may be understated. However, AOC and GAO officials agreed the difference is likely not substantial given the small number of single-award contracts at AOC. We compared AOC's efforts to assess its competition levels against acquisition best practices and Standards for Internal Control in the Federal Government, which call for continually tracking spending to gain insight about how resources are being used and using the information to assess how the agency's objectives are being achieved. To determine how the AOC oversees contractor performance, we reviewed the same sample of 21 contracts and orders, reviewed AOC project management guidance, and interviewed relevant officials. Specifically, we used the sample to gain insight into how AOC oversees contractor performance and resolves any disagreements that may arise during the performance of the contract. We reviewed documentation in the files such as relevant clauses, notices to comply, letters of concern, contractor performance reports, and other key documents used for monitoring and compliance purposes. We also reviewed AOC contracting policies and project management guidance on how the AOC monitors contractor performance. In addition, we reviewed prior GAO work to identify tools available to agencies to monitor and take actions to address or correct deficiencies regarding contractor performance. We also interviewed AOC contracting officials and contracting officer's technical representatives about their experiences in monitoring contractor performance.
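The competition-rate calculation described in this appendix can be sketched as follows. The obligation records and dollar amounts below are invented for illustration (they are not AOC's actual data); the sketch carries over only the formula (competed dollars as a share of all dollars obligated) and the two classification conventions for orders placed under competed single-award contracts.

```python
# Invented obligation records: (dollars obligated, competed award?,
# order placed under a competed single-award base contract?)
obligations = [
    (5_000_000, True,  False),  # competed contract
    (2_000_000, False, False),  # sole-source contract
    (1_000_000, True,  True),   # order under a competed single-award contract
    (2_000_000, False, False),  # sole-source contract
]

def competition_rate(records, aoc_convention: bool) -> float:
    """Competed dollars as a share of all dollars obligated.

    AOC counts orders under competed single-award contracts as NOT competed;
    FPDS-NG typically counts them as competed, so the same records yield a
    lower (possibly understated) rate under AOC's convention.
    """
    total = sum(dollars for dollars, _, _ in records)
    competed = sum(
        dollars
        for dollars, was_competed, single_award_order in records
        if was_competed and not (aoc_convention and single_award_order)
    )
    return competed / total

print(f"AOC convention: {competition_rate(obligations, aoc_convention=True):.0%}")
print(f"FPDS-NG convention: {competition_rate(obligations, aoc_convention=False):.0%}")
```

On these invented records the two conventions give 50 percent and 60 percent, showing how the choice of convention alone can shift the reported rate.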
We interviewed officials from the Planning and Project Management division, contracting officers, and contracting officer technical representatives to understand how they ensure compliance with the terms of contracts and resolve disagreements that may arise. We reviewed AOC's contracting procedures to determine whether AOC had a process in place to address contractor performance and ensure it engages with responsible contractors, and used previous GAO work on suspension and debarment as the basis for assessing AOC's efforts. We conducted this performance audit from April 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Candice Wright (Assistant Director); Emily Bond; Lorraine R. Ettaro; Victoria C. Klepacz; Julia Kennon; Katherine S. Lenane; Jose A. Ramos; Beth Reed Fritts; Roxanna Sun; and Alyssa Weir also made key contributions to this report.

The AOC is responsible for the maintenance, operation, and preservation of the buildings and grounds of the U.S. Capitol complex, which covers more than 17.4 million square feet in buildings and 587 acres of grounds. In fiscal year 2015, Congress appropriated $600.3 million to fund AOC's operations, over half of which was used to procure various goods and services ranging from large projects like the restoration of the Capitol Dome to routine custodial services. GAO was asked to review the AOC's contracting practices.
This report examines (1) the extent to which the AOC has developed and implemented acquisition policies and processes to guide its contracting function, and (2) the tools used by the AOC to monitor and address contractor performance. GAO reviewed the AOC's acquisition policies, interviewed contracting officials, and reviewed a non-generalizable sample of 21 contracts and task or delivery orders with dollars obligated in fiscal years 2013 through 2015. The sample consists of a mix of high-value contracts for goods and services. The Architect of the Capitol (AOC) recently implemented a contracting manual that centralizes current law and regulations applicable to the AOC, as well as policies, orders, and procedures. As a legislative branch agency, the AOC is not subject to the Federal Acquisition Regulation (FAR), which governs executive branch agencies; however, its manual draws on the FAR and covers topics central to the AOC's day-to-day contracting functions, such as acquisition planning, market research, and competition, all of which are key aspects of a sound acquisition process. In the 21 contracts and task orders GAO reviewed, AOC officials generally followed the policies in the contracting manual related to these critical functions—such as documenting justifications for the use of noncompetitive procedures. The AOC began to collect competition data in fiscal year 2012, but the agency only conducts a limited assessment of its efforts to achieve competition. The AOC manual states it is agency policy to promote competition, and federal internal control standards state that agencies should establish mechanisms to track and assess performance against their objectives. While the AOC monitors data to track the number of sole-source contracts awarded, other analyses are limited.
GAO's analysis of the AOC's data found that the agency competed approximately 50 percent of its contract obligations for the past 3 fiscal years—compared to 65 percent for the overall federal government. By examining the factors driving the number of sole-source awards or level of competition across different product types, AOC may be better positioned to identify where additional management attention may be needed to maximize competition. The AOC uses a variety of approaches to monitor contractor performance on its projects, with contracting officers and their technical representatives being the primary officials responsible for providing oversight. The AOC uses a number of methods to address contractor performance problems, as shown in the figure below. While the AOC has tools for addressing poor performance on specific contracts, it does not have a suspension and debarment process in place that could bar irresponsible contractors from working for the AOC or provide notice to other government agencies. Past GAO work has shown that having suspension and debarment procedures is critical to ensuring that the government only does business with responsible contractors. GAO recommends that AOC explore options for developing a more robust analysis of its competition levels and establish a suspension and debarment process suitable to its mission and structure. AOC agreed with GAO's findings and concurred with the two recommendations and noted it is taking steps to implement them. |
Coordinating and evaluating research are important elements in ensuring federal dollars are used efficiently and effectively. RITA is responsible for coordinating and reviewing the DOT operating administrations’ RD&T activities so that (1) no unnecessary duplication takes place and (2) the activities have been evaluated in accordance with best practices. The Committee on Science, Engineering, and Public Policy—a joint committee of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine—has emphasized the importance of careful coordination and focused evaluation of federal research and developed principles to help agencies evaluate their research programs. The committee recommended establishing a formal process to coordinate research across agencies. While this recommendation is focused on cross-agency research, the goals—enhancing collaboration, ensuring that questions are explored, and reducing inefficiencies—are important and applicable within agencies as well. Coordination of research ensures that information is shared so that, if necessary, research can be adjusted to ensure a field is appropriately covered and understood. In addition, the committee noted that evaluating research against established performance measures in agency strategic plans, developing measures that are appropriate for the type of research being developed, and using expert reviews aid in assessing the quality of the research. Relatedly, the Government Performance and Results Act of 1993 (GPRA) requires federal agencies to set performance goals and measure performance against those goals to ensure the effectiveness of federal investments. GPRA’s emphasis on results implies that federal programs contributing to the same or similar outcomes should be closely coordinated to ensure that goals are consistent and complementary, and that program efforts are mutually reinforcing. 
Making appropriate and cost-effective investment choices is an essential aspect of responsible fiscal stewardship. Such choices are even more important in today’s climate of expected trillion-dollar deficits. Careful decisions will need to be made to ensure that RD&T activities achieve their intended (or other) purposes and do so efficiently and economically. In 2006, we made seven recommendations to enhance RITA’s ability to manage and ensure the effectiveness of RD&T activities, including developing strategies for coordinating and reviewing RD&T activities and developing performance goals and measures. (See table 1.) RITA has implemented five of our recommendations and is making progress on implementing the remaining two. Preventing duplication of effort. In response to our recommendation, RITA developed a strategy to ensure that no unnecessary duplication of research programs occurs within the department, incorporated the results into various high-level DOT planning documents, and reported the results in its strategic plan. RITA’s strategy consists of ongoing internal reviews of all of DOT’s research programs. These reviews are conducted by (1) convening meetings in which officials from each of the operating administrations share information about areas of ongoing and planned research, seeking opportunities for joint effort, and (2) conducting annual reviews of each operating administration’s research plans, looking for research duplication, among other things. In addition, RITA has formed eight working groups, in concert with DOT’s operating administrations, to foster collaboration on cross-modal issues. According to a RITA official, results of these reviews have identified several areas for cross-modal collaboration, including climate change, freight capacity, security, alternative energy technologies, and advanced materials and sensors. 
According to RITA officials, as a result of these actions, RITA is better able to meet legislative and DOT requirements for coordinating its research, leverage resources for cross-modal research initiatives, and prevent unnecessary research duplication. Following best practices. RITA also developed a strategy to ensure that the results of all DOT's research activities are evaluated according to established best practices. The strategy includes three primary mechanisms: (1) ensuring systematic application of the Office of Management and Budget's Research and Development Investment Criteria (relevance, quality, and performance) and the Program Assessment Rating Tool by the operating administrations; (2) conducting annual internal program reviews with self-reporting by the operating administrations; and (3) documenting the operating administrations' external stakeholder coordination and review. According to RITA, reviews conducted in fiscal years 2007 and 2008 focused on how well the operating administrations are implementing best practices, including external stakeholder involvement, merit review of competitive proposals, independent expert review, research performance measures, and external research coordination. RITA reports the results of its reviews to the department's RD&T Planning Council, which consists of administrators from each of the operating administrations, including RITA, and officials from DOT's Office of the Secretary. According to RITA officials, as a result of these efforts, RITA is better able to determine the quality and effectiveness of its research activities and investments and determine whether they are achieving their intended (or other) goals. Establishing RD&T project databases. RITA created two database systems to inventory and track all of DOT's research activities and provide tools for querying and searching individual projects to identify potential duplication and areas where operating administrations could collaborate. 
The first database, the RITA Research Notification System, captures research investments at the transaction level, allowing users to search by activity, contracts and grants, and contractor names, enabling identification of funded programs for coordination, collaboration, and review. The second database is part of the annual Research Planning and Investment Coordination (RPIC) process, which captures research at the budget request level, allowing for departmentwide transparency and coordination of proposed programs and projects. According to a RITA official, the eventual combination of the two databases will offer a mechanism for measuring and tracking investments from request through funding and execution. Communicating evaluation efforts. To communicate its efforts in evaluating DOT's research to Congress, senior DOT officials, and the transportation community, RITA and its predecessor organization published a summary of all research program evaluations for 2004 through 2006 and included that summary in a high-level DOT planning document and in a report to Congress. First, RITA's predecessor published what was essentially a summary of all research program evaluations conducted in fiscal year 2004—in the form of a summary of the results of its review of the operating administrations' application of the Office of Management and Budget's Research and Development Investment Criteria—in its 2005 annual RD&T plan. Second, RITA developed a summary of the results of its fiscal year 2005 and 2006 research program reviews, and a schedule of RITA's planned fiscal year 2007 reviews, and included it in DOT's "Research, Development and Technology Annual Funding Fiscal Years 2006-2008, A Report to Congress." This report also includes summaries of research program evaluations conducted by modal research advisory committees, the Transportation Research Board, and key modal stakeholders in fiscal years 2006 and 2007. 
According to RITA officials, as a result of this reporting, RITA has provided better continuity and context to Congress and the transportation community about the results of its research evaluations. Documenting processes. RITA has also acted to document its process for systematically evaluating the results of its own multimodal research programs, such as the Hydrogen Safety Program and various grant programs. RITA evaluates the results of its RD&T activities by ensuring they align with DOT goals, meet the research and development investment criteria, and are subject to an annual peer review process. RITA has documented this process in its strategic plan. Establishing performance goals. In 2006, we found that RITA lacked performance goals and an implementing strategy and evaluation plan to delineate how the activities and results of its coordination, facilitation, and review practices will further DOT’s mission and ensure the effectiveness of the department’s RD&T investment. RITA has partially implemented our recommendation that it develop these elements. Setting meaningful goals for performance, and using performance information to measure performance against those goals, is consistent with requirements in GPRA. Developing an evaluation plan and analyzing performance information against set goals for its own coordination, facilitation, and review practices could assist RITA in identifying any problem areas and taking corrective actions. Linking performance goals with the planning and budget process, such as DOT’s annual budget process, can also help RITA determine where to target its resources to improve performance. Guidance provided by the Committee on Science, Engineering, and Public Policy notes that evaluating the performance of research in the context of the strategic planning process ensures the research is relevant to the agency’s mission. 
Without such goals and an evaluation plan, it is difficult for RITA to determine its success in overseeing the effectiveness of DOT’s RD&T activities. According to RITA officials, while an overall implementing strategy and evaluation plan has not yet been established, RITA has created performance goals. A RITA official told us that the RPIC process—a relatively new process that integrates the budget and strategic planning processes—will help in creating an implementing strategy. The RPIC process is meant to provide information to the Planning Council and Planning Team, which is responsible for defining the department’s overall RD&T strategic objectives. The RPIC process assesses the department’s RD&T activities in terms of the following performance goals: (1) balanced portfolio (e.g., mix of basic, applied, developmental, and high risk RD&T), (2) alignment of RD&T programs with DOT goals and each operating administration’s mission, and (3) return on investment. The RPIC process has been in place only for fiscal year 2009, and as a result, the Planning Council does not yet have the information needed to make decisions about a strategy. In addition, RITA does not yet have an evaluation plan to monitor and evaluate whether it is achieving its goals. A RITA official told us that the RPIC process needs to be in place for 2 or 3 fiscal years before it can provide enough information for RITA to establish a strategy or evaluation plan. Developing performance measures. In 2006, we also found that RITA did not work with the operating administrations to develop common performance measures for DOT’s RD&T activities. According to RITA officials, RITA has partially implemented our recommendation that it do so. 
Without common performance measures for the RD&T activities of the operating administrations, RITA and the operating administrations lack the means to monitor and evaluate the collective results of those activities and determine that they are achieving their intended (or other) results and furthering DOT-wide priorities. In response to our recommendation, RITA officials told us that they are working with the operating administrations through the RD&T Planning Team—made up of senior officials in RITA and each of the operating administrations. During Planning Team meetings, representatives from each of the operating administrations share information about how RD&T projects are measured and prioritized. For example, according to a RITA official, the Federal Railroad Administration measures how frequently its RD&T projects are used in real-world applications. Once representatives from each operating administration have had the chance to share information, RITA officials will then look for commonalities and determine whether any of the measures could be adopted for the department’s RD&T activities. In closing, since it became operational in 2005, RITA has taken a number of positive steps to meet its vision of becoming a DOT-wide resource for managing and ensuring the effectiveness of RD&T activities. While we have not assessed the effectiveness of these efforts since our 2006 report, we believe that RITA has made progress. We will continue to monitor RITA’s performance in implementing our recommendations. As reauthorization approaches, we look forward to assisting Congress as it considers RITA’s management of DOT’s research program, to better ensure that taxpayers receive the maximum value for DOT’s RD&T investment. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee might have. 
For further information regarding this statement, please contact David Wise at (202) 512-2834 or [email protected]. Individuals who made key contributions to this statement are Michelle Everett, Colin Fallon, Erin Henderson, and James Ratzenberger. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Research, development, and technology (RD&T) activities are vital to meeting the Department of Transportation's (DOT) priorities, such as increasing safety, enhancing mobility, and supporting the nation's economic growth. In fiscal year 2008, the department's RD&T budget totaled over $1.1 billion, primarily for highway and aviation projects. Over the years, concerns have been raised about DOT's capabilities to improve RD&T coordination and evaluation efforts across the agency. In 2004, Congress created DOT's Research and Innovative Technology Administration (RITA) to coordinate and review the department's RD&T programs and activities for the purposes of reducing research duplication, enhancing opportunities for joint efforts, and ensuring RD&T activities are meeting goals. In 2006 GAO reported that RITA had made progress toward these ends, but needed to do more. GAO's testimony focuses on (1) the importance of coordinating and evaluating RD&T activities and (2) RITA's progress in implementing GAO's 2006 recommendations. GAO's statement is based on its 2006 report, a review of best practices for coordination and evaluation, and follow-up discussions with RITA officials on actions to implement GAO's recommendations. GAO did not assess whether RITA's actions have improved the effectiveness of the department's RD&T investment. 
Coordinating and evaluating research are important elements in ensuring that federal dollars are used efficiently and effectively. Coordinating research enhances collaboration, ensures that questions are explored, and reduces inefficiencies, such as from duplication of research. Evaluating research activities entails comparing research with established performance measures in agency strategic plans and using expert reviews to assess the quality of the research. With DOT's large RD&T budget--over $1.1 billion--coordination and evaluation are critical to making cost-effective investment choices in today's climate of expected trillion-dollar deficits. RITA has fully implemented five recommendations that GAO made in 2006 aimed at enhancing RITA's ability to manage and determine the effectiveness of RD&T activities, and partially implemented the remaining two. (See table below.) Regarding implemented recommendations, most notably, RITA has implemented a strategy to coordinate RD&T activities and look for areas where joint efforts would be appropriate. Results of its coordination efforts have identified a number of areas for cross-modal collaboration, including the areas of climate change and freight capacity. RITA has also developed a strategy to ensure that the results of DOT's research activities are evaluated against best practices, using governmentwide guidance and external stakeholder reviews. Regarding partially implemented recommendations, RITA has not yet developed an overall strategy, evaluation plan, or performance measures that delineate how its activities ensure the effectiveness of the department's RD&T investment. However, it has developed a process for doing so. In this regard, RITA plans to use an existing departmentwide strategic planning and budget process and collaborative meetings to develop an overall strategy and performance measures. RITA officials expect that it will fully implement activities related to this recommendation by 2012. 
GAO will continue to monitor RITA's activities. |
The number of biosafety level (BSL)-3 and BSL-4 laboratories (high-containment laboratories) began to rise in the late 1990s, accelerating after the anthrax attacks throughout the United States. The laboratories expanded across federal, state, academic, and private sectors. Information about their number, location, activities, and ownership is available for high-containment laboratories registered with CDC's Division of Select Agents and Toxins (DSAT) or the U.S. Department of Agriculture's (USDA) Animal and Plant Health Inspection Service (APHIS) as part of the Federal Select Agent Program. These entities register laboratories that work with select agents that have specific potential human, animal, or plant health risks. Other high-containment laboratories work with other pathogens that may also be dangerous but are not identified as "select agents," and therefore these laboratories are not required to register with DSAT or APHIS. We reported in 2009 that information about these non-select-agent laboratories is not known. Our work has found that the expansion of high-containment laboratories was not based on a government-wide coordinated strategy. The expansion was based on the perceptions of individual agencies about the capacity required for their individual missions and the high-containment laboratory activities needed to meet those missions, as well as the availability of congressionally approved funding. Decisions to fund the construction of high-containment laboratories were made by multiple federal agencies (e.g., the Department of Health and Human Services (HHS), the Department of Defense, and USDA), in multiple budget cycles. Federal and state agencies, academia, and the private sector (such as drug companies) considered their individual requirements, but as we have previously reported, a robust assessment of national needs was lacking. 
Since each agency or organization has a different mission, an assessment of needs, by definition, was at the discretion of the agency or organization. We have not found any national research agenda linking all these agencies, even at the federal level, that would allow for a national needs assessment, strategic plan, or coordinated oversight. As we last reported in 2013, after more than 12 years, we have not been able to find any detailed projections based on a government-wide strategic evaluation of research requirements based on public health or national security needs. Without this information, there is little assurance of having facilities with the right capacity to meet our national needs. This deficiency may be more critical today than 5 years ago when we first reported on this concern because current budget constraints make prioritization essential. Our work on this issue has found a continued lack of national standards for designing, constructing, commissioning, and operating high-containment laboratories. These laboratories are expensive to build, operate, and maintain. For example, we noted in our 2009 report that the absence of national standards means that the laboratories may vary from place to place because of differences in local building requirements or standards for safe operations. In 2007, while investigating a power outage at one of its recently constructed BSL-4 laboratories, CDC determined that construction workers digging at an adjacent site had some time earlier cut a critical grounding cable buried outside the building. CDC facility managers had not noticed that cutting the grounding cable had compromised the electrical system of the facility that housed the BSL-4 laboratory. It became apparent that the building's integrity as it related to the adjacent construction had not been adequately supervised. 
In 2009, CDC officials told us that standard procedures under local building codes did not require monitoring of the new BSL-4 facility's electrical grounding. This incident highlighted the risk of relying on local building codes to ensure the safety of high-containment laboratories in the absence of national standards or testing procedures specific to those laboratories. Some guidance exists about designing, constructing, and operating high-containment laboratories. The Biosafety in Microbiological and Biomedical Laboratories guidance (Department of Health and Human Services, 5th ed., Washington, D.C., 2007), often referred to as the BMBL, recommends various design, construction, and operations standards, but our work has found that it is not universally followed. The guidance also does not recommend an assessment of whether the suggested design, construction, and operations standards are achieved. As we have recommended, national standards would be valuable not only for new laboratory construction but also for periodic upgrades. Such standards need not be constrained to a "one-size-fits-all" model but could help specify the levels of facility performance that should be achieved. Oversight of high-containment laboratories is fragmented, with requirements developed by the funding or regulatory agencies. In 2013, we reported that another challenge of this fragmented oversight is the potential duplication and overlap of inspection activities in the regulation of high-containment laboratories. We recommended that CDC and APHIS work with the internal inspectors for the Department of Defense and the Department of Homeland Security to coordinate inspections and ensure the application of consistent inspection standards. According to most experts we have spoken to in the course of our work, a baseline risk is associated with any high-containment laboratory. 
Although technology and improved scientific practice guidance have reduced the risk in high-containment laboratories, the risk is not zero (as illustrated by the recent incidents and others during the past decade). According to CDC officials, the risks from accidental exposure or release can never be completely eliminated, and even laboratories within sophisticated biological research programs, including those most extensively regulated, have had and will continue to have safety failures. Many experts agree that as the number of high-containment laboratories has increased, so has the overall risk of an accidental or deliberate release of a dangerous pathogen. Oversight is critical in improving biosafety and ensuring that high-containment laboratories comply with regulations. However, our work has found that aspects of the current oversight programs provided by DSAT and APHIS depend on entities' monitoring themselves and reporting incidents to the regulators. For example, with respect to a certification that a select agent had been rendered sterile (that is, noninfectious), DSAT officials told us, citing the June 2014 updated guidance, that "the burden of validating non-viability and non-functionality remains on the individual or entity possessing the select agent, toxin, or regulated nucleic acid." While DSAT does not approve each entity's scientific procedure, DSAT strongly recommends that "an entity maintain information on file in support of the method used for rendering a select agent non-viable . . . so that the entity is able to demonstrate that the agent . . . is no longer subject to the select agent regulations." Biosafety select agent regulations and oversight critically rely on laboratories promptly reporting any incidents that may expose employees or the public to infectious pathogens. Although laboratories have been reasonably conscientious about reporting such incidents, there is evidence that not all have been reported promptly. 
The June 2014 incident in which live anthrax bacteria were transferred from a BSL-3 contained environment to lower-level (BSL-2) containment laboratories at CDC in Atlanta resulted in the potential exposure of dozens of workers to the highly virulent Ames strain of anthrax. According to CDC's report, on June 5, a laboratory scientist in the BSL-3 Bioterrorism Rapid Response and Advanced Technology (BRRAT) laboratory prepared protein extracts from eight bacterial select agents, including Bacillus anthracis, under high-containment (BSL-3) conditions. These samples were being prepared for analysis by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry, a relatively new technology that can be used for rapid bacterial species identification. Also, according to CDC officials we spoke to, this protein extraction procedure was being evaluated in a preliminary assessment of whether MALDI-TOF mass spectrometry could provide a cheaper and faster way to detect a range of pathogenic agents, including anthrax, compared to conventional methods and thus could be used by emergency response laboratories. According to CDC officials, the researchers intended to use the data collected in this experiment to submit a joint proposal to CDC's Office of Public Health Preparedness and Response to fund further evaluation of the MALDI-TOF method, because MALDI-TOF is increasingly being used by clinical and hospital laboratories for infectious disease diagnostics. The protein extraction procedure was chemically based and intended to render the pathogens noninfectious; alternative extraction procedures would have accomplished the same result using heat, radiation, or other chemical treatments that took longer. The procedure that was used to extract the proteins was not based on a standard operating procedure that had been documented as appropriate for all the pathogens in the experiment and reviewed by more senior scientific or management officials. 
Rather, the scientists used a procedure identified by the MALDI-TOF equipment manufacturer that had not been tested for effectiveness, in particular for rendering spore-forming organisms such as anthrax noninfectious. Following that procedure, the eight pathogens were exposed to chemical treatment for 10 minutes and then plated (spread on plates to test for sterility or noninfectious status) and incubated for 24 hours. According to CDC, on June 6, when no growth was observed on the sterility plates after 24 hours, the remaining samples, which had been held in the chemical solution for 24 hours, were moved to CDC BSL-2 laboratories for testing using the MALDI-TOF technology. Importantly, the plates containing the original sterility samples were left in the incubation chamber rather than destroyed, as would normally occur, because of technical problems with the autoclave that would have been used for destruction. According to CDC officials, on June 13, a laboratory scientist in the BRRAT laboratory observed unexpected growth on the anthrax sterility plate, possibly indicating that the sample was still infectious. (All the other pathogen protein samples showed no evidence of growth.) That scientist and a colleague immediately reported the discovery to the CDC Select Agent Responsible Official (RO) in accordance with the BRRAT Laboratory Incident Response Plan. That report triggered an immediate response that recovered the samples that had been sent to the BSL-2 laboratories and returned them to BSL-3 containment; a response effort lasting a number of days was then implemented to identify any CDC employees who might have been affected by exposure to live anthrax spores. (The details of the subsequent actions and CDC's lessons learned and proposed actions are described in CDC's July 11, 2014, Report on Potential Exposure to Anthrax. That report indicates that none of the potentially affected employees experienced anthrax-related adverse medical symptoms.) 
Our preliminary analysis indicates that the BRRAT laboratory was using a MALDI-TOF MS method that had been designed for protein extraction but not for the inactivation of pathogens and that it did not have a standard operating procedure (SOP) or protocol on inactivation. We did not find a complete set of SOPs for removing agents from a BSL-3 laboratory in a safe manner. Further, neither the preparing (BRRAT BSL-3) laboratory nor the receiving (BRRAT BSL-2) laboratory conducted sterility testing. Moreover, the BRRAT laboratory did not have a kill curve based on multiple concentration levels. When we visited CDC on July 8, it became apparent to us that a major cause of this incident was the implementation of an experiment to prepare protein extractions for testing using the MALDI-TOF technology that was not based on a validated standard operating procedure. CDC officials acknowledged that significant and relevant studies in the scientific literature found that the chemical procedures studied for preparing protein samples for use with the MALDI-TOF technology were successful in rendering tested pathogens noninfectious, except for anthrax. The literature clearly recommends an additional filtering step before concluding that the anthrax samples are not infectious. Our preliminary work indicates that this step was not followed for all the materials in this incident. Validating a procedure or method provides a defined level of statistical confidence in the results of the procedure or method. According to CDC's report on a similar incident in 2004, staff did not perform sterility testing on the suspension received in March 2004. CDC's 2004 report further stated that "Research laboratory workers should assume that all inactivated B. anthracis suspension materials are infectious until inactivation is adequately confirmed [using BSL-2 laboratory procedures]." These recommendations are relevant to the June 2014 incident in Atlanta but were not followed. 
The laboratories receiving the protein extractions were BSL-2 laboratories, but the activities associated with testing with the MALDI-TOF technology were conducted on open laboratory benches, not using the biocontainment cabinets otherwise available in such laboratories. CDC's July 11, 2014, Report on Potential Exposure to Anthrax describes a number of actions that CDC plans to take within its responsibilities to avoid another incident like the one in June. However, we continue to believe that a national strategy is warranted that would evaluate the requirements for high-containment laboratories, set and maintain national standards for such laboratories' construction and operation, and maintain a national strategy for the oversight of laboratories that conduct important research on highly infectious pathogens. This completes my formal statement, Chairman Murphy, Ranking Member DeGette, and members of the committee. I am happy to answer any questions you may have. For future contacts regarding this statement, please contact Nancy Kingsbury at (202) 512-2700 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Sushil Sharma, Ph.D., Dr.PH, Assistant Director; and Elaine L. Vaurio also made key contributions to this statement. High-Containment Laboratories: Assessment of the Nation's Need Is Missing. GAO-13-466R. Washington, D.C.: February 25, 2013. Biological Laboratories: Design and Implementation Considerations for Safety Reporting Systems. GAO-10-850. Washington, D.C.: September 10, 2010. High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-1045T. Washington, D.C.: September 22, 2009. High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-1036T. Washington, D.C.: September 22, 2009. High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-574. Washington, D.C.: September 21, 2009. 
Biological Research: Observations on DHS’s Analyses Concerning Whether FMD Research Can Be Done as Safely on the Mainland as on Plum Island. GAO-09-747. Washington, D.C.: July 30, 2009. High-Containment Biosafety Laboratories: DHS Lacks Evidence to Conclude That Foot-and-Mouth Disease Research Can Be Done Safely on the U.S. Mainland. GAO-08-821T. Washington, D.C.: May 22, 2008. High-Containment Biosafety Laboratories: Preliminary Observations on the Oversight of the Proliferation of BSL-3 and BSL-4 Laboratories in the United States. GAO-08-108T. Washington, D.C.: October 4, 2007. Biological Research Laboratories: Issues Associated with the Expansion of Laboratories Funded by the National Institute of Allergy and Infectious Diseases. GAO-07-333R. Washington, D.C.: February 22, 2007. Homeland Security: CDC’s Oversight of the Select Agent Program. GAO-03-315R. Washington, D.C.: November 22, 2002. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Recent biosecurity incidents—such as the June 5, 2014, potential exposure of staff in Atlanta laboratories at the Centers for Disease Control and Prevention (CDC) to live spores of a strain of anthrax—highlight the importance of maintaining biosafety and biosecurity protocols at high-containment laboratories. This statement summarizes the results of GAO's past work on the oversight of high-containment laboratories, those designed for handling dangerous pathogens and emerging infectious diseases. 
Specifically, this statement addresses (1) the need for governmentwide strategic planning for the requirements for high-containment laboratories, including assessment of their risks; (2) the need for national standards for designing, constructing, commissioning, operating, and maintaining such laboratories; and (3) the oversight of biosafety and biosecurity at high-containment laboratories. In addition, it provides GAO's preliminary observations on the potential exposure of CDC staff to anthrax. For this preliminary work, GAO reviewed agency documents, including a report on the potential exposure, and scientific literature; and interviewed CDC officials. No federal entity is responsible for strategic planning and oversight of high-containment laboratories. Since the 1990s, the number of high-containment laboratories has risen; however, the expansion of high-containment laboratories was not based on a government-wide coordinated strategy. Instead, the expansion was based on the perceptions of individual agencies about the capacity required for their individual missions and the high-containment laboratory activities needed to meet those missions, as well as the availability of congressionally approved funding. Consequent to this mode of expansion, there was no research agenda linking all these agencies, even at the federal level, that would allow for a national needs assessment, strategic plan, or coordinated oversight. As GAO last reported in 2013, after more than 12 years, GAO has not been able to find any detailed projections based on a government-wide strategic evaluation of research requirements based on public health or national security needs. Without this information, there is little assurance of having facilities with the right capacity to meet the nation's needs. GAO's past work has found a continued lack of national standards for designing, constructing, commissioning, and operating high-containment laboratories. 
As noted in a 2009 report, the absence of national standards means that the laboratories may vary from place to place because of differences in local building requirements or standards for safe operations. Some guidance exists about designing, constructing, and operating high-containment laboratories. Specifically, the Biosafety in Microbiological and Biomedical Laboratories guidance recommends various design, construction, and operations standards, but GAO's work has found it is not universally followed. The guidance also does not recommend an assessment of whether the suggested design, construction, and operational standards are achieved. As GAO has reported, national standards are valuable not only in relation to new laboratory construction but also in ensuring compliance for periodic upgrades. No one agency is responsible for determining the aggregate or cumulative risks associated with the continued expansion of high-containment laboratories; according to experts and federal officials GAO interviewed for prior work, the oversight of these laboratories is fragmented and largely self-policing. On July 11, 2014, the Centers for Disease Control and Prevention (CDC) released a report on the potential exposure to anthrax that described a number of actions that CDC plans to take within its responsibilities to avoid another incident like the one in June. The incident in June occurred when a laboratory scientist inadvertently failed to sterilize plates containing samples of anthrax, derived with a new method, and transferred them to a facility with lower biosecurity protocols. This incident and the inherent risks of handling dangerous pathogens highlight the need for a national strategy to evaluate the requirements for high-containment laboratories, to set and maintain national standards for such laboratories' construction and operation, and to oversee the laboratories that conduct important work on highly infectious pathogens.
This testimony contains no new recommendations, but GAO has made recommendations in prior reports to responsible agencies. |
NTIA and RUS have until September 30, 2010, to obligate the Recovery Act funding for broadband projects. While the completion time will vary depending on the complexity of the project, recipients of BTOP grants and BIP awards must substantially complete projects supported by these programs no later than 2 years, and projects must be fully completed no later than 3 years, following the date of issuance of the award. As we reported in November 2009, NTIA and RUS faced a number of challenges in evaluating applications and awarding broadband stimulus funds during the first funding round. For example, although both agencies had previously administered small telecommunications grant or loan programs, they had to review more applications and award far more funds with fewer staff to carry out their Recovery Act programs. In addition, the agencies faced tight time frames for awarding funds. To address these challenges, NTIA and RUS awarded contracts to Booz Allen Hamilton and ICF International, respectively, to help the agencies implement the programs within the required time frames. The contractors have supported the development and implementation of application review processes, helped with the review of technical and financial materials, and assisted in the development of postaward monitoring and reporting requirements. To meet the September 30, 2010, deadline to award Recovery Act funds, NTIA and RUS have established project categories for directing funds to meet the act’s requirements; released two funding notices; conducted public outreach to increase participation among all eligible entities; developed processes to accept, evaluate, advance, and award applications; and advanced efforts to oversee recipients to ensure proper use of Recovery Act funds. For the first funding round, NTIA and RUS coordinated their efforts and issued one joint funding notice detailing the requirements, rules, and procedures for applying for funding. 
The first 18 broadband stimulus awards were announced on December 17, 2009. NTIA and RUS completed the first round of awards on April 26 and March 30, 2010, respectively. Table 1 shows the funding timeline for NTIA’s and RUS’s broadband stimulus programs. Table 2 summarizes the categories of projects eligible for funding during the first round for both BTOP and BIP. Based on the agencies’ experiences with the first round, and drawing on public comments, both NTIA and RUS made changes to how the second-round funding for BTOP and BIP will be structured and conducted. Unlike in the first round, NTIA and RUS issued separate funding notices and applicants had the option of applying to either BTOP or BIP, but not to both. In the second round, NTIA will again award grants for three categories of eligible projects; however, the infrastructure program has been reoriented toward Comprehensive Community Infrastructure grants, which will support Middle Mile projects serving anchor institutions such as community colleges, libraries, hospitals, universities, and public safety institutions. RUS has prioritized Last Mile projects and added three new grant programs: Satellite, Rural Library, and Technical Assistance projects. Table 3 provides information on the second-round project categories. The first funding notice, published July 9, 2009, set forth the processes for reviewing applications that NTIA and RUS followed during the first funding round. Both agencies developed a multistep application review process designed to balance the applicants’ need for time to prepare their applications with the agencies’ need for time to review them, as well as to minimize the burden on the applicants that did not ultimately qualify for program funding. Generally, both agencies initially screened applications to determine whether they were complete and eligible and then submitted the qualifying applications to a due-diligence review.
For this review, the applicants were asked to submit additional documentation to further substantiate their financial, technical, and other project information. Table 4 compares the agencies’ first-round application review processes. In addition to implementing the BTOP program, NTIA is implementing the broadband mapping provisions referenced in the Recovery Act. Up to $350 million of the $4.7 billion was available to NTIA pursuant to the Broadband Data Improvement Act and for the purpose of developing and maintaining a nationwide map of broadband service availability. NTIA explained that this program would fund projects that collect comprehensive and accurate state-level broadband mapping data, develop state-level broadband maps, aid in the development and maintenance of a national broadband map, and fund statewide initiatives directed at broadband planning. NTIA accepted applications for the State Broadband Data and Development Grant program until August 14, 2009. NTIA originally funded state data collection efforts for a 2-year period, allowing the agency to assess initial state activities before awarding funding for the remainder of this 5-year initiative. On May 28, 2010, NTIA announced that state governments and other existing awardees had until July 1, 2010, to submit amended and supplemental applications for 3 additional years of mapping and data collection activities and to support all other eligible purposes under the Broadband Data Improvement Act. In the first round of broadband stimulus funding, NTIA and RUS received almost 2,200 applications and awarded 150 grants, loans, and loan/grant combinations totaling over $2.2 billion in federal funds to a variety of entities for projects in nearly every state and U.S. territory. This funding includes over $1.2 billion for 82 BTOP projects and more than $1 billion for 68 BIP projects. 
More than 70 percent of these projects were awarded to non-governmental entities, such as for-profit corporations, nonprofit organizations, and cooperative associations. Ten BTOP and three BIP grants were awarded to applicants with multistate projects. For example, RUS awarded a grant to Peetz Cooperative Telephone Company for a Last Mile Remote project covering parts of Colorado and Nebraska, and NTIA awarded a grant to One Economy Corporation for a Sustainable Broadband Adoption project covering parts of 32 states. Figure 1 illustrates the locations of the broadband stimulus projects and the total project funding per state awarded in the first round. BTOP. During the first funding round, NTIA awarded more than $1 billion in BTOP funds for 49 broadband infrastructure projects to deploy Middle Mile and Last Mile broadband technology to unserved and underserved areas of the United States; $57 million for 20 Public Computer Center projects to provide access to broadband, computer equipment, computer training, job training, and educational resources to the general public and specific vulnerable populations; and $110 million for 13 Sustainable Broadband Adoption projects to promote broadband demand through innovation, especially among vulnerable population groups that have traditionally underused broadband technology. NTIA awarded grants to a variety of entities in the first funding round, including public entities, for-profits, nonprofits, cooperative associations, and tribal entities. Our analysis of NTIA’s data shows that public entities, such as states, municipalities, or other local governments, received the largest number of BTOP grants and the largest percentage of the funding. This funding supports BTOP projects in 45 states and territories. Table 5 shows the entity types and the amounts of funding per entity type during the first round.
Of the 82 grants awarded, over half were for infrastructure projects, and NTIA awarded over 40 percent of these grants to for-profit entities in the first round. NTIA awarded Public Computer Center and Sustainable Broadband Adoption projects to public entities and nonprofit organizations. Table 6 shows the types of entities awarded funds for each BTOP funding category. BIP. During the first funding round, RUS announced 49 broadband infrastructure awards totaling nearly $740 million in program funding for Last Mile non-remote projects, 13 awards totaling $161 million for Last Mile remote projects, and 6 awards totaling $167 million for Middle Mile broadband infrastructure projects. The majority of funding was awarded in the form of loan/grant combinations. Of the nearly $1.1 billion in first-round funding, RUS awarded 53 loan/grant combinations totaling over $957 million in program funds, 12 grants totaling about $69 million, and 3 loans totaling over $41 million. RUS awarded grants, loans, and loan/grant combinations to a variety of entities. Eighty-five percent of BIP recipients are for-profit companies or cooperative associations. Four tribal entities also received BIP funding. In addition, 43 of the 68 BIP recipients are Title II borrowers and have previously received rural electrification and telephone loans from RUS. These represent the incumbent local telecommunications providers in the funding area. Table 7 shows the entity types and amount of funding received during the first round. RUS made nearly three-quarters of its awards for Last Mile non-remote projects and the majority of these awards went to for-profit and cooperative associations. Table 8 shows the types of entities that received awards and the number of projects awarded in each BIP funding category. As of June 29, 2010, RUS had provided $899.6 million in program funds for 61 of these 68 projects, representing approximately 85 percent of the awards announced in the first round.
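As a quick consistency check, the first-round award counts and dollar amounts reported above reconcile with one another. A minimal sketch (all figures are taken from this report; the variable names are ours):

```python
# First-round award counts as stated in the report.
btop_awards = {"infrastructure": 49, "public_computer_center": 20,
               "sustainable_adoption": 13}
bip_awards = {"loan_grant_combos": 53, "grants": 12, "loans": 3}

# BTOP: 49 + 20 + 13 = 82 projects; BIP: 53 + 12 + 3 = 68 awards.
assert sum(btop_awards.values()) == 82
assert sum(bip_awards.values()) == 68
# Together they make up the 150 grants, loans, and loan/grant
# combinations awarded in the first round.
assert sum(btop_awards.values()) + sum(bip_awards.values()) == 150

# BIP program dollars (in millions): $957M + $69M + $41M = $1,067M,
# i.e., the "nearly $1.1 billion" in first-round BIP funding.
bip_dollars = 957 + 69 + 41
assert bip_dollars == 1067
```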
This amount represents about $485 million charged against RUS’s Recovery Act budget authority. Of the remaining projects, 4 are still in the contract award process and 3 awards were declined by the recipients. To substantiate information in the applications, NTIA, RUS, and their contractors reviewed financial, technical, environmental, and other documents and determined the feasibility and reasonableness of each project. The agencies reviewed application materials for evidence that the applicants satisfied the criteria established in the first funding notice. The first funding notice identified several types of information that would be subject to due-diligence review, including details related to the following items: Proposed budget, capital requirements and the source of these funds, and operational sustainability. Technology strategy and construction schedule, including a map of the proposed service area and a diagram showing how technology will be deployed throughout the project area (for infrastructure projects) and a timeline demonstrating project completion. Completed environmental questionnaire and historic preservation documentation. Evidence of current subscriber and service levels in the project area to support an “unserved” or “underserved” designation. Recipient’s eligibility to receive a federal award. Any other underlying documentation referenced in the application, including outstanding and contingent obligations (debt), working capital requirements and sources of these funds, the proposed technology, and the construction build-out schedule. To implement the due-diligence review, the agencies, with their contractors, reviewed the application materials for adherence to the first-round funding notice’s guidelines. The contractors formed teams with specific financial or technical expertise to perform the due-diligence evaluation of applications. Generally, the agencies followed similar due-diligence review processes, but there were some differences.
For example, NTIA teams analyzed and discussed the application materials and assigned scores to applications based on the criteria established in the first-round funding notice: (1) project purpose, (2) project benefits, (3) project viability, and (4) project budget and sustainability. Also, NTIA teams contacted applicants when necessary to obtain additional materials or clarify information in the application. Both NTIA and RUS officials reviewed environmental questionnaires addressing National Environmental Policy Act (NEPA) concerns and other documents addressing National Historic Preservation Act (NHPA) concerns. Agency officials requested that applicants provide full environmental and historical impact reports for their projects unless the projects received an exclusion. At the time we reviewed our sample of application files, these reports were pending for NTIA applications; all RUS applications we reviewed received categorical exclusions. During the due-diligence review, agency officials said that the contractor teams had frequent contact with NTIA and RUS to discuss issues that arose during the review. The review teams produced detailed briefing reports describing the information contained in each file and used professional judgment to make recommendations as to each project’s viability and sustainability, and the applicant’s apparent capacity to implement and maintain the project. Agency officials used these reports and other information in making award decisions. The review teams also recommended follow-up actions the agencies might consider to gather more information on unresolved issues. Both agencies’ officials reported that they were satisfied with the quality of their contractors’ work. 
Based on our analysis of the files of 32 awarded applications, we found that the agencies consistently reviewed the applications and substantiated the information as specified in the first-round funding notice, a finding consistent with the Department of Commerce Inspector General’s April 2010 report. In each of the files we reviewed, we observed written documentation that the agencies and their contractors had reviewed and verified pertinent application materials, or made notes to request additional documentation where necessary. In general, we saw evidence that the agencies and the contractors verified the following information: basic fit with the programs (project descriptions); financial reasonableness (capital and operating budgets, financial statements); technological viability (maps of the proposed coverage area, a description of the technology to be used and how it would be employed); environmental and historic preservation/remediation; project planning (construction schedules, project milestones); organizational capacity (resumes or biographies of the principals involved in the project, matching funds, support from both the affected communities and other governmental entities); and congressional districts affected. The two agencies developed different processes to investigate the merits of public comments on whether proposed projects met the definition of “unserved” or “underserved” published in the first funding notice. This investigation is known as an “overbuild analysis” and is needed because of the continued lack of national broadband data. In general, the public comments were submitted by companies that claimed they were already providing service in the proposed service areas and that the applicant’s project would thus lead to overbuilding. 
NTIA’s contractor researched the commenting companies’ claims of provided service via industry databases, the companies’ Web sites and advertisements, and then produced an overbuild analysis for review by agency officials that described the research results and the contractor’s level of confidence in the accuracy of the analysis. For RUS, field staff personally contacted the entities that submitted the comments to verify their claims that they provided service in the affected areas. According to RUS, field staff reconciled any difference between the application and commenter, and where necessary, conducted an actual field visit to the proposed service territory. In all cases in our sample, we observed that agencies and their contractors found that the projects met the definitions of “unserved” and “underserved” set forth in the first funding notice. In at least one case, public comments were retracted following a request for additional information; in other cases, the additional information provided did not support claims of overbuilding. Finally, we interviewed representatives of five industry associations and two companies that received funding during the first round to learn their perspectives on the thoroughness of the due-diligence reviews. Generally, the industry association representatives confirmed that their constituents who had applied for and received broadband funding had undergone due-diligence reviews, but they were not familiar with the extent to which NTIA and RUS had verified applicant information. According to representatives of two companies that received funding during the first round, the agencies’ due-diligence process was thorough and rigorous. During the second funding round, NTIA and RUS have more funds to award and less time to award these funds than they had for the first round, and although the agencies received fewer applications for the second round, they are conducting more due-diligence reviews than they did for the first round.
NTIA and RUS have until September 30, 2010, to obligate approximately $4.8 billion in remaining broadband stimulus funds, or more than twice the $2.2 billion they awarded during the first funding round. More specifically, in the second funding round, NTIA must award $2.6 billion in BTOP grants and RUS must award $2.1 billion in BIP loans and loan/grant combinations. Moreover, NTIA has 2 fewer months in the second funding round to perform due-diligence reviews and obligate funds for selected BTOP projects than in the first funding round, and RUS has 3 months less for BIP. Whereas NTIA took 8 months for these tasks during the first funding round, from the August 20, 2009, application deadline through April 26, 2010, it has 6 months for the second round, from the March 26, 2010, application deadline to the program’s September 30, 2010, obligation deadline. Similarly, RUS took at least 9 months for the first funding round and has 6 months for the second round. (As of July 1, 2010, RUS had not obligated funds for four first-round awards.) For the second funding round, NTIA and RUS received 1,662 applications, compared with 2,174 for the first round. For the first round, NTIA reviewed 940 applications for BTOP, RUS reviewed 401 applications for BIP, and the agencies concurrently reviewed 833 joint applications for both programs. For the second funding round, NTIA received 886 applications for BTOP and RUS received 776 for BIP. No joint applications were solicited for the second round, as the agencies published separate funding notices. As of July 2, 2010, NTIA and RUS have awarded a total of 66 second-round broadband stimulus projects totaling $795 million. While NTIA and RUS have fewer applications to review for the second round, they expect their due-diligence workload to increase. According to agency officials, the quality of the second-round applications is substantially better and more applications will be eligible for due-diligence reviews.
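The second-round workload figures above can likewise be checked arithmetically. A minimal sketch using only numbers stated in the report (the stated components sum to $4.7 billion, consistent with the report's "approximately $4.8 billion"; variable names are ours):

```python
# Remaining funds to obligate by September 30, 2010 (dollars in billions).
btop_remaining = 2.6    # BTOP grants NTIA must award
bip_remaining = 2.1     # BIP loans and loan/grant combinations RUS must award
first_round_awarded = 2.2

# $2.6B + $2.1B = $4.7B, i.e., roughly the "$4.8 billion" cited.
total_remaining = btop_remaining + bip_remaining
assert abs(total_remaining - 4.7) < 1e-9
# "more than twice the $2.2 billion" awarded in the first round:
assert total_remaining > 2 * first_round_awarded

# Application counts: 886 BTOP + 776 BIP = 1,662 second-round applications;
# first round: 940 + 401 + 833 joint = 2,174.
assert 886 + 776 == 1662
assert 940 + 401 + 833 == 2174
```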
Agency officials believe that their staffs’ increased experience, together with some process changes implemented in response to lessons learned during the first funding round (discussed later in this report) will enable their staffs to manage the increased workload and maintain the same high standards in the time allotted. However, as the Recovery Act’s obligation deadline draws near, the agencies may face increased pressure to approve awards. Agency officials state that their programs’ goals remain to fund as many projects as possible that meet the requirements of the act and to select the projects that will have the most economic impact; simply awarding funds is not the goal. The continued lack of national broadband data complicates NTIA and RUS efforts to award broadband stimulus funding in remote, rural areas where it may be needed the most. Although NTIA recently issued grants to states and territories to map broadband services, the National Broadband Map showing the availability of broadband service will not be completed until 2011. The most recent FCC report on currently available Internet access nationwide relies on December 2008 data. Because of the lack of current data, NTIA and RUS are using a cumbersome process to verify the status of broadband services in particular geographic locations. The agencies must collect and assess statements by applicants as well as the aforementioned public comments submitted by existing broadband providers delineating their service areas and speeds available. NTIA and RUS are investing time and resources to review these filings, and in some cases due-diligence reviews have found information in the filings to be inaccurate. During our review of 32 judgmentally selected applications, we found several instances noted by RUS in which companies provided inaccurate information when claiming they were already providing service in a proposed service area. 
For example, when an RUS field representative asked one company to provide supporting information to verify its number of subscribers in its service area during the due-diligence review process, the company admitted the information in its filing was incorrect and withdrew the comment. In addition, for a number of applications we reviewed, NTIA’s contractor had a low or medium level of confidence in the accuracy of the overbuild analysis because data were inconclusive. Because the National Broadband Map will not be completed until 2011, NTIA and RUS will have to complete awards for round two based on existing data. Both agencies have taken steps to streamline their application review processes in an effort to obligate the remaining funds by September 30, 2010. First, the agencies agreed to generally target different types of infrastructure projects and issued separate funding notices for the second round to save time during the eligibility screening phase. Second, the agencies reduced the number of steps in the application review process from two to one, adding some time to the application window and agency review process. NTIA also reduced the basic eligibility factors for BTOP from five to three, moved from a largely unpaid to a paid reviewer model to ensure that reviews were conducted in a timely fashion, and decreased the number of reviewers per application from three to two. These steps allowed the agency to complete the initial portion of its review ahead of schedule, according to BTOP officials. NTIA also split the second round applications into four groups for due-diligence reviews, allowing staff to concentrate on one group at a time. Due-diligence reviews for the first group were completed in June; awards for this group will be announced in July. Reviews for the second group will be completed in July, with awards to be announced in August; reviews for the third and fourth group will be complete in August, with final awards to be announced in September. 
Third, NTIA began to use Census tract data, which companies already compile and report to FCC, to verify applicants’ claims and simplified the process to allow existing broadband providers to supply information about their services. RUS is relying on its mapping tool, which does show Census block data, but not Census tract data, to determine whether the service area is eligible. According to RUS officials, the tool has been upgraded several times to make it easier for applicants to submit information about existing service providers to the agency. Finally, RUS eliminated funding for the Last Mile Remote project designation, reducing the number of project types to screen for award, and also stopped accepting paper applications. Notwithstanding these efficiencies, a few second-round changes may lengthen the time required to complete due-diligence reviews and obligate funds. For example, on May 28, 2010, after the application deadline was closed for round two, NTIA notified State Broadband Data and Development Grant program recipients that they were able to submit amended and supplemental applications for eligible mapping activities in those states. With regard to BTOP, NTIA also solicited applications for public safety broadband infrastructure projects nationwide through July 1, 2010, which adds an additional burden on the agency. The time remaining for due diligence to be performed on these applications is a month shorter than for the first group of round two applications. RUS increased the opportunity for more applications to obtain funding by instituting a “second-chance review” process to allow an applicant to adjust an application that may not have contained sufficient documentation to fully support an award. During the second-chance review, BIP application reviewers will work with applicants to assist them in providing the documentation needed to complete their applications.
Adding these activities to the BIP application reviewers’ duties may lengthen the time required to complete due-diligence reviews and obligate funds by September 30, 2010. Both agencies have renegotiated with their contractors for greater staffing flexibility. RUS has extended its contract with ICF International to provide BIP program support through 2012. In addition, RUS indicated that its previously established broadband support program made no awards in 2010, freeing staff time for BIP activities. Despite this, NTIA and RUS officials told us that existing staff are overworked and there has been some turnover with contractor support. With the completion of second-round funding and the beginning of the postaward phase, it will be critical for NTIA and RUS to ensure that they have enough staff dedicated to project oversight. Under Section 1512 of the Recovery Act and related OMB guidance, all nonfederal recipients of Recovery Act funds must submit quarterly reports that are to include a list of each project or activity for which Recovery Act funds were expended or obligated and information concerning the amount and use of funds and jobs created or retained by these projects and activities. Under OMB guidance, awarding agencies are responsible for ensuring that funding recipients report to a central, online portal no later than 10 calendar days after each calendar quarter in which the recipient received assistance. Awarding agencies must also perform their own data-quality review and request further information or corrections by funding recipients, if necessary. No later than 30 days following the end of the quarter, OMB requires that detailed recipient reports be made available to the public on the Recovery.gov Web site. In addition to governmentwide reporting, BTOP and BIP funding recipients must also submit program-level reports. BTOP-specific reports.
The Recovery Act requires BTOP funding recipients to report quarterly on their use of funds and NTIA to make these reports available to the public. Specifically, NTIA requires that funding recipients submit quarterly reports with respect to Recovery Act reporting, as well as BTOP quarterly and annual financial and performance progress reports. BTOP financial reports include budget and cost information on each quarter’s expenses and are used to assess the overall financial management and health of each award and ensure that BTOP expenditures are consistent with the recipient’s anticipated progress. BTOP performance reporting includes project data, key milestones, and project indicator information, such as the number of new network miles deployed, the number of new public computer centers, or the number of broadband awareness campaigns conducted. BIP-specific reports. RUS requires BIP funding recipients to submit quarterly balance sheets, income and cash-flow statements, and data on how many households are subscribing to broadband service in each community, among other information. In addition, RUS requires funding recipients to specifically state in the applicable quarter when they have received 67 percent of the award funds, which is RUS’s measure for “substantially complete.” BIP funding recipients must also report annually on the number of households; businesses; and education, library, health care, and public safety providers subscribing to new or more accessible broadband services. A final source of guidance is the Domestic Working Group, which has highlighted leading practices in grants management. Effective grants management calls for establishing adequate internal control systems, including efficient and effective information systems, training, policies, and oversight procedures, to ensure grant funds are properly used and achieve intended results. 
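The Section 1512 reporting cadence described above (recipient reports due no later than 10 calendar days after each calendar quarter, and detailed reports public on Recovery.gov within 30 days of quarter end) amounts to a simple date calculation. A minimal sketch under those stated rules; the function names are ours, not the agencies':

```python
from datetime import date, timedelta

def quarter_end(year: int, quarter: int) -> date:
    """Last day of a calendar quarter (quarter in 1..4)."""
    last_month = quarter * 3
    first_of_next = (date(year + 1, 1, 1) if last_month == 12
                     else date(year, last_month + 1, 1))
    return first_of_next - timedelta(days=1)

def section_1512_deadlines(year: int, quarter: int):
    """Return (recipient report due date, public release deadline):
    10 and 30 calendar days after the quarter ends, respectively."""
    end = quarter_end(year, quarter)
    return end + timedelta(days=10), end + timedelta(days=30)

# For Q2 2010 (quarter ends June 30): recipient reports are due
# July 10, 2010, and must be public on Recovery.gov by July 30, 2010.
recipient_due, public_due = section_1512_deadlines(2010, 2)
```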
Some agencies have developed risk-based monitoring criteria to assess where there is a need for heightened monitoring or technical assistance. These criteria can include total funding, prior experience with government grants or loans, independent audit findings, budget, and expenditures. Given the large number of BTOP and BIP grant and loan recipients, including many first-time recipients of federal funding, it is important that NTIA and RUS identify, prioritize, and manage potential at-risk recipients. NTIA. NTIA has developed and is beginning to implement a postaward framework to ensure the successful execution of BTOP. This framework includes three main elements: (1) monitoring and reporting, (2) compliance, and (3) technical assistance. NTIA will use desk reviews and on-site visits to monitor the implementation of BTOP awards and ensure compliance with award conditions by recipients. NTIA plans to provide technical assistance in the form of training, webinars, conference calls, workshops, and outreach for all recipients of BTOP funding to address any problems or issues recipients may have implementing the projects, as well as to assist in adhering to award guidelines and regulatory requirements. NTIA has provided training to recipients in grant compliance and reporting, and has also developed a recipient handbook with a number of checklists to assist recipients with performance and compliance under their federal awards. In addition, NTIA has developed training, handbooks, and other guidance for program staff and grant recipients throughout the entire postaward process and through the completion of BTOP projects in 2013. According to NTIA officials, the agency is preparing a risk-based model for postaward project monitoring and designating three levels of monitoring for grant recipients: routine, intermediate, and advanced. Under this model, program staff will reassess the risk level of each recipient on an annual basis and conduct site visits accordingly. 
NTIA has recently reorganized several senior positions to distribute grants management and grants administration responsibilities more evenly among a larger group of personnel, and to more effectively balance workloads. As a result, more NTIA employees will share postaward responsibilities up to September 30, 2010. For fiscal year 2011, the President’s budget request includes nearly $24 million to continue oversight activities, yet even if this amount is appropriated, agency officials said that there is some risk that NTIA will have insufficient resources to implement this comprehensive postaward framework. RUS. RUS is also putting into place a multifaceted oversight framework to monitor compliance and progress for recipients of BIP funding. Unlike NTIA, which is developing a new oversight framework, RUS plans to replicate the oversight framework it uses for its existing Community Connect, Broadband Access and Loan, Distance Learning and Telemedicine, and Rural Electrification Infrastructure Loan programs. However, RUS still has several open recommendations from a Department of Agriculture Inspector General’s report pertaining to oversight of its grant and loan programs. The main components of RUS’s oversight framework are (1) financial and program reporting and (2) desk and field monitoring. According to RUS officials, no later than 30 days after the end of each calendar-year quarter, BIP recipients will be required to submit several types of information to RUS through its Broadband Collection and Analysis System, including balance sheets, income statements, statements of cash flow, summaries of rate packages, the number of broadband subscribers in each community, and each project’s completion status. 
BIP funding recipients will also be required to submit detailed data on the numbers of households and businesses subscribing to or receiving improved broadband service and the numbers of schools, libraries, health care facilities, and public safety organizations obtaining either new or improved access to broadband service. In addition, RUS will conduct desk and site reviews using 52 permanent general field representatives and field accountants. RUS also has access to 15 additional temporary field staff who can assist with BIP oversight. Moreover, RUS extended its contract with ICF International through 2012, giving the agency additional resources in conducting program oversight. The President’s budget request does not include additional resources to continue BIP oversight activities in fiscal year 2011, but RUS officials believe they have sufficient resources to oversee BIP-funded recipients. Overall, both NTIA and RUS have taken steps to address the concerns we noted in our November 2009 report. For example, the agencies are developing plans to monitor BTOP- and BIP-funded recipients and are working to develop objective, quantifiable, and measurable goals to assess the effectiveness of the broadband stimulus programs. Finally, NTIA now has audit requirements in place for annual audits of commercial entities receiving BTOP grants. Despite this progress, some risks to projects’ success remain. Scale and Number of Projects. NTIA and RUS will need to oversee a far greater number of projects than in the past. As we reported in 2009, the agencies face the challenge of monitoring these projects with fewer staff than were available for their legacy grant and loan programs. Although the exact number of funded projects is still unknown, based on the first funding round’s results and the amount of funding remaining to be awarded, the agencies could fund several hundred projects each before September 30, 2010. 
In addition, BTOP- and BIP-funded projects are likely to be much larger and more diverse than projects funded under the agencies’ prior broadband-related programs. For example, the projects will be dispersed nationwide, with at least one in every state. NTIA is funding several different types of broadband projects, including Last Mile and Middle Mile broadband infrastructure projects for unserved and underserved areas, public computer centers, and sustainable broadband adoption projects. RUS can fund Last Mile and Middle Mile infrastructure projects in rural areas across the country. Adding to these challenges, NTIA and RUS must ensure that the recipient constructs the infrastructure project in the entire project area, not just the area where it may be most profitable for the company to provide service. For example, the Recovery Act mandates that RUS fund projects where at least 75 percent of the funded area is in a rural area that lacks sufficient access to high-speed broadband service to facilitate rural economic development; these are often rural areas with limited demand, and the high cost of providing service to these areas makes them less profitable for broadband providers. The rest of the project can be located in an area that may already have service from an existing provider. Companies may have an incentive to build first where they have the most opportunity for profit and leave the unserved parts of their projects for last in order to attract as many subscribers as possible. In addition, funding projects in low-density areas where there may already be existing providers could potentially discourage further private investment in the area and undermine the viability of both the incumbents’ investment and the broadband stimulus project.
During our review of BIP applications, we found several instances in which RUS awarded projects that would simultaneously cover unserved areas and areas with service from an existing provider. To ensure that Recovery Act funds reach hard-to-serve areas, recipients must deploy their infrastructure projects throughout the proposed area on which their award was based. NTIA and RUS oversight and monitoring procedures will help ensure that the unserved areas are in fact built out. Lack of Sufficient Resources. Both NTIA and RUS face the risk of having insufficient staff and resources to actively monitor BTOP- and BIP- funded projects after September 30, 2010. BTOP and BIP projects must be substantially complete within 2 years of the award date and fully complete within 3 years of the award date. As a result, some projects are not expected to be complete until 2013. However, the Recovery Act does not provide budget authority or funding for the administration and oversight of BTOP- and BIP-funded projects beyond September 30, 2010. Effective monitoring and oversight of over $7 billion in Recovery Act broadband stimulus funding will require significant resources, including staffing, to ensure that recipients fulfill their obligations. NTIA and RUS officials believe that site visits, in particular, are essential to monitoring progress and ensuring compliance; yet, it is not clear if they will have the resources to implement their oversight plans. As discussed earlier, NTIA requested fiscal year 2011 funding for oversight, but the agency does not know whether it will receive the requested funding and whether the amount would be sufficient. RUS intends to rely on existing staff and believes it has sufficient resources; however, RUS field staff members have other duties in addition to oversight of BIP projects. 
Because of this, it is critical that the oversight plans the agencies are developing recognize the challenges that could arise from a possible lack of resources for program oversight after September 30, 2010. For example, the agencies’ staff will need to conduct site visits in remote locations to monitor project development, but a lack of resources will pose challenges to this type of oversight. Planning for these various contingencies can help the agencies mitigate the effect that limited resource levels may have on postaward oversight. The Recovery Act broadband stimulus programs are intended to promote the availability and use of broadband throughout the country, as well as create jobs and stimulate economic development. In the first round, NTIA and RUS funded a wide variety of projects in most states and territories to meet these goals. In doing so, the agencies developed and implemented an extensive and consistent process for evaluating project applications. In addition, the agencies made efforts to gather and apply lessons learned from the first funding round to the second round in order to streamline the application review process, making it easier for applicants to submit and officials to review applications. However, the agencies must also oversee funded projects to ensure that they meet the objectives of the Recovery Act. To date, NTIA and RUS have begun to develop and implement oversight plans to support such efforts and have developed preliminary risk-based frameworks to monitor the progress and results of broadband stimulus projects. However, the Recovery Act does not provide funding beyond September 30, 2010. As the agencies continue to develop their oversight plans, it is critical that they anticipate possible contingencies that may arise because of the limited funding and target their oversight resources to ensure that recipients of Recovery Act broadband funding complete their projects in a manner consistent with their applications and awards. 
To ensure effective monitoring and oversight of the BTOP and BIP programs, we recommend that the Secretaries of Agriculture and Commerce incorporate into their risk-based monitoring plans steps to address the variability in funding levels for postaward oversight beyond September 30, 2010. We provided a draft of this report to the Departments of Agriculture and Commerce for review and comment. In its written comments, RUS agreed that awarding and obligating the remaining funds under the BIP program will be challenging and noted that the loan obligation process for the second funding round will be expedited because financial documents have been crafted and are now in place. In addition, RUS agreed that there is a lack of data on broadband availability throughout the country and stated that the agency is using field representatives and other Rural Development field staff to support the BIP program as needed. RUS also noted that it is developing contingency plans to retain the majority of its temporary Recovery Act staff beyond September 30, 2010. RUS took no position on our recommendation. In its comments, NTIA stated that it is on schedule to award all of its Recovery Act funds by September 30, 2010. In addition, NTIA noted that the President’s fiscal year 2011 budget request, which includes authority and funding for NTIA to administer and monitor project implementation, is vital to ensuring that BTOP projects are successful and that recipients fulfill their obligations. NTIA took no position on our recommendation. Finally, the agencies provided technical comments that we incorporated, as appropriate. RUS’s and NTIA’s full comments appear in appendixes III and IV, respectively. We are sending copies of this report to the Secretary of Agriculture and the Secretary of Commerce, and interested congressional committees. This report is available at no charge on the GAO Web site at http://www.gao.gov.
If you have any further questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The objectives of this report were to examine (1) the results of the first broadband stimulus funding round; (2) the extent to which the National Telecommunications and Information Administration’s (NTIA) and the Rural Utilities Service’s (RUS) due-diligence review substantiated information in the awardees’ applications; (3) the challenges, if any, facing NTIA and RUS in awarding the remaining broadband stimulus funds; and (4) the actions, if any, NTIA and RUS are taking to oversee grant and loan recipients. To describe the results of the first funding round, we obtained and analyzed data from NTIA and RUS and the agencies’ Web sites and press releases, interviewed agency officials, and reviewed agency program documentation. We are reporting publicly available data that NTIA and RUS provided on the first round broadband stimulus awards with the intent to describe the number of awards, the entities receiving first round funding, and the types of projects. This information is presented for descriptive purposes. The data are available online at BroadbandUSA.gov, the Web site through which NTIA and RUS publicly report Broadband Technology Opportunities Program (BTOP) and Broadband Initiatives Program (BIP) application and award data. In addition, we obtained and reviewed internal application information and award documentation from both agencies. We also interviewed NTIA and RUS officials who were involved in reviewing applications and awarding the broadband stimulus funds. 
During these interviews, we reviewed the progress NTIA and RUS were making to complete the first funding round and discussed the status of the awards, including the number of awards that had been obligated, and progress made during the second funding round. To familiarize ourselves with the programs and track their ongoing status, we reviewed NTIA and RUS program documentation, both publicly available online and internal documents provided by the agencies; reviewed a November 2009 GAO report on NTIA’s and RUS’s broadband stimulus programs; and reviewed April 2010 reports by the Congressional Research Service (CRS) and the Department of Commerce Inspector General (Commerce IG) covering first funding round applications, awards, and program management. To determine the extent to which NTIA’s and RUS’s due-diligence reviews substantiated information in awardees’ applications, we reviewed a judgmental sample of 32 awarded application files, including 15 from BTOP and 17 from BIP. In choosing our sample, we considered individual award amounts, aggregate amounts of awards per state or territory (state), type of project, type of applicant, and geographic location of the state. To determine our sample criteria, we analyzed descriptive statistics for all awards and grouped states into three categories: “below $50 million” (low); “between $50 million and $100 million” (middle); and “above $100 million” (high). Because BIP’s aggregate award amounts to the states to which it awarded funds were slightly higher than those for BTOP overall, we chose to review a slightly larger number of BIP application files than BTOP files. We chose states from among the three award categories so that the representation of low-, middle-, and high-award states approximated that in the overall population. After choosing our sample, we met with agency officials to discuss the contents of the application files and clarify the requirements of the due-diligence review process. 
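The grouping step described above, bucketing each state by its aggregate award amount and then drawing a sample whose composition approximates the overall population, can be sketched as follows. The state totals here are invented for illustration; only the three dollar categories come from the methodology described:

```python
# Sketch of GAO's state-grouping step: classify each state by aggregate
# award amount ("below $50M" low, "$50M-$100M" middle, "above $100M" high),
# then size the sample from each bucket in proportion to the bucket's share
# of all states. The per-state totals below are hypothetical.

def award_category(total_millions):
    if total_millions < 50:
        return "low"
    if total_millions <= 100:
        return "middle"
    return "high"

state_totals = {"A": 20, "B": 45, "C": 60, "D": 95, "E": 130, "F": 210}

buckets = {}
for state, total in state_totals.items():
    buckets.setdefault(award_category(total), []).append(state)

sample_size = 3
for cat in ("low", "middle", "high"):
    states = buckets[cat]
    # proportional allocation of the sample across buckets
    share = round(sample_size * len(states) / len(state_totals))
    print(cat, states, "-> pick", share)
```

A judgmental sample like GAO's would then pick specific files from each bucket by hand rather than at random, weighing factors such as project type and geography.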
Then, we arranged to inspect the agency files: RUS provided electronic access to its due-diligence materials for each application via an online Web site and we performed our file review remotely; NTIA provided us with a CD-ROM containing the relevant project files and we reviewed these at the Department of Commerce. We reviewed the decision memos summarizing the total output of the due-diligence review, documentation of environmental reviews, project budgets, construction schedules, and assessment of public notice filings. We recorded our findings on a data collection instrument and verified the results by using two separate reviewers. We did not evaluate the agencies’ decisions to award or deny applications or the potential for success of any project. Rather, we assessed the extent to which NTIA and RUS developed and implemented a due-diligence review process. In addition to reviewing the sample, we interviewed agency officials and two award recipients. To determine the challenges, if any, that NTIA and RUS face in awarding the remaining broadband stimulus funds, we studied the requirements set forth in the Recovery Act; evaluated changes between the first- and second-round funding notices; and interviewed agency officials, representatives of five telecommunications associations, and two award recipients. We also reviewed prior GAO, CRS, and Commerce IG reports to learn about issues affecting the broadband stimulus programs. We also monitored agency press releases and tracked notices published on the Broadbandusa.gov Web site. Finally, to determine the actions NTIA and RUS are taking to oversee grant and loan recipients, we interviewed agency officials about plans to monitor and oversee awardees. During these meetings, we discussed Recovery Act reporting requirements, as well as specific BTOP and BIP requirements. We also reviewed agency plans and guidance provided to recipients. 
We compared those plans to requirements established in the Recovery Act and guidance from the Office of Management and Budget, the Domestic Working Group, and GAO. We conducted this performance audit from February through August 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 9 provides information on the 10 BTOP and 3 BIP projects covering areas in multiple states. In addition to the contact named above, Michael Clements, Assistant Director; Jonathan Carver; Elizabeth Eisenstadt; Brandon Haller; Tom James; Elke Kolodinski; Kim McGatlin; Josh Ormond; and Mindi Weisenbloom made key contributions to this report.

Access to affordable broadband service is seen as vital to economic growth and improved quality of life. To extend broadband access and adoption, the American Recovery and Reinvestment Act (Recovery Act) provided $7.2 billion to the Department of Commerce's National Telecommunications and Information Administration (NTIA) and the Department of Agriculture's Rural Utilities Service (RUS) for grants or loans to a variety of program applicants. The agencies are awarding funds in two rounds and must obligate all funds by September 30, 2010. This report addresses the results of the first broadband stimulus funding round, the extent to which NTIA's and RUS's application reviews substantiated application information, the challenges facing NTIA and RUS in awarding the remaining funds, and actions taken to oversee grant and loan recipients.
GAO analyzed program documentation, reviewed a judgmentally selected sample of applications from first round award recipients, and interviewed agency officials and industry stakeholders. In the first round of broadband stimulus funding that began in July 2009 and ended in April 2010, NTIA and RUS received over 2,200 applications and awarded 150 grants, loans, and loan/grant combinations totaling $2.2 billion to a variety of entities in nearly every state and U.S. territory. This funding includes $1.2 billion for 82 projects awarded by NTIA and more than $1 billion for 68 projects awarded by RUS. NTIA primarily awarded grants to public entities, such as states and municipalities, whereas RUS made grants, loans, and loan/grant combinations primarily to private-sector entities, such as for-profit companies and cooperatives. NTIA and RUS consistently substantiated information in first round award recipients' applications. The agencies and their contractors reviewed financial, technical, environmental, and other documents and determined the feasibility and reasonableness of each project. GAO's review of 32 award recipient applications found that the agencies consistently reviewed the applications and substantiated the information as specified in the first funding notice. In each of the files, GAO observed written documentation that the agencies and their contractors reviewed and verified pertinent application materials, and requested additional documentation where necessary. To meet the Recovery Act's September 30, 2010, deadline for obligating broadband funds, NTIA and RUS must award approximately $4.8 billion--or more than twice the amount they awarded during the first round--in less time than they had for the first round. As the Recovery Act's obligation deadline draws near, the agencies may face increased pressure to approve awards.
NTIA and RUS also lack detailed data on the availability of broadband service throughout the country, making it difficult to determine whether a proposed service area is unserved or underserved, as defined in the program funding notices. To address these challenges, NTIA and RUS have streamlined their application review processes by, for example, eliminating joint reviews and reducing the number of steps in the due-diligence review process, and NTIA began using Census tract data to verify the presence of service. NTIA and RUS are putting oversight plans in place to monitor compliance and progress for broadband stimulus funding recipients, but some risks remain. The agencies will need to oversee far more projects than in the past and these projects are likely to be much larger and more diverse than projects funded under the agencies' prior broadband-related programs. Additionally, NTIA and RUS must ensure that the recipients construct the infrastructure projects in the entire project area, not simply the area where it may be most profitable for the company to provide service. Both NTIA and RUS face the risk of having insufficient resources to actively monitor Recovery Act funded broadband projects. Because of this, planning for a possible lack of resources for program oversight after September 30, 2010, can help the agencies mitigate the effect of limited resources on postaward oversight. The Secretaries of Agriculture and Commerce should incorporate into their risk-based monitoring plans, steps to address variability in funding levels for postaward oversight beyond September 30, 2010. Both agencies took no position on GAO's recommendation and noted steps being taken to complete their respective programs. |
Recent advances in aircraft technology, including advanced collision avoidance and flight management systems, and new automated tools for air traffic controllers enable a shift from air traffic control to collaborative air traffic management. Free flight, a key component of air traffic management, will provide pilots with more flexibility, under certain conditions, to fly more direct routes from city to city. Currently, pilots primarily fly fixed routes—the aerial equivalent of the interstate highway system—that often are less direct because pilots are dependent on ground- based navigational aids. Through free flight, FAA hopes to increase the capacity, efficiency, and safety of our nation's airspace system to meet the growing demand for air transportation as well as enhance the controllers’ productivity. The aviation industry, especially the airlines, is seeking to shorten flight times and reduce fuel consumption. According to FAA’s preliminary estimates, the benefits to the flying public and the aviation industry could reach into the billions of dollars when the program is fully operational. In 1998, FAA and the aviation community agreed to a phased approach for implementing the free flight program, established a schedule for phase 1, and created a special program office to manage this phase. During phase 1, which FAA plans to complete by the end of calendar year 2002, the agency has been deploying five new technologies to a limited number of locations and measuring their benefits. Figure 1 shows how these five technologies—Surface Movement Advisor (SMA), User Request Evaluation Tool (URET), Traffic Management Advisor (TMA), Collaborative Decision Making (CDM), and passive Final Approach Spacing Tool (pFAST)—operate to help manage air traffic. According to FAA, SMA and CDM have been deployed at all phase 1 sites on or ahead of schedule. Table 1 shows FAA’s actual and planned deployment dates for URET, TMA, and pFAST. 
To measure whether the free flight tools will increase system capacity and efficiency, in phase 1, FAA has been collecting data for the year prior to deployment and initially planned to collect this information for the year after deployment before making a decision about moving forward. In December 1999, at the urging of the aviation community, FAA accelerated its funding request to enable it to complete the next phase of the free flight program by 2005—2 years ahead of schedule. During this second phase, FAA plans to deploy some of the tools at additional locations and colocate some of them at selected facilities. FAA also plans to conduct research on enhancements to these tools and incorporate them when they are sufficiently mature. FAA plans to make an investment decision in March 2002 about whether to proceed to phase 2. However, by that date, the last site for URET will have been operational for only 1 month, thus not allowing the agency to collect data for 1 year after deployment for that site before deciding to move forward. (See table 1.) FAA officials told us that because the preliminary data showed that the benefits were occurring more rapidly than anticipated, they believe it is unnecessary to wait for the results from the evaluation plan to make a decision about moving forward. To help airports achieve their maximum capacity for arrivals through free flight, FAA’s controllers will undergo a major cultural change in how they will manage the flow of air traffic over a fixed point (known as metering). Under the commonly used method, controllers use “distance” to meter aircraft. With the introduction of TMA, controllers will have to adapt to using “time” to meter aircraft. The major technical challenge with deploying the free flight tools is making URET work with FAA’s other air traffic control systems. 
While FAA does not think this challenge is insurmountable, we believe it is important for FAA to resolve this issue to fully realize URET's benefit of increasing controller productivity. Initially, controllers had expressed concern about how often they could rely on TMA to provide the data needed to effectively manage the flow of traffic. However, according to FAA and subsequent conversations with controllers, this problem was corrected in May 2001 when the agency upgraded TMA software and deployed the new version to all sites. To FAA’s credit, it has decided not to deploy pFAST to additional facilities in phase 2 because of technical difficulties associated with customizing the tool to meet the specific needs of each facility, designing other automated systems that are needed to make it work, and affordability considerations. Ensuring that URET is compatible with other major air traffic control systems is a crucial technical challenge because this requires FAA to integrate software changes among multiple systems. Among these systems are FAA’s HOST, Display System Replacement, and local communications networks. Compounding this challenge, FAA has been simultaneously upgrading these systems’ software to increase their capabilities. How well URET will work with these systems is unknown because FAA has not yet completed testing of this tool with them. FAA has developed the software needed for integration and has begun preliminary testing. Although problems have been uncovered during testing, FAA has indicated that these problems should not preclude URET’s continued deployment. By the end of August 2001, FAA expects to complete testing of URET’s initial software in conjunction with the agency’s other major air traffic control systems.
FAA acknowledges that further testing might uncover the need for additional software modifications, which could increase costs above FAA’s current estimate for this tool’s software development and could cause the agency to defer capabilities planned for phase 1. Ensuring URET’s compatibility with other air traffic control systems is important to fully realize its benefits of increasing controllers’ productivity. URET is used in facilities that control air traffic at high altitudes and will help associate and lead controllers work together to safely separate aircraft. Traditionally, an associate controller has used the data on aircraft positions provided by the HOST computer and displayed on the Display System Replacement workstation to assess whether a potential conflict between aircraft exists. If so, an associate controller would annotate the paper flight strips containing information on their flights and forward these paper flight strips to the lead controller who would use the Display System Replacement workstation to enter flight plan amendments into the HOST. URET users we spoke with said that this traditional approach is a labor-intensive process, requiring over 30 keystrokes. With URET, an associate controller can rely on this tool to automatically search for potential conflicts between aircraft, which are then displayed. URET also helps an associate controller resolve a potential conflict by automatically calculating the implications of any change prior to amending the flight plan directly into the HOST. According to the users we spoke with, these amendments require only three keystrokes with URET. FAA, controllers, maintenance technicians, the aviation community, and other stakeholders agree on the importance of using a phased approach to implementing the free flight program. 
This approach allows FAA to gradually deploy the new technologies at selected facilities and lets users gain operational experience before the agency fully commits to the free flight tools. It basically follows the “build a little, test a little, field a little” approach that we have endorsed on numerous occasions. To FAA’s credit, the agency has appropriately used this approach to determine that it will not deploy pFAST in phase 2. We also agree with major stakeholders that adapting to the program’s tools poses the greatest operational challenge because they will change the roles and responsibilities of the controllers and others involved in air traffic services. However, the success of free flight will rely on agencywide cultural changes, especially with controllers, who trust their own judgment more than some of FAA’s new technologies, particularly because the agency’s prior efforts to deploy them have had significant problems. Without training in these new tools, air traffic controllers would be hampered in fulfilling their new roles and responsibilities. Another major challenge is effectively communicating TMA’s capabilities to users. Because FAA has been deferring and changing capabilities, it has been difficult for controllers to know what to expect and when from this tool and for FAA to ensure that it provides all the capabilities that had been agreed to when FAA approved the investment for phase 1. During our meetings with air traffic controllers and supervisors, their biggest concern was that the free flight tools would require cultural changes in the way they carry out their responsibilities. By increasing their dependence on automation for their decisionmaking, these tools are expected to help increase controllers’ productivity. Moreover, the tools will require changes in commonly recognized and accepted methods for managing traffic.
Controllers and supervisors emphasized that URET will increase the responsibilities of the associate controllers in two important ways. First, their role would no longer be focused primarily on separating traffic by reading information on aircraft routes and altitudes from paper flight strips, calculating potential conflicts, and manually reconfiguring the strips in a tray to convey this information to a lead controller. With the URET software that automatically identifies potential conflicts up to 20 minutes in advance, associate controllers can be more productive because they will no longer have to perform these manual tasks. Second, they can assume a more strategic outlook by becoming more focused on improving the use of the airspace. URET enables them to be more responsive to a pilot’s request to amend a flight plan (such as to take advantage of favorable winds) because automation enables them to more quickly check for potential conflicts before granting a request. Although the controllers said they look forward to assuming this greater role and believe that URET will improve the operational efficiency of our nation’s airspace, they have some reservations. Achieving this operational efficiency comes with its own set of cultural and operational challenges. Culturally, controllers will have to reduce their dependency on paper flight strips as URET presents data electronically on a computer screen. According to the controllers we interviewed, this change will be very challenging, especially at facilities that handle large volumes of traffic, such as Chicago, because the two facilities that have received URET have taken several years to become proficient with it even though they have less traffic. Operationally, controllers said that URET’s design must include some backup capability because they foresee the tool becoming a critical component in future operations. 
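URET's automated search for potential conflicts up to 20 minutes in advance, described above, can be illustrated with a minimal sketch. Real conflict probes use full trajectory models; the flat two-dimensional geometry, constant velocities, and 5 nautical-mile separation minimum below are simplifying assumptions, not URET's actual algorithm:

```python
# Minimal conflict-probe sketch in the spirit of URET's automated search:
# project each aircraft forward along a straight track at constant speed
# and flag pairs predicted to pass within a separation minimum during the
# next 20 minutes. Flat 2-D geometry and the 5 nm minimum are assumptions.

SEPARATION_NM = 5.0
LOOKAHEAD_MIN = 20

def position(ac, t_min):
    # ac = (x_nm, y_nm, vx_nm_per_min, vy_nm_per_min)
    x, y, vx, vy = ac
    return (x + vx * t_min, y + vy * t_min)

def predicted_conflict(ac1, ac2, step_min=1):
    """Return minutes until predicted loss of separation, or None."""
    for t in range(0, LOOKAHEAD_MIN + 1, step_min):
        x1, y1 = position(ac1, t)
        x2, y2 = position(ac2, t)
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < SEPARATION_NM:
            return t
    return None

# Two aircraft 100 nm apart, closing head-on at 5 nm/min each.
print(predicted_conflict((0, 0, 5, 0), (100, 0, -5, 0)))  # 10
```

With a probe of this kind running automatically, the associate controller's check before granting a flight-plan amendment becomes a lookup rather than a manual calculation, which is the productivity gain the controllers describe.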
Moreover, as controllers become increasingly experienced and reliant on URET, they will be reluctant to return to the former manual way because those skills will have become less current. As new controllers join the workforce, an automated backup capability will become increasingly essential because they will not be familiar with controlling traffic manually with paper flight strips. Currently, FAA is not committed to providing a backup to URET in either phase because the tool is only a support tool, not a mission-critical tool that requires backup. However, the agency is taking preliminary steps to provide some additional space for new equipment in the event it decides to provide this backup. Depending on how the agency plans to address this issue, the cost increase will vary. For TMA, controllers emphasized during our discussions that using time rather than distance to meter properly separated aircraft represents a major cultural shift. While controllers can visually measure distance, they cannot do the same with time. As one controller in a discussion group commented, TMA “is going to be a strain, … and I hate to use the word sell, but it will be a sell for the workforce to get this on the floor and turn it on and use it.” Currently, controllers at most en route facilities use distance to meter aircraft as they begin their descent into an airport’s terminal airspace. This method, which relies on the controllers’ judgment, results in the less efficient use of this airspace because controllers often add distance between planes to increase the margin of safety. With TMA, controllers will rely on the computer’s software to assign a certain time for aircraft to arrive at a predetermined point. Through continuous automatic updating of its calculations, TMA helps balance the flow of arriving flights into the congested terminal airspace by rapidly responding to changing conditions. 
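The time-based metering concept described above can be illustrated with a minimal first-come, first-served scheduler: each arrival is assigned a scheduled time at a meter fix, spaced from its predecessor, and the difference from its estimated time is the delay the controller must absorb. The spacing interval, flight names, and times below are hypothetical; TMA's real scheduler is far more sophisticated, continuously updating its calculations as conditions change.

```python
# Illustrative sketch of time-based metering; not TMA's actual scheduler.
# Each arrival has an estimated time of arrival (ETA, in minutes from now) at a
# meter fix; the scheduler assigns scheduled times of arrival (STAs) so that
# successive arrivals are spaced by at least the required interval.

def meter_arrivals(etas, min_spacing_min=2.0):
    """Return (callsign, eta, sta, delay) per flight, first-come first-served."""
    schedule = []
    next_free = 0.0
    for callsign, eta in sorted(etas.items(), key=lambda kv: kv[1]):
        sta = max(eta, next_free)          # never schedule earlier than the ETA
        schedule.append((callsign, eta, sta, sta - eta))
        next_free = sta + min_spacing_min  # reserve the spacing interval
    return schedule

# Three arrivals bunched within two minutes; metering spreads them out in time.
etas = {"NWA301": 12.0, "DAL88": 12.5, "COA45": 14.0}
for callsign, eta, sta, delay in meter_arrivals(etas):
    print(f"{callsign}: ETA {eta:4.1f}  STA {sta:4.1f}  delay {delay:3.1f} min")
```

The sketch shows why the shift is cultural as much as technical: instead of judging a distance gap visually, the controller works to deliver each aircraft to a point at a computed time.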
The controllers at the first three en route centers that have transitioned to TMA accepted it easily because they had been using time to meter air traffic for 20 years. However, as other en route centers transition to TMA, winning the controllers’ acceptance will be more difficult because they have traditionally used distance to meter air traffic. FAA management realizes that the controllers’ transition from distance-based to time-based metering will be challenging, and the Free Flight Program Office has accordingly established a period of at least 1 full year for controllers to become trained and comfortable with the tool, grow proficient in using it, and begin to reap its full benefits. FAA is relying heavily on national user teams to help develop training for TMA and URET. However, because of a lack of training development expertise and other factors, the teams’ efforts to provide adequate training for TMA have been hampered. Controllers said that, while they are knowledgeable about TMA, they are not specialists in developing training and therefore need more assistance from the program office. Also, because only a few key controllers have experience in using TMA, the teams have had to rely on them to develop a standardized training program while working with local facilities to tailor it to their needs. Moreover, these controllers are being asked to troubleshoot technical problems. Finally, controllers said the computer-based training they have received to date has not been effective because it does not realistically simulate operational conditions. FAA is currently revising its computer-based training to provide more realistic simulations. Because using the free flight tools will require controllers to undergo a complex and time-consuming cultural change, developing a comprehensive training program would greatly help FAA’s efforts to implement the new free flight technologies.
Communicating to users how the new tools will benefit the organization and them will greatly enhance the agency’s training strategy. While FAA’s training plans for URET are preliminary because it is undergoing testing and is not scheduled for deployment until the latter part of 2001, we believe that providing adequate training in advance is essential for controllers to become proficient in using this tool. Our discussions with controllers and FAA’s TMA contractor indicated that in order to address local needs and to fix technical problems with TMA, FAA deferred several aspects of the tool that had been established for earlier deployment in phase 1. FAA officials maintain that these capabilities will be deployed before the end of phase 1. However, if these capabilities are not implemented in phase 1, pushing them into phase 2 will likely increase costs and defer benefits. For example, TMA’s full capability to process data from adjacent en route centers has been changed because FAA determined that providing the full capability was not cost effective. While controllers said that even without this full capability, TMA has provided some benefits, they said that deferring some aspects of the tool’s capabilities has made it less useful than they expected. Moreover, controllers maintain that FAA has not clearly communicated the changes with the tool’s capabilities to them. Without knowing how the tool’s capabilities are being changed and when the changes will be incorporated, it is difficult for users to know what to expect and when and for FAA to evaluate the tool’s cost, schedule, and ability to provide expected benefits. FAA has begun to measure capacity and efficiency gains from using the free flight tools and its preliminary data show that the tools provide benefits. FAA expects additional sites to show similar or greater benefits, thus providing data to support a decision to move to phase 2 by March 2002. 
Because the future demand for air traffic services is expected to outpace the tools’ capacity increases, the collective length of delays during peak periods will continue to increase but not to the extent that they would have without them. When FAA, in collaboration with the aviation industry, instituted the phased approach to implement its free flight program in 1998, the agency established a qualitative goal for increasing capacity and efficiency. In May 2001, FAA announced quantifiable goals for each of the three tools. For URET, FAA established an efficiency goal to increase direct routings by 15 percent within the first year of being fully implemented. Achieving this goal translates into reduced flight times and fuel costs for the airlines. The capacity goals for TMA and pFAST are dependent upon whether they are used together (colocated) and whether any constraints at an airport prevent them from being used to their full potential to expand capacity. If they are used together (such as at Minneapolis), FAA expects capacity to increase by 3 percent in the first year of operations and by 5 percent in the following year. However, at Atlanta, which is constrained by a lack of runways, the goal is 3 percent when these tools are used together. If only one of these tools is deployed (such as at Miami), FAA expects a 3-percent increase in capacity. While FAA has established quantifiable goals for these tools, the agency has only recently begun to develop information to determine whether attaining its goals will result in a positive return on the investment. Making this determination is important to help ensure that the capacity and efficiency gains provided by these tools are worth the investment. As previously shown in table 1, the actual systems that will be deployed for TMA and pFAST have only recently been installed at several locations or are scheduled to be installed this winter. 
To date, prototypes of these tools have been colocated at one location, and the actual equipment has been colocated at three locations. TMA is in a stand-alone mode at two locations. FAA reported that TMA achieved its first-year goal of a 3-percent increase in capacity at Minneapolis, and the agency is collecting data to determine whether the tool is meeting its goals at the other locations. Most of FAA’s data regarding the benefits provided by these tools are based on operations of their prototypes at Dallas-Fort Worth. These data show that TMA and pFAST achieved the 5-percent colocation goal. However, the data might not be indicative of the performance of the actual tools that will be deployed to other locations because Dallas-Fort Worth does not face the constraints affecting many other airports (such as a lack of runways). Because FAA does not plan to begin deploying the actual model of URET until November 2001, the agency’s data on its benefits have been based only on a prototype. At the two facilities—Indianapolis and Memphis—where the prototype has been deployed since 1997, FAA reported that URET has increased the number of direct routings by over 17 percent as of April 2001. According to FAA’s data, all flights through these two facilities were shortened by an average of one-half mile, which collectively saved the airlines approximately $1.5 million per month in operating costs. However, the benefits that FAA has documented for using URET reflect savings for just a segment of a flight—when an airplane is cruising through high-altitude airspace—not the entire flight from departure to arrival. Maintaining URET’s benefits for an entire flight is partly dependent on using it in conjunction with TMA and pFAST.
Although a researcher at the Massachusetts Institute of Technology, who is reviewing aspects of FAA’s free flight program, recognizes URET’s potential benefits, the researcher expressed concerns that its benefits could be lessened in the airspace around airports whose capacity is already constrained. Likewise, in a study on free flight supported by the National Academy of Sciences and the Department of Transportation, the authors found that the savings attributed to using direct routings might “be lost as a large stack of rapidly arriving aircraft must now wait” in the terminal airspace at constrained airports. Although URET can get an airplane closer to its final destination faster, airport congestion will delay its landing. While TMA and pFAST are designed to help an airport handle arrivals more efficiently and effectively, they cannot increase the capacity of an airport’s terminal airspace beyond the physical limitations imposed by such constraining factors as insufficient runways or gates. In contrast, FAA’s Free Flight Program Office believes that the savings observed with the prototype of URET will accrue when the actual tool is used in conjunction with TMA and pFAST. FAA plans to have procedures in place by the time these three tools are used together so that URET’s benefits will not be reduced. However, the colocation of these three tools is not expected to occur until February 2002, which is only 1 month before the agency plans to make an investment decision for phase 2. Thus, we believe that FAA will not have enough time to know whether URET’s benefits would be reduced. During peak periods, the demand for air traffic currently exceeds capacity at some airports, causing delays. FAA expects this demand to grow, meaning that more aircraft will be delayed for longer periods. Free flight tools have the potential to allow the air traffic system to handle more aircraft (increase capacity) but not to keep up with the projected growth in demand. 
Thus, they can only slow the growth of future delays. They cannot fully eliminate future delays or reduce current delays unless demand remains constant or declines. FAA’s model of aircraft arrivals at a hypothetical congested airport, depicted in figure 2, illustrates the projected impact of the tools. According to the model, if demand increases and the tools are not deployed (capacity remains constant), the collective delays for all arriving flights (not each one) will increase by about an hour during peak periods. But if the tools are deployed and demand growth still outpaces the capacity they add, these delays will increase by only about half an hour. While recognizing that the free flight tools will provide other benefits, FAA has not quantified them. According to FAA, although TMA and pFAST are designed to maximize an airport’s arrival rates, they can also increase departure rates because of their ability to optimize the use of the airspace and infrastructure around an airport. Regarding URET, FAA maintains that by automating some of the functions that controllers had performed manually, such as manipulating paper flight strips, the tool allows controllers to be more productive. If FAA’s data continue to show positive benefits, the agency should be in a position by March 2002 to make a decision to deploy TMA to additional sites. However, FAA might not be in a position to make an informed decision on URET because the schedule might not allow time to collect sufficient data to fully analyze the expected benefits from this tool during phase 1. Currently, operational issues present the greatest challenge because using the free flight tools will entail a major cultural shift for controllers as their roles and responsibilities and methods for managing air traffic change. While FAA management has recognized the cultural changes involved, it has not taken a leadership role in responding to the magnitude of the changes.
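The logic of FAA's figure 2 model described above, in which delays grow with the gap between arrival demand and airport capacity and the tools narrow but do not close that gap, can be approximated with a simple deterministic queueing sketch. All rates below are hypothetical; they are chosen only to mirror the report's point that a roughly 5-percent capacity gain slows delay growth without eliminating it.

```python
# Illustrative deterministic queueing sketch of the figure 2 logic; the demand
# and capacity rates are hypothetical, not FAA's modeling assumptions.

def cumulative_delay(demand_per_hr, capacity_per_hr, peak_hours=3):
    """Approximate total delay (aircraft-hours) over a peak period: the backlog
    carried in each hour, summed, is the area under the queue-length curve."""
    queue = 0.0
    total = 0.0
    for _ in range(peak_hours):
        # Excess arrivals that the airport cannot accept join the backlog.
        queue = max(0.0, queue + demand_per_hr - capacity_per_hr)
        total += queue  # every backlogged aircraft waits through this hour
    return total

# Without the tools: demand grows to 66 arrivals/hr against a constant
# 60/hr acceptance rate. With the tools: capacity rises about 5 percent,
# to 63/hr. Delays still grow, just more slowly.
print(cumulative_delay(66, 60))  # 36.0 aircraft-hours of delay
print(cumulative_delay(66, 63))  # 18.0 aircraft-hours of delay
```

Even in this toy model the tools halve delay growth rather than eliminate it, which is the report's central caveat: so long as demand outpaces capacity, the backlog keeps building.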
In particular, while involving controllers in developing and delivering training on these new tools, FAA has not provided support to ensure that the training can be effectively developed and presented at local sites. Because the agency has been changing the capabilities of TMA from what had been originally planned but not systematically documenting and communicating these changes, FAA and the users of this tool lack a common framework for understanding what is to be accomplished and whether the agency has met its goals. While the free flight tools have demonstrated their potential to increase capacity and save the airlines money, only recently has FAA established quantifiable goals for each tool and begun to determine whether its goals are reasonable—that they will result in a positive return on investment. Because several factors influence the benefits expected from the tools, it is important for FAA to clearly articulate the expectations for each tool by specific location. To make the most informed decision about moving to phase 2 of the free flight program, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following actions:

- Collect and analyze sufficient data in phase 1 to ensure that URET can effectively work with other air traffic control systems.
- Improve the development and the provision of local training to enable field personnel to become proficient with the free flight tools.
- Determine that the goals established in phase 1 will result in a positive return on investment and collect data to verify that the goals are being met at each location.
- Establish a detailed set of capabilities for each tool at each location for phase 2 and establish a process to systematically document and communicate changes to them in terms of cost, schedule, and expected benefits.

We provided a draft of this report to the Department of Transportation and the National Aeronautics and Space Administration for their review and comment.
We met with officials from the Office of the Secretary and FAA, including the Director and Deputy Director of the Free Flight Program Office, to obtain their comments on the draft report. These officials generally concurred with the recommendations in the draft report. They stated that, to date, FAA has completed deployment of the Surface Movement Advisor and the Collaborative Decision Making tools on, or ahead of, schedule at all phase 1 locations and plans to complete the deployment of the remaining free flight tools on schedule. FAA officials also stated that the agency is confident that it will be in a position to make an informed decision, as scheduled in March 2002, about moving to the program’s next phase, which includes the geographic expansion of TMA and URET. Furthermore, FAA stated that the free flight tools have already demonstrated positive benefits in an operational environment and that it expects these benefits will continue to be consistent with the program’s goals as the tools are installed at additional sites. In addition, FAA officials provided technical clarifications, which we have incorporated in this report, as appropriate. We acknowledge that FAA has deployed the Surface Movement Advisor and the Collaborative Decision Making tools on schedule at various locations. Furthermore, the report acknowledges that the free flight tools have demonstrated benefits and that the agency should have the data on TMA to make a decision about moving forward to phase 2 by March 2002. However, as we note in the report, FAA faces a significant technical challenge in ensuring that URET works with other air traffic control systems. Moreover, the data on URET's benefits reflect those of the prototype system. FAA is scheduled to deploy the first actual system in November 2001 and the last in February 2002—just 1 month before it plans to make an investment decision.
With this schedule, the actual system might not be operational long enough to gather sufficient data to measure its benefits. Furthermore, FAA has yet to overcome the operational challenge posed when controllers use TMA and must shift from the traditional distance-based method of metering air traffic to one based on time. If FAA cannot satisfactorily resolve these issues, the free flight program might not continue to show positive benefits and could experience cost overruns, delays, and performance shortfalls. The National Aeronautics and Space Administration expressed two major concerns. First, it felt that the benefits provided by the TMA tool justified its further deployment. Our initial conclusion in the draft report, that FAA lacked sufficient data to support deploying this tool to additional sites, was based on FAA’s initial evaluation plan, which required at least 1 year of operational data after each tool had been deployed. FAA officials now believe that waiting for full results from the evaluation plan before making a decision to move forward is no longer necessary because TMA's performance results are occurring more rapidly than anticipated. This report now acknowledges that the agency should have the data it needs to make a decision to move forward with this tool. Second, NASA felt that the report was unclear regarding the nature of our concerns about the reliability of TMA's data. The discussion in the draft report indicated that FAA lacked sufficient data to show that it had addressed our concerns with TMA. FAA officials have since provided this support, and this report has been revised accordingly. In addition, National Aeronautics and Space Administration officials provided technical clarifications, which we have incorporated into this report, as appropriate. (See appendix II for the National Aeronautics and Space Administration's comments.)
As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to interested Members of Congress; the Secretary of Transportation; the Administrator, Federal Aviation Administration; and the Administrator, National Aeronautics and Space Administration. We will also make copies available to others upon request. If you have questions about this report, please contact me at (202) 512-3650. Key contributors are listed in appendix III. Because of the importance of the free flight program to the future operation of our nation’s aviation system and the upcoming decision about whether to proceed to the next phase, the Chairmen of the Senate Committee on Commerce, Science, and Transportation and the Subcommittee on Aviation asked us to provide information to help them determine whether the Federal Aviation Administration (FAA) will be in a position to decide on moving to the next phase. This report discusses (1) the significant technical and operational issues that could impair the ability of the free flight tools to achieve their full potential and (2) the extent to which these tools will increase efficiency and capacity while helping to minimize delays in our nation’s airspace system. Our review focused on three free flight phase 1 tools—the User Request Evaluation Tool, the Traffic Management Advisor, and the passive Final Approach Spacing Tool—because they account for approximately 80 percent of FAA’s $630 million estimated investment for phase 1 and approximately 80 percent of FAA’s $717 million estimated investment for phase 2. We did not review the Surface Movement Advisor or the Collaborative Decision Making tools because generally they had been implemented at all phase 1 locations when we started this review and FAA does not intend to deploy their identical functionality in phase 2.
To obtain users’ insights into the technical and operational issues and the expected benefits from these tools, we held four formal discussion group meetings with nationwide user teams made up of controllers, technicians, and supervisors from all the facilities currently using or scheduled to receive the Traffic Management Advisor during phase 1. We also visited and/or held conference calls with controllers, technicians, and supervisors who used one or more of these tools in Dallas, Texas; southern California; Minneapolis, Minnesota; Memphis, Tennessee; Indianapolis, Indiana; and Kansas City, Kansas. We also reviewed applicable criteria for systems development and acquisition. Based on these criteria, we interviewed FAA officials in the Free Flight Program Office, the Office of Air Traffic Planning and Procedures, and the Office of Independent Operational Test and Evaluation. To review test reports and other documentation highlighting technical and operational issues confronting these tools, we visited FAA’s William J. Hughes Technical Center in Atlantic City, New Jersey, and FAA’s prime contractors that are developing the three free flight tools. We also visited the National Aeronautics and Space Administration’s Ames Research Center at Moffett Field, California, to understand how its early efforts to develop free flight tools are influencing FAA’s current enhancement efforts. To determine the extent to which the free flight tools will increase capacity and efficiency while helping to minimize delays, we analyzed the relevant legislative and Office of Management and Budget requirements that recognize the need for agencies to develop performance goals for their major programs and activities. We also interviewed FAA officials in the Free Flight Program Office and the Office of System Architecture and Investment for information on the performance goals of the free flight tools during phase 1.
In addition, we held discussions with officials from RTCA, which provides a forum for government and industry officials to develop consensus-based recommendations. We also reviewed documentation explaining how the tools are expected to and actually have helped increase system capacity and efficiency, thereby helping to minimize delays. We conducted our review from October 2000 through July 2001, in accordance with generally accepted government auditing standards. In addition to those named above, Nabajyoti Barkakati, Jean Brady, William R. Chatlos, Peter G. Maristch, Luann M. Moy, John T. Noto, and Madhav S. Panwar made key contributions to this report.
A comprehensive reassessment of agencies’ roles and responsibilities is central to any congressional and executive branch strategy that seeks to bring about a government that is not only smaller but also more efficient and effective. GPRA provides a legislatively based mechanism for Congress and the executive branch to jointly engage in that reassessment. In crafting GPRA, Congress recognized the vital role that consultations with stakeholders should have in defining agencies’ missions and establishing their goals. Therefore, GPRA requires agencies to consult with Congress and other stakeholders in the preparation of their strategic plans. These consultations are an important opportunity for Congress and the executive branch to work together in reassessing and clarifying the missions of federal agencies and the outcomes of agencies’ programs. Many federal agencies today are the product of years of accumulated responsibilities and roles as new social and economic problems have arisen. While adding the particular roles and responsibilities may have made sense at the time, the cumulative effect has been to create a government in which all too frequently individual agencies lack clear missions and goals and related agencies’ efforts are not complementary. Moreover, legislative mandates may be unclear and Congress, the executive branch, and other stakeholders may not agree on the goals an agency and its programs should be trying to achieve, the strategies for achieving those goals, and the ways to measure their success. For example, we reported that the Environmental Protection Agency (EPA), had not been able to target its resources as efficiently as possible to address the nation’s highest environmental priorities because it did not have an overarching legislative mission and its environmental responsibilities had not been integrated. 
As a result of these problems, EPA could not ensure that its efforts were directed at addressing the environmental problems that posed the greatest risk to the health of the U.S. population or the environment. To respond to these shortcomings, EPA is beginning to sharpen its mission and goals through its National Environmental Goals Project, a long-range planning and goal-setting initiative that, as part of EPA’s efforts under GPRA, is seeking to develop a set of measurable, stakeholder-validated goals for improving the nation’s environmental quality. The situation at EPA is by no means unique. Our work has shown that the effectiveness of other agencies, such as the Department of Energy and the Economic Development Administration, also has been hampered by the absence of clear missions and strategic goals. In other cases, related programs are fragmented across many agencies; such overlap can waste scarce resources and limit the overall effectiveness of the federal effort. For example, the $20 billion appropriated for employment assistance and training activities in fiscal year 1995 covered 163 programs that were spread over 15 agencies. Our work showed that these programs were badly fragmented and in need of a major overhaul. Moreover, in reviewing 62 programs that provided employment assistance and training to the economically disadvantaged, we found that most programs lacked very basic information needed to manage. Fewer than 50 percent of the programs collected data on whether program participants obtained jobs after they received services, and only 26 percent collected data on wages that participants earned. Both houses of Congress in recent months have undertaken actions to address the serious shortcomings in the federal government’s employment assistance and training programs, although agreement has not been reached on the best approach to consolidation. In another example, we identified 8 agencies that are administering 17 different programs assisting rural areas in constructing, expanding, or repairing water and wastewater facilities.
These overlapping programs often delayed rural construction projects because of differences in the federal agencies’ timetables for grants and loans. Also, the programs experienced increased project costs because rural governments had to participate in several essentially similar federal grant and loan programs with differing requirements and processes. We found that, because of the number and complexity of programs available, many rural areas needed to use a consultant to apply for and administer federal grants or loans. The examples I have cited today of agencies with unclear missions and other agencies that are duplicating each other’s efforts are not isolated cases. Our work that has looked at agencies’ spending patterns has identified other federal agencies whose missions deserve careful review to ensure against inappropriate duplication of effort. As I noted in an appearance before the Senate Committee on Governmental Affairs last May, in large measure, problems arising from unclear agency missions and goals and overlap and fragmentation among programs can best be solved through an integrated approach to federal efforts. Such an approach looks across the activities of individual programs to the overall goals that the federal government is trying to achieve. The GPRA requirement that agencies consult with Congress in developing their strategic plans presents an important opportunity for congressional committees and the executive branch to work together to address the problem of agencies whose missions are not well-defined, whose goals are unclear or nonexistent, and whose programs are not properly targeted. Such consultations will be helpful to Congress in modifying agencies’ missions, setting better priorities, and restructuring or terminating programs. The agencies’ consultations with Congress on strategic plans will begin in earnest in the coming weeks and months. 
The Office of Management and Budget’s (OMB) guidance to agencies on GPRA requirements for strategic planning said that agencies would be asked to provide OMB with selected parts of their strategic plans this year. Some departments, such as the Department of the Treasury, are scheduling meetings on their strategic plans with the appropriate authorization, appropriation, and oversight committees. As congressional committees work with agencies on developing their strategic plans, they should ask each agency to clearly articulate its mission and strategic goals and to show how program efforts are linked to the agency’s mission and goals. Making this linkage would help agencies and Congress identify program efforts that are neither mission-related nor contributing to an agency’s desired outcomes. It would also help Congress to identify agencies whose efforts are not coordinated. As strategic planning efforts proceed, Congress eventually could ask OMB to identify programs with similar or conflicting goals. As was to be expected during the initial efforts of such a challenging management reform effort, the integration of GPRA into program operations in pilot agencies has been uneven. This integration is important because Congress intended that outcome-oriented strategic plans would serve as the starting points for agencies’ goal-setting and performance measurement efforts. Ultimately, performance information is to be used to inform an array of congressional and executive branch decisions, such as those concerning allocating scarce resources among competing priorities. To help accomplish this integration, GPRA requires that beginning with fiscal year 1999, all agencies are to develop annual performance plans that provide a direct linkage between long-term strategic goals and what program managers are doing on a day-to-day basis to achieve those goals.
These plans are to be submitted to OMB with the agencies’ budget submissions and are expected to be useful in formulating the president’s budget. Congress can play a decisive role in the implementation of GPRA by insisting that performance goals and information be used to drive day-to-day activities in the agencies. Consistent congressional interest at authorization, appropriation, budget, and oversight hearings on the status of an agency’s GPRA efforts, performance measures, and uses of performance information to make decisions will send an unmistakable message to agencies that Congress expects GPRA to be thoroughly implemented. Chairman Clinger and the Committee on Government Reform and Oversight took an important first step last year when they recommended that House committees conduct oversight to help ensure that GPRA and the CFO Act are being aggressively implemented. They also recommended that House committees use the financial and program information required by these acts in overseeing agencies within their jurisdiction. A further important step toward sharpening agencies’ focus on outcomes would be for congressional committees of jurisdiction to hold comprehensive oversight hearings—annually or at least once during each Congress—using a wide range of program and financial information. Agencies’ program performance information that can be generated under GPRA and the audited financial statements that are being developed to comply with the Government Management Reform Act (GMRA) should serve as the basis for these hearings. GMRA expanded to all 24 CFO Act agencies the requirement for the preparation and audit of financial statements for their entire operations, beginning with those for fiscal year 1996. Also, consistent with GMRA, OMB is working with six agencies to pilot the development of consolidated accountability reports.
By integrating the separate reporting requirements of GPRA, the CFO Act, and other specified acts, the accountability reports are intended to show the degree to which an agency met its goals, at what cost, and whether the agency was well run. I have endorsed the concept of an integrated accountability report and was pleased to learn that OMB plans to develop guidance, which is to be based on the experiences of the initial six pilots, for other agencies that may wish to produce such reports for fiscal year 1996. I believe that by asking agencies the following or similar questions, Congress will both lay the groundwork for communicating to agencies the importance it places on successful implementation of GPRA and obtain important information on the status of agencies’ GPRA efforts. The experiences of many of the leading states and foreign countries that have implemented management reform efforts similar to GPRA suggest that striving to measure outcomes will be one of the most challenging and time-consuming aspects of GPRA. Nevertheless, measuring outcomes is a critical aspect of GPRA, particularly for informing the decisions of congressional and high-level executive branch decisionmakers as they allocate resources and determine the need for and the efficiency and effectiveness of specific programs. As expected at this stage of GPRA’s implementation, we are finding that many agencies are having difficulty in making the transition to a focus on outcomes. For example, to meet the goals in its current GPRA performance plan, the Small Business Administration (SBA) monitors its activities and records accomplishments largely on the basis of outputs, such as an increased number of Business Information Centers. Such information is important to SBA in managing and tracking its activities. 
However, to realize the full potential of outcome-oriented management, SBA needs to take the next step of assessing, for example, the difference the additional Centers make, if any, to the success of small businesses. SBA also needs to assess whether the Centers and the services they provide are the most cost-effective way to achieve SBA’s goals. Similarly, the goals in the Occupational Safety and Health Administration’s (OSHA) GPRA performance plan are not being used to set the direction for OSHA and the measurable outcomes it needs to pursue. For example, one of OSHA’s goals is to “focus resources on achieving workplace hazard abatement through strong enforcement and innovative incentive programs.” Focusing resources may help OSHA meet its mission, but this represents a strategy rather than a measurable goal. Officials leading OSHA’s performance measurement efforts recognize that OSHA’s goals are not sufficiently outcome-oriented and that OSHA needs to make significant progress in this area to provide a better link between its efforts and the establishment of safer and healthier workplaces. We also are finding instances where pilot agencies could better ensure that their GPRA performance goals include all of their major mission areas and responsibilities. It is important that agencies supply information on all of their mission areas in order to provide congressional and executive branch decisionmakers with a complete picture of the agency’s overall efforts and effectiveness. For example, the Bureau of Engraving and Printing’s GPRA performance plans contain a goal for the efficient production of stamps and currency. However, these performance plans do not address an area that the Bureau cites as an important part of its mission—security. The Bureau has primary responsibility for designing and printing U.S. currency, which includes incorporating security features into the currency to combat counterfeiting.
The importance of security issues has been growing recently because of heightened concern over currency counterfeiting. Foreign counterfeiters especially are becoming very sophisticated and are producing very high-quality counterfeit notes, some of which are more difficult to detect than previous counterfeits. The value of an agency’s performance information arises from the use of that information to improve the efficiency and effectiveness of program efforts. By using performance information, an agency can set more ambitious goals in areas where goals are being met and identify actions needed to meet those goals that have not been achieved. However, the pilot agencies’ reports often provided little such information in cases where goals were not met. In the pilot reports we reviewed, 109 of the 286 annual performance goals, or about 38 percent, were reported as not met. GPRA requires that agencies explain why goals were not met and provide plans and schedules for achieving those goals. However, for the 109 unmet goals we examined, the pilot reports explained the reason the goal was not met in only 41 of these cases. Overall, the pilot reports described actions that pilots were taking to achieve the goal for 27, or fewer than 25 percent, of the unmet goals. Moreover, none of the reports included plans and schedules for achieving unmet goals. Discussions of how performance information is being used are important because GPRA performance reports are to be one of Congress’ major accountability documents. As such, these reports are to help Congress assess agencies’ progress in meeting goals and determine whether planned actions will be sufficient to achieve unmet goals, or, alternatively, whether the goals should be modified. As you are aware, I have long been concerned about the state of the federal government’s basic financial and information management systems and the knowledge, skills, and abilities of the staff responsible for those systems.
Simply put, GPRA cannot be fully successful unless and until these systems are able to provide decisionmakers with the program cost and performance information needed to make decisions. Because these financial systems are old and do not meet users’ needs, they have become the single greatest barrier to timely and meaningful financial reporting. Self-assessments by the 24 CFO Act agencies showed that most agency systems are not capable of readily producing annual financial statements and do not comply with current system standards. The CFO Council has designated financial management systems as its number one priority. Officials from leading organizations we have studied emphasized the importance of training their managers and staffs and said that such training was critical to the success of their reform efforts. We are concerned that most federal agencies have not made progress in developing plans to provide this essential training in the creative and low-cost ways that the current budget environment demands. I fully appreciate that, in this environment, maintaining existing budgets devoted to management systems and training is a formidable challenge. However, continued—and in some cases, augmented—investment in these areas is important to ensure that managers have the information and skills needed to run downsized federal organizations efficiently. In passing GPRA, Congress recognized that, in exchange for shifting the focus of accountability to outcomes, managers must be given the authority and flexibility to achieve those outcomes. GPRA therefore includes provisions to allow agencies to seek relief from certain administrative procedural requirements and controls. Agencies’ efforts to focus on achieving results are leading a number of them to recognize the need to change their core business processes to better support the goals they are trying to achieve. For example, the U.S.
Army Corps of Engineers’ Civil Works Directorate, Operation and Maintenance program, changed its core processes by means of several initiatives, including decentralizing its organizational structure and delegating decisionmaking authority to project managers in the field. In exchange for this delegated decisionmaking, managers at the Corps of Engineers increasingly are being held accountable for achieving results. The Corps has estimated that, by changing its core processes, it has saved about $6 million annually, including 175 staff years. Sustained congressional and executive branch attention to agencies’ efforts will be critical to continuing the momentum needed to ensure the aggressive implementation of GPRA. This concludes my prepared statement. I would be pleased to respond to any questions.
| GAO discussed: (1) the Government Performance and Results Act's (GPRA) potential contributions to congressional and executive branch decisionmaking; and (2) Congress' role in implementing GPRA. GAO noted that: (1) more federal agencies are recognizing the benefits of focusing on outcomes rather than activities to improve their programs' efficiency and effectiveness; (2) agencies cannot quickly and easily shift their focus because outcomes can be difficult to define and measure and major changes in services and processes may be required; (3) strong and sustained congressional attention is needed to ensure GPRA success; (4) GPRA provides a mechanism for reassessing agencies' missions and focusing programs while downsizing and increasing efficiency; (5) unclear goals and missions have hampered the targeting of program resources and caused overlaps and duplications; (6) Congress needs to hold periodic comprehensive oversight hearings and to gather information on measuring outcomes and determine how GPRA performance goals and information drive agencies' daily operations, how agencies use performance information to improve their effectiveness, agencies' progress in improving their financial and information systems and staff training and recruitment, and how agencies are aligning their core business processes to support mission-related outcomes. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
No one commonly accepted definition of SWFs exists, although the feature of a government-controlled or government-managed pool of assets is a part of most definitions. Government officials and private researchers use varying characteristics to categorize SWFs, and depending on the source and primary defining characteristic, different types of funds may be included or excluded. Definitions have been developed by Treasury, IMF, and private researchers. Some definitions include pension funds or investments made from foreign currency reserves maintained in central banks. An explanation of how we chose funds to include in our analysis is in appendix I. Countries that are major exporting nations or natural resource providers may accumulate large amounts of foreign currency reserves through the sale of their manufactured goods or natural resources to other nations. While all countries need some amount of foreign currency reserves to meet their international payment obligations, in some cases countries may accumulate currency reserves in excess of the amounts needed for current or future obligations. Some countries invest their foreign exchange reserves in assets such as the sovereign debt of other countries, including securities issued by Treasury to fund U.S. government operations. However, some countries have formed SWFs to invest a portion of their excess foreign currency reserves in assets likely to earn higher returns, such as the equity shares issued by foreign publicly traded companies. Some countries with current account (a broad measure of international flows that includes trade balances) surpluses have created SWFs. These include countries that are major exporters of commodities or natural resources, such as oil, as well as those, such as China, that are exporters of manufactured goods. In contrast, as the world’s largest importer of goods and natural resources, the United States has run increasingly large current account deficits since the early 1990s. 
The current account deficit of the United States was $731.2 billion in 2007, whereas Asian countries with SWFs had a combined current account surplus of over $400 billion and oil-producing countries with SWFs had a combined surplus of about $338 billion (see fig. 1). These current account surpluses have led to a buildup of foreign currency reserves in some countries. Since 1995, currency reserves in industrial economies have more than doubled and currency reserves in developing economies have increased sevenfold. Foreign currency accumulation has been especially large among oil-producing countries and Asian countries with large trade surpluses, especially with the United States. China, Korea, Japan, and Russia hold the largest quantities of foreign currency reserves. Asian exporting countries’ combined current account surpluses grew from $53 billion in 2000 to $443 billion in 2007. Currency reserves have accumulated in SWF countries (see fig. 2). The U.S. dollar accounted for slightly less than two-thirds of total central bank foreign reserve holdings of all countries as of the first quarter of 2008. For oil-exporting countries with SWFs, which include some nations in the Middle East, as well as Norway and Russia, oil revenues remained relatively stable from 1992 to 1998. But in 1999, oil prices—as measured by the annual weighted world average price per barrel—began to rise (see fig. 3). Consequently, oil revenues have increased 561 percent for the major exporting nations from 1992 to 2006, the year for which the latest data were available. These revenue increases have occurred as the price of oil per barrel has increased from $23 in January 2000 to well over $100 in the first half of 2008, including over $137 in July 2008. An interagency group is responsible for reviewing some foreign investment transactions in the United States. CFIUS, initially established by executive order in 1975, reviews some foreign investments in U.S. 
businesses, including some investments by SWFs. Section 721 of the Defense Production Act authorizes the President to suspend or prohibit mergers, acquisitions, or takeovers that could result in foreign control of a U.S. business if the transaction threatens to impair national security. The President delegated his section 721 authority to review individual transactions to CFIUS. CFIUS and its structure, role, process, and responsibilities were formally established in statute in July 2007 with the enactment of the Foreign Investment and National Security Act (FINSA). FINSA amends section 721 of the Defense Production Act to expand the illustrative list of factors to be considered in deciding which investments could affect national security and brings greater accountability to the CFIUS review process. Under FINSA, foreign government-controlled transactions, including investments by SWFs, reviewed by CFIUS must be subjected to an additional 45-day investigation beyond the initial 30-day review, unless a determination is made by an official at the deputy secretary level that the investment will not impair national security. CFIUS reviews transactions solely to determine their effect on national security, including factors such as the level of domestic production needed for projected national defense requirements and the capability and capacity of domestic industries to meet national defense requirements. If a transaction proceeds to a 45-day investigation after the initial 30-day review and national security concerns remain after the investigation, the President may suspend or prohibit a transaction. According to Treasury, for the vast majority of transactions, any national security concerns are resolved without needing to proceed to the President for a final decision. The law provides that only those transactions for which the President makes the final decision may be disclosed publicly. 
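The staged review sequence described above can be sketched as a small decision function. This is an illustrative model only; the function name, inputs, and return strings are assumptions for demonstration, not part of any official CFIUS system.

```python
# Hypothetical sketch of the CFIUS review stages under FINSA as described
# above: an initial 30-day review; a 45-day investigation for foreign
# government-controlled deals (unless a deputy-secretary-level official
# determines the deal will not impair national security) or when concerns
# remain; and a presidential decision if concerns persist.
# All names and inputs here are illustrative assumptions.

def cfius_review(government_controlled: bool,
                 deputy_level_clearance: bool,
                 concerns_after_review: bool,
                 concerns_after_investigation: bool) -> str:
    """Return the stage at which a covered transaction is resolved."""
    # Foreign government-controlled transactions (including SWF
    # investments) must proceed to a 45-day investigation unless
    # cleared at the deputy secretary level.
    needs_investigation = (government_controlled
                           and not deputy_level_clearance) \
        or concerns_after_review
    if not needs_investigation:
        return "cleared at 30-day review"
    if not concerns_after_investigation:
        return "cleared at 45-day investigation"
    # Unresolved concerns reach the President, who may suspend or
    # prohibit the deal; only these decisions may be disclosed publicly.
    return "presidential decision"

print(cfius_review(True, False, False, False))   # SWF deal, no waiver
print(cfius_review(False, False, False, False))  # ordinary deal
```

Under this sketch, a government-controlled investment with no remaining concerns still passes through the 45-day investigation, reflecting the default FINSA treatment of such transactions described above.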
Information about SWFs publicly reported by SWFs, the governments that control them, international organizations, and private researchers provides a limited picture of their size, investments, and other descriptive factors. Our analysis found that the amount and level of detail that SWFs and their governments report about their activities vary significantly, and international organizations that collect and publish various statistics about countries’ finances do not consistently report on SWFs. As a result, some of the available information about the size of certain of these funds consists of estimates made by private researchers. Based on a combination of data or estimates from these various sources, SWFs currently hold assets estimated to be valued from $2.7 trillion to $3.2 trillion. Several researchers expect SWFs to grow substantially in the coming years. In our analysis of the publicly available government sources and private researcher lists of SWFs, we identified 48 SWFs across 34 countries that met our criteria. These include funds from most regions of the world. Of the 48 SWFs we identified, 13 were in the Asia and Pacific region. Ten were located in the Middle East, with the remaining 25 spread across Africa, North America, South America, the Caribbean, and Europe. Some countries, such as Singapore, the United Arab Emirates, and the Russian Federation, have more than one entity that can be considered an SWF. Some SWFs have existed for many years, but recently a number of new funds have been created. For example, the Kuwait Investment Authority and the Kiribati Revenue Equalization Reserve Fund have existed since 1953 and 1956, respectively. The Kuwait Investment Authority was founded to invest the proceeds of natural resource wealth and provide for future generations in Kuwait, and the Kiribati Revenue Equalization Reserve Fund was formed to manage revenues from the sale of Kiribati’s phosphate supply. 
However, since 2000, many commodity- and trade-exporting countries have set up new SWFs. These funds have grown as a result of rising exports of commodities such as oil, whose prices have also risen. Of the 48 funds we identified, 28 have been established since 2000, and 20 of these can be classified as commodity funds that receive revenues from selling commodities such as oil (see fig. 4). Based on our review of public disclosures from SWFs on government Web sites, we determined that the extent to which, and the level of detail at which, SWFs publicly report information on their sizes, specific holdings, or investment objectives varied. Based on these reviews, we found that 29 of 48 funds publicly disclosed the value of their assets since the beginning of 2007. According to documents published by the countries, 17 of these 29 report asset figures that are subject to an annual audit by either an international public accounting firm or the country’s national audit agency. In total, 36 of the 48 funds provided publicly reported size estimates, though some date back to 2003. While most provided a specific value, 2 reported only a minimum value. Among the largest 20 funds, 13 publicly reported total assets. Of the funds in our analysis, 24 of 48 disseminated information on fund-specific sites and 21 used other government Web sites, such as those belonging to the finance ministry or central bank. Thirty funds reported at least some information on their investment activities. Of the largest 20 funds, 12 reported this information. Only 4 of the 48 funds fully disclose the names of all the entities in which they have invested. The level of detail reported by other funds varied.
For example, 21 funds reported information about some of their investments, such as the names of their significant investments, while others disclosed only the regional breakdown of their holdings or gave only general statements about the types of assets and sectors in which they invested or planned to invest. These assets usually included equities, bonds, real estate, or other alternative investments. We found that about 77 percent of the 48 SWFs publicly reported the purpose of their funds, with 13 of the largest 20 funds doing so. In many cases, fund purposes included using the country’s financial or natural resources to earn investment returns intended to benefit the country in various ways, including providing income for future generations, balancing the government’s budget, or preserving foreign currency purchasing power. The information publicly reported about SWFs varies in part because of different disclosure requirements across countries. The nature, frequency, and content of any SWF information are reported at each country’s discretion. Some countries may restrict the type of reporting that can be released. For example, according to documents published by the government of Kuwait, Kuwaiti law requires the Kuwait Investment Authority to submit a detailed report on its activities to the Kuwaiti government authorities, but prohibits it from disclosing this information to the public. In contrast, according to Norwegian government documents, Norwegian law requires that the country’s SWF publicly release comprehensive and detailed information regarding its assets, objectives, and current holdings on a quarterly and annual basis. Some funds that are not required to disclose information have begun to do so voluntarily. For example, Temasek Holdings, an SWF located in Singapore, is not required by Singapore law to release financial performance reports, but it began doing so in 2004, according to a Temasek Holdings official. 
This official told us that each SWF operates in a different environment and must decide on the appropriate amount of transparency. The official said that since Temasek Holdings began publishing its annual review in 2004, it has disclosed more information each year. The extent to which other large private classes of investors disclose information about their assets and investments also varies. For example, investors such as hedge funds and private equity funds that are not investment companies under U.S. securities laws have as their investors primarily institutions or high net worth individuals and are generally not required to publicly disclose information about their investment portfolios. In contrast, U.S. mutual funds are generally required to disclose certain information about their investment portfolios and portfolio transactions. While some SWFs disclose holdings information, officials of other SWFs expressed concerns that disclosures could be used by other investors in ways that could reduce the funds’ investment earnings. International organizations collect and publish various statistics about countries’ finances, but report only limited information on SWFs. Until recently there has been a lack of guidance in macroeconomic statistical standards on the treatment of SWFs and no systematic review of whether the assets of these funds are included in the data reported. IMF officials have initiated an effort to increase the amount and specificity of SWF activities in IMF documents. Currently, IMF members are expected to disclose a range of fiscal and macroeconomic statistics, including countries’ balance of payments and their international investment position, that are made public in various IMF reports and IMF’s World Economic Outlook and Global Financial Stability Report. The data that countries report may include the level of reserves and the amount of external assets they hold. 
However, the coverage of SWFs in these statistics is not uniform, not least because SWFs can be included in different accounts depending on specific statistical criteria. According to IMF staff, the countries themselves determine whether to include the value of their SWF assets in their reserve assets or separately as external assets. In some cases, countries do not report any information about their SWF. Further, some member countries do not submit data on their international investment position to IMF. Analyzing a selection of 21 countries with SWFs, IMF staff found that only 11 included the value of their SWFs’ assets in either their balance of payments or international investment position data. IMF staff noted that members are not required to report the value of the SWF holdings as a separate line item and no member currently does so. In addition to information from required data reporting, IMF staff also collected some information about SWFs through their consultations with individual countries. IMF staff periodically hold policy discussions—called Article IV consultations from the section of the IMF rules that requires them—in member countries to monitor their economic developments and outlook and assess whether their policies promote domestic and external stability. According to IMF staff, Article IV staff reports are expected to focus strictly on those issues. We reviewed publicly available Article IV reports, or summary reports in several cases, for the 34 countries that we identified as having SWFs. Based on this analysis, we found information about the size of a country’s SWF in the Article IV reports or public summaries for 13 of these countries. 
The extent of the information on SWFs publicly reported from the Article IV consultations varied, with some documents only noting that the country had such a fund and others providing the current level of assets in the SWF and country officials’ expectations for growth of the SWF through revenues or fiscal transfer. IMF is implementing changes to its reporting that could expand the official data available on SWF activities. IMF officials have stated that collecting additional data on SWFs is important because of the fiscal, monetary, and economic policy impacts that the funds could have for IMF member countries and for the global economy, given their increasing prevalence and growth. IMF expects to implement new reporting guidance in 2009 that would call for countries to separately report their SWF holdings on a voluntary basis. While this is a positive development that could further expand the official information available, its success depends on the degree to which countries participate. In addition, IMF is including guidance on how to properly classify SWF assets in its latest version of the balance of payments manual, which it expects to publish in late 2008. The current version of this manual was last updated in 1993 and does not address SWFs. In recognition of the growing number of SWFs, IMF officials told us that they began to address the methodological issues related to a definition of SWFs and SWF assets in 2005 and subsequently initiated an international consultation on the issue. IMF expects that this additional detail will provide an understanding of the location of SWF assets, which, depending on certain criteria, can be reported in either the reserve assets or the external accounts or in other accounts of a country’s financial accounts data. 
IMF notes that this new reporting item will help to facilitate proper identification of SWFs and may contribute to greater transparency and to a better understanding of their impact on the country’s external position and reserve assets. Treasury staff told us that they were involved in the group that considered these changes, and while the United States and some other countries would have preferred that the proposed SWF reporting be mandatory, the group chose the voluntary option. In addition, IMF is facilitating and coordinating the work of the International Working Group of Sovereign Wealth Funds that is deliberating on a set of Generally Accepted Principles and Practices relating to sovereign wealth funds. These are intended to set out a framework of sound operation for SWFs. The specific elements are likely to come from reviews of good SWF practices. These principles are also aimed at improving the understanding of SWFs in both their home countries and recipient countries. Other organizations involved in monitoring international financial developments do not regularly report on SWF activities. For example, the Organisation for Economic Co-operation and Development (OECD), an organization of 30 countries responsible for supporting global economic growth and trade, collects data on foreign direct investment inflows and outflows and national accounts information from its member countries and some selected nonmember countries. These data, however, do not specifically identify SWFs. Further, only 9 countries included in OECD’s surveys are known to have SWFs. Many countries with SWFs are not members of OECD. A recent document from OECD indicates that it is using other publicly available sources of data to estimate the size and asset allocation of selected SWFs. 
Recognizing the growing importance of SWFs, Group of Seven finance ministers and central bank governors, including officials from Treasury, proposed that OECD develop investment policy guidelines for recipient countries of SWF investment. According to recent OECD publications, the organization is focusing its SWF work on these guidelines. Because not all SWFs disclose their activities publicly, information about the size of some SWFs comes from estimates published by private researchers, including investment bank researchers and nonprofit research and policy institutions. Though their methodologies and data sources vary, these researchers generally begin by reviewing publicly available data and statements from national authorities of SWF countries, press releases, and other research reports. In some cases, they use confidential data, such as information provided to their firms’ trading staffs by SWF officials or private conversations with their firms’ investment managers. However, they are usually prohibited from publicly disclosing the sources of information received confidentially. At least one researcher we spoke with also used certain IMF balance of payments and World Economic Outlook statistics as proxies for SWF outflows and purchases. The researchers often make projections of the level of foreign reserves, commodity prices, amounts of transfers from reserves to the SWF, and the assumed rate of return for the fund to develop judgmental estimates of the current size of assets held by SWFs. These estimates have been published publicly by these researchers as part of their analysis of trends affecting world financial markets. The researchers we spoke with acknowledged that the accuracy of their estimates is primarily limited by the sparse official data on SWFs. 
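The projection approach described above can be sketched in a few lines of code. The following is only a minimal illustration of the method, and all figures (starting assets, annual transfers, and the assumed rate of return) are hypothetical rather than drawn from any actual fund.

```python
# Illustrative sketch of the researchers' projection approach: roll an assumed
# starting asset level forward, applying an assumed rate of return and adding
# projected annual transfers (e.g., from oil revenues or reserves).
# All figures below are hypothetical, not actual SWF data.

def project_fund_size(initial_assets, annual_transfers, annual_return):
    """Project fund assets year by year; returns the path including year 0."""
    path = [initial_assets]
    for transfer in annual_transfers:
        # Grow last year's assets at the assumed return, then add the transfer.
        path.append(path[-1] * (1 + annual_return) + transfer)
    return path

# Hypothetical fund: $300 billion today, $40 billion transferred in each of
# the next 5 years, 6 percent assumed annual return.
path = project_fund_size(300.0, [40.0] * 5, 0.06)
print([round(v, 1) for v in path])
```

Because the result compounds the assumed rate of return and transfer levels over several years, even small differences in those assumptions produce widely divergent estimates, which is consistent with the limitations the researchers acknowledged.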
Researchers also cited other limitations, including difficulty in verifying underlying assumptions, such as the level of transfers from SWFs or the projected rates of return of the funds, and the questionable accuracy and validity of data they use from secondary sources to support their models. By analyzing the information reported by individual SWFs, IMF data, and private researchers’ estimates, we estimate that the total assets held by the 48 SWFs we identified range from $2.7 trillion to $3.2 trillion (see app. II for the list of funds). Many of these estimates were published within the past year, before the significant rise in oil prices in the first half of 2008. The largest funds held the majority of these assets, with the largest 20 funds representing almost 95 percent and the largest 10 more than 80 percent of the total SWF assets. The largest 20 funds had assets estimated to range from $2.5 trillion to $3.0 trillion, as shown in figure 5. Although sizeable, SWF assets are a small portion of overall global assets and are smaller than the holdings of several other large classes of investors. The estimated total size of the SWFs we identified, $2.7 trillion to $3.2 trillion, constituted about 1.6 percent of the estimated $190 trillion of financial assets outstanding globally as of the end of 2006. The estimated SWF holdings we identified likely exceed those of hedge funds, which most researchers estimated to be about $2 trillion. However, according to an estimate by the consulting firm McKinsey Global Institute, assets in pension funds ($28.1 trillion) and mutual funds ($26.2 trillion) exceed those of SWFs by a large margin. SWF assets are expected to grow significantly: researchers predict they will reach between $5 trillion and $13.4 trillion by 2017, and IMF staff estimated that assets in SWFs will grow to between $6 trillion and $10 trillion in the next 5 years.
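As a rough arithmetic check on these growth projections, one can compute the compound annual growth rate implied by moving from the current asset range to the projected 2017 levels. The 9-year horizon and the pairing of baselines with projections below are our own assumptions for illustration.

```python
# Rough arithmetic check: what compound annual growth rate (CAGR) would take
# estimated current SWF assets to the projected 2017 levels? The 9-year
# horizon (2008-2017) is an assumption made for this illustration.

def implied_cagr(current, future, years):
    """Compound annual growth rate that turns `current` into `future`."""
    return (future / current) ** (1 / years) - 1

# In trillions of dollars; current range $2.7-$3.2, projected $5-$13.4 by 2017.
low = implied_cagr(3.2, 5.0, 9)    # conservative: high base, low projection
high = implied_cagr(2.7, 13.4, 9)  # aggressive: low base, high projection
print(f"implied annual growth: {low:.1%} to {high:.1%}")
```

The spread, from roughly 5 percent to roughly 19 percent per year, illustrates how much the projections differ in their underlying assumptions about oil prices, transfers, and returns.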
The variation in estimates largely reflects researchers’ use of different methods and assumptions about future economic conditions. Though their methodologies varied, the researchers generally relied on several common factors, most notably changes in oil prices. Several researchers stated that if oil prices rise higher than their projections, revenues going to oil-based SWFs will likely increase and the assets could grow beyond currently estimated levels. Other factors include growth in foreign exchange reserves, the amount of transfers from surpluses to SWFs, the persistence of trade imbalances, the rate of return or performance of an SWF, and variation in exchange rate regimes across SWF countries. BEA and Treasury are charged with collecting and reporting information on foreign investment in the United States, but the extent to which SWFs have invested in U.S. assets is not readily identifiable from such data. Only a few SWFs reported specific information on their U.S. investments. Some individual SWF investments in U.S. assets can be identified from reports filed by investors and issuers as required by U.S. securities laws, but these filings would not necessarily reflect all such investments during any given time period. Further, some private data collection entities also report information on specific transactions by SWFs, but these also may not capture all activities. Two U.S. agencies, Treasury and Commerce’s BEA, collect and report aggregate information on foreign investment in the United States that includes SWF investments. To provide information to policymakers and to the public, Congress enacted the International Investment Survey Act of 1976 (subsequently broadened and redesignated as the International Investment and Trade in Services Survey Act [International Investment Survey Act]), which authorizes the collection and reporting of information on foreign investment in the United States.
The act requires that a benchmark survey of foreign direct investments and foreign portfolio investments in the United States be conducted at least once every 5 years. Under this authority, BEA collects data on foreign direct investment in the United States, defined as the ownership of 10 percent or more of a business enterprise. BEA collects the data on direct investment in the United States by both public and private foreign entities, which by definition would generally include SWFs, by surveying U.S. companies regarding foreign ownership. The data are used to calculate U.S. economic accounts, including the U.S. international investment position data. Treasury collects data on foreign portfolio investment in the United States, defined as foreign investments that are not foreign direct investments. Treasury collects the data through surveys of U.S. financial institutions and others. These surveys collect data on ownership of U.S. assets by foreign residents and foreign official institutions. Officials from these agencies use these data in computing the U.S. balance of payments accounts and the U.S. international investment position and in the formulation of international economic and financial policies. The data are also used by agencies to provide aggregate information to the public on foreign portfolio investments, including reporting this information periodically in the monthly Survey of Current Business. SWF investment holdings are included in the foreign investment data collected by Treasury and BEA, but cannot be specifically identified because of data collection limitations and restraints on revealing the identity of reporting persons and investors. BEA’s foreign direct investment data are published in the aggregate and do not identify the owner of the asset. BEA also aggregates the holdings of private and government entities for disclosure purposes. As a result, the extent to which SWFs have made investments of 10 percent or more in a U.S. 
business, while included as part of the foreign direct investment total, cannot be identified from these data. Treasury’s portfolio investment data collection and reporting separates foreign official portfolio investment holdings, which include most SWFs, from foreign private portfolio investment. However, the information that is reported to Treasury does not include the specific identity of the investing organization; thus the extent of SWF investment within the overall foreign official holdings data cannot be identified. In addition, Treasury officials reported that some SWF investments may be classified as private if the investments are made through private foreign intermediaries, such as investment banks, or if an SWF is operated on a subnational level, such as by a state or a province of a country, as those types of organizations are not included in Treasury’s definition of official government institutions. Both BEA and Treasury stated that the data published do not include the identity of specific investors and are aggregated to ensure compliance with the statutory requirement that collected information not be published or disclosed in a manner in which a reporting person can be identified. Figure 6 illustrates how data on SWF investments are included but are not specifically identifiable in the data collected or reported by these agencies. The data BEA and Treasury collect include the total amount of foreign direct investment and the total amount of portfolio investment by foreign official institutions, but the extent of SWF investments in either category cannot be determined. The data collected on both direct and portfolio investments are used by BEA in computing the U.S. international investment position, published annually in its July issue of the Survey of Current Business. The U.S. 
international investment position data show that foreign investors, including individuals, private entities, and government organizations, owned assets in the United States in 2007 valued at approximately $20.1 trillion. As shown in figure 6, foreign direct investment, which includes all direct investments by SWFs, totaled $2.4 trillion in 2007 (shown in the line “Direct investment at current cost”), up from $1.4 trillion in 2000. Foreign official portfolio investment holdings, which include SWF investments, totaled $3.3 trillion in 2007 (shown in the line “Foreign official assets in the United States”), up from $1 trillion in 2000. BEA officials stated that within this total many of the SWF portfolio investment holdings are classified as “Other foreign official assets,” since this subcategory reflects transactions by foreign official agencies in stocks and bonds of U.S. corporations and in bonds of state and local governments. These reported holdings totaled roughly $404 billion in 2007, up from $102 billion in 2000. To the extent that SWFs are invested in U.S. government securities, those holdings are included in the “U.S. Treasury securities” or “Other” subcategories of “U.S. government securities” under “Foreign official assets in the United States.” Bank accounts or money market instruments held by SWFs are included in “U.S. liabilities reported by U.S. banks” under “Foreign official assets in the United States.” While BEA and Treasury data cannot be used to identify the total extent of SWF investment, these data show that the United States has been receiving more investment over time from countries with SWFs and from foreign official institutions. BEA data on foreign direct investment by country show an increase in foreign direct investment in the United States in recent years from countries with SWFs.
These investments would include those made by private individuals and businesses and by any government entities in those countries—including their SWFs. As figure 7 illustrates, foreign direct investment holdings from countries with SWFs have increased from $173 billion in 2000 to roughly $247 billion in 2006. Although the exact extent cannot be determined, some of this increase is likely from SWF investments. Similarly, Treasury data show that portfolio investment from foreign official institutions, which could include SWFs, has increased in all asset classes, including equities. Treasury’s portfolio investment by asset class data show that in 2007, approximately $9.1 trillion of all U.S. long-term securities were foreign owned, and of that, $2.6 trillion were held by official institutions. The $2.6 trillion in foreign official securities holdings includes almost $1.5 trillion in U.S. Treasury debt and $0.3 trillion in U.S. equities. While official institutions owned only a small share of the total amount of foreign-owned U.S. equities, their investment has grown dramatically, increasing from $87 billion in 2000 to $266 billion in 2007. (See fig. 8.) Treasury officials reported that the recent rise in U.S. equity ownership by foreign official institutions in the United States may reflect investments from SWFs, since SWFs are intended as vehicles to diversify a country’s reserves into alternative assets. Although BEA and Treasury may be able to adjust their data collection activities to obtain more information about SWF investment, such changes may not result in more detailed public disclosure and will entail increased costs, according to agency officials. BEA officials told us that they may be able to use the information they currently collect to differentiate between foreign official and private owners of direct investment holdings in the data collected by utilizing the ultimate beneficial owner codes assigned to transactions in their surveys. 
They stated that they are considering reporting the information in this manner. This breakout would help to narrow the segment of foreign direct investment that contains SWF investments, but it would not identify SWF investment specifically since it would still combine SWF investments with other official government investments. Regarding portfolio investment, Treasury officials reported that while more detailed information would allow them to report SWF investment separately from that of other foreign official institutions, Treasury would be prohibited from releasing such information publicly in cases where it would make the foreign owners easily identifiable. According to a Treasury official, some business groups advised Congress during the initial passage of the International Investment Survey Act that disclosure of their transactions and holdings in foreign countries would adversely affect their companies. If foreign companies in the United States share that view, then removing the disclosure restriction might make foreign investors less likely to invest in the United States, and they might seek investments in countries with less stringent disclosure requirements. In addition, the official said that collecting additional information on foreign investment would increase costs for Treasury as well as for reporting entities. A representative of one financial institution that is a reporting entity told us that the institution would need to make changes to its internal reporting systems in order to provide more identifying information on investors, but the official could not estimate the total costs of doing so. Some information on specific SWF investments in U.S. assets can be determined from disclosures made by SWFs themselves and from private data sources. A limited number of SWFs publicly disclose information about their investment activities in individual countries, including the United States. 
Based on our review of disclosures made on SWF or national government Web sites, we found that 16 of the 48 SWFs we identified provided some information on their investment activity in the United States. The amount of detail varied from a complete listing of all asset holdings of a fund to only information about how investments are allocated by location. For example, Norway’s SWF publishes a complete list of its holdings, which indicated that as of year-end 2006, its fund held positions in over 1,000 U.S. companies, valued at over $110 billion. In contrast, the disclosures made by Kuwait’s fund did not identify its investments, but stated that the fund invests in equities in the United States and Canada; the fund did not provide information on total asset size or dollar values or identify specific investments. One of Singapore’s SWFs disclosed the identity of a few of its key holdings, including U.S.-based investments. Seven other funds also reported information on their investment holdings; however, none of these noted any U.S. investments. Disclosure reports required to be filed with SEC by both investors and issuers of securities are a source of information on individual SWF transactions in the United States, but only for those transactions that meet certain thresholds. Any investor, including an SWF, upon acquiring beneficial ownership of greater than 5 percent of a voting class of an issuer’s Section 12 registered equity securities, must file a statement of disclosure with SEC. The information required on this statement includes identifying information, including citizenship; the securities being purchased and the issuer; the source and amount of funds used to purchase the securities; the purpose of the transaction; the number of shares acquired and the percent of ownership that number reflects; and the identification of any contracts, arrangements, understandings, or relationships with respect to securities of the issuer.
When there are changes to the level of ownership or amount of securities held, the investor is generally required to file an amendment. Investors taking passive ownership stakes in the same equities, meaning they do not intend to exert any form of control, may qualify to file a less-detailed statement. SEC also requires disclosure for investors, including SWFs, whose beneficial ownership of a class of voting equity securities registered under Section 12(b) or 12(g) of the Securities Exchange Act of 1934 exceeds 10 percent. Under Section 16(a) of the Securities Exchange Act of 1934, within 10 calendar days of becoming a more than 10 percent beneficial owner, an investor must file an initial report disclosing the amount of the investor’s beneficial ownership of any equity securities of the issuer, whether direct or indirect. In addition, for as long as investors remain more than 10 percent beneficial owners, they must file reports of changes in beneficial ownership with the SEC within two business days of the transaction resulting in the change in beneficial ownership. This requirement applies to sales and additional purchases of any equity securities of the same issuer. Certain beneficial ownership filings required under U.S. securities laws offer some information about SWF activities, but cannot be used to determine the full extent of SWF transactions in the United States. For example, any transaction involving a purchase resulting in total beneficial ownership of 5 percent or less of a voting class of an issuer’s Section 12 registered equity securities would not have to be disclosed under the federal beneficial ownership reporting rules and otherwise may not necessarily have to be disclosed. Thus, the SEC data would most likely not include information on SWF investments under this threshold. 
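The two reporting thresholds described above can be summarized as a simple decision rule. The sketch below illustrates only the thresholds as stated in the text; the actual filing obligations turn on many additional legal facts, and the filing labels used here are informal shorthand, not official form names.

```python
# Simplified decision rule reflecting the ownership-reporting thresholds
# described in the text. This illustrates the thresholds only; it is not a
# statement of the actual legal tests, which depend on many other facts.

def reporting_obligations(ownership_pct, passive=False):
    """Return the filings suggested by the thresholds in the text."""
    filings = []
    if ownership_pct > 5:
        # Greater than 5 percent of a Section 12 registered voting class:
        # a beneficial ownership statement is required (a less-detailed
        # variant may be available for qualifying passive investors).
        filings.append("short-form beneficial ownership statement"
                       if passive else "beneficial ownership statement")
    if ownership_pct > 10:
        # More than 10 percent: Section 16(a) initial report, plus reports
        # of subsequent changes within two business days.
        filings.append("Section 16(a) ownership reports")
    return filings

print(reporting_obligations(4.9))                 # below threshold: no filings
print(reporting_obligations(7.2, passive=True))
print(reporting_obligations(12.0))
```

The first case, a stake of 5 percent or less, is the one the report notes would not appear in SEC beneficial ownership data at all.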
In addition, although the filing of these reports is mandatory for all investors who meet the requirements, SEC staff told us that without conducting a specific inquiry, their ability to determine whether all qualifying investments have been disclosed, including any by an SWF, may be limited. To identify nonfilers, the SEC staff told us that they sometimes use sources such as public comments and media reports. SEC has not brought a public action against an SWF for violating these beneficial ownership reporting requirements. Finally, given that these filings are primarily used to disclose ownership in the securities of specific issuers, the information is not compiled or reported by SEC in any aggregated format. Thus, identifying SWF transactions requires searching by issuer or SWF name, and SEC staff noted, for example, that identifying such transactions can be difficult because some SWF filers may have numerous subsidiaries under whose names they might file a report. Information about some SWF investments in U.S. issuers can be identified in certain filings made under the federal securities laws. As a result of the recent interest in SWF activities, SEC staff analyzed such filings to identify transactions involving SWFs. To identify specific transactions, they searched for filings by known SWFs and also reviewed filings from countries with SWFs to identify SWF filings. According to their analysis, since 1990 eight different SWFs have reported to SEC ownership of over 5 percent, covering 147 transactions in 58 unique issuers. SEC staff told us that their analysis likely reflects only some of the SWF investments in U.S. issuers. The federal securities laws also require U.S. public companies to file reports on their financial condition, which can also reveal data on some SWF investments in the United States below the greater than 5 percent threshold for investor disclosures. 
Companies with publicly traded securities are required to publicly disclose events they deem material to their operations or financial condition, such as the acquisition of assets or the resignation of officers. In some cases, U.S. companies have made these filings to announce that they have received investments from an SWF, including investments that did not exceed the 5 percent threshold and thus did not require a beneficial ownership filing by the SWF. For example, Citigroup filed a report outlining a transaction involving an SWF investment that was below the 5 percent ownership threshold but was still deemed material to the financial condition of the company. Some of the U.S. companies that recently received SWF investments also included information about these transactions in their annual reports. Private data collection entities compile and report information on specific SWF transactions, including those captured by SEC filings, but do not capture all SWF transactions. A number of private firms collect and distribute information relating to financial transactions. For example, database companies, such as Dealogic and Thomson Reuters, collect information globally on financial transactions, including mergers and acquisitions, for users such as investment banks, rating agencies, and private researchers. To compile their databases, they use public filings (including SEC filings), press releases, news announcements, and shareholder lists, as well as information from relationships with large financial intermediaries, such as investment banks and attorneys. Therefore, information in these databases includes transactions that can be identified through SEC filings but also may include additional transactions that are not disclosed under U.S. securities laws but are identified in other ways, such as through company press statements or discussions with parties to the transaction.
However, these data may not be complete, and the database companies cannot determine to what extent they capture all SWF transactions. For example, officials at these companies told us that they will most likely miss smaller transactions, consisting of acquisitions resulting in aggregate beneficial ownership of 5 percent or less, or unannounced relatively small dollar deals. Since many SWFs have historically taken noncontrolling interests in U.S. companies with total ownership often below 5 percent, the number of transactions not captured could be large. In addition, a transaction completed by a subsidiary of an SWF may not be identified as an SWF investment. We reviewed the information collected by Dealogic on investments made by SWFs in foreign countries, otherwise known as cross-border transactions. Based on this, the United States has, since 2000, attracted the largest volume of cross-border SWF investment, with announced deals totaling approximately $48 billion. Roughly $43 billion of this value reflects investment since 2007, largely consisting of deals involving financial sector entities. These deals comprised 8 of the top 10 announced SWF investments in the United States since 2000 (see table 1). These large SWF-led investments into financial sector entities came at a time when firms were facing large losses and asset write-downs due to the subprime mortgage crisis of 2007. The investments were seen as positive events by some market participants because they provided much-needed capital. (See app. III for a summary of some of these transactions.) According to Dealogic, Switzerland, China, and the United Kingdom were also major targets of cross-border SWF investments since 2000 (see fig. 9). Dealogic data also show that announced cross-border investments led by SWFs worldwide have risen dramatically since 2000, both in terms of number of deals and total dollar volume. (See fig. 
10.) Transactions targeting the United States have also risen sharply, due in part to favorable market conditions for foreign investors. In 2005, only one announced U.S. transaction, totaling $50 million, was reported by Dealogic. In contrast, nine transactions were reported in 2007, totaling $28 billion. As of June 2008, four transactions in the United States have been announced for the year, totaling almost $20 billion. As shown in figure 10, global cross-border SWF investment increased from $429 million in 2000 to almost $53 billion in 2007. However, these transactions, which have totaled about $119 billion since 2000, represent a small portion of the overall reported assets of SWFs, which, as noted previously, were estimated to be from $2.7 trillion to $3.2 trillion. This illustrates how much of the investment held by SWFs is not generally identifiable in existing public sources, unless SWFs themselves disclose comprehensive data on their asset holdings. We requested comments on a draft of this report from Treasury, Commerce, and SEC. In a letter, Treasury’s Deputy Assistant Secretary for International Monetary and Financial Policy indicated that our report was timely and valuable, underscored the importance of developing a better appreciation of the systemic role of SWFs, and said that Treasury generally agreed with the conclusions in the report. The Deputy Assistant Secretary’s letter stated that Treasury has been a leader in the international community’s efforts, including the multilateral, IMF-facilitated effort to develop generally accepted principles and practices for SWFs, and expressed the hope that implementation of such practices will foster a significant increase in the information provided by SWFs. (Treasury’s letter is reproduced in app. V.) In Commerce’s letter, the Undersecretary for Economic Affairs stated that our report was a useful and timely contribution to the existing literature on this highly debated and complex subject.
(Commerce’s letter is reproduced in app. VI.) In SEC’s letter, the Director of the Office of International Affairs reiterated that the disclosure requirements under U.S. securities laws regarding concentrations and change of control transactions apply equally to SWFs and to other large investors. (SEC’s letter is reproduced in app. VII.) Treasury, Commerce, and SEC also provided technical comments, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies to other interested Members of Congress; the Secretaries of Treasury and Commerce; and the Commissioner of the SEC. We will also make copies available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact either Yvonne Jones at (202) 512-8678 or [email protected], or Loren Yager at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Our objectives in this report were to examine (1) the availability of data on the size of sovereign wealth funds (SWF) and their holdings internationally that have been publicly reported by SWFs, their governments, international organizations, or private organizations, and (2) the availability of data reported by the U.S. government and other sources on SWF investments in the United States. To identify SWFs and to develop criteria for selecting the funds to include in our analysis, we reviewed the definitions of SWF and the lists of such funds that have been compiled by U.S. and international agencies, financial services firms, and private researchers.
The funds of most interest to policymakers were those that are separate pools of capital without underlying liabilities or obligations to make payments in the near term. SWFs have raised concerns over their potential to use their investments for noneconomic purposes. As a result, we chose to include in our analysis those funds that (1) were government-chartered or government-sponsored investment vehicles; (2) invested, in other than sovereign debt, some or all of their assets outside the country that established them; (3) were funded through transfers from their governments of funds arising primarily from sovereign budget surpluses, trade surpluses, central bank currency reserves, or revenues from the commodity wealth of the countries; and (4) were not currently functioning as pension funds receiving contributions from and making payments to individuals. We included government-chartered or government-sponsored entities that invest internationally because such entities raise concerns over whether the controlling government will use their funds to make investments that further national interests rather than solely earn financial returns. Entities that are funded primarily through trade surpluses or natural resources wealth would also seem to be more vulnerable to pressure to make noneconomic investments than entities funded through employee contributions or other nonwindfall sources. We excluded internationally active pension funds that are receiving contributions from or making benefit payments to individuals, as these funds generally have specific liabilities, unlike SWFs, which are not encumbered by such near-term obligations, and thus are not as likely to make noneconomic investments. We also excluded investment funds that invested only in the sovereign debt of other nations. Such an investment strategy is an approach that central banks have traditionally taken and, although widely debated, has not generally raised control issues.
SWF investments in the equity securities of commercial firms in other countries may be viewed as creating the potential for actions of a noncommercial nature that could be detrimental to another country's economy. In order to determine our final list of SWFs, we independently examined each fund on a compiled list of unique funds that others claimed were SWFs. We verified that our above criteria were met using national authority and International Monetary Fund (IMF) data sources. To begin our analysis, we reviewed lists of SWFs generated from seven different private data sources and IMF. These sources, which included publications from Deutsche Bank, Goldman Sachs, JPMorgan, Morgan Stanley, the Peterson Institute, RGE Monitor, Standard Chartered Bank, and IMF, were then used to prepare a comprehensive list of 258 possible SWFs. Since the names of the funds varied depending on the source, we manually matched the sources based on the fund name, inception year, and fund size to obtain a list of 81 unique funds. After compiling the list of unique funds, we attempted to verify the extent to which the funds met our four criteria. We did this by having at least two analysts reach agreement on whether the criteria were met upon review of several data sources. We used official governmental source data, such as information from the country’s central bank, finance ministry, or other government organization or from the fund’s Web site. If official government source information was unavailable, we used the country’s Article IV consultation report or other documents reported to IMF as a secondary source. Those funds that did not meet any one of our four specified criteria were excluded from our analysis. In cases where we could verify some but not all of our criteria from national authority or IMF sources, we attempted to validate the remaining criteria using private or academic sources.
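The matching step described above, collapsing 258 entries from multiple source lists into unique funds by comparing fund name, inception year, and fund size, was done manually. A sketch of the same logic in code might look like the following, where the sample records and the 10 percent size tolerance are hypothetical choices, not the actual matching rules used.

```python
# Illustrative sketch of deduplicating fund entries drawn from multiple
# source lists by comparing normalized name, inception year, and reported
# size. Sample records and the size tolerance are hypothetical.

def normalize(name):
    """Lowercase and strip punctuation/whitespace so name variants match."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def deduplicate(entries, size_tolerance=0.1):
    """Merge entries that agree on name, year, and approximate size."""
    unique = []
    for entry in entries:
        for kept in unique:
            same_name = normalize(entry["name"]) == normalize(kept["name"])
            same_year = entry["year"] == kept["year"]
            close_size = abs(entry["size"] - kept["size"]) <= size_tolerance * kept["size"]
            if same_name and same_year and close_size:
                break  # duplicate of an already-kept fund
        else:
            unique.append(entry)
    return unique

sample = [
    {"name": "Future Generations Fund", "year": 1976, "size": 200.0},
    {"name": "future generations fund", "year": 1976, "size": 210.0},
    {"name": "Stabilization Fund", "year": 2004, "size": 30.0},
]
print(len(deduplicate(sample)))  # the first two entries merge
```

A purely automated match like this would still require the kind of analyst review described above, since fund names, inception years, and reported sizes often disagree across sources.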
Analyst judgment to include or exclude a fund was employed in some cases where all criteria could not be validated. Of the 48 funds selected for our analysis, we were able to verify all four criteria in 60 percent of cases. For all 48 funds, we were able to verify two or more of our four criteria using a mix of various sources. We encountered some limitations to our independent verification of national authorities’ source data. These limitations included Web searches being conducted only in English, Web sites being under construction, and Web sites being incompletely translated from the original source language to English. In these cases, we located team members who spoke the languages in question and asked them to conduct searches for SWF information on the national government Web sites in the country’s language. If these team members found relevant information, we asked them to translate the information to English and provide us with a written copy. We used this translated information to verify our criteria for 10 funds. Languages needing translation were Arabic, French, Portuguese, and Spanish. If we found sources in English that were relevant for our purposes, we did not review the sources in their original languages. To determine the availability of data on the size and other characteristics of SWFs that were reported by SWFs, their governments, international organizations, or private sources, we reviewed documents produced by SWFs and Web sites sponsored by SWFs. We also reviewed studies of SWFs done by investment banks and private research firms. For some recently established funds, we reported the initial capitalization of the fund if the market value of the fund was not available. We used a private researcher estimate for one fund that only reported a minimum value and for two funds where private researcher data appeared to be more recent than those of national authorities. 
We interviewed officials from two SWFs, investment banks, finance and trade associations, a private equity group, IMF, and others. To determine the availability of data on SWF investments in the United States reported by the U.S. government and others, we reviewed the extent to which federal data collection efforts of the Departments of the Treasury (Treasury) and Commerce (Commerce) and the Securities and Exchange Commission (SEC) were able to report on SWF activities. We interviewed officials from Commerce and Treasury, SEC, the Federal Reserve Bank of New York, two financial data companies, a law firm, several private researchers, and other organizations. We analyzed data on SWF cross-border transactions in the United States and other countries obtained from Dealogic, which designs, develops, and markets a suite of software, communications, and analytical products. We assessed the procedures that Dealogic uses to collect and analyze data and determined that the data were sufficiently reliable for our purposes. To identify the transactions, we worked with Dealogic to develop a query that would extract all transactions with an SWF as the acquirer of the asset and where the asset resided in a country other than that of the SWF. However, Dealogic does not capture all SWF transactions. Because of its reliance on public filings, news releases, and relationships with investment banks, Dealogic may not capture low-value transactions that are not reported publicly. We also reviewed public filings, obtained through SEC’s Electronic Data Gathering, Analysis, and Retrieval (EDGAR) database, of selected U.S. companies that received major SWF investments in 2007 and 2008. Because acquisitions resulting in total beneficial ownership of 5 percent or less of a voting class of a Section 12 registered equity security will not be reported to SEC, these data sources capture only a proportion of the total U.S. SWF investments. 
We also spoke with officials from the Board of Governors of the Federal Reserve System, Commerce, the Department of Defense, the Department of State, the Federal Reserve Bank of New York, Treasury, IMF, SEC, and the U.S. Trade Representative. We attended hearings on SWFs before the Senate Committee on Banking, Housing, and Urban Affairs, the Senate Committee on Foreign Relations, the House Committee on Foreign Affairs, the House Committee on Financial Services, the Joint Economic Committee, and the U.S.-China Economic and Security Review Commission. We met with officials representing investment funds or SWFs from Dubai, Norway, and Singapore. To better understand the context behind SWFs, we interviewed industry and trade associations, a legal expert, investment banks, private researchers, and others who have experience in international finance, trade, and foreign investment issues. We conducted this performance audit from December 2007 through August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [Table: size of fund (dollars in billions) for selected funds, including the Government Pension Fund--Global, the Kuwait Investment Authority (KIA), the Severance Tax Permanent Fund (New Mexico), the Permanent Mineral Trust Fund (Wyoming), and the Fonds des générations (Quebec); the dollar values are not recoverable here.] The IMF data contain information from Article IV consultation reports, IMF staff reports, and a memorandum of understanding. The date range for the IMF data is from September 2006 through 2007. Private researcher data contain information from reports from Deutsche Bank, Goldman Sachs, JPMorgan, Morgan Stanley, the Peterson Institute, Standard Chartered Bank, and RGE Monitor. 
The publication dates for these reports range from September 10, 2007 through May 22, 2008. For Gabon, Singapore, and Vietnam, private researcher estimates were used instead of the national authority sources because the private researchers appeared to provide more up-to-date estimates of the size of the funds. São Tomé and Príncipe had a balance of $100 million as of September 1, 2006. We report a zero balance due to rounding. In 2007 and early 2008, SWFs, in conjunction with other investors, supplied almost $43 billion of capital to major financial firms in the United States. Citigroup was the major recipient of capital, receiving $20 billion in late 2007 and early 2008. The other recipients were Merrill Lynch, Morgan Stanley, the Blackstone Group, and the Carlyle Group. [Figure: timeline of these transactions and their history.] Some government organizations, an international financial institution, investment banks, and private research organizations have published reports on SWFs that offer explicit definitions of SWFs or lists of SWFs. Those that propose definitions of SWFs have not come up with one commonly accepted definition. Varying characteristics—ownership, governance, funding sources, and investment strategies, among others—are used to characterize SWFs and include or exclude funds from SWF lists. Treasury defines SWFs as government investment vehicles funded by foreign exchange assets that are managed separately from official reserves. They seek higher rates of return and may be invested in a wider range of asset classes than traditional reserves. Treasury says that SWFs generally fall into two categories based on the source of their foreign exchange assets: commodity and noncommodity funds. Treasury has not released a list of SWFs. IMF defines SWFs as government-owned investment funds set up for a variety of macroeconomic purposes. They are commonly funded by the transfer of foreign exchange assets that are invested long term and overseas. 
SWFs are a heterogeneous group and may serve multiple, overlapping, and changing objectives over time: as stabilization funds to insulate the budget and economy against commodity price swings; as savings funds for future generations; as reserve investment corporations established to increase the return on reserves; as development funds to help fund socioeconomic projects or promote industrial policies; or as contingent pension reserve funds to provide for unspecified pension liabilities on the government’s balance sheet. IMF researchers have published a list of SWFs. This relatively broad definition allows for inclusion of a Saudi Arabian investment fund managed from the central bank. Some investment bank reports have offered definitions of SWFs. One states that SWFs are broadly defined as special government asset management vehicles that invest public funds in a wide range of financial instruments. Unlike central banks, which focus more on liquidity and safekeeping of foreign reserves, most SWFs have the mandate to enhance returns and are allowed to invest in riskier asset classes, including equity and alternative assets, such as private equity, property, hedge funds, and commodities. This bank does publish a list of SWFs. It says that it is not always easy to differentiate between pure SWFs and other forms of public funds, such as conventional public sector pension funds or state-owned enterprises. Another investment bank defines SWFs as having five characteristics: (1) sovereign, (2) high foreign currency exposure, (3) no explicit liabilities, (4) high-risk tolerance, and (5) long investment horizon. Similar to SWFs are official reserves and sovereign pension funds. Another investment bank says that SWFs are vehicles owned by states that hold, manage, or administer public funds and invest them in a wider range of assets of various kinds. 
SWFs are mainly derived from excess liquidity in the public sector stemming from government fiscal surpluses or from official reserves at central banks. They are of two types—either stabilization funds to even out budgetary and fiscal policies of a country or intergenerational funds that are stores of wealth for future generations. SWFs are different from pension funds, hedge funds, and private equity funds. SWFs are not privately owned. This investment bank researcher does offer a list of SWFs. Some non-investment-bank private researcher reports have offered definitions of SWFs. One researcher says that SWFs are a separate pool of government owned or government-controlled assets that include some international assets. The broadest definition of an SWF is a collection of government-owned or government-controlled assets. Narrower definitions may exclude government financial or nonfinancial corporations, purely domestic assets, foreign exchange reserves, assets owned or controlled by subnational governmental units, or some or all government pension funds. This researcher includes all government pension and nonpension funds to the extent that they manage marketable assets. This researcher does publish a list of SWFs. Another private research group defines an SWF as meeting three criteria: (1) it is owned by a sovereign government; (2) it is managed separately from funds administered by the sovereign government’s central bank, ministry of finance, or treasury; and (3) it invests in a portfolio of financial assets of different classes and risk profiles, including bonds, stocks, property, and alternative instruments, with a significant portion of its assets under management invested in higher-risk asset classes in foreign countries. 
This researcher thinks of SWFs as part of a continuum of sovereign government investment vehicles that runs along a spectrum of financial risk, from central banks as the most conservative and risk averse, to traditional pension funds, to special government funds, to SWFs, and finally to state-owned enterprises, which are the least liquid and are the highest-risk investments. This research group publishes a list of SWFs. In addition to the contacts named above, Cody Goebel, Assistant Director; Celia Thomas, Assistant Director; Patrick Dynes; Nina Horowitz; Richard Krashevski; Jessica Mailey; Michael Maslowski; Marc Molino; Omyra Ramsingh; and Jeremy Schwartz made major contributions to this report. | Sovereign wealth funds (SWF) are government-controlled funds that seek to invest in other countries. With new funds being created and many growing rapidly, some see these funds providing valuable capital to world markets, but others are concerned that the funds are not transparent and could be used to further national goals and potentially harm the countries where they invest. GAO plans to issue a series of reports on various aspects of SWFs. This first report analyzed (1) the availability of publicly reported data from SWFs and others on their sizes and holdings internationally, and (2) the availability of publicly reported data from the U.S. government and other sources on SWFs' U.S. investments. GAO reviewed foreign government disclosures, Department of the Treasury (Treasury) and Department of Commerce (Commerce) reporting, and private researcher data to identify SWFs and their activities. GAO also analyzed information from international organizations and securities filings. Treasury and Commerce commented that GAO's report provides timely and useful contributions to the SWF debate; SEC noted that U.S. securities requirements apply to all large investors, including SWFs. 
Future GAO reports will address laws affecting SWF investments, SWF governance practices, and the potential impact of SWFs and U.S. options for addressing them. Limited information is publicly available from official government sources for some SWFs. While some have existed for decades, 28 of the 48 SWFs that GAO identified have been created since 2000, primarily in countries whose foreign exchange reserves are growing through oil revenues or trade export surpluses. GAO analysis showed that about 60 percent of these 48 SWFs publicly disclosed information about the size of their assets since the beginning of 2007, but only about 4 funds published detailed information about all their investments--and some countries specifically prohibit any disclosure of their SWF activities. Although the International Monetary Fund (IMF) currently collects data on countries' international financial flows, GAO found that only 13 countries separately reported their SWF holdings in public IMF documents. IMF plans to issue new reporting guidance in 2009 that asks countries to voluntarily report the size of their SWF holdings in their international statistics. While this could increase the transparency of SWFs, its success depends on the extent to which countries participate. In the absence of official national or international public reporting, much of the available information about the value of holdings for many SWFs is from estimates by private researchers who project fund sizes by adjusting any reported amounts to reflect likely reserve growth and asset market returns. For the funds GAO identified, officially reported data and researcher estimates indicated that the size of these 48 funds' total assets was from $2.7 trillion to $3.2 trillion. Some researchers expect these assets to continue to grow significantly. U.S. 
government agencies and others collect and publicly report information on foreign investments in the United States, but these sources have limitations and the overall level of U.S. investments by SWFs cannot be specifically identified. From surveys of U.S. financial institutions and others, Treasury and Commerce reported that foreign investors, including governments, private entities, and individuals, owned over $20 trillion of U.S. assets in 2007, but the amounts held by SWFs cannot be specifically identified from the reported data because either the agencies do not obtain specific investor identities or the agencies are precluded from disclosing individual investor information. GAO found that as many as 16 of the 48 SWFs reported some information on their U.S. investments. One reported all U.S. holdings, but others only identified a few specific investments or indicated that some of their total assets were invested in the United States. Some SWF investments can be identified in U.S. securities filings, under a requirement for disclosure of investments that result in aggregate beneficial ownership of greater than 5 percent of a voting class of certain equity securities. At least 8 SWFs have disclosed such investments since 1990. GAO analysis of a private financial research database identified SWF investments in U.S. companies totaling over $43 billion from January 2007 through June 2008, including SWF investments in U.S. financial institutions needing capital as a result of the 2007 subprime mortgage crisis. Additional U.S. reporting requirements would yield additional information for monitoring the U.S. activities of SWFs, although some U.S. officials have expressed concerns that they could also increase compliance costs for U.S. financial institutions and agencies and could potentially discourage SWFs from making investments in U.S. assets. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
For retirees aged 65 or older, Medicare is typically the primary source of health insurance coverage. Medicare covered about 41 million beneficiaries as of July 2003. The program covers hospital care as well as doctor visits and outpatient services but has never covered most outpatient prescription drugs. Under traditional Medicare, eligible individuals may apply for part A, which helps pay for care in hospitals and some limited skilled nursing facility, hospice, and home health care, and may purchase part B, which helps pay for doctors, outpatient hospital care, and other similar services. Depending on where they live, individuals may have the option of obtaining traditional Medicare coverage (on a fee-for-service basis) or coverage from a managed care or other private plan offered through the Medicare Advantage program. Many beneficiaries have been attracted to these plans because they typically have lower out-of-pocket costs than fee-for-service plans and offer services not covered by traditional Medicare prior to the MMA, such as routine physical examinations and most outpatient prescription drugs. Nearly 4.7 million Medicare beneficiaries were enrolled in a local Medicare Advantage plan as of July 2004. To cover some or all of the costs Medicare does not cover, such as deductibles, copayments, and coinsurance, Medicare beneficiaries may rely on private retiree health coverage through former employment or through individually purchased Medicare supplemental insurance (known as Medigap). For example, for 2001, the Medicare Current Beneficiary Survey (MCBS) found that about three-fourths of Medicare-eligible beneficiaries obtained supplemental coverage from the following sources: a former employer or union (29 percent); individually purchased coverage, including Medigap policies (27 percent); both employment-based and individually purchased coverage (7 percent); or Medicaid (13 percent). About 24 percent had Medicare-only coverage. 
Medigap is a privately purchased health insurance policy that supplements Medicare by paying for some of the health care costs not covered by Medicare. 42 U.S.C. 1395ss (2000). Medicare beneficiaries can purchase 1 of 10 standardized Medigap benefit packages. Three of the 10 standardized Medigap benefit packages offer limited prescription drug benefits, paying 50 percent of drug charges up to either $1,250 per year or $3,000 per year after the beneficiary pays a $250 deductible. New Medigap plans sold after January 1, 2006, will no longer include prescription drug benefits. MMA sec. 104(a)(1), § 1882(v)(1), 117 Stat. 2161 (to be codified at 42 U.S.C. § 1395ss(v)(1)). Health plans typically require enrollees to pay a portion of the cost of their medical care. These cost sharing arrangements include deductibles, which are fixed payments enrollees are required to make before coverage applies; copayments, which are fixed payments enrollees are required to make at the time benefits or services are received; and coinsurance, which is a percentage of the cost of benefits or services that the enrollee is responsible for paying directly to the provider. Employers generally offer health benefits to retirees on a voluntary basis. While these benefits vary by employer, they almost always include prescription drugs and often cover both retirees under age 65 as well as those eligible for Medicare. However, coverage can vary between these groups of retirees. For example, premiums are often lower for those aged 65 and over because Medicare pays for certain costs, and cost sharing requirements, which can make retirees more sensitive to the costs of care, may differ. Plan types may also differ based on Medicare eligibility. For example, some employers offer retirees under age 65 a preferred provider organization (PPO) plan but offer a fee-for-service plan for retirees eligible for Medicare. 
Regardless of the type of plan offered, retirees who have employment-based coverage generally have a choice of more than one plan. Plan sponsors typically coordinate their retiree health benefits with Medicare once retirees reach age 65, with Medicare as the primary payer and the plan sponsor as the secondary payer. Several types of coordination occur between plan sponsors and Medicare. For example, some plan sponsors coordinate through a carveout approach, in which the plan calculates its normal benefit and then subtracts (or carves out) the Medicare benefit, generally leaving the retiree with out-of-pocket costs comparable to having the employment-based plan without Medicare. Another approach used by plan sponsors is full coordination of benefits, in which the plan pays the difference between the total health care charges and the Medicare reimbursement amount, often providing retirees complete coverage and protection from out-of-pocket costs. According to one employer benefit survey, carveout is the most common type of coordination used by employers that sponsor retiree health plans. In January 2006, Medicare will begin offering beneficiaries outpatient prescription drug coverage through a new Medicare part D. Medicare beneficiaries who choose to enroll for this voluntary benefit will have some of their prescription drug expenditures covered by prescription drug plans authorized by the MMA. In addition to paying a premium—estimated initially to be about $35 per month ($420 per year)—beneficiaries must meet other out-of-pocket expense requirements: a $250 deductible; 25 percent of their next $2,000 in prescription drug expenditures; and 100 percent of the next $2,850 in prescription drug expenditures, a coverage gap often referred to as the Medicare part D benefit “doughnut hole.” Medicare beneficiaries must therefore pay $3,600 out-of-pocket for prescription drugs in 2006 before part D catastrophic coverage begins. 
Part D catastrophic coverage pays most drug costs once total costs exceed $5,100, with beneficiaries paying either the greater of a $2 copayment for each generic drug and $5 copayment for other drugs, or 5 percent coinsurance. Only prescription drug costs paid by the part D enrollee or by another person or certain charitable organizations or state pharmaceutical assistance programs on behalf of the enrollee, rather than by a plan sponsor, are considered in determining a beneficiary’s true out-of-pocket costs. (See fig. 1.) After the part D benefit becomes effective in January 2006, Medicare beneficiaries will be able to receive prescription drug coverage in several ways, such as the following: Beneficiaries covered through the traditional fee-for-service Medicare program will be able to enroll in privately sponsored prescription drug plans that contract with CMS to receive their drug benefits. Beneficiaries enrolled in Medicare Advantage plans providing part D prescription drug benefits will receive all of their health care services, including part D benefits, through their Medicare Advantage plan. Beneficiaries will be able to continue to receive prescription drug benefits from other sources, such as an employment-based plan, if the plan sponsor chooses to provide prescription drug coverage to Medicare-eligible retirees. The MMA creates options and incentives for a current or a potential sponsor of an employment-based retiree health plan to provide prescription drug coverage to Medicare-eligible retirees. Options for plan sponsors under the MMA include the following: Offer retirees comprehensive prescription drug coverage through an employment-based plan in lieu of Medicare part D prescription drug coverage. 
Under this option, a sponsor of a plan with prescription drug coverage actuarially equivalent to that under part D will receive an incentive to maintain coverage through a federal tax-free subsidy equal to 28 percent of the allowable gross retiree prescription drug costs over $250 through $5,000 (maximum $1,330 per beneficiary) for each individual eligible for part D who is enrolled in the employment-based plan. For 2006, CMS estimated that the average annual subsidy would be $668 per beneficiary. In order to qualify for this subsidy, however, a plan sponsor must attest that the actuarial value of prescription drug coverage under the plan is at least equal to the actuarial value of standard Medicare part D prescription drug coverage. Furthermore, a plan sponsor will receive a subsidy only for those Medicare beneficiaries who do not enroll in the Medicare part D benefit. Offer prescription drug coverage that supplements (“wraps around”) the part D benefit, as health plans commonly do for hospital and physician services under Medicare parts A and B. Pay all or part of the monthly premium for any of the prescription drug plans or Medicare Advantage plans in which Medicare-eligible retirees (and dependents) choose to enroll. Contract with a prescription drug plan or Medicare Advantage plan to provide the standard part D prescription drug benefit or enhanced benefits to the plan sponsor’s retirees who are Medicare-eligible (equivalent to offering a fully insured benefit) or become a prescription drug plan or Medicare Advantage plan (equivalent to offering a self-insured benefit). Plan sponsors also have other options. As has always been the case, plan sponsors could stop providing any type of subsidized health care coverage, including prescription drugs, to Medicare-eligible retirees and their dependents. 
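The standard part D cost sharing and the 28 percent retiree drug subsidy described above reduce to simple arithmetic. A minimal sketch follows, using the 2006 dollar thresholds from the text; as a simplifying assumption, the catastrophic phase is modeled with the 5 percent coinsurance option only, not the alternative $2/$5 copayments.

```python
def part_d_out_of_pocket(total_drug_cost):
    """Enrollee out-of-pocket spending under the standard 2006 part D
    benefit (excludes the roughly $420 annual premium). The catastrophic
    phase is modeled with the 5 percent coinsurance option only."""
    oop = min(total_drug_cost, 250)                         # $250 deductible
    oop += 0.25 * max(0, min(total_drug_cost, 2250) - 250)  # 25% of the next $2,000
    oop += max(0, min(total_drug_cost, 5100) - 2250)        # 100% of the next $2,850 ("doughnut hole")
    oop += 0.05 * max(0, total_drug_cost - 5100)            # catastrophic coverage
    return oop

def retiree_drug_subsidy(allowable_drug_cost):
    """Federal tax-free subsidy to a qualifying plan sponsor: 28 percent of
    allowable gross retiree drug costs over $250 through $5,000."""
    return 0.28 * max(0, min(allowable_drug_cost, 5000) - 250)

print(part_d_out_of_pocket(5100))  # 3600.0, the out-of-pocket total before catastrophic coverage
print(retiree_drug_subsidy(5000))  # about 1330, the per-beneficiary maximum subsidy
```

At $5,100 in total drug costs the enrollee has paid $250 + $500 + $2,850 = $3,600, matching the figure cited above, and a sponsor's subsidy tops out at 28 percent of $4,750, or $1,330 per beneficiary.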
While they are not available for current Medicare beneficiaries, the MMA also authorized the use of health savings accounts (HSA) to which employers and active workers and retirees not eligible for Medicare can contribute to cover future health care costs. This option could provide a means for employees who are not offered employment-based retiree health coverage to save money for health coverage when they retire. On August 3, 2004, CMS published a proposed rule for implementing the Medicare part D prescription drug provisions of the MMA, and the comment period closed October 4, 2004. The proposed rule provided a preliminary overview of how CMS intended to implement the MMA, including the subsidy and other options. On January 28, 2005, CMS published a final rule implementing the MMA. CMS also indicated that it will provide further guidance relating to the subsidy for plan sponsors providing retiree drug coverage. The percentage of employers offering health benefits to retirees, including those who are Medicare-eligible, has decreased since the early 1990s, according to employer benefit surveys, but offer rates have leveled off in recent years. At about the same time, the percentage of Medicare-eligible retirees aged 65 and older with employment-based coverage has remained relatively consistent. Meanwhile, employment-based retiree health plans experienced increased costs to provide coverage, with one employer benefit survey citing double-digit annual average increases from 2000 through 2003. Financial statements we reviewed for a random sample of 50 Fortune 500 employers showed that over 90 percent of the employers that offered retiree health coverage had increased postretirement benefit obligations from 2001 through 2003. 
Private and public plan sponsors, including those that provide coverage for Medicare-eligible retirees, have responded to increasing costs by implementing strategies that require these retirees to pay more for coverage and thus contribute to a gradual erosion of the value and availability of benefits. Employer benefit surveys reported that the percentage of employers offering health benefits to retirees has decreased since the early 1990s; however, these offer rates have remained relatively stable in recent years. A series of surveys conducted by Mercer Human Resource Consulting indicated that the portion of employers with 500 or more employees offering health insurance to Medicare-eligible retirees declined from 44 percent in 1993 to 27 percent in 2001, and leveled off from 2001 through 2004, with approximately 28 percent offering the benefits to Medicare-eligible retirees in 2004 (see fig. 2). A second series of surveys conducted by the Kaiser Family Foundation and Health Research and Educational Trust (Kaiser/HRET) estimated that the percentage of employers with 200 or more employees offering retiree health coverage—for those Medicare-eligible or those under age 65 or both—decreased from 46 percent in 1991 to 36 percent in 1993 and then leveled off from 1993 through 2004, with approximately 36 percent of employers with 200 or more employees offering retiree health benefits to these groups in 2004 (see fig. 3). For Medicare-eligible retirees specifically, the percentage of employers in the Kaiser/HRET survey offering coverage fluctuated from 1995 to 2004, but differed by only 1 percentage point in 1995 (the earliest data available) and 2004, with 28 and 27 percent of employers, respectively, offering coverage in these 2 years. Coverage for early retirees, those under age 65, has also been significantly affected since the early 1990s. 
For example, the Mercer surveys showed a steady decline in employers with 500 or more employees offering coverage to this population from 50 percent in 1993 to 34 percent in 2001, although this percentage has generally leveled off since 2001. Employer benefit consultants and the 15 private and public sector plan sponsors that we interviewed consistently cited a general erosion in health benefits for all retirees, including those who are Medicare-eligible, but some officials we interviewed also told us that plan sponsors that could eliminate benefits had already done so, which is consistent with the period of leveling off shown in the Mercer and Kaiser/HRET surveys. For example, although the provision of health benefits for all retirees by employers is generally voluntary, officials we interviewed noted that employers that continue to offer retiree health benefits may be limited in their ability to decrease benefits further because of existing contracts with unions, which are generally negotiated every 3 to 5 years. According to the 15 private and public sector plan sponsors and employer benefit consultants that we interviewed, many plan sponsors have restricted coverage for future retirees—including those who are Medicare-eligible— but have continued to offer benefits to existing retirees, which would also contribute to a leveling off of these rates. Large employers are more likely than small employers to offer retiree health coverage, including coverage for Medicare-eligible retirees. For example, Kaiser/HRET data for 2004 showed that 36 percent of employers with 200 or more employees offered health benefits to retirees compared to approximately 5 percent of employers with 3 to 199 employees. Within the Mercer and Kaiser/HRET definitions of large employers (at least 500 and at least 200 employees, respectively), those with the greatest numbers of employees were the most likely to sponsor health benefits for retirees. 
For example, Kaiser/HRET reported that approximately 60 percent of employers with 5,000 or more employees offered health benefits in 2004 to retirees compared to about 31 percent of employers with 200 to 999 employees. Based on the 2003 Mercer survey, 63 percent of employers with 20,000 or more employees offered coverage specifically to Medicare-eligible retirees compared to 23 percent of employers with 500 to 999 employees. In addition, employers with a union presence were more likely to offer retiree health coverage than those employers without a union presence. According to the 2004 Kaiser/HRET survey, among employers with 200 or more employees, 60 percent of these employers with union employees offered health coverage to retirees compared to 22 percent of these employers without union employees. The provision of retiree health coverage also varies between the private and public sector and by industry type. For example, employers in the public sector were more likely than employers in the private sector to offer coverage to retirees, including those who are Medicare-eligible. All federal government retirees—Medicare-eligible and those under age 65—are generally eligible for FEHBP health benefits and pay the same premiums as active federal workers for the same benefits, including prescription drugs. State plan sponsors also typically have higher offer rates than private sector employers for retirees. For example, the 2004 Kaiser/HRET study showed that 77 percent of state and local government employers with 200 or more employees offered coverage to retirees compared with the average offer rate of 36 percent across all employer industries. For retirees aged 65 and older, Medical Expenditure Panel Survey (MEPS) data for 2002 indicated that approximately 86 percent of state entities offered health insurance to this group of retirees.
After government employers, according to the 2004 Kaiser/HRET study, the industry sector with the next highest percentage offering retiree coverage was transportation/communication/utility, with 53 percent of all employers in this industry sector (200 or more employees) offering health benefits to their retirees in 2004. The industry sectors in this survey least likely to offer coverage were health care and retail, with 22 percent and 10 percent, respectively, of employers (200 or more employees) in these industry sectors offering retiree health benefits. The overall percentage of Medicare-eligible retirees and their insured dependents aged 65 and older obtaining employment-based health benefits through a former employer has remained relatively consistent from 1995 through 2003, based on data from the U.S. Census Bureau’s Current Population Survey (CPS). According to our analysis of CPS data, the percentage of Medicare-eligible retirees aged 65 and older with employment-based health coverage and their insured dependents was approximately 32 percent in 1995 and 31 percent in 2003. Among Medicare-eligible retirees and their insured dependents aged 65 through 69 and aged 70 through 79, there was a modest decline in the percentage with employment-based health coverage from 1995 through 2003, but a modest increase among Medicare-eligible retirees and their insured dependents aged 80 and over (see fig. 4). The modest decline among those aged 65 through 69 and aged 70 through 79 relative to all Medicare-eligible retirees aged 65 and over may be because plan sponsors are more likely to reduce benefits for future or recent retirees than for all retirees. Thus, the effect of changes that plan sponsors have made to their retiree health benefits may take additional time to be evident in the percentage of current retirees receiving employment-based health benefits. 
Retiree health costs continue to increase for many plan sponsors of retiree health coverage, including those that provide coverage to Medicare-eligible retirees. Our analysis of financial statements filed with the SEC by a sample of 50 Fortune 500 employers pointed to increases—some 50 percent or higher—in employers’ postretirement benefit obligations from 2001 through 2003. Employer benefit surveys and our interviews with officials from 15 private and public plan sponsors have also cited increased retiree health costs. These increases often have prompted plan sponsors to attempt to contain the growth in the cost of providing coverage in a variety of ways, including requiring greater cost sharing from retirees. The cost of providing retiree health coverage—and prescription drug costs in particular—is increasing for many plan sponsors. Financial statements filed with the SEC by 50 randomly selected Fortune 500 employers showed that over 90 percent of the 38 employers that reported postretirement benefit obligations from 2001 through 2003 had an increase in these obligations during this period. About 20 percent of these 38—8 employers—had an increase in their obligations above 50 percent, while one-third of these 38—13 employers—had an increase of between 25 and 50 percent from 2001 through 2003. During this same period, the Bureau of Labor Statistics estimated that the Consumer Price Index, which reports prices for all consumer items, increased 5.3 percent, a 1.8 percent average annual rate of increase. Over 80 percent of the 38 employers that reported postretirement benefit obligations from 2001 through 2003 had a change in their postretirement benefit obligations that exceeded the Consumer Price Index increase of 5.3 percent for all consumer items from 2001 through 2003. Data from employer benefit surveys also showed increased costs for plan sponsors for roughly the same period.
For example, a survey conducted in 2004 by the Kaiser Family Foundation and Hewitt Associates reported that the total cost of providing health benefits to all retirees for employers surveyed (1,000 or more employees) rose rapidly between 2003 and 2004, with an estimated average annual increase of nearly 13 percent. Mercer data projections by employers for 2003 also showed an average annual cost increase of approximately 11 percent from 2002 for Medicare-eligible retirees, from $2,702 to $3,003—the fourth straight year of double-digit increases. (For active employees, employers in the Mercer survey reported a 10 percent increase in 2003 in the average total health benefit cost from 2002.) The cost for public sector plan sponsors to provide retiree health coverage, both for Medicare-eligible retirees and those under age 65, is also increasing. For example, one public sector plan sponsor we interviewed reported that retiree health care costs had doubled in a 6-year period, from $440 million in 1998 to over $900 million in 2003, with an average annual cost in 2004 of $3,542 per Medicare-eligible retiree, compared to $1,822 per Medicare-eligible retiree in 1998. For FEHBP, as set in statute, the federal government pays 72 percent of the weighted average premium of all health benefit plans participating in FEHBP but no more than 75 percent of any health benefit plan’s premium. Thus, retirees and active workers pay approximately 28 percent of their plan premiums—a share that has not changed since it became effective in January 1999. While the percentage of plan premiums contributed by the government has remained constant in recent years, the actual rates have increased over time. In December 2002, we reported that health insurance premiums for FEHBP plans had increased on average about 6 percent per year from 1991 through 2002. According to OPM, average FEHBP premiums increased by 11 percent in 2003, about 11 percent in 2004, and about 8 percent for 2005.
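The statutory FEHBP contribution formula described above can be illustrated with a short sketch. The function name and the premium figures below are hypothetical, chosen only to show how the 72 percent and 75 percent limits interact:

```python
def fehbp_government_share(plan_premium, weighted_avg_premium):
    """Government contribution under the FEHBP formula: 72 percent of the
    weighted average premium of all participating plans, but never more
    than 75 percent of the specific plan's premium.
    (Illustrative sketch; premium figures used below are hypothetical.)"""
    return min(0.72 * weighted_avg_premium, 0.75 * plan_premium)

# Hypothetical monthly premiums: a $400 plan when the program-wide
# weighted average premium is $500.
government = fehbp_government_share(400, 500)  # the 75 percent cap binds: $300
enrollee = 400 - government                    # the enrollee pays the remainder
```

For a low-premium plan the 75 percent cap binds, so the enrollee's share can fall below the roughly 28 percent average cited in the report; for plans at or above the weighted average, the 72 percent term binds instead.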
Prescription drug benefits represent a large share of plan sponsors’ retiree health costs, particularly for Medicare-eligible retirees. In 2002, prescription drug costs were cited as a key driver of increases in employment-based retiree health costs and were estimated to be typically 50 to 80 percent of an employer’s total health care costs for Medicare-eligible retirees. According to 2001 MCBS data, prescription drug expenditures for retired Medicare beneficiaries that were paid by employment-based insurance accounted for 45 percent of all health care expenditures for these beneficiaries. Three Fortune 500 employers we interviewed reported that prescription drug costs for Medicare-eligible retirees and their dependents ranged from approximately 56 to 64 percent of their total estimated annual cost of providing health benefits for this same population. Faced with increasing costs, private sector plan sponsors have implemented certain strategies to reduce these obligations that often require retirees to pay more for coverage and contribute to a general erosion in the value and availability of health coverage for retirees. For example, many plan sponsors have increased cost sharing through increased copayments, coinsurance, and premium shares; restricted eligibility for benefits based on retirement or hiring date; implemented financial caps or other limits on plan sponsors’ contributions to coverage; and made changes to prescription drug benefits, such as creating tiered benefit structures and increasing retiree out-of-pocket contributions. These cost-cutting strategies are not new—in 2001 we reported that employers had implemented similar mechanisms designed to control retiree health care expenditures. However, according to private plan sponsors we interviewed, the share of costs paid by retirees is increasingly affected as the plan sponsors reach and enforce financial caps and other limits they had set.
While these strategies are intended to limit the increase in plan sponsor obligations, the information provided by employer benefit surveys and the plan sponsors and consultants we interviewed did not specify the magnitude of any decrease in plan sponsors’ costs for retiree health benefits that could be attributed to these changes. Increasing Retirees’ Cost Sharing. One strategy that plan sponsors have adopted to limit their obligations for retiree health costs is increasing the share of costs for which the retiree is responsible. For example, employers have increased retiree copayments and coinsurance. When asked about changes made “in the past year,” Kaiser/Hewitt reported that nearly half of its surveyed private employers (1,000 employees or more) had increased cost sharing. The majority of employers in the Kaiser/Hewitt study reported that they expected to make similar increases “for the 2005 plan year,” with 51 percent indicating they were very or somewhat likely to increase retiree coinsurance or copayments. These increases are consistent with the changes cited in our interviews with private employers and with officials we interviewed at other organizations, including benefit consulting firms and an organization representing unions. For example, one employer we interviewed reported cost sharing increases for all retirees every year since 1993; another employer we interviewed introduced a mix of coinsurance and copayment requirements in January 2004 to address rising health care costs and make retirees more aware of the cost of the benefits they received; and a third employer we interviewed that had historically paid approximately 90 percent of total retiree health care costs was planning to increase the share of costs borne by retirees who had retired prior to 1994 from approximately 10 percent to 20 percent of health care costs by January 1, 2006.
Increasing Premiums. Increased contributions by retirees to health care premiums are another area in which plan sponsors have continued to make changes to control their health care expenditures. Kaiser/Hewitt data showed that 79 percent of surveyed employers had increased retiree contributions to premiums in the past year, and 85 percent reported that they were very or somewhat likely to increase these contributions for the 2005 plan year. Retiree contributions for new retirees aged 65 and over increased, on average, 24 percent from 2003 to 2004, according to the Kaiser/Hewitt study. The Mercer 2003 study reported that employers varied retiree premium contributions, with Medicare-eligible retirees paying on average about 38 percent of plan premiums when the cost was shared between the employer and the retiree, an increase of approximately 4 percentage points since 1999. Four of the 12 Fortune 500 plan sponsors we interviewed also reported changes to premiums. For example, one plan sponsor made a change in 2004 to increase premiums for all individuals retiring after January 1, 2004, consistent with increases in premiums for active workers, whereas previously retirees kept the same premiums for life. Officials we interviewed representing unions and their members also cited increased premiums for many retired union workers. Some employers are also beginning to offer access-only coverage to some or all retirees, in which employers allow retirees to buy into a health plan at the group rate, but without any financial assistance from the employer. For example, according to 2003 Mercer data, about 37 percent of retiree health plans for employers with 500 or more employees required Medicare-eligible retirees to pay the full cost of the employment-based plan. Thirteen percent of the Kaiser/Hewitt employers reported making a change in the past year to provide access-only coverage to retirees, with retirees paying 100 percent of the costs.
A supplement to the 2004 Kaiser/HRET survey examined the percentage of Medicare-eligible retirees with access-only coverage and found that 5 percent of Medicare-eligible individuals who are retired from employers with 200 or more employees that offer retiree health benefits had such coverage. While 5 of the 12 Fortune 500 plan sponsors we interviewed had implemented access-only coverage, 1 of these plan sponsors had implemented this level of coverage for all of its retirees in the early 1990s in response to rising health care costs. The other 4 plan sponsors had implemented the access-only change at a later date, implementing it for some or all employees ranging from those hired after January 1, 1995, to those retiring on or after January 1, 2007. Reducing Benefits for Future Retirees. Implementing access-only coverage is often part of a broader movement by plan sponsors to restrict eligibility or offer reduced benefits for employees who are hired or retire after a certain date. In December 2004, Kaiser/Hewitt reported that 8 percent of surveyed employers (with 1,000 or more employees) said they had made a change “in the past year” to eliminate their subsidized health benefits for future retirees, typically for those hired after a specific date. Of the 12 Fortune 500 plan sponsors we interviewed, 5 plan sponsors had eliminated retiree health coverage for some or all individuals hired after a certain date, ranging from January 1, 1993, to January 1, 2003, while 4 of the 5 plan sponsors that had switched to providing access-only coverage did so for some or all of their future retirees. Some plan sponsors said that they generally made changes for future retirees rather than for current retirees, because future retirees may be in a position to make other arrangements. In addition, plan sponsors generally tried to minimize the disruption when making changes for those already in retirement.
For example, one Fortune 500 plan sponsor we interviewed carried 15 separate health plans for several years that had accumulated as the result of grandfathering in current coverage levels for existing retirees. It was only in 2003 that the company consolidated the 15 plans into 3 plans and instituted changes affecting both existing and some future retirees. Plan sponsors that have either eliminated coverage or created access-only plans for some or all retirees generally reported that recruitment had not been affected. One plan sponsor we interviewed, however, noted that current employees’ retirement planning could be affected, as some employees might stay longer with the company because they could not afford to retire. This sentiment is consistent with data reported in Mercer’s 2003 annual survey of employer-sponsored health plans showing that retirees tended to delay retirement when their employers did not sponsor retiree medical plans. Introducing and Enforcing Financial Caps. In 2001, we reported that some employers had established caps and other limits on expenditures for retiree health benefits, but it was not clear at that time how employers would ensure that spending did not exceed the caps and how coverage would be affected. Employers began to implement caps in response to rising retiree health costs and to accounting changes introduced in the early 1990s when the Financial Accounting Standards Board (FASB) adopted Financial Accounting Standards (FAS) 106, requiring employers to report annually on the obligation represented by the promise to provide retiree health benefits to current and future retirees. The 2003 annual survey of employer-sponsored health plans conducted by Mercer shows that 18 percent of employers with 500 or more employees have implemented caps, while an additional 10 percent of such employers were considering them. 
Caps were most common among the employers with the largest number of employees in the Mercer study (20,000 or more employees); 33 percent of such employers had implemented these limits on overall spending and 9 percent were considering them. Similarly, 54 percent of employers with 1,000 or more employees offering retiree coverage in the 2004 Kaiser/Hewitt employer survey reported having capped contributions. Ninety percent of employers in the Kaiser/Hewitt study that have hit caps or anticipated hitting caps in the next year reported that they intended to enforce them or already had. Of the 12 Fortune 500 plan sponsors we interviewed, 8 had implemented capped contributions or other limits on retiree health spending. For example, 1 plan sponsor reported monthly caps of $217 per person for nonunionized retirees under age 65 and $51 per person for nonunionized Medicare-eligible retirees. Another plan sponsor provided fixed company health care credits for its retirees under age 65 (unionized and nonunionized) in which an individual could receive up to $3,750 to apply to the plan sponsor’s estimate for health care costs for a retiree under age 65. While many plan sponsors had implemented these types of limits, they varied as to whether all groups of retirees were affected and whether the caps had been reached and thus enforced. For example, 2 of the 12 Fortune 500 plan sponsors we interviewed had capped benefits for some individuals depending on the individual’s date of retirement (typically more recent retirees were affected), and in some cases the caps varied by whether retirees were part of a union or former employees of an acquired company. The plan sponsors we interviewed whose retiree health benefit costs had reached the caps generally were enforcing them. For example, one plan sponsor required some retirees—both Medicare-eligible and those under age 65—to pay a portion of premiums for the first time after the plan’s costs reached the cap in 2002. 
However, implementing and enforcing caps can be an issue in union negotiations. One plan sponsor we interviewed had opted in the past to negotiate benefit changes with unions to delay hitting the caps, but now expects to hit and enforce the caps by 2007. Another plan sponsor, while enforcing the financial caps for retiree health benefits, has agreed in some union negotiations to give retirees an additional contribution toward health care expenses that effectively offsets the premium increases triggered by reaching the caps. A few plan sponsors and benefit consultants we interviewed noted that employers are more likely today to enforce caps than to raise them. For example, the 2003 Kaiser/Hewitt study stated that there is some concern that auditors will question the effectiveness of a cap if there is a pattern of continually raising it once costs approach the set limit. Implementing Changes to Prescription Drug Benefit Design. Given the sensitivity of retiree health benefits to prescription drug costs, many plan sponsors have made changes to prescription drug benefits. The primary mechanisms cited by the 2004 Kaiser/Hewitt employer benefit survey and benefit consultants and the 12 Fortune 500 plan sponsors we interviewed included increasing copayments; switching from copayments to coinsurance; and implementing tiered benefit structures in which generic drugs, formulary/preferred drugs, and nonformulary/nonpreferred drugs are subject to different retiree copayment and coinsurance rates. Over half of the employers in the 2004 Kaiser/Hewitt study reported having increased copayments or coinsurance for prescription drugs in the past year, and 15 percent had replaced fixed-dollar copayments with coinsurance in the past year. Over half of the plan sponsors offered a three-tiered benefit structure, and among plans with this design, about two-thirds require copayments and nearly one-fourth required coinsurance for retail pharmacy purchases. 
In addition, about one-fourth of the Kaiser/Hewitt-surveyed employers had instituted a three-tiered drug plan in the past year to save money. The 12 Fortune 500 plan sponsors we interviewed echoed these types of changes in their prescription drug benefit for retirees within the last 5 years. For example, 3 plan sponsors had instituted retiree coinsurance requirements, which can make retirees more price conscious because the retiree out-of-pocket cost is higher for more expensive drugs than for less expensive drugs. One plan sponsor reported it had increased drug copayments for retirees. Several of the 12 Fortune 500 plan sponsors we interviewed already had tiered benefits in place. One plan sponsor had implemented a three-tiered structure as well as mandatory use of a mail-order pharmacy for some prescription drugs. Another plan sponsor reported it planned to implement “step-therapy” in January 2005, in which retirees would have to demonstrate the ineffectiveness of a lower-cost generic drug before receiving coverage for a higher-cost brand-name drug. Officials we interviewed representing unions and their members noted similar prescription drug trends for many former union workers. While public sector plan sponsors generally offer more coverage than those in the private sector, these plan sponsors are also starting to implement cost-cutting mechanisms similar to those implemented in the private sector, with one major exception—they generally are not eliminating retiree health benefits for future retirees. For example, in December 2002, we reported that FEHBP plans had implemented some benefit reductions for all enrollees—mostly by increasing enrollee cost sharing. We reported that three large fee-for-service plans had increased or introduced cost sharing features such as copayments or coinsurance for prescription drugs and deductibles for other services.
OPM officials informed us that FEHBP plans have implemented cost-containment strategies relating to prescription drugs, such as three-tiered cost sharing, comparable to private sector employers. However, OPM does not implement cost-containment strategies for retirees that do not also affect active workers. Similarly, other public sector plan sponsors, such as state governments, are starting to reduce benefit levels and implement cost-cutting mechanisms, including changes to prescription drug benefits. However, eliminating retiree health benefits entirely for current or future retirees does not appear to be as prevalent in the public sector as in the private sector. For example, a 2003 survey conducted by the Segal Company, a benefit consulting firm specializing in the public sector, reported that no state plan sponsor in its survey was considering eliminating retiree health coverage as a cost-containment strategy. A 2003 study prepared by Georgetown University for the Kaiser Family Foundation that collected survey data from 43 states and the District of Columbia also found that no state government had terminated subsidized health benefits for current or future retirees and no state government was planning to do so. However, the Georgetown study found that 24 of these states reported increased cost sharing in the past 2 years, while 13 had increased retiree premium shares in the past 2 years. A study released by AARP in July 2004 on state government retiree health benefits found that 11 states required Medicare-eligible retirees to pay the full amount of the premium. Almost all of the states in the Georgetown study cited prescription drugs as the most important driver behind the growth in state retiree health spending and, as a result, have taken specific steps to manage these costs, such as increasing cost sharing and implementing tiered benefit structures. The majority of states in the AARP study had three-tiered copayment benefits.
One public sector plan sponsor we interviewed is proposing significant changes to keep its retiree health benefits fund solvent that would vary the employer’s contribution toward retiree health care costs on the basis of the retiree’s age and years of service, rather than paying the full cost of coverage for those meeting the minimum age and service requirements. Benefit consultants and officials from other organizations we interviewed noted new pressures on public sector funding of retiree health care benefits as a result of standards adopted in 2004 by the Governmental Accounting Standards Board (GASB) that affect the reporting of postretirement benefit obligations for many public sector sponsors of employment-based retiree health coverage. Similar to FAS 106 for private sector employers, the new standards require public sector plan sponsors, including state governments, to accrue the costs of postretirement health care benefits during the years of service as opposed to reporting these costs on a pay-as-you-go basis. However, the GASB standards are not identical to those in the private sector, and the July 2004 AARP study noted that it is unclear whether the experience of FAS 106—and its frequently cited impact on the decrease in employment-based retiree health coverage—would directly translate to the public sector. While the study stated that the new GASB standards might encourage state governments to reduce retiree health benefit programs in order to reduce obligations, it also noted that these standards alone were not likely to cause major program changes. Regardless, benefit consultants and other officials we interviewed cited notable implications for public sector employers. For example, large unfunded obligations can affect bond ratings in the public sector, which affect these public sector entities’ ability to borrow money. 
One benefit consultant told us that its public sector clients are raising issues such as plan design, cost, financing, and the possible reduction of retiree health benefits in light of the new GASB standards. The provision of retiree health benefits in the public sector may also be affected by other factors, such as state budget deficits and state political pressures. At the time of our review, many employers and plan sponsors said they had not decided which MMA options they would implement for their Medicare-eligible retirees, but the primary option many sponsors were considering was the subsidy. Ten of the 15 plan sponsors we interviewed said that while undecided, they were considering the federal subsidy option for some or all of their Medicare-eligible retirees, while 2 other plan sponsors had chosen the subsidy option for all their Medicare-eligible retirees. Four plan sponsors we interviewed were concerned that because their benefits already had reached or soon would reach the caps they had set on their retiree health benefit obligations, they would be ineligible for the subsidy and therefore said that redesigning their benefits to wrap around Medicare would be prudent. In our random sample of 50 Fortune 500 employers, most that reported obligations for retiree health benefits indicated that they would choose the federal subsidy or other options, but others had not reported their final MMA decisions on their financial statements filed with the SEC as of November 2004. In addition, 2 plan sponsors we interviewed were considering Medicare Advantage plans, but these plan sponsors were waiting to see how the market for these developed. While plan sponsors generally expected to continue to maintain coverage levels for their retirees as they considered their MMA options, they acknowledged that cost pressures could cause them to reevaluate their benefits. 
If employers were not already providing prescription drug benefits to retirees, most benefit consultants and other experts we interviewed said that the MMA was not likely to prompt employers to begin providing coverage or supplementing the Medicare benefits. The 15 private and public sector sponsors of retiree health benefit plans we interviewed were considering their MMA options for prescription drug coverage, but few had decided which MMA options they would choose for all their Medicare-eligible retirees. Of the 15 plan sponsors we interviewed, 12 were Fortune 500 private sector employers. Two of the 12 had made a decision for all of their Medicare-eligible retirees, and 10 said they had not yet made their final decisions for some or all of their retirees and were assessing the implications associated with the MMA options. Officials from the three public sector sponsors of health benefit plans we interviewed—the federal government and two state retirement systems—said they were considering their options. Both private and public sector plan sponsors told us they anticipated making their final decisions by early 2005. In addition, officials we interviewed representing multiemployer plans told us that most multiemployer plans had not focused on the MMA options to the same extent as single-employer private sector plan sponsors, and therefore most were undecided about the options they would implement. As part of their deliberations, plan sponsors were considering several MMA options, including the federal subsidy option if they decided to provide their own prescription drug benefits for Medicare-eligible retirees; coordinating with part D by wrapping their prescription drug benefits around the Medicare part D benefit, thus providing secondary coverage; and several other options. In some cases, plan sponsors were considering implementing a combination of options for different groups of Medicare-eligible retirees.
Ten of the 15 private and public sector sponsors of employment-based retiree health benefits that we interviewed were considering the 28 percent federal subsidy for prescription drug costs for some, if not all, Medicare-eligible retirees, although they were in different stages of the decision-making process at the time of our interviews. Two private sector sponsors had chosen the subsidy option for all of their Medicare-eligible retirees. Three of the private and public sector sponsors, including OPM for FEHBP, said they would not or did not expect to choose the subsidy option. (See table 1.) Retiree health benefit plan designs and other circumstances affected plan sponsors’ decisions regarding the subsidy. In particular, whether a plan sponsor had implemented financial caps on retiree health benefit expenditures played a major role in the decision-making process. In addition, plan sponsors that negotiated retiree health benefits with unions said that they did not have as much flexibility to change these benefits prior to negotiations. Three of the private sector plan sponsors we interviewed said they would choose the subsidy option only for some of their Medicare-eligible retirees because of capped benefits, the role of unions, or both, as described in the following: One of the three plan sponsors capped health benefits for workers who retired after a specific date, so it offered richer uncapped benefits to those who retired before that date. This sponsor determined that the uncapped benefits would be actuarially equivalent. Therefore, this plan sponsor said it would choose the subsidy option for the uncapped benefits but was not certain that the capped benefits would be actuarially equivalent for purposes of the subsidy. Another of these sponsors offered many different prescription drug plan designs to retirees with collectively bargained benefits (union retirees) and those without collectively bargained benefits. 
This sponsor chose the subsidy option for all plans that met the actuarial equivalence test. The sponsor’s plans that were not likely to meet the actuarial equivalence test typically had financial caps. The third sponsor said it was fairly certain it would choose the subsidy option for its collectively bargained retiree prescription drug benefits. While both the collectively bargained and noncollectively bargained retiree benefits were capped, the unions had renegotiated higher capped amounts for the collectively bargained benefits. The next negotiation session with the primary union was scheduled for July 2006, thereby making it difficult to make changes to these benefits other than accepting the subsidy in the interim. The caps for the retirees with noncollectively bargained benefits would be reached sooner and were less likely to be actuarially equivalent. Therefore, this sponsor said it was considering other options for the retirees with noncollectively bargained benefits. Although they had not made any final decisions on the MMA options at the time of our interviews, 5 of the 12 private sector Fortune 500 sponsors of employment-based health benefit plans we interviewed were considering the subsidy option. Two of these 5 plan sponsors said they were likely to apply for the subsidy for their Medicare retirees. One of the 2—whose employees were partially unionized and which had not capped any of its retiree health benefits—said it did not “strongly consider” any other options during its deliberations. The other sponsor expected to apply for the subsidy for all of the prescription drug plans for Medicare-eligible retirees that met the actuarial equivalence test. At the time of our interviews, 3 of these 5 plan sponsors said they either needed additional information from CMS regarding actuarial equivalence or needed more time before they could make their final decisions about the subsidy option. 
Two large state sponsors of health benefits for Medicare-eligible retirees were considering the subsidy option along with others. OPM had not made any decisions at the time of our interview, but in written comments on a draft of this report it indicated that it did not expect to choose the federal subsidy for FEHBP. The subsidy option offers plan sponsors several advantages. Cost savings associated with the subsidy played a major role in the plan sponsors’ decision-making process. Several benefit consultants and plan sponsors we interviewed stressed the importance of cost savings when considering the MMA options. Most of the plan sponsors we interviewed considered the savings associated with the subsidy to be an advantage. For example, one plan sponsor estimated that it would reduce its accumulated postretirement benefit obligations by about $161 million just by choosing the subsidy option for one group of its Medicare-eligible retirees. Some plan sponsors and benefit consultants we interviewed said that most of the prescription drug expenditures for Medicare-eligible retirees would be eligible for the subsidy because most retirees incurred costs from $251 through $5,000, the range eligible for the subsidy as defined in the MMA. While Medicare-eligible retirees’ prescription drug expenditures could be paid by several different sources, employment-based coverage accounted for about 27 percent of total expenditures in 2001, while out-of-pocket payments accounted for about 37 percent, according to our analysis of MCBS. According to our projections of the estimated amount of Medicare-eligible retirees’ total prescription drug expenditures that employment-based plans would pay for and that beneficiaries would pay out-of-pocket in 2006, most of the expenditures from employment-based coverage and from out-of-pocket—about 75 percent—could be eligible for the subsidy (see fig. 5). 
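As a rough illustration of the subsidy arithmetic described above, the per-retiree amount can be sketched as 28 percent of allowable drug costs falling in the $251 through $5,000 window. This is a simplification for illustration only: the function name is ours, and the MMA's detailed rules on which costs count as "allowable" are not modeled.

```python
def federal_subsidy(allowable_drug_costs, threshold=250.0, cap=5000.0, rate=0.28):
    """Illustrative sketch of the 28 percent MMA retiree drug subsidy:
    only allowable costs from $251 through $5,000 per retiree are eligible.
    (Simplified; actual allowable-cost rules are not modeled.)"""
    eligible = max(0.0, min(allowable_drug_costs, cap) - threshold)
    return rate * eligible

# A retiree with $3,000 in allowable costs: 0.28 * (3,000 - 250) = $770
print(round(federal_subsidy(3000), 2))
```

On this simplified basis, the subsidy tops out at 0.28 times $4,750, or $1,330 per retiree per year, which is consistent with the observation that most retirees' expenditures fall within the eligible range.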
Preserving the benefits the plan sponsors currently provide and retaining the control over and flexibility of the benefits were also cited as advantages to choosing the subsidy option. Benefit consultants, plan sponsors, and others we interviewed said that it would be easier for beneficiaries if the benefits offered did not change. Choosing the subsidy option also gave plan sponsors the ability to maintain control over the benefits and their costs. In addition, preserving their current benefits allowed plan sponsors time to see how other MMA options would play out in the marketplace. For some plan sponsors, these advantages made the subsidy the easiest, most seamless, and least risky option to pursue. Several benefit consultants we interviewed said that to receive the subsidy, sponsors of employment-based retiree health plans would have to fulfill certain administrative reporting and record keeping requirements, as identified by CMS. For example, sponsors will have to apply for the subsidy no later than 90 days prior to the start of the calendar year, including providing an attestation regarding actuarial equivalence. Each application must include the names of all people enrolled in the sponsor’s drug plan to ensure that a sponsor is not receiving a subsidy for an individual who is enrolled in a part D prescription drug plan or a Medicare Advantage plan. The plan sponsor must also notify Medicare-eligible retirees and their spouses and dependents whether their retiree health plan provides “creditable coverage”—that is, generally whether the expected amount of paid claims under the plan sponsor’s prescription drug coverage is at least equal to that of the expected amount of paid claims under the standard part D coverage. 
This notice is important because retirees who do not enroll in part D when first eligible will be charged a penalty for late enrollment if they enroll after finding that their previous employment-based coverage did not meet CMS’s creditable coverage criteria. A special enrollment period will be provided, however, without a late enrollment penalty, when there is an involuntary loss of creditable coverage because, for example, an employer eliminates or reduces coverage. All plan sponsors choosing the subsidy will have to document prescription drug costs that fall within the MMA’s eligibility criteria. Although several benefit consultants saw the potential administrative requirements as a disadvantage of the subsidy, most of the plan sponsors we interviewed were not concerned about the subsidy’s proposed administrative requirements. For example, one plan sponsor told us it was less concerned about how it would manage the subsidy’s administrative requirements than about how it would manage relations with retirees if it changed prescription drug benefits under other MMA options. At the time of our interviews, however, some plan sponsors said they were not fully aware of or had not considered all of the administrative requirements. Besides cost savings, ease for retirees, and administrative requirements, plan sponsors we interviewed said they also considered other factors when making decisions about the subsidy. For example, plan sponsors considered as part of their decision-making process possible negative press, potential for lawsuits, relations and communications with Medicare-eligible retirees, benefit equity between Medicare-eligible retirees and retirees not yet eligible for Medicare, future union negotiations, hiring and retention of workers, marketplace competition, and uncertainty about CMS rules. 
One alternative to the federal subsidy option for plan sponsors that provide prescription drug coverage to Medicare-eligible retirees is to coordinate with part D by wrapping their benefits around the new Medicare part D benefit, covering some drug costs not paid by Medicare. Plan sponsors would offer coverage wrapping around Medicare part D rather than providing their own comprehensive prescription drug coverage. Prescription drug costs not covered by Medicare part D that plan sponsors could cover might include the $250 deductible or the retirees’ costs within the coverage gap (i.e., the doughnut hole) until the Medicare catastrophic coverage begins paying for most drug costs. Several plan sponsors we interviewed said they were considering this option for Medicare-eligible retirees along with the subsidy and other options as part of their overall MMA deliberations. For example, one plan sponsor said it was considering wrapping its drug benefits around the part D benefit as its primary option for all its Medicare-eligible retirees because it had set financial caps on its retiree health benefit obligations that would eventually render it ineligible for the subsidy. Three other plan sponsors told us they were considering wrapping their prescription drug benefits around the part D benefit for those Medicare-eligible retirees for whom they could not qualify to receive the federal subsidy. Furthermore, OPM officials said that wrapping prescription drug benefits around the part D benefit could be more complex for the federal government than for employers in the private sector because, in contrast to many large private sector employers, FEHBP does not provide different benefits for active workers and for retirees. Some plan sponsors and benefit consultants we interviewed expected that wrapping prescription drug benefits offered to Medicare-eligible retirees around the new Medicare part D benefit would provide several advantages. 
For example, some benefit consultants said that this option could save more money than the subsidy. However, they said plan sponsors would have to do a cost/benefit analysis to make this determination. Also, plan sponsors could continue to provide the same level of benefits to Medicare-eligible retirees in coordination with the Medicare part D coverage, thereby maintaining benefit continuity. Conceptually, sponsors of employment-based health benefit plans and benefit consultants generally viewed the option to wrap prescription drug benefits around the part D benefit as being similar to how most now coordinate other benefits with Medicare parts A and B. Some sponsors we interviewed planned to rely on their pharmacy benefit managers, benefit consultants, and others for assistance in administering the benefit. However, plan sponsors and benefit consultants we interviewed were waiting to learn more from CMS about how the benefit coordination would operate. As a result, at the time of our interviews, employers and others had questions about how prescription drug benefit designs would wrap around the Medicare part D benefit. Wrapping benefits around the Medicare part D benefit also could present some administrative and other challenges for plan sponsors. Two benefit consultants we interviewed told us that wrapping benefits around the different Medicare part D plans, such as Medicare Advantage or a private prescription drug plan, in which retirees might enroll could add to the administrative complexity. Also, according to one benefit consultant and CMS officials, while coordinating with the Medicare program can be a fairly straightforward task for part A and B services, part D coordination might be more difficult because each Medicare-eligible retiree’s true out-of-pocket costs must be determined. Part D requires that Medicare beneficiaries must have $3,600 in out-of-pocket expenses for covered drugs in 2006 before federal catastrophic coverage begins. 
Generally, beneficiaries’ expenses reimbursed by other sources such as employment-based plans are not counted. This can become complicated for plan sponsors that have different copayment and coinsurance requirements for different groups of retirees. Another possible challenge for plan sponsors in wrapping around Medicare part D coverage is financial. Plan sponsors that supplement the Medicare part D benefit could spend thousands of dollars for each retiree before the Medicare catastrophic coverage begins. Two plan sponsors and several benefit consultants were concerned about how employment-based drug benefits that wrap around the Medicare part D benefit would affect the out-of-pocket payment requirements for beneficiaries. For example, if a plan sponsor covered 75 percent of a Medicare-eligible retiree’s expenditures within the coverage gap (i.e., the doughnut hole), the plan sponsor would have to spend $8,550 before the retiree reached $3,600 in out-of-pocket expenditures as required by the MMA. Specifically, under this wraparound scenario, the Medicare-eligible retiree would spend $3,600 out-of-pocket—$250 for the part D deductible, $500 in coinsurance for the next $2,000 in expenditures, and $2,850 for the expenses not covered by Medicare; Medicare would spend $1,500—75 percent of the next $2,000 in expenditures after the deductible is met; and the plan sponsor would spend $8,550. This would require a total of $13,650 in expenditures from all sources before the retiree would reach the amount—that is, combined Medicare and beneficiary expenditures equal to $5,100—at which Medicare part D catastrophic coverage would begin. Under the MMA, sponsors of employment-based health benefit plans for Medicare-eligible retirees have several other options. For example, plan sponsors could contract with privately marketed prescription drug plans and Medicare Advantage plans to cover the part D benefit, or they could become prescription drug plans or Medicare Advantage plans. 
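The wraparound figures cited above can be reproduced with a short calculation using the 2006 standard part D parameters given in the report; the variable names are ours, and the 75 percent sponsor share in the gap is the hypothetical scenario from the example.

```python
# 2006 standard part D parameters as cited in the report
DEDUCTIBLE = 250.0        # retiree pays the first $250
COINS_BAND = 2000.0       # next $2,000: retiree pays 25%, Medicare pays 75%
TROOP_LIMIT = 3600.0      # true out-of-pocket required before catastrophic coverage
SPONSOR_GAP_SHARE = 0.75  # wraparound scenario: sponsor pays 75% in the coverage gap

retiree_pre_gap = DEDUCTIBLE + 0.25 * COINS_BAND   # $750 paid by the retiree
medicare_pre_gap = 0.75 * COINS_BAND               # $1,500 paid by Medicare

# Only the retiree's own 25% share of gap spending counts toward the $3,600 limit,
# so total gap spending must be four times the remaining out-of-pocket amount.
gap_oop_needed = TROOP_LIMIT - retiree_pre_gap              # $2,850 still to pay
gap_total = gap_oop_needed / (1.0 - SPONSOR_GAP_SHARE)      # $11,400 total gap spending
sponsor_spend = SPONSOR_GAP_SHARE * gap_total               # $8,550 paid by the sponsor

total_spending = DEDUCTIBLE + COINS_BAND + gap_total        # $13,650 from all sources
print(sponsor_spend, total_spending)
```

The calculation makes the report's point concrete: because sponsor payments do not count toward the retiree's true out-of-pocket total, a generous wraparound plan multiplies the spending needed to trigger catastrophic coverage.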
In addition, while not allowed for current Medicare-eligible retirees, plan sponsors could establish HSAs for their active workers, who could use these benefits when they retire. Several benefit consultants told us their clients might consider these other MMA options, and some plan sponsors we interviewed were doing so. For example, four benefit consultants we interviewed said that Medicare Advantage plans could offer advantages to plan sponsors. Two of these benefit consultants said that having Medicare-eligible retirees enroll in Medicare Advantage plans would shift the financial risk away from the plan sponsor to the Medicare Advantage plan. The other two said that Medicare Advantage plans could help to reduce costs, and they also believed that having Medicare-eligible retirees enroll in these plans could help reduce administrative burdens associated with the Medicare part D benefit. Two benefit consultants noted that these plans might not be available in all parts of the country, but others said that increased federal reimbursement rates established as part of the MMA might cause more private plans to enter this market in the future. In addition, two benefit consultants commented that their clients might be more interested in Medicare Advantage once the market for these plans is established. During our interviews, some Fortune 500 plan sponsors generally discussed Medicare Advantage plans as an option they might consider. While several plan sponsors said that none of their Medicare retirees were enrolled in a health maintenance organization (HMO), two said that HMOs might be a viable option in the future as long as managed care plans continued to participate in the Medicare program. One plan sponsor considered Medicare Advantage plans as an option during its deliberations but determined that based on its past experience with Medicare+Choice, it did not provide many savings. 
One benefit consultant we interviewed said that plan sponsors might be reluctant to form their own Medicare Advantage plans because many HMOs left the Medicare+Choice program in the past. However, new options that had not yet been offered under the Medicare Advantage program might also be attractive to employers with retirees living all across the country. CMS officials said that they are currently developing the waivers that plan sponsors would need to form their own Medicare Advantage plans. The MMA also established HSAs, which receive preferential tax treatment and are used in conjunction with high deductible health insurance plans. The HSA can be used to pay for qualified medical expenses not covered by insurance or other reimbursements. Although HSAs cannot be set up to fund health benefits for current Medicare-eligible retirees, they can be a savings vehicle for workers to pay the cost of their health care coverage when they retire. However, some benefit experts said it is unlikely that enough money would accumulate in these accounts for retirees, especially for older workers, to benefit substantially from them. Six of the 15 plan sponsors we interviewed said they were exploring how HSAs would integrate into their overall benefit programs or were considering them for the future. According to financial statements filed with the SEC as of November 2004, most of the Fortune 500 employers we reviewed that reported postretirement benefit obligations (27 of 39) reflected the effect of the MMA options on these obligations. For example, 3 of these plan sponsors each reported reductions in accumulated obligations of over $100 million. The other 12 employers did not report on their MMA decisions in these financial statements. (See table 2.) Thirteen of the 27 plan sponsors that reflected the effect of the MMA options reported they would be choosing the subsidy option, which reduces their postretirement benefit obligations and other expenditures. 
However, even among these 13 plan sponsors, 3 reported that they would be choosing the subsidy option for some but not all of their retirees. They had not reported what options they would pursue for the remaining retirees. While the remaining 14 plan sponsors addressed the MMA options in their financial statements, their MMA decisions for Medicare-eligible retirees were not as clear. These plan sponsors generally reported that the MMA options either reduced their postretirement benefit obligations or that the changes they made because of the MMA were not expected to have a material impact on their postretirement benefit obligations. Twelve of the 39 employers that reported sponsoring retiree health benefit plans and having postretirement benefit obligations did not report on their MMA decisions in financial statements filed as of November 2004. One of these 12 plan sponsors reported that it had determined that its prescription drug benefits were not actuarially equivalent to the Medicare part D benefit and could not take advantage of the subsidy option. This plan sponsor reported that it was evaluating the impact of other MMA options. The remaining 11 plan sponsors did not report on the impact of the MMA on their postretirement obligations; 4 of these 11 plan sponsors did not expect any changes they made to be material. In interviews, sponsors of health plans that included prescription drug benefits for Medicare-eligible retirees told us they did not expect to reduce these benefits in response to the new Medicare part D benefit and the MMA options. Although one benefit consultant said that some of his clients might consider reducing benefits in response to the MMA, plan sponsors we interviewed that were considering choosing the subsidy option said they did not expect to reduce their benefits in response to the MMA, even though some could do so and still qualify for the subsidy. 
Plan sponsors considering wrapping their benefits around the Medicare part D benefit were focused on wrapping benefits in a way that would maintain, not restrict, the current level of benefits. According to a benefit consultant, many employers who sponsored retiree health benefit plans supplemented Medicare parts A and B with additional benefits and might also do so for Medicare part D. However, plan sponsors change benefits for different reasons. Even though they said they were not considering a reduction in prescription drug benefits in response to the MMA, some plan sponsors and benefit consultants said that ongoing cost pressures prompt plan sponsors to constantly review and, if necessary, adjust their benefits for future retirees. Two of the 12 private sector employers that sponsored retiree health benefits told us that during their deliberations on the MMA options they had considered, but dismissed, elimination of some or all retiree prescription drug benefits as one of several options. One of these plan sponsors said eliminating prescription drug coverage would not be realistic, especially with collectively bargained benefits. The other plan sponsor said it was easier to continue to provide the benefits to this declining population—it no longer offered retiree health benefits to new hires—than to contend with the negative press and relations with current retirees and active workers. None of the three public sector sponsors of health benefits for Medicare-eligible retirees we interviewed expected to reduce or eliminate prescription drug benefits in response to the MMA options. OPM officials said that they did not plan to decrease or eliminate any prescription drug coverage for Medicare-eligible retirees in response to the MMA. These officials, who administer health benefits for federal employees and retirees, noted that eliminating prescription drug benefits would not be a politically realistic option. 
An official at a public sector plan that provides health benefits to Medicare-eligible retirees in one state said that the state also was not planning to reduce its benefits in response to the MMA. However, the state had already planned to make extensive changes to its benefits in response to rising health care costs about a year before Congress passed the MMA, and eliminating or further reducing benefits for public sector retirees was not an option currently being considered. Few employers, if any, that were not sponsoring retiree prescription drug benefits were expected to begin sponsoring them in response to the MMA. Benefit consultants and experts we interviewed consistently agreed that it was doubtful that an employer would want to assume new benefit obligations for retiree health or prescription drugs if it did not already do so, regardless of the MMA options. Furthermore, the availability of Medicare’s prescription drug benefits in 2006 might give employers more of an incentive not to start to provide these benefits because prescription drug benefits would be available without the employer’s participation. Ultimately, benefit consultants and experts told us this decision would vary by employer. An employer’s particular financial, business, and competitive situation could affect the employer’s decision to provide any new benefits or to provide supplemental coverage—pay the part D premium, cover out-of-pocket expenses, or consider a Medicare Advantage plan as an option—to Medicare-eligible retirees in response to the MMA. According to officials at organizations representing small and midsized employers and other experts, the MMA is not likely to encourage such employers to add to their operating costs by beginning to offer retiree health benefits or supplementing the prescription drug benefits available through Medicare part D. These employers are more concerned about providing health benefits to active workers rather than to retirees. 
However, as with large employers, employers’ specific circumstances drive their business and benefit decisions. Therefore, according to these officials, while there may be isolated individual employers that might begin to provide retiree health benefits or prescription drug coverage supplementing the benefits established by the MMA, they would likely be the exception rather than the rule. The provision of employment-based retiree health benefits for Medicare beneficiaries continues to be an issue for evaluation and change with employers and other plan sponsors even as they begin to choose options available as a result of the Medicare drug benefit enacted as part of the MMA. The long-term decline in the percentage of employers offering retiree health benefits to Medicare-eligible individuals has leveled off in recent years. Plan sponsors have continued to modify their requirements for eligibility, benefits, and cost sharing in an effort to contain cost growth. As employers and other plan sponsors choose options as provided under the MMA, they likely will continue to face rising health care costs, particularly for prescription drugs, that will increase their obligations for retiree health benefits. The Medicare drug benefit is expected to provide some insulation from these cost increases for plans that qualify and employers that receive a subsidy for a portion of their drug expenditures or that choose to allow Medicare to bear primary responsibility for these costs for Medicare-eligible retirees. Nonetheless, even after employers select a particular option in response to the Medicare drug benefit, it is likely that they will continue to reshape their retiree health benefits in response to cost pressures, as they have for the last decade. However, few employers not already offering retiree health or prescription drug coverage are likely to begin doing so as a result of the options available under the MMA. 
We provided a draft of this report to CMS, OPM, and experts on retiree health benefits at the Employee Benefits Research Institute, Health Research and Educational Trust, Hewitt Associates, and Mercer Human Resource Consulting. In its written comments, CMS generally agreed with our findings. CMS stated that the new Medicare drug benefit and the subsidy can help plan sponsors continue to provide drug coverage to Medicare-eligible retirees. Consistent with our finding that plan sponsors intend to continue offering prescription drug benefits, CMS cited a survey released in January 2005 that indicated that most plan sponsors intended to continue offering prescription drug coverage after the Medicare part D benefit begins. CMS confirmed that many plan sponsors are still considering their options under the MMA. CMS also indicated that some employers may reevaluate their retiree benefits and that some plan sponsors may begin to offer prescription drug benefits. In its comments, CMS noted that it had recently released its final rule implementing the Medicare part D benefit and plan sponsor options. CMS also noted that it plans to provide additional guidance to respond to issues raised by comments on the proposed rule, including guidance on actuarial equivalence. CMS acknowledged that plan sponsors need to have timely guidance because of the complexity of the process, and CMS intends to continue to conduct outreach and education efforts on the options for retirees’ prescription drug coverage available to plan sponsors. (CMS’s comments are reprinted in app. II.) In its written comments, OPM highlighted its role in limiting premium increases while continuing to provide the same level of health insurance coverage at the same premium rates for retirees that it provides to active federal employees. 
While at the time of our interviews OPM officials indicated that OPM was considering the federal subsidy for FEHBP, in its written comments the agency said that it does not expect to choose the federal subsidy option. We revised the report to reflect that OPM does not expect to choose the subsidy option. (OPM’s comments are reprinted in app. III.) The experts who reviewed the draft report generally indicated that the report provided a comprehensive and accurate portrayal of employment-based retiree health benefits and prescription drug benefits under the MMA. Two of the experts noted that while they concurred that the percentage of employers offering retiree health benefits has leveled off in recent years, this finding may understate the impact of other changes that reduce the extent of retiree health benefits. They highlighted other changes, as we cited in the draft report, such as reduced eligibility for future retirees, increased cost sharing and premium contributions, and financial caps. We agree that as noted in the report, these changes contribute to an overall erosion in the value and availability of retiree health benefits. CMS and several of these experts also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Administrator of CMS, the Director of OPM, and interested congressional committees. We will also provide copies to others on request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7118. Another contact and staff acknowledgments are listed in appendix IV. 
To identify trends in employment-based retiree health benefits, we analyzed data from (1) two annual private sector surveys of employer health benefits conducted since the early 1990s through 2004, (2) one private sector survey on retiree health benefits conducted in 2004, and (3) three surveys conducted by the federal government that included information on Medicare beneficiaries and employment-based health benefits. We also reviewed financial data for fiscal years 2001 through 2003 that a sample of Fortune 500 employers submitted to the Securities and Exchange Commission (SEC) to identify changes in large employers’ retiree health benefit obligations. To supplement the trend and financial data and to identify which options for prescription drug coverage provided under the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) sponsors of employment-based retiree health benefits said they planned to implement, we interviewed benefit consultants, private and public sector sponsors of employment-based retiree health benefits, officials at associations and groups representing large and small employers and others. In addition, we reviewed studies and literature addressing retiree health benefits. We conducted our work from April 2004 through February 2005 in accordance with generally accepted government auditing standards. We relied on data from two annual surveys of employment-based health benefit plans. The Kaiser Family Foundation and the Health Research and Educational Trust (Kaiser/HRET) and Mercer Human Resource Consulting each conduct an annual survey of employment-based health benefits, including a section on retiree health benefits. Each survey has been conducted for at least the past decade, including 2004. We also used data from a survey focused solely on 2004 retiree health benefits that the Kaiser Family Foundation and Hewitt Associates (Kaiser/Hewitt) conducted in 2004. 
For each of these surveys of employment-based benefits, we reviewed the survey instruments and discussed the data’s reliability with the sponsors’ researchers and determined that the data were sufficiently reliable for our purposes. Since 1999, Kaiser/HRET has surveyed a sample of employers each year through telephone interviews with human resource and benefits managers and published the results in its annual report—Employer Health Benefits. Kaiser/HRET selects a random sample from a Dun & Bradstreet list of private and public sector employers with three or more employees, stratified by industry and employer size. It attempts to repeat interviews with some of the same employers that responded in prior years. For the most recently completed annual survey, conducted from January to May 2004, 1,925 employers completed the full survey, giving the survey a 50 percent response rate. In addition, Kaiser/HRET asked at least one question of all employers it contacted—“Does your company offer or contribute to a health insurance program as a benefit to your employees?”—to which an additional 1,092 employers, or cumulatively about 78 percent of the sample, responded. By using statistical weights, Kaiser/HRET is able to project its results nationwide. Kaiser/HRET uses the following definitions for employer size: (1) small—3 to 199 employees—and (2) large—200 or more employees. In some cases, Kaiser/HRET reported information for additional categories of small and large employer sizes. Since 1993, Mercer has surveyed a stratified random sample of employers each year through mail questionnaires and telephone interviews and published the results in its annual report—National Survey of Employer-Sponsored Health Plans. Mercer selects a random sample of private sector employers from a Dun & Bradstreet database, stratified into eight categories, and randomly selects public sector employers—state, county, and local governments—from the Census of Governments. 
The random sample of private sector and government employers represents employers with 10 or more employees. Mercer conducts the survey by telephone for employers with 10 to 499 employees and mails questionnaires to employers with 500 or more employees. Mercer's database contains information from 2,981 employers who sponsor health plans. By using statistical weights, Mercer projects its results nationwide and for four geographic regions. The Mercer survey report contains information for large employers—500 or more employees—and for categories of large employers with certain numbers of employees as well as information for small employers (fewer than 500 employees). We have excluded from our analysis Mercer's 2002 data on the percentage of employers that offer retiree health plans because Mercer stated in its 2003 survey report that the 2002 data were not comparable to data collected in other years because of a wording change on the 2002 survey questionnaire. In 2003, Mercer modified the survey questionnaire again to make the data comparable to prior years (except 2002). The Kaiser/Hewitt study—Current Trends and Future Outlook for Retiree Health Benefits: Findings from the Kaiser/Hewitt 2004 Survey on Retiree Health Benefits—is based on a nonrandom sample of employers because there is no database that identifies all private sector employers offering retiree health benefits from which a random sample could be drawn. Kaiser/Hewitt used previous Hewitt survey respondents and its proprietary client database—a list of private sector employers potentially offering retiree health benefits. Kaiser/Hewitt conducted the survey online from May 2004 through September 2004 and obtained data from 333 large (1,000 or more employees) employers. According to information provided by Hewitt, these employers included about one-third of the 100 Fortune 500 companies with the largest retiree health obligations in 2003. 
Because the sample is nonrandom and does not include the same sample of companies and plans each year, survey results for 2004 cannot be compared to results from prior years. We analyzed three federal surveys containing information either on Medicare beneficiaries or on the percentage of public sector employers that offer retiree health benefits. We obtained information on retired Medicare beneficiaries’ sources of health benefits coverage, including former employers and unions, from the Current Population Survey (CPS), conducted by the U.S. Census Bureau. We obtained data on the sources of coverage for all health care expenditures and for prescription drug expenditures for retired Medicare beneficiaries from the Medicare Current Beneficiary Survey (MCBS), sponsored by the Centers for Medicare & Medicaid Services (CMS). We obtained data on the percentage of public sector employers that offer retiree health benefits from the Medical Expenditure Panel Survey (MEPS), sponsored by the Agency for Healthcare Research and Quality. Each of these federal surveys is widely used for policy research, and we reviewed documentation on the surveys to determine that they were sufficiently reliable for our purposes. We analyzed the Annual Supplement of the CPS for information on the demographic characteristics of Medicare-eligible retirees and their access to insurance. The survey is based on a sample designed to represent a cross section of the nation’s civilian noninstitutionalized population. In 2004, about 84,500 households were included in the sample for the survey, a significant increase in sample size from about 60,000 households prior to 2002. The total response rate for the 2004 CPS Annual Supplement was about 84 percent. Because the CPS is based on a sample, any estimates derived from the survey are subject to sampling errors. 
A sampling error indicates how closely the results from a particular sample would be reproduced if a complete count of the population were taken with the same measurement methods. To minimize the chances of citing differences that could be attributable to sampling errors, we present only those differences that were statistically significant at the 95 percent confidence level. The CPS asked whether a respondent was covered by employer- or union-sponsored, Medicare, Medicaid, private individual, or certain other types of health insurance in the last year. The CPS questions that we used for employment status, such as whether an individual is retired, are similar to the questions on insurance status. Respondents were considered employed if they worked at all in the previous year and not employed only if they did not work at all during the previous year. The CPS asked whether individuals had been provided employment-based insurance "in their own name" or as dependents of other policyholders. We selected Medicare-eligible retirees aged 65 and older who had employment-based health insurance coverage in their own names because this coverage could most directly be considered health coverage from a former employer. For these individuals, we also identified any retired Medicare-eligible dependents aged 65 or older, such as a spouse, who were linked to this policy. We used two criteria to determine that these policies were linked to the primary policyholder: (1) the dependent lived in the same household and had the same family type as the primary policyholder and (2) the dependent had employment-based health insurance coverage that was "not in his or her own name." MCBS is a nationally representative sample of Medicare beneficiaries sponsored by CMS. The survey is designed to determine for Medicare beneficiaries (1) expenditures and payment sources for all health care services, including noncovered services, and (2) all types of health insurance coverage. 
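The CPS sampling-error discussion above can be illustrated with a minimal sketch: a normal-approximation 95 percent confidence interval for a survey-estimated proportion. The function name, the 0.35 estimate, and the design-effect parameter are assumptions for illustration only, not CPS methodology or results.

```python
import math

# Illustrative only: the figures below are hypothetical, not CPS estimates.
def proportion_ci_95(p_hat: float, n: int, design_effect: float = 1.0):
    """Return (low, high) bounds of a normal-approximation 95% confidence
    interval for a proportion; design_effect > 1 widens the interval to
    stand in for a complex (stratified/clustered) sample design."""
    se = math.sqrt(design_effect * p_hat * (1 - p_hat) / n)
    margin = 1.96 * se  # z-score for a two-sided 95 percent interval
    return p_hat - margin, p_hat + margin

# n matches the roughly 84,500 households in the 2004 CPS sample;
# the 35 percent coverage estimate is invented for the example.
low, high = proportion_ci_95(p_hat=0.35, n=84_500)
```

A difference between two subgroups would be called statistically significant at the 95 percent level, in this simplified view, only when the intervals of the two estimates do not overlap the difference of zero.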
The survey also relates coverage to payment sources. The sample represents 16,315 Medicare beneficiaries from CMS's enrollment files who are interviewed three times a year at 4-month intervals. The complete interview cycle for a respondent consists of 12 interviews over 4 years. Response rates for initial interviews ranged from about 85 to 89 percent. After completing a first interview, individuals had a response rate of 95 percent or more in subsequent interviews. Interview data are linked to Medicare claims and other administrative data, and sample data are weighted so that results can be projected to the entire Medicare population. The MCBS Cost and Use file links Medicare claims to survey-reported events and provides expenditure and payment source data on all health care services, including those not covered by Medicare. Therefore, this file contains data on Medicare beneficiaries' expenditures and sources of coverage for prescription drugs. Among other items, the prescription drug data include the following payment source categories: Medicare, Medicaid, health maintenance organizations (HMO), Medicare HMO, employment-based insurance, individually purchased insurance, unknown, out-of-pocket, discounts, and other. We analyzed prescription drug expenditure data for retired Medicare beneficiaries aged 65 and older who had employment-based health coverage in 2001, the most current data available at the time we did our analysis. We extrapolated these data to 2006—when the Medicare part D benefit begins—using projections based on National Health Care Expenditures per capita data developed by CMS to provide estimates of prescription drug expenditures paid by employment-based insurance or paid out-of-pocket for retired Medicare beneficiaries with employment-based insurance. We did not make adjustments to reflect significant changes in payment sources for prescription drug coverage once the Medicare part D benefit begins in 2006. 
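The extrapolation step described above can be sketched as compounding projected annual growth in per capita drug spending onto the 2001 base-year amounts. Every number in this sketch is an invented placeholder, not CMS's actual National Health Expenditure projections or GAO's estimates.

```python
# Hypothetical sketch: carry 2001 per-beneficiary prescription drug spending
# forward to 2006 with assumed per capita growth rates. All values invented.
spending_2001 = {"employment_based": 1_200.0, "out_of_pocket": 600.0}  # $/beneficiary

# assumed projected annual per capita growth rates for 2002-2006
growth_by_year = {2002: 0.11, 2003: 0.10, 2004: 0.09, 2005: 0.09, 2006: 0.08}

# compound the annual rates into a single 2001-to-2006 growth factor
factor = 1.0
for year in sorted(growth_by_year):
    factor *= 1.0 + growth_by_year[year]

spending_2006 = {source: round(amount * factor, 2)
                 for source, amount in spending_2001.items()}
```

Holding the payment-source shares fixed, as the text notes GAO did, means the employment-based share of the projected 2006 total is the same as in 2001.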
For employers that elect to continue covering prescription drugs, these projections provide an estimate of the share of these prescription drug expenditures covered that could be eligible for the MMA subsidy. MEPS, sponsored by the Agency for Healthcare Research and Quality, consists of four surveys and is designed to provide nationally representative data on health care use and expenditures for U.S. civilian noninstitutionalized individuals. We used data from the MEPS Insurance Component, one of the four surveys, to identify the percentage of state entities that offered retiree health benefits in 1998 and 2002. Insurance Component data are collected through two samples. The first, known as the “household sample,” is a sample of employers and other insurance providers (such as unions and insurance companies) that were identified by respondents in the MEPS Household Component, another of the four surveys, as their source of health insurance. The second sample, known as the “list sample,” is drawn from separate lists of private and public employers. The combined surveys provide a nationally representative sample of employers. The target size of the list sample is approximately 40,000 employers each year. The response rate for the public sector MEPS Insurance Component was about 88 percent in 2002. We reviewed selected financial data for a stratified random sample of 2003 Fortune 500 employers, which is a list of the U.S. corporations with the highest annual revenues. First, we stratified the Fortune 500 list into five groups of 100 in descending order of revenues. We then randomly selected 10 Fortune 500 employers from each of the five groups, for a total of 50 employers. To identify the 50 employers’ postretirement benefit obligations, we reviewed the annual financial statements (Form 10-K) that these employers submitted to the SEC. We reviewed the Form 10-K that each employer submitted for its most recent fiscal year, ending in 2003 or early in 2004. 
Then, to identify each employer’s postretirement benefit obligations for the two previous fiscal years, we reviewed the Form 10-K filed in either 2002 or 2003. To identify the types of changes these employers planned to make to their postretirement benefits in light of the MMA, we reviewed the latest quarterly financial statements (Form 10-Q) that employers submitted to the SEC, most as of November 2004. We interviewed representatives of six large employer benefit consulting firms. Benefit consultants help their clients, which include private sector employers, public sector employers, or both, develop and implement human resource programs, including retiree health benefit plans. While most of these benefit consulting firms’ clients were large Fortune 500 or Fortune 1,000 employers, some also had smaller employers as clients. One benefit consulting firm that we interviewed, in particular, provided actuarial, employee benefit, and other services to a range of public sector clients, including state and local governments, statewide retirement systems and health plans, and federal government agencies. It also provided human resources services to multiemployer plans. To learn more about retiree health benefit trends and MMA options from large private sector plan sponsors, we interviewed 12 Fortune 500 employers that provided retiree health benefits. From the stratified random sample of 50 Fortune 500 employers selected for a financial data review, we judgmentally selected 10 employers for interviews. We interviewed at least 1 employer from each of the five groups of 100 Fortune 500 employers that were stratified on the basis of annual revenues. In addition to considering revenues, where data were available, we considered each employer’s industry, number of employees, postretirement benefit obligations, preliminary MMA option decision as reported on its annual Form 10-K, and union presence when making our selection. 
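The two-step draw described above (stratify the revenue-ranked Fortune 500 list into five groups of 100, then randomly select 10 employers from each group) can be sketched as follows; the placeholder company names and fixed seed are illustrative, not GAO's actual sample.

```python
import random

# Hypothetical stand-in for the revenue-ranked Fortune 500 list
fortune_500 = [f"company_{rank:03d}" for rank in range(1, 501)]

# five strata of 100 companies each, in descending order of revenues
strata = [fortune_500[i:i + 100] for i in range(0, 500, 100)]

rng = random.Random(2003)  # fixed seed so the illustration is reproducible
# draw 10 employers from each stratum, for a total sample of 50
sample = [employer for stratum in strata for employer in rng.sample(stratum, 10)]
```

Because each stratum contributes the same number of employers, the sample is spread evenly across the revenue range rather than dominated by the largest firms.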
We also interviewed officials at two additional Fortune 500 employers at the recommendation of a benefit consultant. While small and midsized employers are less likely than large employers to offer retiree health benefits, we also assessed small and midsized employers’ preliminary reactions to the MMA options. We relied primarily on discussions with officials at two organizations representing the interests of small and midsized employers—the National Federation of Independent Business and the United States Chamber of Commerce—and benefit consultants. To learn more about retiree health benefit trends and MMA options at public sector plan sponsors, we interviewed officials at the Office of Personnel Management (OPM), two state retirement systems, and one association. OPM administers the Federal Employees Health Benefits Program—the country’s largest employment-based health plan. We judgmentally selected two large states’ retiree health benefits systems on the basis of a review of selected state data and referrals from a benefit consultant that works with public sector clients. We also interviewed officials at the National Conference on Public Employee Retirement Systems and reviewed available studies on retiree health benefits in the public sector. To obtain broader-based information about retiree health benefit trends and MMA options, we interviewed officials at several other groups and associations. Specifically, we interviewed the President of the National Business Group on Health and the Director of the Health Research and Education Program of the Employee Benefit Research Institute to obtain more information about large private sector employers. We also interviewed officials from the American Academy of Actuaries, the Kaiser Family Foundation, the American Federation of Labor and Congress of Industrial Organizations, and the National Coordinating Committee for Multiemployer Plans. 
Finally, we reviewed other available literature on retiree health benefit trends, cost-containment strategies, and plan sponsors’ likely responses to MMA options. Laura Sutton Elsberg, Joseph A. Petko, Kevin Dietz, Elizabeth T. Morrison, and Suzanne Worth made key contributions to this report. | The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) created a prescription drug benefit for beneficiaries, called Medicare part D, beginning in January 2006. The MMA included incentives for sponsors of employment-based retiree health plans to offer prescription drug benefits to Medicare-eligible retirees, such as a federal subsidy when sponsors provide benefits meeting certain MMA requirements. Plan sponsors cannot receive a subsidy for retired Medicare beneficiaries who enroll in part D. In response to an MMA mandate, GAO determined (1) the trends in employment-based retiree health coverage prior to the MMA and (2) which MMA prescription drug options plan sponsors said they would pursue and the effect these options might have on retiree health benefits. GAO identified trends using data from federal and private sector surveys of employers' health benefit plans and financial statements of 50 randomly selected Fortune 500 employers. Where data for Medicare-eligible retirees were not available, GAO reported data for all retirees, including Medicare-eligible retirees. To obtain plan sponsors' views about options they were likely to pursue, GAO reviewed the 50 employers' financial reports and interviewed benefit consultants; private and public sector plan sponsors, including the Office of Personnel Management for federal employees' health benefits; and other experts. 
A long-term decline in the percentage of employers offering retiree health coverage has leveled off in recent years, but retirees face an increasing share of costs, eligibility restrictions, and benefit changes that contribute to an overall erosion in the value and availability of coverage. Although the percentages and time frames differed, two employer benefit surveys showed that the percentage of employers offering health coverage to retirees has declined since the early 1990s; this trend, however, has leveled off. The cost to provide retiree health coverage, including coverage for Medicare-eligible retirees, has increased significantly: one employer benefit survey cited double-digit increases each year from 2000 through 2003. Prescription drugs for Medicare-eligible retirees constituted a large share of retiree health costs. Employers and other plan sponsors have used various strategies to limit overall benefit cost growth that included increasing retiree cost sharing and premiums, restricting eligibility for benefits, placing financial caps on health care expenditures, and revising prescription drug benefits. Many plan sponsors had not made final decisions about which MMA prescription drug options they would choose for their Medicare-eligible retirees at the time of GAO's review. Specifically, 13 of the 15 private and public plan sponsors GAO interviewed were undecided for some or all retirees. However, most plan sponsors interviewed had chosen the federal subsidy option for some or all retirees or were considering the subsidy as one of several options. Alternatively, some plan sponsors that had set caps on their retiree health benefit obligations were considering supplementing (known as "wrapping around") the new Medicare prescription drug benefit for some or all retirees rather than providing their own comprehensive prescription drug coverage in lieu of the Medicare drug benefit. 
Also, some plan sponsors and benefit consultants said they were waiting to see how the market for other MMA options, such as Medicare Advantage plans, develops. About two-thirds of financial statements GAO reviewed for Fortune 500 employers reporting obligations for retiree health benefits had begun to reflect reduced obligations resulting from the MMA options. While plan sponsors contacted said they did not anticipate reducing their drug coverage in view of new coverage offered through the MMA, increasing health care costs might cause them to do so in the future. Benefit consultants and other experts interviewed said that the MMA was not likely to induce employers to begin to provide prescription drug coverage or to supplement the Medicare drug benefit if they had not previously offered retiree health coverage. In commenting on a draft of this report, the Centers for Medicare & Medicaid Services and four experts generally agreed with the report's findings. The Office of Personnel Management indicated that it has not made final decisions about which MMA prescription drug option it would choose for the Federal Employees Health Benefits Program, but it does not expect to choose the subsidy option. |
Our body of work on interagency collaboration has identified several key areas that are essential for collaboration among U.S. federal agencies in addressing security challenges. Three are particularly important for SOUTHCOM and AFRICOM: (1) developing and implementing overarching strategies, (2) creating collaborative organizations, and (3) building a well-trained workforce. Underlying the success of these key areas is committed and effective leadership. Developing and implementing overarching strategies: Our prior work, as well as that by national security experts, has found that strategic direction is required as a foundation for collaboration on national security goals. The means to operate across multiple agencies and organizations—such as compatible policies and procedures that facilitate collaboration across agencies and mechanisms to share information frequently—enhances and sustains collaboration among federal agencies. Strategies can help agencies develop mutually reinforcing plans and determine activities, resources, processes, and performance measures for implementing those strategies. Moreover, a strategy defining organizational roles and responsibilities can help agencies clarify who will lead or participate in activities, help organize their joint and individual efforts, facilitate decision making, and address how conflicts would be resolved. Creating collaborative organizations: Given the differences among U.S. government agencies—such as differences in structure, planning processes, and funding sources—developing adequate coordination mechanisms is critical to achieving integrated approaches. U.S. government agencies, such as DOD, State, and USAID, among others, spend billions of dollars annually on various defense, diplomatic, and development missions in support of national security. Without coordination mechanisms, the results can be a patchwork of activities that waste scarce funds and limit the overall effectiveness of federal efforts. 
Developing a well-trained workforce: Collaborative approaches to national security require a well-trained workforce with the skills and experience to integrate the government’s diverse capabilities and resources. A lack of understanding of other agencies’ cultures, processes, and core capabilities can hamper U.S. national security partners’ ability to work together effectively. However, training can help personnel develop the skills and understanding of other agencies’ capabilities needed to facilitate interagency collaboration. Effective leadership is essential to achieving success in each of these areas. The 2010 Quadrennial Defense Review states that by integrating U.S. defense capabilities with other elements of national security— including diplomacy, development, law enforcement, trade, and intelligence—the nation can ensure that the right mix of expertise is at hand to take advantage of emerging opportunities and to thwart potential threats. In addition, the 2010 National Security Strategy calls for a renewed emphasis on building a stronger leadership foundation for the long term to more effectively advance U.S. interests. Our work on SOUTHCOM and AFRICOM found that both commands have demonstrated some practices that will help enhance and sustain interagency collaboration, but areas for improvement remain. Moreover, our preliminary work on counterpiracy efforts in the Horn of Africa region suggests that U.S. agencies have made progress in leading and supporting international efforts to counter piracy, but implementation challenges exist. SOUTHCOM and AFRICOM have sought input from several federal agencies in developing overarching strategies and plans, but AFRICOM has not yet completed many specific plans to guide activities and ensure a U.S. government unity of effort in Africa. In addition, our preliminary work shows that a U.S. 
action plan has been developed which provides a framework for interagency collaboration, but the roles and responsibilities of the multiple agencies involved in countering piracy in the Horn of Africa region are not clearly assigned. In its Guidance for Employment of the Force, DOD required both SOUTHCOM and AFRICOM, as prototype test cases, to seek broader involvement from other departments in drafting their theater campaign and contingency plans. To meet this requirement, SOUTHCOM held a series of meetings with interagency officials that focused on involving and gathering input from interagency partners. In developing its 2009 theater campaign plan, which lays out command priorities and guides its resource allocations, SOUTHCOM coordinated with over 10 U.S. government departments and offices, including the Departments of State, Homeland Security, Justice, the Treasury, Commerce, and Transportation and the Office of the Director of National Intelligence (see fig. 1). According to both SOUTHCOM and interagency partners, this coordination helped SOUTHCOM understand the diverse missions of its interagency partners and better align activities and resources in the Americas and the Caribbean. As a result of this effort, SOUTHCOM's 2009 theater campaign plan includes 30 theater objectives, of which 22 are led by interagency partners with SOUTHCOM serving in a supporting role. SOUTHCOM also provides input into State's regional strategic plans. Both SOUTHCOM and interagency partners told us that this coordination has helped ensure that SOUTHCOM and interagency partner strategic goals were mutually reinforcing and has helped align activities and resources in achieving broad U.S. objectives. Similarly, AFRICOM met with representatives from many agencies to gain interagency input into its theater campaign plan. We spoke with officials from State, USAID, and the U.S. 
Coast Guard who stated that they provided input into several additional strategy documents, including DOD’s Guidance for Employment of the Force and AFRICOM’s posture statement, and participated in activity planning meetings. Federal agency officials also noted progress in AFRICOM’s interagency coordination since its establishment. State officials said that AFRICOM had made improvements in taking their feedback and creating an environment that is conducive to cooperation across agencies. Similarly, USAID officials said that AFRICOM had improved its coordination with their agency at the USAID headquarters level. Notwithstanding this collaboration, AFRICOM officials told us that aligning strategies among partners can be difficult because of different planning horizons among agencies. For example, AFRICOM’s theater campaign plan covers fiscal years 2010 through 2014, whereas the State/USAID strategic plan spans fiscal years 2007 through 2012. While AFRICOM has collaborated with partners on overarching strategies, it has not yet completed some plans, which hinders planning and implementation efforts with partners. AFRICOM currently lacks regional engagement and country work plans for Africa, which are called for in its theater campaign plan and would provide specific information on conducting activities. One key requirement for the country work plans, for example, is to align them with embassy strategic plans to ensure unity of effort. Figure 2 shows AFRICOM’s plans in the context of national strategies, guidance, and other federal agencies’ planning efforts. AFRICOM’s Army component stated that perhaps the greatest challenge to creating positive conditions in Africa is ensuring that U.S. defense efforts remain synchronized; if plans are not coordinated, their efforts could have unintended consequences, such as the potential for Africans to perceive the U.S. military as trying to influence public opinion in a region sensitive to the military’s presence. 
At the time we completed our audit work, AFRICOM’s regional plans had not been approved by the command, and the country plans were still in the process of being developed. Therefore, we recommended that the Secretary of Defense direct AFRICOM to expedite the completion of its plans and to develop a process whereby plans are reviewed on a recurring basis to ensure that efforts across the command are complementary, comprehensive, and supportive of AFRICOM’s mission. DOD agreed with our recommendation, stating that some of the plans are in the final stages of review and approval by AFRICOM’s leadership. Our preliminary work on U.S. counterpiracy efforts off the Horn of Africa shows that the United States has an action plan that serves as an overarching strategy and provides a framework for interagency collaboration, but roles and responsibilities have not been clearly assigned. The action plan establishes three main lines of action for interagency stakeholders, in collaboration with industry and international partners, to take in countering piracy. These actions are (1) prevent pirate attacks by reducing the vulnerability of the maritime domain to piracy; (2) interrupt and terminate acts of piracy, consistent with international law and the rights and responsibilities of coastal and flag states; and (3) ensure that those who commit acts of piracy are held accountable for their actions by facilitating the prosecution of suspected pirates by flag, victim, and coastal states and, in appropriate cases, the United States. While piracy in the Horn of Africa region emanates primarily from Somalia, a country located within AFRICOM’s area of responsibility, most attacks are carried out in waters within U.S. Central Command’s jurisdiction. Outside DOD, many other stakeholders are involved in counterpiracy efforts. 
Specifically, the action plan states that, subject to the availability of resources, the Departments of State, Defense, Homeland Security, Justice, Transportation, and the Treasury and the Office of the Director of National Intelligence shall also contribute to, coordinate, and undertake initiatives. Our preliminary work indicates that the National Security Council, which authored the plan, has not assigned the majority of tasks outlined in the plan to specific agencies. As of July 2010, only one task, providing an interdiction-capable presence, had been assigned to the Navy and Coast Guard. Roles and responsibilities for other tasks—such as strategic communications, disrupting pirate revenue, and facilitating prosecution of suspected pirates—have not been clearly assigned. Without specific roles and responsibilities for essential tasks outlined in the action plan, the U.S. government cannot ensure that agencies’ approaches are comprehensive, complementary, and effectively coordinated. SOUTHCOM and AFRICOM have developed organizational structures to facilitate interagency collaboration, but challenges include fully leveraging interagency personnel and maintaining the ability to organize quickly for large-scale military operations when necessary. Both commands have established key leadership positions for interagency officials within their organizational structures. In addition to a deputy military commander who oversees military operations, each command has a civilian deputy to the commander from State who oversees civil-military activities. At SOUTHCOM, the civilian deputy to the commander—a senior foreign service officer with the rank of Minister Counselor at State— advises SOUTHCOM’s commander on foreign policy issues and serves as the primary liaison with State and with U.S. embassies located in SOUTHCOM’s area of responsibility. 
At AFRICOM, the civilian deputy to the commander directs AFRICOM’s activities related to areas such as health, humanitarian assistance, disaster response, and peace support operations. Both commands have also embedded interagency officials throughout their organizations. As of June 2010, AFRICOM reported that it had embedded 27 interagency partners into its headquarters staff from several federal agencies (see table 1), and according to officials at AFRICOM and State, it plans to integrate five foreign policy advisors from State later this year. Moreover, DOD has signed memorandums of understanding with nine federal agencies to outline conditions for sending interagency partners to AFRICOM. As of July 2010, SOUTHCOM reported that it had 20 embedded interagency officials (see table 1), with several placed directly into key senior leadership positions. SOUTHCOM has also created a partnering directorate, which among its responsibilities, has the role of embedding interagency personnel into the command. Decisions to embed interagency officials at SOUTHCOM are made on a case-by-case basis, with most agencies sending a representative to SOUTHCOM on a short- term basis to discuss needs, roles, and responsibilities and to assess whether a full-time embedded official would be mutually beneficial. Both AFRICOM and SOUTHCOM have indicated that they currently do not have a specific requirement for the number of embedded interagency personnel at their commands but would benefit from additional personnel. However, limited resources at other federal agencies have prevented interagency personnel from participating in the numbers desired. In February 2009, we reported that AFRICOM initially expected to fill 52 positions with personnel from other government agencies. However, State officials told us that they would not likely be able to provide employees to fill the positions requested by AFRICOM because they were already facing a 25 percent shortfall in midlevel personnel. 
Similarly, SOUTHCOM has identified the need for around 40 interagency personnel, but had only filled 20 of those positions as of July 2010. According to SOUTHCOM officials, it has taken about 3 years to fill its interagency positions because of lack of funding at the command or the inability of partners to provide personnel. Because many agencies have limited personnel and resources, SOUTHCOM and its interagency partners have, on occasion, developed other means to gain stakeholder input and perspectives. For example, in lieu of embedding a Department of the Treasury (Treasury) official at the command, SOUTHCOM and Treasury decided that providing a local Treasury representative with access to the command and establishing a memorandum of understanding would serve to improve communication and coordination among the organizations. While embedding interagency personnel into a DOD command can be an effective means of coordination, interagency personnel serving at AFRICOM may not be fully leveraged for their expertise within the organization. AFRICOM officials told us that it is a challenge to determine where in the command to include interagency personnel. For example, an embedded interagency staff member stated that AFRICOM initially placed him in a directorate unrelated to his skill set, and he initiated a transfer to another directorate that would better enable him to share his expertise. Moreover, several embedded interagency officials said that there is little incentive to take a position at AFRICOM because it will not enhance one's career position upon return to the original agency after the rotation. Difficulties with leveraging interagency personnel are not unique to AFRICOM. We have previously reported that personnel systems often do not recognize or reward interagency collaboration, which could diminish interest in serving in interagency efforts. 
AFRICOM officials said that it would be helpful to have additional interagency personnel at the command, but they understand that staffing limitations, resource imbalances, and lack of career progression incentives for embedded staff from other federal agencies may limit the number of personnel who can be brought in from these agencies. Despite challenges, AFRICOM has made some efforts that could improve interagency collaboration within the command, such as expanding its interagency orientation process. Last fall, the command conducted an assessment of the embedded interagency process to analyze successes and identify lessons learned, including recommendations on how to integrate interagency personnel into command planning and operations. In July 2010, AFRICOM stated that it had established an interagency collaborative forum to assess, prioritize, and implement the recommendations from the assessment. SOUTHCOM’s recent experience in responding to the Haiti earthquake serves as a reminder that while interagency collaboration is important in addressing security challenges, DOD’s commands must also be prepared to respond to a wide range of contingencies, including large-scale disaster relief operations. While our work found that SOUTHCOM has taken significant steps in building partnerships to enhance and sustain collaboration, the command faces challenges preparing for the divergent needs of its potential missions. SOUTHCOM must have an organizational structure that is prepared for military contingencies and that is also effective in supporting interagency partners in meeting challenges such as corruption, crime, and poverty. In 2008, SOUTHCOM developed an organizational structure to improve collaboration with interagency stakeholders, which included a civilian deputy to the commander, interagency partners embedded into key leadership positions, and a directorate focused on sustaining partnerships. 
While SOUTHCOM’s organizational structure was designed to facilitate interagency collaboration, the 2010 Haiti earthquake response revealed weaknesses in this structure that initially hindered its efforts to conduct a large-scale military operation. For example, the command’s structure lacked a division to address planning for military operations occurring over 30 days to 1 year in duration. In addition, SOUTHCOM had suboptimized some core functions that were necessary to respond to large-scale contingencies. For example, SOUTHCOM’s logistics function was suboptimized because it was placed under another directorate in the organizational structure rather than being its own core function. As a result, the command had difficulty planning for the required logistics support—including supply, maintenance, deployment distribution, health support, and engineering—during the large-scale Haiti relief effort, which SOUTHCOM reported peaked at more than 20,000 deployed military personnel, about 2 weeks after the earthquake occurred (see fig. 4). According to command officials, SOUTHCOM was able to integrate additional interagency and international partners into its headquarters as Haiti relief operations grew in scale; however, the command had not identified the military personnel augmentation required for a large contingency nor had it developed a plan to integrate military personnel into its headquarters structure. Ultimately, SOUTHCOM received 500 military augmentees to provide additional capabilities to its existing command staff of about 800, including an entire staff office from U.S. Northern Command, filling vital gaps in SOUTHCOM’s ability to support operations in Haiti. However, augmented military personnel were not familiar with SOUTHCOM’s organizational structure and did not initially understand where they could best contribute because many of the traditional joint staff functions were divided among SOUTHCOM’s directorates. 
To address these challenges, SOUTHCOM’s commander returned the command to a traditional joint staff structure while retaining elements from its 2008 reorganization and plans to retain this structure for the foreseeable future. Our report made recommendations aimed at improving SOUTHCOM’s ability to conduct the full range of military missions that may be required in the region, while balancing its efforts to support interagency partners in enhancing regional security and cooperation. DOD acknowledged the challenges it had faced and agreed with our recommendations. In its response, the department noted that SOUTHCOM’s ability to respond to the Haiti crisis quickly was in part a by-product of close, collaborative relationships developed with a range of U.S. government interagency partners over many years. AFRICOM, as a relatively new command engaged in capacity-building efforts, has emphasized the need to work closely with U.S. embassies to ensure that activities are consistent with U.S. foreign policy and to contribute to a unity of effort among interagency partners (see fig. 5). In addition, the command has designated cultural awareness as a core competency for its staff. However, we found that some AFRICOM staff have limited knowledge about working with U.S. embassies and about cultural issues in Africa, and the training or guidance available to augment personnel expertise in these areas is limited. While AFRICOM has efforts under way to strengthen staff expertise in these areas, the limited knowledge among some staff puts AFRICOM at risk of being unable to fully leverage resources with U.S. embassy personnel, build relationships with African nations, and effectively carry out activities. AFRICOM emphasizes the importance of collaborating with its interagency partners, but some personnel’s limited knowledge of working with U.S. embassies can impose burdens on embassies’ staff who may be taken away from their assigned duties to help AFRICOM. For example, a U.S. 
embassy official in Uganda stated that AFRICOM personnel arrived in country with the expectation that the embassy would take care of basic cultural and logistical issues for them. Also, AFRICOM’s Horn of Africa task force personnel have, at times, approached the Djiboutian government ministries directly with concepts for activities rather than following the established procedure of having the U.S. embassy in Djibouti initiate the contact. Additionally, while cultural awareness is a core competency for AFRICOM, the limited knowledge of some personnel in the command and its military service components regarding African cultural issues has occasionally led to difficulties in building relationships with African nations—such as when AFRICOM’s task force distributed used clothing to local Djibouti villagers during Ramadan, which offended the Muslim population, or proposed drilling a well without considering how its placement could affect local clan relationships. While AFRICOM personnel and forces deploying for activities receive some training on working with interagency partners and on African cultural awareness—and efforts are under way to increase training for some personnel—our review of training presentations indicated that they were insufficient to adequately build the skills of its staff. AFRICOM officials told us that training includes Web courses and seminars, and that there are other training requirements for personnel deploying to Africa, such as medical and cultural awareness training. Officials said, however, that while training is encouraged, it is not required, and that the command does not currently monitor the completion of training courses. Furthermore, officials from several AFRICOM components voiced a preference for more cultural training and capabilities. In our prior work on AFRICOM’s Horn of Africa task force, we similarly reported that the task force’s training on working with U.S. embassies was not shared with all staff, and cultural awareness training was limited. We recommended, and DOD agreed, that the Secretary of Defense direct AFRICOM to develop comprehensive training guidance or a program that augments assigned personnel’s understanding of African cultural awareness and working with interagency partners. In addition, in our report on AFRICOM released today, we recommended that the Secretary of Defense direct AFRICOM, in consultation with State and USAID, to develop a comprehensive training program for staff and forces involved in AFRICOM activities that focuses on working with interagency partners and on cultural issues related to Africa. DOD agreed with the recommendation, describing some efforts that AFRICOM was taking and stating that the command will continue to develop and conduct training to improve its ability to work with embassies and other agencies. While our work on SOUTHCOM did not focus on workforce training, command personnel have expressed the need for more opportunities to improve their understanding of working in an interagency environment. Mr. Chairman, this concludes my prepared remarks. I would be pleased to respond to any questions you or other Members of the Subcommittee may have at this time. For further information regarding this statement, please contact John H. Pendleton at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement are listed in appendix I. In addition to the contact named above, Directors Stephen Caldwell and Jess Ford; Assistant Directors Patricia Lentini, Marie Mak, and Suzanne Wren; and Alissa Czyz, Richard Geiger, Dawn Hoff, Brandon Hunt, Farhanaz Kermalli, Arthur Lord, Tobin McMurdie, Jennifer Neer, Jodie Sandel, Leslie Sarapu, and Erin Smith made key contributions to this statement. 
Defense Management: Improved Planning, Training, and Interagency Collaboration Could Strengthen DOD’s Efforts in Africa. GAO-10-794. Washington, D.C.: July 28, 2010. Defense Management: U.S. Southern Command Demonstrates Interagency Collaboration, but Its Haiti Disaster Response Revealed Challenges Conducting a Large Military Operation. GAO-10-801. Washington, D.C.: July 28, 2010. National Security: Key Challenges and Solutions to Strengthen Interagency Collaboration. GAO-10-822T. Washington, D.C.: June 9, 2010. Defense Management: DOD Needs to Determine the Future of Its Horn of Africa Task Force. GAO-10-504. Washington, D.C.: April 15, 2010. Homeland Defense: DOD Needs to Take Actions to Enhance Interagency Coordination for Its Homeland Defense and Civil Support Missions. GAO-10-364. Washington, D.C.: March 30, 2010. Interagency Collaboration: Key Issues for Congressional Oversight of National Security Strategies, Organizations, Workforce, and Information Sharing. GAO-09-904SP. Washington, D.C.: September 25, 2009. Military Training: DOD Needs a Strategic Plan and Better Inventory and Requirements Data to Guide Development of Language Skills and Regional Proficiency. GAO-09-568. Washington, D.C.: June 19, 2009. Influenza Pandemic: Continued Focus on the Nation’s Planning and Preparedness Efforts Remains Essential. GAO-09-760T. Washington, D.C.: June 3, 2009. U.S. Public Diplomacy: Key Issues for Congressional Oversight. GAO-09-679SP. Washington, D.C.: May 27, 2009. Military Operations: Actions Needed to Improve Oversight and Interagency Coordination for the Commander’s Emergency Response Program in Afghanistan. GAO-09-615. Washington, D.C.: May 18, 2009. Foreign Aid Reform: Comprehensive Strategy, Interagency Coordination, and Operational Improvements Would Bolster Current Efforts. GAO-09-192. Washington, D.C.: April 17, 2009. Iraq and Afghanistan: Security, Economic, and Governance Challenges to Rebuilding Efforts Should Be Addressed in U.S. Strategies. 
GAO-09-476T. Washington, D.C.: March 25, 2009. Drug Control: Better Coordination with the Department of Homeland Security and an Updated Accountability Framework Can Further Enhance DEA’s Efforts to Meet Post-9/11 Responsibilities. GAO-09-63. Washington, D.C.: March 20, 2009. Defense Management: Actions Needed to Address Stakeholder Concerns, Improve Interagency Collaboration, and Determine Full Costs Associated with the U.S. Africa Command. GAO-09-181. Washington, D.C.: February 20, 2009. Combating Terrorism: Actions Needed to Enhance Implementation of Trans-Sahara Counterterrorism Partnership. GAO-08-860. Washington, D.C.: July 31, 2008. Information Sharing: Definition of the Results to Be Achieved in Terrorism-Related Information Sharing Is Needed to Guide Implementation and Assess Progress. GAO-08-637T. Washington, D.C.: July 23, 2008. Force Structure: Preliminary Observations on the Progress and Challenges Associated with Establishing the U.S. Africa Command. GAO-08-947T. Washington, D.C.: July 15, 2008. Highlights of a GAO Forum: Enhancing U.S. Partnerships in Countering Transnational Terrorism. GAO-08-887SP. Washington, D.C.: July 2008. Stabilization and Reconstruction: Actions Are Needed to Develop a Planning and Coordination Framework and Establish the Civilian Reserve Corps. GAO-08-39. Washington, D.C.: November 6, 2007. Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007. Military Operations: Actions Needed to Improve DOD’s Stability Operations Approach and Enhance Interagency Planning. GAO-07-549. Washington, D.C.: May 31, 2007. Combating Terrorism: Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists. GAO-07-697. Washington, D.C.: May 25, 2007. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. 
Washington, D.C.: October 21, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Ports comprise many different stakeholders, both public and private. Port authorities also may have jurisdiction over some or all of the geographical area of a port. The port authority can be an agency of the state, county, or city in which the port is located. In most ports in North America, the actual task of loading and unloading goods is carried out by private operators who lease space or equipment from the port authority. (In some ports, the port authority also manages some of these stevedoring activities.) The percentage of the port area over which the port authority has jurisdiction, and the level of involvement of the port authority in the port’s operations, is different from port to port. This variability in port authority jurisdiction and operational involvement has direct consequences for portwide disaster preparedness. Even though a port authority may have a thorough disaster plan in place, that plan may not be binding on any of the private operators in the port. The stakeholders involved at any given port can vary but, in general, they include port authorities, private-sector operators doing business within the port, government agencies, and information-sharing forums. Table 1 summarizes these basic participants and their roles. These various stakeholders interact in a variety of ways. The port authority provides a limited governance structure for the port. Many port authorities lease piers, or “terminals,” and equipment to stevedoring companies and shipping lines that are responsible for the actual loading and transport of cargo. Some port authorities also operate cargo terminals alongside the private operators. Figure 3 depicts the main elements of a typical port. Individual ports may not include all of these elements, or may include some not depicted here. Several federal agencies provide support to ports in natural disaster planning, response, and recovery (see table 2). 
These agencies have different missions that relate to port operations, including natural disaster planning and response. For example, the Coast Guard is the agency responsible for most federal oversight related to portwide safety and security. It plays the primary role in coordinating homeland security efforts. FEMA plays a role in homeland security planning and also administers several assistance programs for disaster preparation and recovery. The Maritime Administration plays a general role in coordinating efforts to strengthen the maritime system and also has the ability to provide maritime assets that could be used to support homeland security interests. These vessels are part of the country’s National Defense Ready Reserve Fleet, including ships and barges, which could be used for housing, power generation, or the movement of water and other supplies. The terrorist attacks of September 11, 2001, prompted additional federal efforts to address a broad spectrum of emergencies. The Homeland Security Act of 2002 required DHS to develop a comprehensive National Incident Management System (NIMS). NIMS is intended to provide a consistent framework for incident management at all jurisdictional levels regardless of the cause, size, or complexity of the situation and to define the roles and responsibilities of federal, state, and local governments, and various first responder disciplines at each level during an emergency event. To manage all major incidents, NIMS has a standard incident management system, called the Incident Command System, with five functional areas—command, operations, planning, logistics, and finance and administration. NIMS also prescribes interoperable communications systems and preparedness before an incident happens, including planning, training, and exercises. 
In December 2004, DHS issued the National Response Plan (NRP), intended to be an all-discipline, all-hazards plan establishing a single, comprehensive framework for the management of domestic incidents where federal involvement is necessary. The NRP includes planning assumptions, roles and responsibilities, concept of operations, and incident management actions. The NRP also includes a Catastrophic Incident Annex, which provides an accelerated, proactive national response to a “catastrophic incident,” defined as any natural or man-made incident, including terrorism, resulting in extraordinary levels of mass casualties, damage, or disruption severely affecting the population, infrastructure, environment, economy, national morale, or government functions. Developing the capabilities needed to deal with large-scale disasters is part of an overall national preparedness effort that should integrate and define what needs to be done, where, based on what standards, how it should be done, and how well it should be done. Along with the NRP and NIMS, DHS has developed the National Preparedness Goal. Considered as a group, these three documents are intended to guide investments in emergency preparedness and response capabilities. The NRP describes what needs to be done in response to an emergency incident, either natural or man-made, the NIMS describes how to manage what needs to be done, and the National Preparedness Goal describes how well it should be done. The National Preparedness Goal is particularly useful for determining what capabilities are needed, especially for a catastrophic disaster. The interim goal addresses both natural disasters and terrorist attacks. It defines both the 37 major capabilities that first responders should possess to prevent, protect from, respond to, and recover from disaster incidents and the most critical tasks associated with these capabilities. 
An inability to effectively perform these critical tasks would, by definition, have a detrimental impact on effective protection, prevention, response, and recovery capabilities. The Maritime Infrastructure Recovery Plan (MIRP), released by DHS in April 2006, applies these disaster preparedness documents to the maritime sector. The MIRP is intended to facilitate the restoration of maritime commerce after a terrorist attack or natural disaster and reflects the disaster management framework outlined in the National Response Plan. The MIRP addresses issues that should be considered by ports when planning for natural disasters. However, it does not set forth particular actions that should be taken at the port level, leaving those determinations to be made by the port operators themselves. The 9/11 Commission pointed out that no amount of money or effort can fully protect against every type of threat. As a result, what is needed is an approach that considers the relative risks these various threats pose and determines how best to use limited resources to prevent threats, where possible, and to respond effectively if they occur. While the Homeland Security Act of 2002 and Homeland Security Presidential Directive 7 call for the use of risk management in homeland security, little specific federal guidance or direction exists as to how risk management should be implemented. In previous work examining risk management efforts for homeland security and other functions, we developed a framework summarizing the findings of industry experts and best practices. This framework, shown in figure 4, divides risk management into five major phases: (1) setting strategic goals and objectives, and determining constraints; (2) assessing the risks; (3) evaluating alternatives for addressing these risks; (4) selecting the appropriate alternatives; and (5) implementing the alternatives and monitoring the progress made and results achieved. 
Recent natural disasters—particularly Hurricanes Katrina, Wilma, and Rita in 2005—challenged affected ports on several fronts, according to port authority officials. Since 1998, hurricanes have damaged buildings, cranes, and other equipment owned by seven of the port authorities we interviewed. Ports also reported damage to utility systems and experienced delays in water, sewer, and power restoration. Port authorities cited clearing waterways and debris removal as another difficulty. In the case of Hurricane Katrina, some ports had lengthy recoveries: Gulfport has not yet returned to predisaster operational levels, and New Orleans took about 6 months to do so. Separate from the physical impact of the disasters, ports faced personnel, communications, and coordination challenges that, according to port authority officials, proved more difficult than anticipated. In some cases, personnel had evacuated the area, and port officials were unsure when staff would be able to return to work. Given that many phone lines were down, there were delays in restoring phone service and, in most cases, ports did not have communications alternatives in place. Some port authorities also reported difficulties in working with local, state, and federal entities during the recovery process, including coordinating the re-entry of port personnel and filing for FEMA disaster recovery assistance. Even though most ports anticipated and had plans in place to mitigate infrastructure damage from natural disasters, over half of the port authorities we contacted reported that the disasters created infrastructure challenges. Twelve of the 17 ports we reviewed had experienced a hurricane or earthquake since 1998, and among those 12 port authorities, 7 reported challenges in restoring infrastructure (see fig. 2). 
While we were unable to review a complete list of disaster assistance estimates, some port authorities were able to provide specific dollar amounts for repairing damage to buildings, cranes, or other equipment. For instance, the Port of Miami reported spending more than $6 million on repairs as a result of Hurricanes Katrina, Wilma, and Rita, including damage to facilities, signage, the sea wall, and the storm drainage system. Likewise, the Port of Houston reported spending $200,000 for facility repairs following Hurricane Rita. Ports were still faced with these repair costs even though a majority of the port plans we reviewed included infrastructure damage mitigation. As a way to work around the damaged structures, ports also utilized temporary trailers for administrative and operational functions. For example, this occurred at the Port of Port Arthur, where the strategy of reserving backup equipment with appropriate vendors was included in that port’s Hurricane Readiness Plan. Besides the repair costs involved, another indication of the significance of damage to infrastructure was the effect on port operations. In several cases, tenants left the port and moved elsewhere. For example, Port of New Orleans officials said that because they are unsure if departed tenants at the port will return, they have been reluctant to replace three severely damaged container cranes. Operations have been even more curtailed at the Port of Gulfport, also because of Hurricane Katrina. Port authority officials report that they have been able to repair only 3 of their 12 warehouses, which limited their ability to accommodate storage for some of their major operators. These operators have since moved their operations to other nearby ports, such as Pascagoula, Mississippi, or Mobile, Alabama. Besides damage to buildings, cranes, and other equipment involved specifically in moving cargoes, port authorities also reported damages to their utility systems, including water, sewer, and power. 
For example, following Hurricane Katrina, the Port of Port Arthur was without power for approximately 2 weeks. Because of a lack of on-site generators, port officials limited port operations to daylight hours only. The power outage also limited operation of certain hangar doors that required electrical power to be opened. Ports with damage to water and sewer included Gulfport, where 2 months were needed to restore its sewer and water capacity. Similarly, the Port of Pascagoula had three damaged water wells as a result of Hurricane Katrina. Port officials told us one of those wells was still not operational almost a year later. While some ports included backup water and power resources in their contingency utility plans, officials at one port said their backup resources may not be adequate to address long-term or extensive outages. In fact, 10 of the 17 ports we reviewed did not have plans for utility system restoration. The lack of anticipation of these vulnerabilities was particularly apparent for ports affected by Hurricanes Katrina, Wilma, and Rita; only 4 of the 10 ports impacted by those storms had planned for utility challenges. For example, Port of New Orleans officials said their supply of 5 to 10 days of water and 3 to 5 days of power through generators was not enough to sustain them through the outages caused by Hurricane Katrina. While many ports indicated that several federal agencies were eventually able to effectively aid in clearing the waterways and restoring aids to navigation, ports’ experiences varied. Their experiences also demonstrated that rapid clearing of waterways is key to reestablishing port operations and emphasizes the need for ports to coordinate and arrange for debris removal and restoring aids to navigation ahead of time. 
Following are some examples: Following Hurricane Katrina, the Port of Gulfport had to remove large amounts of debris, such as tree limbs that were hanging and leaning over roads, as well as containers, cargo, and other equipment that winds had scattered into the roadways. Port officials said that clearing these obstructions was essential to re-establishing port operations. Immediately after the hurricane, the local Navy construction battalion (called Seabees) volunteered to assist the port by clearing roads with their large bulldozers, which enabled supplies and cargo to move in and out of the port. The Seabees also cleared boat ramps so that Coast Guard search and rescue vessels could safely enter the waterway. Port officials estimated that, over a period of 3 weeks, the Seabees cleared about 30 percent of the debris in the port area. After the Seabees were called to other duties, Port of Gulfport officials hired a contractor to remove the remaining debris at a cost of about $5 million. Port of Gulfport officials said that they applied for FEMA reimbursement of these costs. Further, they explained that the use of and planning for existing federal resources for debris removal, such as the Navy Seabees, could have saved even more time and possibly federal dollars that would later be paid to the port in the FEMA reimbursement process. Inside the port area, the Port of Mobile experienced challenges with debris that federal agencies such as the Corps or the Coast Guard were not responsible for removing. These challenges may have caused additional delays in restoring port operations. For instance, port officials explained that storm surge waters from Katrina loosened several oil rigs in the Gulf, one of which made its way into the port’s pier area and damaged several piers. They said the port is currently in litigation to resolve who will pay for the damages. 
Port of Mobile officials also estimated that dredging expenses, including the removal of branches, sand, and silt from pier areas, will be more than $7.5 million. Because the rig obstruction and other pier damages were not in the federal waterway or jurisdiction, Port of Mobile officials said they were able to receive only limited assistance from federal agencies in resolving their internal damage issues. Officials of eight port authorities we contacted reported challenges related to personnel, communications, or coordination with port stakeholders as a result of hurricanes since 1998 and, in conversations with us, they indicated that these challenges were more difficult than anticipated. Port plans we reviewed addressed some of these types of vulnerabilities to natural disasters. However, ports still identified such vulnerabilities as a significant obstacle to their ability to return to predisaster operational levels. Several ports cited examples of how their personnel had evacuated and, for numerous reasons, were unable to return to work. For example, several Port of Gulfport employees lost their homes during Hurricane Katrina and had no local living arrangements for themselves or their families. Likewise, the Port of New Orleans said its operations were stifled by the lack of personnel and labor during both Hurricane Katrina and Hurricane Rita. At the Port of Port Arthur, lack of power for area homes kept employees from returning immediately, causing temporary delays in port operations. Port authorities also did not anticipate the extent to which their communications systems would be impacted. High winds and flooding from the hurricanes rendered phone lines out of service. With phone lines down, port authorities were unable to get in touch with their staff or other port stakeholders to share information. For instance, we learned that approximately 50 percent of phones at the Port of Mobile were out of service for about 2 to 4 weeks.
Other ports, including New Orleans, Pascagoula, and Port Arthur, also experienced phone outages and reported limitations in cell phone reception. Ports also identified coordination challenges with local, state, and federal stakeholders while planning for and recovering from natural disasters. At the local level, one coordination problem port officials experienced was re-entering the port after the storm. For example, in Gulfport, port officials were denied entry to port property for the first 2 weeks following Hurricane Katrina. Similarly, in Houston, law enforcement agencies blocked roads back into the city after the Hurricane Rita evacuation. In some cases, port officials did not have the proper credentials required by local police and other emergency management officials to be allowed roadway access through the city to their port. In other instances, we found that ports experienced varied levels of coordination with local emergency management agencies, especially regarding planning efforts. For example, Mobile County Emergency Management officials affirmed that they have a close working relationship with the Port of Mobile, where they have helped the port conduct risk assessments and emergency planning activities, and where they coordinate with port officials on other plans involving safety, security, and the environment. Conversely, Port of Gulfport and Harrison County Emergency Management officials in Mississippi said they had limited contact and coordination regarding emergency recovery. One county emergency management official said that although the agency has made efforts to share planning documents with the port, the agency is required to work through the Mississippi Emergency Management Agency and follow any guidance in the state emergency plan to request resources from or provide assistance to the port.
At the federal level, one coordination issue reported by port stakeholders involved difficulties in coordinating with FEMA for recovery resources. Some local emergency management officials and port officials whom we interviewed expressed concerns about the level of interaction with FEMA officials before an incident occurs. For example, Port of Jacksonville officials said they would like to see FEMA take a more active role in the disaster planning process, such as participating on the AMSC at the local level or coordinating with the Florida State Department of Community Affairs at the state level. Similarly, Port of Los Angeles officials said effective communication with FEMA is essential and that they would like to communicate more clearly with FEMA about reimbursement policies before a disaster takes place. In fact, in November 2006, port officials from Los Angeles and Oakland held a joint meeting with FEMA and the California Office of Emergency Services to discuss the current federal and state regulations and practices regarding disaster relief funds and reimbursement policy. Port stakeholders also expressed concerns about coordinating with FEMA after an incident occurred, including inconsistencies in information and difficulty in appropriately completing FEMA forms and other documents required for reimbursement. At the county emergency management level, one agency official cited inconsistent interpretation of FEMA policies and changing personnel as challenges in working with FEMA. This official suggested that interacting with FEMA officials more frequently before a disaster would help the port authority better understand which personnel to contact in an emergency situation. The official said this coordination problem became obvious during the Hurricane Katrina recovery effort when, after the port had made several requests, FEMA did not send a representative to the area.
Port officials in Gulfport also found it difficult to reconcile their damages using FEMA's cost estimate process. To resolve the paperwork confusion, the Port of Gulfport hired an outside company to deal with FEMA directly and to handle all reimbursement-related issues on its behalf. While Port of Gulfport officials recognized that FEMA's attention to detail was an effort to prevent fraud and abuse, they also said FEMA staff could have done a better job of providing guidance about the reimbursement process. Besides having coordination challenges with FEMA, we learned that several ports were unclear about what resources were available from the Maritime Administration for recovery. Immediately following Hurricane Katrina, the Gulf area was in need of critical resources such as power, water, and personnel. However, due to infrastructure damage around the area, it was difficult to get these resources into ports. As such, the Maritime Administration provided, with the concurrence of the Department of Defense, ready reserve vessels for FEMA's use. These ready reserve vessels are strategic sealift assets usually used for defense purposes that could be used for command and control, housing, power generation, or the movement of water and other supplies. We found that ports' knowledge about these assets and how to request them was limited. For example, port authority officials at one port turned down the Maritime Administration's offer of a housing vessel. The port determined that the deep draft and large size of the vessel might impede commercial traffic and block other vessels from entering the port. Port officials reached this determination without knowing that the Maritime Administration could have provided smaller vessels for the same purpose. The vessel offered by the Maritime Administration, however, was instead deployed to the Port of New Orleans area to house first responders.
Many port authorities have taken steps to address the challenges resulting from recent natural disasters. Individually, they have taken such steps as upgrading communications equipment, adding backup communications approaches and power equipment, and creating alternative sites for administrative operations and storage of computer data. Collectively, they have shared best practices for disaster planning and response, most notably through an industry-wide publication with detailed planning steps and guidelines. Port authorities that were not directly impacted by recent disaster events have also taken steps to revise their planning efforts, including greater coordination with other port stakeholders. Many port authorities have adapted or improved existing stakeholder forums to assist in facilitating port planning for natural disasters. At the federal level, agencies such as the Maritime Administration have taken steps to assist ports in identifying federal resources available for disaster response and recovery. As a result of the lessons learned from recent natural disasters, port authorities report taking many steps to mitigate vulnerabilities. One mitigation tactic reported by many port authorities is to add equipment and develop redundant systems to help during any recovery efforts. The most frequent redundancy added was in creating communications alternatives. Various port authorities reported purchasing communications equipment that does not necessarily rely on traditional land lines for calling, such as analog pagers, wireless handheld devices, CB radios, and satellite phones. They also integrated more sophisticated communications hardware and software programs. Some ports, such as Houston and San Diego, implemented 1-800 phone numbers to receive calls from port personnel. As an additional precaution, the Port of Houston utilizes call centers located out of state in areas that are less likely to have been impacted by the same storm. 
In another effort to route calls out of the impacted area, the Port of New Orleans has been assigned phone numbers with alternative area codes. Besides making improvements to communications systems, many port authorities took steps related to power and administrative operations. Seven port authorities reported purchasing or arranging for alternative power supplies that could be used during an outage. For example, the Port of New Orleans purchased generators after the 2005 hurricane season. Ports also recognized the need for administrative and information technology location alternatives. Four port authorities reported changing their alternative administrative sites since recent storms. Port authorities also told us that they have changed the way they back up and store their electronic data and equipment. For example, the Port of New Orleans previously had its alternative work site only 3 miles away from its regular operations location. Since both operations sites could be susceptible to the same disaster event, Port of New Orleans officials have partnered with the Port of Shreveport, Louisiana, almost 200 miles away, to use Shreveport's facilities as an alternate operations site if the Port of New Orleans is out of operation for more than 5 days. Further, the two ports have prepared a mutual agreement, which includes cost-sharing arrangements for information technology infrastructure upgrades at the Port of Shreveport, to better accommodate New Orleans' needs in a disaster. Another mitigation tactic by ports has been the sharing of best practices and lessons learned from recent natural disasters. Through efforts by the AAPA, a nationwide industry group, ports from across the U.S. and Canada participated in the development of an industry best practices document. In developing this document, AAPA organized various working groups, which included port officials from ports that had been affected by recent natural disasters, as well as ports that had not been affected.
Acting as a forum for port officials to share their experiences with natural disasters, these working groups were able to develop a manual focused on port planning and recovery efforts. Vetted by AAPA members, the manual includes planning for emergency operations, communications, damage assessments, insurance and FEMA claims processes, coordinating with federal agencies, and overall emergency planning objectives. Another industry group, the GICA, has worked closely with the Corps, Coast Guard, and other maritime agencies to implement new practices for a more efficient response to maritime-related incidents. Many of these efforts have been implemented as a result of recent hurricanes. For example, a special Logistics Support Center is set up during response times for the sole purpose of assisting the Corps and Coast Guard with contracting special equipment, including water, fuel, and crane barges; towing vessels; pumps; and generators. Regarding clearing the waterways, GICA barge members have provided knowledgeable waterway operators and state-of-the-art boats to assist Coast Guard personnel in conducting channel assessments immediately following a storm. In an effort to restore aids to navigation, GICA contacts also towed 50 temporary buoys and supplied aircraft for aerial surveillance of the waterways. Moreover, the Corps, Coast Guard, and GICA formed the Gulf Coast Inland Waterways Joint Hurricane Team to develop a protocol for storm response. Finalized in July 2006, the Joint Hurricane Response Protocol is an effort to more fully develop lessons learned from previous hurricane seasons and waterways management practices, with the goal of effectively restoring Gulf Coast maritime commerce following future storms. Ports that have not experienced problems as a result of recent disasters but that are nonetheless susceptible to disaster threats have also responded to these lessons learned by other ports.
For example, the Port of Tacoma hired a consultant to assist in developing a business continuity plan. The Port of Jacksonville has also undertaken a comprehensive enhancement to its continuity of operations plan. Likewise, as a result of lessons learned from the Loma Prieta Earthquake in Oakland, the Port of Los Angeles developed more stringent seismic building codes. Additionally, Port of Savannah officials told us that they, too, have changed their prehurricane crane operations based on lessons learned from hurricanes in the Gulf region. We found several examples of port efforts to improve stakeholder coordination, including utilizing existing forums to coordinate disaster planning, as well as realigning and enhancing their current plans. Regarding the use of existing forums, port authorities in both New Orleans and Mobile said they were using their AMSC to coordinate response and recovery efforts. Moreover, GAO has previously reported that in the wake of Hurricane Katrina, information was shared collaboratively through AMSCs to determine when it was appropriate to close and then reopen the port. Port-specific coordination teams, such as those at the Port of Houston, have also used their lessons learned to improve coordination for natural disaster planning. Houston’s port coordination teams are an outgrowth of the port’s relationships with other maritime stakeholders in the Houston-Galveston Navigation Safety Committee, which includes a wide variety of waterway users and operators. In another example, the Port of Oakland works closely with the City Disaster Council on emergency planning and participates in various exercises with city, county, and state officials. We also found several examples of how ports have aligned their local planning with the national planning structure and have identified various ways to enhance their current coordination plans. 
The national structure, which includes NIMS and the NRP, is designed to provide a consistent framework and approach for emergency management. Port plans that we reviewed, in particular those from ports in hurricane-impacted areas, have identified the importance of adapting to this national structure and emergency response system. For example, the Port of Mobile's emergency operations plan explains that the complexity of incident management and the growing need for stakeholder coordination have increased its need for a standard incident management system. Therefore, the Port of Mobile's emergency operations plan outlines the use of an incident management framework within which all agencies can work together in an efficient and effective manner. Some port authorities making such changes have not experienced any significant impact from recent disasters. For instance, Port of Jacksonville officials reported that Hurricane Katrina's impacts in the Gulf region prompted them to revise their disaster preparedness plans, including reorganizing the plans to reflect NIMS language and alignment with NRP guidelines. Similarly, Port of San Diego officials said they hired a consultant to assist them with drafting their emergency response and business continuity plan. San Diego's plan prioritized risks, clarified roles and responsibilities of key departments, and laid out directions on how to better coordinate with local emergency management officials during a disaster event. Since the 2005 hurricane season, federal agencies have also taken steps to help port authorities strengthen their ability to recover from future natural disasters. These efforts have focused on increased coordination and communication with stakeholders and also on building stakeholders' knowledge about federal resources for port recovery efforts. The efforts primarily involve four federal agencies that in some fashion work directly with ports: the Maritime Administration, the Coast Guard, FEMA, and the U.S.
Army Corps of Engineers. Efforts for those four agencies are as follows: Maritime Administration efforts: The Maritime Administration has taken two main steps: developing an approach for activating maritime assets in disaster recovery and updating a risk management guidebook. During the 2005 hurricane season, the Maritime Administration emerged as a critical resource for the Gulf area by providing vessels from the nation's National Defense Ready Reserve Fleet to enable recovery operations and provide shelter for displaced citizens. Since that time, FEMA has developed a one-time plan, the Federal Support Plan, created specifically for the 2006 hurricane season and for the federal government's response efforts in the State of Louisiana. The Maritime Administration contributed to this plan by identifying government and commercial maritime capabilities that could be employed in response to a disaster. According to Maritime Administration officials, while the information is focused on the Gulf area, it could be easily adapted to other areas of the United States if a disaster occurred. The Maritime Administration is currently completing the process of identifying needs and capabilities and plans to provide a directive regarding capabilities to its regional offices in June 2007. However, no strategy exists for communicating this information to ports. The Maritime Administration is also currently updating its publication titled Port Risk Management and Insurance Guidebook (2001), the agency's “best practices” guide for port risk management. Developed primarily to assist smaller ports in conducting risk management, the guidebook includes information on how ports can obtain insurance coverage, facilitate emergency management and port security, and apply risk management. The Maritime Administration began updating the guidebook after the 2005 hurricane season.
According to officials from the Maritime Administration, ports are actively using this guidebook, especially since many of the contributors are port directors and risk managers at the ports. While these efforts demonstrate the Maritime Administration's increased involvement in assisting ports in planning for future disasters, we also observed that Maritime Administration regions vary in their level of communication and coordination with ports. According to a Maritime Administration official, the Gulf and East Coast regions have been working with FEMA regional offices to quickly activate needed assets in case of a disaster. However, while the Gulf and East Coast regions have been strengthening these relationships, other regions may not have the same level of coordination. We found that, in general, port authorities' interaction with the Maritime Administration in natural disaster planning was limited, and the ports we spoke to said they usually did not work directly with the agency in disaster planning. This view was echoed by Maritime Administration officials, who said that the relationship between the agency's regional offices and the ports in their respective areas varied across the country. Coast Guard efforts: Coast Guard efforts in natural disaster planning varied considerably from port to port and were most extensive in the Gulf. While the Coast Guard was generally considered successful in its missions during the 2005 hurricane season, its officials said they were taking additional steps to improve planning for recovery efforts with port stakeholders based on their experiences with recent natural disasters. For example, at the Port of Mobile, Coast Guard officials said that participating in an actual Incident Command System unified command during emergencies has been as helpful as exercises and that, since the 2005 hurricane season, they have utilized such a unified command at least 10 times in preparation for potential hurricane landfalls in the region.
At other ports, the Coast Guard had a more limited role in assisting ports in planning for natural disasters. Even at ports that had not experienced substantial damage from a recent natural disaster, however, Coast Guard units were applying lessons learned from other ports' experiences and increasing their level of involvement. For example, the Port of Houston sustained minimal damage from Hurricane Rita; however, Coast Guard officials said that they identified areas where they could make improvements. The Coast Guard at the Port of Houston leads a recovery planning effort through port coordination teams, which include stakeholders such as the port authority, Coast Guard, and private operators, working together during disaster recovery efforts. These teams are all-hazards focused and are activated differently for terrorist incidents or natural disasters. Coast Guard officials said that although the teams were successful in planning for Hurricane Rita, there were areas for improvement, including outreach and training with port stakeholders and communication. Further, Coast Guard officials at the Port of Tacoma said that other ports' experiences with recent natural disasters have generated interest in their becoming more involved in the planning and coordination for natural disasters. They also indicated they were interested in adapting, in some form, a planning forum similar to the Port of Houston's port coordination teams. FEMA efforts: While state and local emergency management agencies assist in facilitating FEMA disaster planning at the port level, FEMA has several efforts under way to improve its assistance to ports for disaster recovery. For instance, FEMA officials said that through the Public Assistance Program, FEMA is able to provide assistance to ports that are eligible applicants after a major disaster or emergency.
Based on lessons learned from Hurricane Katrina, FEMA is also reviewing and updating its policies and guidance documents associated with this program. To administer the program, FEMA will coordinate closely with federal, state, and local authorities (including emergency management agencies) through its regional offices. Officials also said that through planning, training, and exercise activities sponsored by DHS, they hope to have greater opportunities to interact and coordinate with port authorities and other local agencies before disasters occur. Further, officials agree that coordination with their local counterparts is an important part of emergency management and disaster recovery efforts. U.S. Army Corps of Engineers efforts: Although the U.S. Army Corps of Engineers generally does not conduct natural disaster planning with ports, staff at the district level have made some efforts to increase their level of involvement in this process, particularly in the Gulf region. For example, district U.S. Army Corps of Engineers staff have (1) organized and chaired yearly hurricane planning forums to which all ports in the region are invited; (2) organized prestorm teleconferences for port stakeholders, the National Oceanic and Atmospheric Administration, the U.S. Navy, and in some instances, the media; (3) participated in the Coast Guard's Partner Emergency Action Team, which specifically addresses disaster preparedness; (4) geographically aligned with the Coast Guard to better facilitate coordination during an emergency; and (5) provided informational training on hurricane planning to ports and other maritime stakeholders. Many of these improvements were implemented as a result of Hurricane Ivan (2004) and the hurricanes from the 2005 season. However, the extent of the U.S. Army Corps of Engineers' participation in natural disaster planning with ports varies. For instance, U.S.
Army Corps of Engineers representatives in Savannah said they do not play a significant role in the port’s natural disaster planning for recovery efforts. Similarly in Jacksonville, U.S. Army Corps of Engineers officials explained that their primary natural disaster recovery duty at the Port of Miami is to repair the federal channel and they do not participate in the port authority’s disaster planning efforts. However, the Jacksonville U.S. Army Corps of Engineers does cooperate with the Coast Guard’s Marine Safety Office in Jacksonville in the development of their hurricane preparedness plan. For this effort, it assisted in determining what vessels could remain in port during a hurricane and what vessels would be required to leave. Most port authorities we reviewed conduct planning for natural disasters separately from planning for homeland security threats. Federal law established security planning requirements that apply to ports. Similar requirements do not exist with regard to natural disaster planning. The ports we contacted used markedly different approaches to natural disaster planning, and the extent and thoroughness of their plans varied widely. A few ports have integrated homeland security and natural disaster planning in what is called an all-hazards approach, and this approach appeared to be generating benefits and is in keeping with experts’ recommendations and with the newest developments in federal risk management policy. A consequence of the divided approach was a wide variance in the degree to which port stakeholders were involved in natural disaster planning and the degree to which port authorities were aware of federal resources available for disaster recovery. For homeland security planning, federal law provides for the establishment of AMSCs with wide stakeholder representation, and some ports are using these committees or another similar forum with wide representation in their disaster planning efforts. 
DHS, which through the Coast Guard oversees the AMSCs, provides an example of how to incorporate a wider scope of committee activity. Of the ports we visited, more than half developed plans for natural disasters separately from plans that address security threats. This is likely due to the requirement that port authorities carry out their planning for homeland security under the federal framework created by the Congress in the Maritime Transportation Security Act (MTSA), under which all port operators are required to draft individual security plans identifying security vulnerabilities and approaches to mitigate them. Under the Coast Guard's implementing regulations, these plans are to include such items as measures for access control, responses to security threats, and drills and exercises to train staff and test the plan. The plans are “performance-based”; that is, the security outcomes are specified, but the stakeholders are free to identify and implement appropriate solutions as long as these solutions achieve the specified outcomes. Because of the similarities between security and natural hazard planning, these plans can be useful for guiding natural disaster response. MTSA also provided the Secretary of Homeland Security with the authority to create AMSCs at the port level. These committees—with representatives from the federal, state, local, and private sectors—offer a venue to identify and deal with vulnerabilities in and around ports, as well as a forum for sharing information on issues related to port security. The committee assists the Coast Guard's COTP in developing an area maritime security plan, which complements the facility security plans developed by individual port operators. The plan provides a framework for communication and coordination among port stakeholders and law enforcement officials and identifies and reduces vulnerabilities to security threats throughout the port area.
In contrast, port authority and operator natural disaster planning documents are generally not required by law and vary widely. According to one member of the AAPA, ports will have various interrelated plans, such as hurricane readiness plans, emergency operations plans, engineering plans, and community awareness and emergency response plans. Taken as a whole, the distinct plans for a particular port may represent the port's risk management approach to disaster planning. In addition, port natural disaster plans are not reviewed by the Coast Guard. Representatives of the Coast Guard at locations we visited confirmed that they do not review port authority or port operator planning documents pertaining to natural disasters. For example, Coast Guard officials at the Port of Oakland and the Port of Tacoma said they do not review the ports' or port stakeholders' natural disaster planning documents. Coast Guard officials at the Port of Savannah also noted that they do not review the hurricane plans of port operators. They contended that they do not have the expertise to advise the operators on how to protect or restart their particular operations. Moreover, natural disaster plans developed by port authorities generally do not apply to the ports' private operators. Only in one case did a port authority state that it required its private operators to draft a natural disaster plan. We found that the thoroughness of natural disaster plans varied considerably from port to port. For instance, the Port of Mobile had a relatively thorough plan. The Port of Mobile was affected by three major hurricanes in 2005-2006. Roughly a year after Hurricane Katrina, the Alabama State Port Authority completed an extensive emergency operations plan, based on an analysis that considered natural, man-made, and security-related hazards.
The operations plan describes preparedness, response, recovery, and mitigation procedures for each identified threat, establishes requirements for conducting exercises, and establishes a schedule for regular plan reviews and updates. In contrast, the Port of Morgan City does not have a written plan for preparing for natural disaster threats but instead relies on port personnel to assess disaster risk and prepare appropriately. Following a disaster, the port authority relies on senior personnel to direct recovery efforts as needed. In the absence of uniform federal guidance for port disaster planning, some local governments have instituted local planning requirements. The differences in these local guidelines account for some of the variation in the content and thoroughness of port disaster plans. For example, the Miami-Dade County Emergency Management Office helps to coordinate disaster preparedness for all county agencies, including the Port of Miami. As such, the port submits its hurricane plans and continuity of operations plan to the office each year for review, which provides a certain level of quality assurance. By comparison, the Port of Los Angeles found local seismic building codes insufficient to reach the desired level of preparedness, so the port developed its own seismic codes to guide infrastructure construction and repair. In contrast to this divided approach to planning for natural disasters and security at ports, industry experts encourage the unified consideration of all risks faced by the port. Unified disaster preparedness planning requires that all of the threats faced by the port, both natural and man-made, be considered together. This is referred to as an all-hazards approach. Experts consider it to offer several advantages: Application of planning resources to both security and natural disaster preparedness.
Because of the similarities between the effects of terrorist attacks and natural or accidental disasters, much of the planning, personnel, training, and equipment that form the basis of protection, response, and recovery capabilities are similar across all emergency events. As we have previously reported, the capabilities needed to respond to major disasters, whether the result of terrorist attack or nature, are similar in many ways. Unified risk management can enhance the efficiency of port planning efforts because of the similarity in recovery plans for both natural and security-related disasters. One expert noted that responding to a disaster would likely be the same for a security incident and a natural disaster incident from an operational standpoint. Efficient allocation of disaster-preparation resources. An all-hazards approach allows the port to estimate the relative impact of mitigation alternatives and identify the optimal mix of investments in these alternatives based on the costs and benefits of each. The exclusion of certain risks from consideration, or the separate consideration of a particular type of risk, gives rise to the possibility that risks will not be accurately assessed or compared, and that too many or too few resources will be allocated toward mitigation of a particular risk. Port risk management experts noted that, in the absence of an all-hazards risk management process, it is difficult to accurately assess and address the full spectrum of threats faced by a port. At the federal level, the Congress has introduced various elements of an all-hazards approach to risk management and assistance to ports. Examples are as follows: Single response approach to all types of emergency events. The National Incident Management System (NIMS) and the National Response Plan (NRP), which were implemented by DHS, provide a unified framework for responding to security and natural disaster events.
NIMS is a policy document that defines roles and responsibilities of federal, state, and local first responders during all types of emergency events. The NRP is designed to integrate federal government domestic prevention, protection, response, and recovery plans into a single operational plan for all-hazards and all-emergency response disciplines. Using the framework provided by NIMS, the NRP describes operational procedures for federal support to emergency managers and organizes capabilities, staffing, and equipment resources in terms of functions that are most likely to be needed during emergency events. In addition, along with the NRP and NIMS, DHS has developed the National Preparedness Goal, as required by Homeland Security Presidential Directive 8. Considered as a group, these three documents are intended to guide investments in emergency preparedness and response capabilities for all hazards. An inability to effectively perform the critical tasks identified in these documents would, by definition, have a detrimental impact on effective protection, prevention, response, and recovery capabilities. Broadened focus for risk mitigation efforts. The Security and Accountability for Every Port Act, passed in October 2006, contains language mandating that the Coast Guard institute Port Security Training and Exercise Programs to evaluate the capabilities of port facilities to respond to acts of terrorism, natural disasters, and other emergencies. Officials from the DHS Preparedness Directorate's Grants and Training Office also noted that the criteria for the Port Security Grant Program are beginning to reflect the movement toward all-hazards planning in the future. DHS officials stated that the program may evolve to focus more on portwide risk management, rather than on risk mitigation for particular assets. Furthermore, grant applications that demonstrate mitigation of natural hazard risks in addition to security risks may be more competitive.
Other officials noted that while the program may focus more on all hazards in the future, it will remain focused on security priorities for now. Another agency-level movement toward the all-hazards approach is occurring in the Coast Guard's improvement of a computer tool it uses to compare security risks for targets throughout a port, including areas not under the jurisdiction of a local port authority. This tool, called the Maritime Security Risk Assessment Model (MSRAM), provides information for the U.S. Coast Guard Captain of the Port (COTP) to use in deciding the most efficient allocation of resources to reduce security risks at a port. The Coast Guard is developing an all-hazards risk assessment and management system, partially fed by MSRAM, which will allow comparison of risks and risk-mitigation activities across all goals and hazards. The Coast Guard directs the Area Maritime Security Committee to use MSRAM in the development of the Area Maritime Security Plan. Given that the Coast Guard is enhancing the MSRAM with a tool that will incorporate natural hazards, the risks addressed in the Area Maritime Security Plan could likely include both natural and security threats in the future. An all-hazards approach is in many ways a logical maturation of port security planning, which saw an aggressive homeland security expansion in the wake of the terrorist attacks of September 11, 2001. One expert in seismic risk management we spoke with said port officials he contacted indicated that they were not focused on natural disaster risk because, in their view, the federal government wanted them to focus on security risks instead. At some ports, hurricanes or earthquakes may be a greater threat than terrorism, and a case can be made that overall risk to a port might be more effectively reduced through greater investment in mitigating these risks.
While federal law provides guidance on addressing security risks through the Maritime Transportation Security Act of 2002 (MTSA) and its implementing regulations, it does not provide similar guidance pertaining to mitigation of natural disaster threats. Our previous work on risk management has examined the challenges involved in comparing risk across broader threat categories. A risk management framework that analyzes risks based on the likelihood that they will occur and the consequences of their occurrence is a useful tool for ensuring that program expenditures are prioritized and properly focused. In light of the competition for scarce resources available to deal with the threats ports face, a clear understanding of the relative significance of these threats is an important step. Two port authorities we reviewed have begun to take an all-hazards approach to disaster planning by developing planning documents and structures that address both security risks and natural disasters, and officials at both ports said this approach yielded benefits. At the Port of Houston, the Coast Guard used its authority to mandate the creation of port coordination teams that include all port stakeholders and that combine planning and response efforts for both security and natural disaster threats. This unified approach to risk management has allowed the port to respond efficiently to disasters when they occur, according to port officials. In particular, they said, the organization of the team changes to match the nature of the threat. For security threats, the teams are organized geographically and do not require that the entire port close down, thereby appropriately matching resources to the threat being faced. For natural disasters, the teams are organized functionally because of the more dispersed nature of the threat. Following the 2005 hurricane season, the Port of Mobile convened a task force to reorganize its disaster planning to address both security incidents and natural disasters.
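The likelihood-and-consequence framework described above can be illustrated with a minimal expected-loss calculation. This is only a sketch: the threat categories and all figures below are hypothetical, chosen to show how an all-hazards comparison might rank risks, not drawn from any port's actual assessment.

```python
# Minimal expected-loss sketch: risk = annual likelihood x consequence.
# All figures are hypothetical illustrations, not data from the report.
threats = {
    "hurricane":  {"annual_likelihood": 0.20, "consequence_usd": 50_000_000},
    "earthquake": {"annual_likelihood": 0.02, "consequence_usd": 200_000_000},
    "terrorism":  {"annual_likelihood": 0.01, "consequence_usd": 100_000_000},
}

# Expected annual loss per threat, in dollars.
expected_loss = {
    name: t["annual_likelihood"] * t["consequence_usd"]
    for name, t in threats.items()
}

# Rank threats so mitigation dollars can be weighed across all hazards,
# rather than within a single category such as security.
ranked = sorted(expected_loss.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # hurricane ranks highest under these illustrative figures
```

Under these illustrative numbers, the natural hazards dominate the expected loss, which mirrors the report's point that at some ports hurricanes or earthquakes may pose a greater risk than terrorism.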
The task force, which recently completed its emergency operations plan, included the Port Authority Police Chief; Harbormaster; Environmental, Health and Safety Manager; and representatives of the port’s rail, cargo, intermodal and development divisions. A member of the county emergency management agency also served on the task force to provide expert guidance on emergency response planning. Port stakeholders in other ports that had not moved to an all-hazards approach also said preparedness and response practices for security incidents and natural disasters are sufficiently similar to merit combined planning. Officials in several ports said that although they are required to allocate certain resources to security risk mitigation, overall risk to the port would be more effectively reduced if they had the flexibility to allocate some of those resources to mitigating natural disaster risk. We have previously reported that, for homeland security planning, the AMSCs established under federal law have been an effective coordination tool. These committees have provided a structure to improve the timeliness, completeness, and usefulness of information sharing between federal and nonfederal stakeholders. Port stakeholders said that the committees were an improvement over previous information-sharing efforts because they established a formal structure for communicating information and new procedures for sharing information. Stakeholders stated that, among other things, the committees have been used as a forum for sharing assessments of vulnerabilities, providing information on illegal or suspicious activities, and providing input on Area Maritime Security Plans. Stakeholders, including private operators, said the information sharing had increased their awareness of security issues around the port and allowed them to identify and address security issues at their facilities. 
Likewise, Coast Guard officials said the information they received from nonfederal participants had helped in mitigating and reducing risks. In contrast to the regulatory requirements for the establishment of AMSCs, there are no nationwide federal mandates for all-hazards planning forums that involve a broad spectrum of stakeholders in disaster planning. In the absence of any consistent requirement or approach, we found substantial variation in the maturity of, and participation in, natural disaster planning forums at ports. As table 3 shows, the level of activity and the participants varied considerably. Some ports utilized their AMSC for both types of planning, while others conducted natural disaster planning efforts primarily within the local area’s broader emergency management forums, and still others conducted their planning piecemeal, with various entities meeting separately and not in one coordinated forum. The Port of Savannah provides an example of how separate planning for natural disasters and security can lead to a lack of coordination and information-sharing. While officials from the local emergency management agency said they reviewed and provided comments on the Georgia Port Authority’s most recent Hurricane Plan and Draft Emergency Operations Plan, this had not traditionally been the case over the past several years. According to a representative from the emergency management agency, if the port is not sharing its emergency operations plans, it makes it difficult for responders in the local area to understand what is happening within the port in terms of planning for natural disasters. Additionally, while the local EMA is enjoying an ongoing productive dialogue with port representatives in developing the Emergency Operations Plan and working on port safety and security issues, they are not having the same level of success with port representatives responsible for hurricane planning. 
Even so, officials said that they had seen marked improvement in the area of portwide cooperation and involvement among stakeholders. Port authorities' lack of familiarity with FEMA's programs is another example of the gaps that exist. We found that port authorities' understanding of FEMA's assistance was dependent on their relationship with the local or state emergency management office—a stakeholder that is not necessarily involved in the forums where the port's natural disaster planning occurs. We discussed three FEMA programs with officials from our seven case study ports: the Public Assistance Program, the Hazard Mitigation Grant Program, and the Predisaster Mitigation Grant Program (see table 4 for brief descriptions). These programs provide ports with funds for disaster mitigation efforts before and after disaster events and assist ports in avoiding costly damages. Of the three programs, port authorities were most knowledgeable about, and most involved with, the Public Assistance Program, although even with this program, some port authorities reported encountering challenges with the process during the 2005 hurricane season. Their knowledge of and participation in the two hazard mitigation grant programs was dependent on their involvement with the emergency planning office. FEMA officials told us that no ports have applied as an applicant or subapplicant for the Predisaster Mitigation Program, and only a few had received assistance through the Hazard Mitigation Grant Program since 1998. AAPA officials made the same point—that many ports are unaware of, unsure how to navigate, or do not understand the resources that are available to them for disasters. In its new best practices manual for natural disaster planning, AAPA included a section regarding various federal resources available, including FEMA. The 2005 hurricane season emphasized the need for ports to plan for other threats in addition to security.
Since the terrorist attacks of September 11, 2001, the country has focused on enhancing its security measures, and ports in particular have been targeted due to their vulnerability and their criticality to the U.S. economy. While ports have long prepared to some degree for hurricanes and earthquakes, the hurricanes of 2005 highlighted key areas in which natural disaster planning was often inadequate. Even ports that were not directly impacted by the hurricanes recognized their own vulnerabilities and took additional actions. As ports continue to revise and improve their planning efforts, available evidence indicates that, if ports take a system-wide approach, thinking strategically about using resources to mitigate and recover from all forms of disaster, they will be able to achieve the most effective results. The federally established framework for ports’ homeland security planning appears to provide useful elements for establishing an all-hazards approach and adopting these elements appears to be a logical starting point for an all-hazards approach for port authorities. In particular, greater coordination between stakeholders appears important to ensure that available federal resources can be most effectively applied. A forum for sharing information and developing plans across a wide range of stakeholders, as occurs with a port’s AMSC, is critical for ensuring that local stakeholders can use federal resources effectively. This is especially the case for mitigation grants administered by the Federal Emergency Management Agency and the Maritime Administration’s communication of information regarding making ships and other maritime resources available for disaster recovery. 
To help ensure that ports achieve adequate planning for natural disasters and effectively manage risk to a variety of threats, we are recommending that the Secretary of the Department of Homeland Security encourage port stakeholders to use existing forums for discussing all-hazards planning efforts and to include appropriate representatives from DHS, the port authority, the local emergency management office, the Maritime Administration, and vessel and facility owner/operators. To help ensure that ports have adequate understanding of maritime disaster recovery resources, we recommend that the Secretary of the Department of Transportation direct the Administrator of the Maritime Administration to develop a communication strategy to inform ports of the maritime resources available for recovery efforts. We provided a draft of this report to DHS, DOT, and DOD for their review and comment. In DHS's letter, the department generally agreed that existing forums provide a good opportunity to conduct outreach to and encourage participation by stakeholders from various federal, state, and local agencies and, as appropriate, industry and nongovernmental organizations. However, the department said it did not endorse placing responsibility for disaster contingency planning on existing committees in ports and said these responsibilities should remain with state and local emergency management planners. Our recommendation was not to place responsibility for such planning within port committees, but rather to use these existing forums as a way to engage all relevant parties in discussing natural disaster planning for ports. The problem we found at various locations we visited was that all parties have not been involved in these efforts. In our view, these committees represent a ready way to accomplish this task.
While we understand the Coast Guard's concern with diluting existing statutorily mandated port-related committees, we found during the course of our fieldwork that some ports were already using existing port committees effectively to plan for all hazards. Further, we believe that the unique nature of ports and their criticality to goods movement warrants that all ports be encouraged to have a specific forum for all-hazards planning. DHS's letter is reprinted in appendix II. DHS officials provided technical comments and clarifications, which we incorporated as appropriate to ensure the accuracy of our report. In general, DOT agreed with the facts presented in the report. Department officials provided a number of comments and clarifications, which we incorporated as appropriate to ensure the accuracy of our report. The department generally concurred with GAO's recommendation. Additionally, DOD generally agreed with the facts presented in the report. Department officials provided some technical comments and clarifications, which we incorporated as appropriate to ensure the accuracy of our report. We will send copies of this report to the interested congressional committees, the Secretary of Transportation, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6570 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
This report, initiated under the Comptroller General’s authority to examine government operations, examines (1) the challenges port authorities have experienced as a result of recent natural disasters, (2) the efforts under way to address challenges from these disasters, and (3) the manner in which port authorities prepare for disasters and the effect of this approach on their ability to share information with port stakeholders and access federal resources. To address these objectives, we focused much of our work on 17 U.S. ports. We focused primarily on commercial ports and various commercial aspects of these ports. The main criteria we used to select ports for study were as follows: Size of port, based on the value of imported cargo. To ensure a varied size of ports, we selected ports that were among the top 50 in size, but within these 50, we chose ports whose total cargo values were greater than and less than the average cargo value for all 50 top ports. Experience with recent natural disasters. We focused our efforts primarily—but not exclusively—on ports that had some degree of experience with a natural disaster since 1998. Based on Department of Homeland Security (DHS) guidance about the most significant disaster threats and potential hazards, we limited our focus to ports that have hurricane or seismic threats. In particular, we included a number of ports affected by the 2005 hurricane season—primarily hurricanes Katrina, Wilma, and Rita. In all, 10 of the 17 ports we selected were affected by hurricanes that year. Operational type. We chose ports that reflected a range of operating types, including those that (1) manage port operations and provide all services, (2) act as a landlord and lease operations and facilities to tenants, and (3) conduct limited operations in the port and lease facilities to others. Region of the United States. We selected ports from the East, Gulf, and West Coasts. 
There is an overrepresentation of Gulf region ports to ensure adequate coverage of hurricane-affected ports. In making our selections, we used information from the Maritime Administration, including port demographics (operational type, legal type, and region) from the Public Port Finance Survey Report and Maritime Administration waterborne statistics, which report the top 50 ports in terms of total cargo value. We determined that what we found at those ports is not generalizable to all U.S. ports. We used disaster data from the Federal Emergency Management Agency (FEMA) to assess how many natural disasters had affected the counties in which each port was located. Based on our review of data documentation, we determined that the data we used in applying our criteria for port selection were sufficiently reliable for our purposes. We took two approaches to reviewing these ports—site visits and telephone interviews. We conducted site visits at seven ports, as follows: Tacoma, Washington; Houston, Texas; Oakland, California; Gulfport, Mississippi; Mobile, Alabama; Miami, Florida; and Savannah, Georgia. During these visits, we gathered information from various maritime stakeholders, including officials from port authorities, emergency management agencies, the U.S. Coast Guard, the U.S. Army Corps of Engineers, and the Maritime Administration. Although we talked to four private operators, we did not interview other private operators because their roles and responsibilities vary greatly from port to port and because their efforts for natural disasters, unlike their efforts for homeland security, are not subject to federal requirements or guidelines. We designed our case study interview questions to provide insight on (1) general governance and operations of the port, (2) impacts from recent natural disasters, (3) lessons learned from previous natural disasters, (4) risk management procedures, and (5) stakeholder collaboration.
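As a rough sketch, the port-selection screen described above (top-50 ports by cargo value, a mix above and below the average value, and recent disaster experience) can be expressed as a filter over port records. The records, field names, and values below are invented for illustration; they are not the Maritime Administration data the selection actually used.

```python
# Hypothetical port records; cargo_value is in billions of dollars.
ports = [
    {"name": "Port A", "rank": 3,  "cargo_value": 90.0, "disaster_since_1998": True,  "region": "Gulf"},
    {"name": "Port B", "rank": 12, "cargo_value": 40.0, "disaster_since_1998": True,  "region": "West"},
    {"name": "Port C", "rank": 35, "cargo_value": 9.0,  "disaster_since_1998": False, "region": "East"},
    {"name": "Port D", "rank": 48, "cargo_value": 4.0,  "disaster_since_1998": True,  "region": "Gulf"},
]

# Restrict to the top 50 ports and compute the average cargo value.
top_50 = [p for p in ports if p["rank"] <= 50]
avg_value = sum(p["cargo_value"] for p in top_50) / len(top_50)

# Screen: ports with disaster experience since 1998, keeping a mix of
# ports above and below the average cargo value for the top 50.
candidates = [p for p in top_50 if p["disaster_since_1998"]]
above_avg = [p["name"] for p in candidates if p["cargo_value"] > avg_value]
below_avg = [p["name"] for p in candidates if p["cargo_value"] <= avg_value]
```

A final selection along these lines would also check region coverage (East, Gulf, and West Coasts) and the mix of operational types, as the criteria above describe.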
We conducted telephone interviews with officials at 10 ports, as follows: Freeport, Texas; Jacksonville, Florida; Los Angeles, California; Morgan City, Louisiana; New Orleans, Louisiana; Pascagoula, Mississippi; Port Arthur, Texas; Richmond, Virginia; San Diego, California; and Wilmington, North Carolina. At these ports, we limited our telephone interviews to port authorities only. These semi-structured interviews addressed the same topics as the case study but focused more on damages and lessons learned as a result of recent natural disasters. For both sets of ports, we also reviewed numerous planning documents from port stakeholders, including emergency preparedness plans, disaster recovery plans, hurricane operations plans, hurricane manuals, seismic guidelines, and business continuity plans. To assess the challenges port authorities experienced as a result of recent natural disasters, we used the interviews we conducted and the documents we obtained from officials at the 17 ports. To determine the efforts under way to address these challenges, we reviewed information from our interviews with and documents from American Association of Port Authorities (AAPA) officials and various federal agencies. In particular, we reviewed the Emergency Preparedness and Continuity of Operations Planning: Manual for Best Practices that was developed through several working groups coordinated by the AAPA. The working groups provided a forum for port officials across the United States and Canada to share their experience in planning for the impacts of recent natural disasters and to share their best practices. We conducted interviews with the Chair of the working groups and other AAPA officials to gather more information about the working group's procedures and vetting process. Additionally, we interviewed various regional and headquarters officials of the Maritime Administration, U.S. Coast Guard (Coast Guard), Department of Transportation, U.S. Army Corps of Engineers, FEMA, and DHS.
We reviewed the following federal risk management plans: The draft appendix for maritime resources for the Federal Support Plan. The appendix is part of a one-time joint planning document between the Department of Transportation and FEMA for the state of Louisiana (2006 Hurricane Season). The Maritime Administration, an agency within the Department of Transportation, developed this appendix to assist in future recovery efforts by identifying resources, protocols, and organizations for maritime resources. The Port Risk Management and Insurance Guidebook, developed by the Maritime Administration. This publication is a best practices guide for port risk management, including information on how ports obtain insurance coverage and facilitate emergency management. To determine how port authorities plan for natural disasters and the effects of that approach on information-sharing among port stakeholders and access to federal resources, we reviewed port and federal disaster planning documents collected from various port stakeholders at each of the seven ports we visited in person. In order to gain an understanding of best practices for such planning efforts, we interviewed academic, industry, and government experts. In particular, we interviewed risk management experts from the following organizations: Georgia Institute of Technology’s Port Seismic Risk Management Team conducted damage assessments at seven ports in south Louisiana in October 2005 immediately following Hurricane Katrina. ABS Consulting has worked with a variety of clients including the Coast Guard, Maritime Administration, and FEMA and thus helped develop several port risk management tools. The Office of Grants and Training at DHS administers both Port Security and Homeland Security Grants. The Coast Guard has expertise in utilizing the Maritime Security Risk Assessment Model (MSRAM) to assess security risk and has plans to incorporate natural disaster risks into the model. 
We also reviewed related laws and mandates that provide federal oversight to ports—namely the Maritime Transportation Security Act of 2002 and its implementing regulations and other applicable law. We also reviewed the Puget Sound area maritime security plan and attended an Area Maritime Security Committee meeting at the Port of Houston-Galveston. To determine steps that federal agencies were taking with regard to all-hazards risk management, we reviewed (1) the Security and Accountability for Every Port Act (SAFE Port Act), which addresses risk mitigation of transportation disruptions, including disruptions caused by natural disasters, and (2) policy documents including the National Response Plan and the National Incident Management System. We also reviewed a presentation on the Coast Guard's MSRAM. We conducted our work from December 2005 through February 2007 in accordance with generally accepted government auditing standards. In addition to the individual named above, Sally Moino, Assistant Director; Casey Hanewall; Lindsey Hemly; Christoph Hoashi-Erhardt; Bert Japikse; Erica Miles; Sara Ann Moessbauer; Jamilah Moon; Sharon Silas; Stan Stenerson; and Randall Williamson made key contributions to this report. | U.S. ports are significant to the U.S. economy, handling more than 2 billion tons of domestic and import/export cargo annually. While much of the national focus on ports' preparedness since September 11, 2001, has been on preventing potential acts of terror, the 2005 hurricane season renewed focus on how to protect ports from a diversity of threats, including natural disasters. This report was prepared under the authority of the Comptroller General to examine (1) challenges port authorities have experienced as a result of recent natural disasters, (2) efforts under way to address these challenges, and (3) the manner in which port authorities plan for natural disasters.
GAO reviewed documents and interviewed various port stakeholders from 17 major U.S. ports. Ports, particularly those impacted by the 2005 hurricane season, experienced many different kinds of challenges during recent natural disasters. Of the 17 U.S. ports that GAO reviewed, port officials identified communications, personnel, and interagency coordination as their biggest challenges. Many port authorities have taken steps to address these challenges. Individually, ports have created redundancy in communications systems and other backup equipment and updated their emergency plans. Collectively, the American Association of Port Authorities developed a best practices manual focused on port planning and recovery efforts, as well as lessons learned from recent natural disasters. Even ports that have not experienced problems as a result of recent disasters, but are nonetheless susceptible to disaster threats, have responded to lessons learned by other ports. Additionally, federal maritime agencies, such as the U.S. Coast Guard, the Maritime Administration, and the U.S. Army Corps of Engineers have increased their coordination and communication with ports to strengthen ports' ability to recover from future natural disasters and to build stakeholders' knowledge about federal resources for port recovery efforts. Most port authorities GAO reviewed conduct planning for natural disasters separately from planning for homeland security threats. Unlike security efforts, natural disaster planning is not subject to the same type of specific federal requirements and, therefore, varies from port to port. 
As a result of this divided approach, GAO found a wide variance in ports' natural disaster planning efforts, including (1) the level of participation in disaster forums and (2) the level of information sharing among port stakeholders. In the absence of appropriate forums and information sharing opportunities among ports, some ports GAO contacted were limited in their understanding of federal resources available for predisaster mitigation and postdisaster recovery. Other ports have begun using existing forums, such as their federally mandated Area Maritime Security Committee, to coordinate disaster planning efforts. Port and industry experts, as well as recent federal actions, are now encouraging an all-hazards approach to disaster planning and recovery. That is, disaster preparedness planning requires that all of the threats faced by the port, both natural (such as hurricanes) and man-made (such as terror events), be considered together. The Department of Homeland Security, which through the Coast Guard oversees the Area Maritime Security Committees, provides an example of how to incorporate a wider scope of activity for ports across the country. Additionally, the Maritime Administration should develop a communication strategy to inform ports of the maritime resources available for recovery efforts.
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 (the 1996 Act), which amended the Immigration and Nationality Act (INA), was enacted on September 30, 1996 (P.L. 104-208). Among other things, the 1996 Act included a new provision, called expedited removal, for dealing with aliens who attempt to enter the United States by engaging in fraud or misrepresentation (e.g., falsely claiming to be a U.S. citizen or misrepresenting a material fact) or who arrive with fraudulent, improper, or no documents (e.g., visa or passport). The expedited removal provision, which went into effect on April 1, 1997, reduces an alien's right to seek review of an inadmissibility determination. In the years preceding the passage of the 1996 Act, concerns were raised about the difficulty of preventing illegal aliens from entering the United States and the difficulty of identifying and removing illegal aliens once they entered this country. The expedited removal process was designed to prevent aliens who attempt to enter the United States by engaging in fraud or misrepresentation, or who arrive without proper documents, from entering this country at our ports of entry. Several legal services organizations and individual aliens have challenged the constitutionality of the expedited removal process established by the 1996 Act (see app. I for a discussion of these court cases). These suits claim, among other things, that the expedited removal process denies substantive and procedural rights to asylum seekers; creates an unreasonably high risk of erroneous removals of citizens, lawful permanent residents, and other holders of valid visas; denies the organizations' First Amendment right of access to aliens applying for entry into the United States; and may not be correctly applied to unaccompanied minors. As of March 15, 1998, these cases were pending in federal court.
The Immigration and Naturalization Service (INS) and immigration judges have roles in implementing the provisions of the 1996 Act relating to the expedited removal of aliens. INS’ responsibilities include (1) inspecting aliens to determine their admissibility and (2) reviewing the basis and credibility of aliens who are subject to expedited removal but who claim a fear of persecution if returned to their home country or country of last residence. Aliens can request that immigration judges review INS’ negative credible fear determinations. Immigration judges, who report to the Chief Immigration Judge, are in the Executive Office for Immigration Review (EOIR), within the Department of Justice. The immigration judges are located in immigration courts throughout the country. Before the 1996 Act, aliens who wanted to be admitted to the United States at a port of entry were required to establish admissibility to an inspector. This requirement remains applicable under the 1996 Act. INS has about 4,500 inspectors and about 260 staffed ports of entry. Generally, aliens provide inspectors with documents that show they are authorized to enter this country. At this primary inspection, the INS inspector either permits the aliens to enter or sends the aliens for a more detailed review of their documents or further questioning by another INS inspector. The more detailed review is called secondary inspection. In deciding whether to admit the alien, the INS inspector is to review the alien’s documents for accuracy and validity and check INS’ and other agencies’ databases for any information that could affect the alien’s admissibility. After reviewing the alien’s documents and interviewing the alien at the secondary inspection, the inspector may either admit or deny admission to the alien or take other discretionary action. 
INS can prohibit aliens from entering the United States for a number of reasons (e.g., criminal activity or failing to have a valid visa, passport, or other required documents). Inspectors have discretion to permit aliens to (1) enter the United States under limited circumstances even though they do not meet the requirements for entry or (2) withdraw their applications for admission and depart. Before the April 1, 1997, enactment of the expedited removal process, the INA authorized the Attorney General to exclude certain aliens from admission into the United States. Aliens whom inspectors determined to be excludable from this country generally were allowed either to (1) return voluntarily to the country from which they came or (2) appear for an exclusion hearing before an immigration judge. During this hearing, aliens who said they had a fear of persecution if they were returned to their home country could file an application for asylum. The immigration judges’ decisions could be appealed to EOIR’s Board of Immigration Appeals (BIA), which is a quasi-judicial body that hears appeals of INS’ and immigration judges’ decisions. Furthermore, the alien could appeal BIA’s decision through the federal court system. The scope of the federal court’s review was limited to whether the government followed established procedures. Aliens who were excluded from entering the United States under this process generally were barred from reentering this country for 1 year. The exclusion process is discussed in more detail in chapter 2. From April 1, 1996, to October 31, 1996, the monthly average number of aliens who INS (1) inspected at U.S. ports of entry was about 27.1 million; (2) referred to secondary inspection was about 780,000; and (3) did not admit into this country was about 63,250. 
Under the 1996 Act, an INS inspector, instead of an immigration judge, can issue an expedited removal order to aliens who (1) are denied admission to the United States because they engage in fraud or misrepresentation or arrive without proper documents when attempting to enter this country and (2) do not express a fear of returning to their home country. INS is to remove the alien from this country. Aliens who are issued an expedited removal order generally are barred from reentering this country for 5 years. The expedited removal provision also established a new process for aliens who express a fear of being returned to their home country and who are subject to expedited removal. Inspectors are to refer such aliens to INS asylum officers for an interview to determine whether the aliens have a credible fear of persecution or harm if returned to their home country. This is called a credible fear interview. The term “credible fear of persecution” is defined by statute as “a significant possibility, taking into account the credibility of the statements made by the alien in support of the alien’s claim and such other facts as are known to the officer, that the alien could establish eligibility for asylum under Section 208” of the INA. INS has a cadre of about 400 asylum officers who are involved with the asylum process. About 300 of these officers have been trained to conduct credible fear interviews. INS has eight asylum offices nationwide. The expedited removal process is discussed in more detail in chapter 2, and the credible fear process is discussed further in chapter 3. From April 1, 1997, to October 31, 1997, the monthly average number of aliens who INS (1) inspected at ports of entry was about 28.9 million; (2) referred to secondary inspection was about 608,000; and (3) did not admit was about 56,500. 
The 1996 Act requires us to study the implementation of the expedited removal process, including credible fear determinations, and report to the Senate and House Committees on the Judiciary. We address the following aspects of the exclusion and expedited removal processes in this report: how the expedited removal process and INS procedures to implement it are different from the process and procedures used to exclude aliens before the 1996 Act; the implementation and results of the process for making credible fear determinations during the 7 months following April 1, 1997; and the mechanisms that INS established to monitor expedited removals and credible fear determinations and to further improve these processes. We also provide information on INS’ and EOIR’s estimates of costs to implement the expedited removal process and the time required to adjudicate expedited removal cases and credible fear determinations. We did our work at INS and EOIR headquarters offices and INS field locations at five U.S. ports of entry—two land ports and three airports. These five locations had about 50 percent of the expedited removal cases during the first 7 months after the 1996 Act was implemented. We judgmentally selected these 5 of the about 260 staffed ports to include a large number of entries by aliens, geographically diverse areas, and the 2 major types of ports of entry (land ports and airports). We selected San Ysidro (CA), as a southern land port; Niagara Falls (NY), as a northern land port; and Miami International, Los Angeles International, and John Fitzgerald Kennedy International (JFK) Airports. According to INS, these ports were expected to have large volumes of expedited removal orders, and the airports were anticipated to have a large number of credible fear referrals. We discussed these selections with INS officials who said that the ports should provide us with a reasonable representation of its implementation of the new law. 
Although we visited the Niagara Falls land port, we included in some of our analyses, data for the entire Buffalo district, which includes the Niagara Falls land port. We selected the three asylum offices at which we did our field work—New York, Miami, and Los Angeles—because they conducted credible fear interviews for four (Los Angeles, JFK, Miami, and San Ysidro) of the five ports we visited. The Newark (NJ) asylum office conducted credible fear interviews for the Buffalo District Office. Because Newark was not one of the five ports we included in our review, we decided not to increase our audit costs by adding another location. We did our fieldwork related to EOIR at four of the immigration courts—Wackenhut (New York City), Krome (Miami), San Pedro (Los Angeles), and El Centro (El Centro, CA)—which held reviews of negative credible fear determinations for aliens who attempted entry at the ports we visited. We selected these four courts because they were near the ports of entry included in our review. We limited the data on removal of aliens before April 1, 1997, to the airports because INS did not maintain nationwide data on the reasons aliens were not admitted into the United States. However, the individual airports maintained data on the reasons for aliens’ inadmissibility into the country. Therefore, we analyzed the data for the Miami, Los Angeles, and JFK airports to determine the aliens’ dispositions. To present disposition data on aliens who were subject to the expedited removal process since April 1, 1997, we obtained data from INS on aliens who were processed under expedited removal but were not referred for a credible fear interview, both nationwide and for the five ports in our study. 
To develop data on inspectors’ completion of required forms, background information about the aliens, and the length of the expedited removal process from the day the alien attempted to enter the country to the day the alien was removed, we reviewed probability samples of 434 files for aliens who entered the expedited removal process but were not referred for a credible fear interview. This effort consisted of five separate reviews of individuals entering the country between May 1, 1997, and July 31, 1997, at the five locations we visited and individuals who were processed through the expedited removal process. To obtain data on the time needed to adjudicate cases before and after expedited removal, we asked INS and EOIR officials to estimate the time required for different steps in the adjudication process, including credible fear determinations, for the locations included in our study. To obtain estimates for the costs to INS and EOIR to implement the expedited removal process, including the credible fear determinations, we asked each agency to develop cost data. To develop workload data related to the credible fear process that went into effect on April 1, 1997, INS provided nationwide data. These data included the number of credible fear interviews held and the results of those interviews. EOIR provided data from a nationwide database on the results of the negative credible fear reviews conducted by the immigration judges. Also, we reviewed the immigration judges’ worksheets, for all cases in which they vacated asylum officers’ negative credible fear determinations, for the period April 1, 1997, to August 31, 1997. To determine, in part, whether it was documented that asylum officers followed certain credible fear determination processes, we reviewed all 84 files of negative credible fear determinations for the months of May through July, 1997. 
In addition, during our field visits we observed inspectors processing 16 aliens through the expedited removal process, asylum officers conducting 9 credible fear interviews, and immigration judges holding 5 negative credible fear reviews in Miami, the only location where reviews were conducted at the time of our visit. We also met and/or talked with various nongovernmental organizations (e.g., American Bar Association, Lawyers Committee for Human Rights, Lawyers’ Committee for Civil Rights of the San Francisco Bay Area, Office of the United Nations High Commissioner for Refugees, Amnesty International, and American Civil Liberties Union) to discuss our methodology and to get input on the types of data we should collect through these observations and file reviews. Officials from these organizations provided information on their concerns about the expedited removal process, including credible fear determinations, and provided information about specific problems they said were encountered by aliens during the process. To describe INS’ controls to monitor and oversee the expedited removal process, including credible fear determinations, we interviewed INS officials at headquarters and locations we visited and obtained data related to these activities. More details on our objectives, scope, and methodology are in appendix II of this report. Also included in appendix II is a description of the databases we used and our efforts to assess these databases’ reliability. We did our review from November 1996 to March 1998 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Attorney General for review and comment. On March 16, 1998, we met with Department of Justice officials, including INS’ Director, International Affairs, to obtain Justice’s comments. Overall, the officials stated that the report was accurate and fair. 
They also provided technical comments, which have been incorporated in this report where appropriate. The 1996 Act significantly changed INS’ authority over the removal of aliens requesting admission to the United States at ports of entry. Previously, aliens could have a hearing before an immigration judge and could appeal an immigration judge’s decision ordering their exclusion from this country through BIA and the federal courts. The scope of the federal court’s review was limited to whether the government followed established procedures. Generally, under the 1996 Act, aliens who attempt to enter the United States by engaging in fraud or misrepresentation or who arrive without proper documents are subject to an expedited removal order from an INS inspector that the alien cannot appeal. The penalty for inadmissible aliens, including those subject to expedited removal, generally increased from the aliens’ being prohibited from entry into the United States for 1 year in the pre-1996 Act exclusion process to being prohibited from entry for 5 years under the post-1996 Act expedited removal process. Furthermore, inspectors have added responsibility to identify aliens who have a fear of returning to their home country. Under the expedited removal process, INS has established more specific procedures to guide inspectors than it had in the exclusion process used before the 1996 Act. Finally, the inspections component of the expedited removal process has more steps for INS inspectors than the exclusion process had and, therefore, INS estimated it generally took more of the inspectors’ time than the exclusion process did at the locations we visited. INS implemented the expedited removal process by issuing regulations as well as specific guidance and training for its staff who would be responsible for carrying out the process. 
Between April 1, 1997, and October 31, 1997, INS data showed that 29,170 aliens went through the expedited removal process, including 1,396 aliens who were referred for a credible fear interview with an asylum officer. Documentation in the INS files that we reviewed at five locations showed some inconsistencies as to whether inspectors and supervisors were documenting that they followed various steps in INS’ expedited removal process, such as signing key forms and asking required questions. INS staff also have reviewed files and found that INS inspectors and supervisors were not always documenting that they followed INS procedures. INS officials told us that they have reinforced with inspectors the need for proper documentation. Before the implementation of the 1996 Act, aliens could be formally ordered removed only by an immigration judge through an exclusion hearing. If inspectors found that an alien was not admissible into this country, options available to the inspector included allowing the alien to withdraw his or her application for admission and voluntarily depart, processing a waiver of inadmissibility, deferring the inspection, paroling the alien into the United States (i.e., a procedure used to temporarily admit an excludable alien into the country for emergency reasons or when in the public interest), or preparing the case for an exclusion hearing. Figure 2.1 shows a flowchart of the exclusion process that was used before the 1996 Act. As shown in the flowchart in figure 2.1, aliens who were denied admission by INS could request an exclusion hearing before an immigration judge. 
At these exclusion hearings, aliens were to be afforded the following due process procedures: be represented by counsel at no expense to the government; be informed of the nature, purpose, time, and place of the hearing; present evidence and witnesses in their own behalf; examine and object to evidence against them; cross-examine witnesses presented by the government; request the immigration judge to issue subpoenas requiring the attendance of witnesses and/or the production of documentary evidence; and appeal the immigration judge decisions to BIA and the federal courts. At the exclusion hearing, the burden of proving admissibility generally rested with the alien. INS would present evidence and examine and cross-examine the alien and witnesses. At the end of the hearing, the judge would render a decision, such as (1) exclude the alien (i.e., not allow him/her to enter the United States); (2) grant the alien relief from exclusion (i.e., allow the alien to enter this country); or (3) permit the alien to withdraw his or her application for admission (i.e., allow the alien to voluntarily leave the country). Either the alien or INS (or both) could appeal the immigration judge’s decision to BIA. If BIA upheld the judge’s decision to exclude the alien, the alien could appeal BIA’s decision to a U.S. district court. The district court’s review was limited to determining if the government followed established procedures (e.g., that a fair hearing was held, that INS followed its regulations, and that the immigration judge’s decision was supported by the record). The alien then could appeal an adverse district court decision to the U.S. Circuit Court of Appeals and, ultimately, to the U.S. Supreme Court. If an alien were found to be excludable after the final legal action was completed, INS was to arrange for the alien’s removal from this country. Aliens removed under this process generally were to be barred from reentering the United States for 1 year. 
To provide some perspective on the disposition of aliens prior to April 1, 1997, who could have been subjected to expedited removal if they had attempted entry into this country after April 1, 1997, we obtained INS data for the three airports we visited. The airports’ databases captured up to three charges as the basis for exclusion. Table 2.1 shows the disposition of aliens who were not admitted into this country between October 1, 1995, and March 31, 1997, because at least one of the reasons for their inadmissibility was that they attempted to enter the United States by engaging in fraud or misrepresentation or arriving without proper documents—the only charges for which aliens can be subject to expedited removal. The majority of the aliens denied entry into this country at these three airports were sent to immigration judges for exclusion hearings. INS’ options for those aliens who were not sent to an immigration judge for an exclusion hearing included permitting the alien to withdraw his or her application or waiving or paroling the alien into the United States. We used the data from the three airports because INS did not have a nationwide database on excluded aliens by charge. Under the 1996 Act, on behalf of the Attorney General, the Commissioner of INS carries out the responsibilities to issue expedited removal orders against aliens classified as “arriving aliens.” Justice regulations have defined arriving aliens as those aliens who seek admission to or transit through the United States at a port of entry or who are interdicted in international or United States waters and are brought to this country. The 1996 Act also allows expedited removal orders to be issued to aliens who have entered the United States without being inspected or paroled at a port of entry. INS determined that, at least initially, it would not apply expedited removal orders to the last category of aliens—namely, those who entered the United States without inspection or parole. 
The specific violations (i.e., aliens attempting to enter the United States by engaging in fraud or misrepresentation or arriving without proper documents) under the 1996 Act that could subject the alien to an expedited removal order are discussed in appendix V. The 1996 Act defines when INS can use expedited removal orders for arriving aliens. As discussed below, INS has established procedures for implementing the new provisions, such as requiring inspectors to read specific information to the aliens. Figure 2.2 shows the expedited removal process, including the credible fear process. In comparing figure 2.1 on the exclusion process with figure 2.2 on the expedited removal process, the expedited removal process for aliens who do not express a fear of being returned to their home country is more streamlined than the exclusion process. However, the expedited removal process for aliens who express a fear of being returned to their home country contains more steps than the exclusion process. According to INS’ regulations and implementing instructions, when an inspector plans to issue an expedited removal order to an alien, the inspector is to follow certain steps, as shown below: Explain the expedited removal process to the alien and read the statement of rights and consequences in a language the alien can understand. Included in this statement are the facts that the alien may be immediately removed from this country without a hearing and, if so, may be barred from reentering for 5 years or longer; that this may be the alien’s only opportunity to present information to the inspector before INS makes a decision; and that if the alien has a fear or concern about being removed from the United States or being sent to his or her home country, the alien should tell the inspector during this interview because the alien may not have another chance to do so. Take a sworn statement from the alien, which is to contain all pertinent facts of the case. 
As part of the sworn statement process, the inspector provides information to the alien, interviews the alien, and records the alien’s responses. The inspector is to cover and document in the sworn statement such topics as the alien’s identity and reasons for the alien being inadmissible into the United States; whether the alien has a fear of persecution or return to his or her home country; and the INS decision (i.e., issue the alien an expedited removal order, refer the alien for a credible fear interview, permit the alien to withdraw his or her application for admission, admit the alien, allow him or her to apply for any applicable waiver, or defer the inspection or otherwise parole the alien). When the inspector completes the record of the sworn statement, he or she is to have the alien read the statement, or have it read to the alien, and have the alien sign and initial each page of the statement and any corrections that are made. The inspector is to provide a copy of the signed statement to the alien. The alien is to be given an opportunity to respond to INS’ decision. (See app. VI for a copy of the form used to record the alien’s sworn statement.) Complete other administrative processes and paperwork, including the documents needed to remove the alien. Present the sworn statement and all other related paperwork to the appropriate supervisor for review and approval. According to INS instructions, the inspector is to refer an alien for an interview with an asylum officer if, for example, the alien indicates a fear of returning to his or her home country or an intent to apply for asylum. The asylum officer is to determine if the alien has a credible fear of persecution. Immigration officers referred 1,396 aliens who requested admittance to the United States between April 1, 1997, and October 31, 1997, for a credible fear interview. The process for determining whether aliens have a credible fear is discussed in chapter 3. 
According to INS, to determine if an alien should be referred to an asylum officer for a credible fear interview, the inspector is to consider any statement or signs, verbal or nonverbal, that the alien may have a fear of persecution or a fear of returning to his or her home country. The questions that the inspector is required to ask and to record were designed to help determine whether the alien has such a fear. These questions are as follows: Why did you leave your home country or country of last residence? Do you have any fear or concern about being returned to your home country or being removed from the United States? Would you be harmed if you are returned to your home country or country of last residence? According to INS guidance, if the alien indicates he or she has a fear or concern or intends to apply for asylum, the inspector may ask additional questions to ascertain the general nature of the alien’s fear or concern. The alien does not need to use the specific terms “asylum” or “persecution” for the inspector to refer the alien for a credible fear interview, nor does the alien’s fear have to relate specifically to one of the five bases contained within the definition of refugee, which are the legal basis for an asylum determination. INS training materials note that there have been many cases for which asylum was ultimately granted that may not have initially appeared to relate to the definition of asylum. INS further requires that the inspector should not make eligibility determinations or weigh the strengths or credibility of the alien’s claim. Additionally, the inspector should err on the side of caution and refer to the asylum officer any questionable cases. If the alien asserts a fear or concern that is clearly unrelated to an intention to seek asylum or a fear of persecution, then the inspector should not refer the case to an asylum officer. 
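The referral screen described above — refer on any sign of fear or intent to seek asylum, except when the expressed fear is clearly unrelated to persecution, and err on the side of referral in questionable cases — can be sketched as a simple decision function. This is a hypothetical illustration only; the field names are invented for the sketch and are not INS terminology.

```python
def should_refer_for_credible_fear(answers: dict) -> bool:
    """Hypothetical sketch of the inspector's referral screen.

    Per the guidance described in the text: any verbal or nonverbal
    indication of fear of persecution or of returning home, or an intent
    to apply for asylum, triggers referral to an asylum officer. The
    alien need not use the words "asylum" or "persecution", and the
    inspector is not to weigh the strength or credibility of the claim.
    """
    if answers.get("intends_to_apply_for_asylum"):
        return True
    if answers.get("expresses_fear_or_concern"):
        # A fear clearly unrelated to persecution or asylum (e.g., concern
        # about separation from a partner) does not require referral.
        if answers.get("fear_clearly_unrelated_to_persecution_or_asylum"):
            return False
        return True
    # Inspectors are told to err on the side of caution and refer
    # any questionable case.
    if answers.get("questionable_case"):
        return True
    return False
```

The point of the sketch is that the inspector's role is a coarse screen, not an eligibility determination: the fine-grained judgment is deferred to the asylum officer's credible fear interview.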
During our observations, we saw an instance where an alien initially expressed a fear of removal during a sworn statement for which the inspector did not refer the alien for a credible fear interview. The alien expressed concern about not being able to see her boyfriend who lived in the United States. The inspector checked with the supervisor to make sure that she should not refer this alien for a credible fear interview. When an inspector is going to refer an alien for a credible fear interview, the inspector is to process the alien as an expedited removal case. Additionally, the inspector is to explain to the alien in a language the alien understands information about the credible fear interview, including (1) the alien’s right to consult with other persons, (2) the alien’s right to have an interpreter, and (3) what will transpire if the asylum officer finds that the alien does not have a credible fear of persecution. This information is contained in an INS form that the inspector is to give the alien (see app. VI for a reprint of this form). The inspector also is to provide the alien with a list of free legal services, which is prepared and maintained by EOIR. Generally, INS requires that aliens who are subject to expedited removal should be processed immediately unless they claim lawful status in the United States or a fear of return to their home country. Those aliens who arrive at air and sea ports of entry who are to be removed from the United States are to be returned by the first available means of transportation. Aliens arriving at land ports of entry who are ordered removed usually should be returned to Canada or Mexico. If the inspector is unable to complete the alien’s case or transportation is not available within a reasonable amount of time from the completion of the case, the inspector is to send the alien to an INS detention center or other holding facility until he or she can complete the case or remove the alien. 
Parole may only be considered on a case-by-case basis for medical emergencies or for legitimate law enforcement purposes. An expedited removal order is not the only option available for the inspector to apply to aliens who are inadmissible because they attempted to enter the United States by engaging in fraud or misrepresentation or arrived without proper documents. Similar to the exclusion process, which was in place before April 1, 1997, depending upon the specific violation, the options available to the inspector include (1) allowing the alien to withdraw his or her application, (2) processing a waiver, (3) deferring the inspection, or (4) paroling the alien into the United States. However, INS can no longer refer these aliens to an immigration judge unless the alien is found to have a credible fear of persecution or the alien swears under oath to be a U.S. citizen or to have lawful permanent residence, refugee, or asylee status, but the inspector cannot verify that claim. On December 22, 1997, INS issued additional guidance on when an inspector should offer aliens an opportunity to withdraw their application for admission. According to this guidance, the inspector should carefully consider all facts and circumstances related to the case to determine whether permitting withdrawal would be in the best interest of justice, or that justice would be ill-served if an order of removal (such as an expedited removal order) were issued. Factors to consider in making this decision may include, but are not limited to, previous findings of inadmissibility against the alien, the alien’s intent to violate the law, the alien’s age or health, and other humanitarian or public interest considerations. The guidance further states that ordinarily, the inspector should issue an expedited removal order when the alien has engaged in obvious, deliberate fraud. 
If the alien may have innocently or through ignorance, misinformation, or bad advice obtained an inappropriate visa and did not conceal information during the course of the inspection, withdrawal should ordinarily be permitted. The 1996 Act and its implementation affected the immigration proceedings in numerous ways. Two major differences between the exclusion and expedited removal processes are INS’ authority to issue the expedited removal order and the aliens’ limited right of review of that order. Other changes include (1) an increased penalty for inadmissible aliens, including those subject to expedited removal; (2) a more structured inspection process for expedited removal than for exclusion; and (3) estimated additional time taken by inspectors to complete the expedited removal process due to the additional steps in the process. Before the 1996 Act, aliens who attempted to enter the United States by engaging in fraud or misrepresentation or who arrived without proper documents could have received a hearing by an immigration judge to determine if the aliens should be allowed to enter the United States. The aliens could apply for asylum during this hearing. Furthermore, aliens had the right to appeal to BIA the immigration judge’s decision not to allow them to enter the country. Aliens could appeal an adverse decision by BIA through the federal courts. However, the scope of the federal courts’ review was limited to whether the government followed established procedures. Under the 1996 Act, inspectors, as opposed to immigration judges, can issue aliens expedited removal orders if they attempt to enter the United States by engaging in fraud or misrepresentation or arrive without proper documents. Generally, aliens who do not express a fear of being returned to their home country cannot have a review of the INS’ decisions. 
In addition, inspectors are to look for signs from the aliens of fear of being returned to their home country and, if aliens exhibit such a fear, inspectors are to refer the alien to an asylum officer for a credible fear interview. Before the 1996 Act, aliens who were issued a formal exclusion order generally were barred from reentering the United States for 1 year. With the implementation of the 1996 Act, the reentry restriction for inadmissible aliens, including those subject to expedited removal, generally increased to 5 years. Aliens are allowed to request permission to reapply for admission to this country during the 5-year period. Under the exclusion process, INS had general procedures for its inspectors to follow when referring aliens to an immigration judge. For example, INS guidance stated that inspectors should make every effort to establish the grounds of inadmissibility, including taking a formal question and answer statement from the alien, if necessary. Under the expedited removal process, INS requires the inspectors to follow specific steps. (For information on steps in the expedited removal process, see the previous discussion.) The expedited removal process also added new procedures for asylum officers to follow in determining whether aliens have a credible fear of persecution and, therefore, should not be immediately removed under the new process. These procedures are discussed in chapter 3. For the five INS field units we reviewed, INS estimates of average inspection-related adjudication time generally show that the time it took an inspector at secondary inspection to complete an expedited removal case was greater than the average time it took to complete an exclusion case for aliens who attempt to enter the United States by engaging in fraud or misrepresentation or who arrive without proper documents, as shown in tables 2.2 and 2.3. 
According to an INS official, the differences in inspectors’ time between the two processes are due, in part, to the additional steps associated with the inspection components of the expedited removal process. The time for inspectors and supervisors to prepare a case in secondary inspection includes interviewing the alien and preparing and reviewing the paperwork related to an exclusion hearing or an expedited removal order. Because the methods each office used to develop its estimates varied, the data are not comparable among the locations. The estimated times presented in tables 2.2 and 2.3 represent cases where interpreters were not used. Officials at some of the ports told us that the use of an interpreter increased the amount of time the inspector spent on the case from 1/2 hour to 1-1/2 hours. We obtained estimates from the four ports of entry and the Buffalo district. The estimated times used by INS inspectors on the exclusion and expedited removal processes are not comparable because of the differences between the two processes. Also, the 1996 Act established a new credible fear referral process for inspectors. In addition, while some locations estimated that the expedited removal process takes more inspection time, the process has reduced options for aliens to appear before an immigration judge and federal courts regarding an INS removal decision. Times involved in those steps of the pre-1996 Act process were not included in our analysis. To implement the expedited removal process, INS developed operating instructions and planned to provide training to all of its immigration and asylum officers. INS’ and EOIR’s estimated cost to implement the expedited removal process was about $4.8 million. The five ports of entry we visited developed port-specific methods to implement INS’ process. Between April 1, 1997, and October 31, 1997, 29,170 aliens, including 1,396 aliens referred for credible fear interviews (discussed in ch.
3), were processed under the expedited removal process. Documentation in the files we reviewed at the locations we visited showed mixed results as to whether inspectors and supervisors were consistently documenting that they followed various steps in INS’ expedited removal process. For the steps we reviewed, the files indicated a range of compliance from an estimated 80 to 100 percent. In addition, at the locations we visited, INS was generally removing aliens to whom it issued expedited removal orders within a few days. INS officials at the locations we visited said that they had not encountered any changes in cooperation from countries and air carriers when removing aliens through the expedited removal process. On January 3, 1997, INS issued proposed rules regarding the implementation of the 1996 Act, including the expedited removal process. On March 6, 1997, INS issued its interim rules. These interim rules are to remain in effect until INS publishes final rules. INS developed and distributed specific guidance for its inspectors on how to implement the expedited removal process. This guidance was incorporated into the training that INS developed for its officers on the 1996 Act. The training information on the expedited removal process included instructions on who would be subject to expedited removal, what information should be obtained in a sworn statement, and when to refer an alien to an asylum officer for a credible fear interview. According to INS, it trained about 16,400 of its staff. INS has modified its existing training for newly hired employees to include the expedited removal process. The 1996 Act required INS and EOIR to implement a number of changes, including the expedited removal process. 
To identify the cost of implementing only the expedited removal process, which includes the credible fear determination procedures, we asked INS and EOIR to provide data on the cost of getting policies and procedures in place and providing training on the new process and procedures. We asked the offices to limit their estimates to the start-up costs incurred to implement the procedures. The data collected included estimated costs for (1) salary and benefits of employees who worked full- and part-time on the implementation or who took the training; (2) travel; (3) materials and supplies; (4) office space and facilities; and (5) goods and services received (including the use of outside consultants). As shown in table 2.4, the estimated cost to implement the expedited removal process was about $4 million for INS and about $700,000 for EOIR. These estimated costs basically represent one-time costs associated with starting the expedited removal process for INS and EOIR. According to INS data, about 7 percent of the aliens who attempted entry between April 1, 1997, and October 31, 1997, and who were not admitted at ports of entry, were processed under the expedited removal process (27,774 of 395,335 aliens). Table 2.5 shows the number of aliens who requested entry between April 1, 1997, and October 31, 1997, and who entered the expedited removal process (but were not referred for a credible fear interview). Of the 27,774 cases in which aliens were processed under expedited removal, 27,345 (98.5 percent) had been closed as of December 15, 1997. In 99.6 percent of the 27,345 cases that were closed, the alien was removed after receiving a removal order. More detailed information on the characteristics of aliens who were processed under the expedited removal process is provided in appendix VII. 
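The proportions cited above follow directly from the reported counts; the short arithmetic check below (ours, using only figures stated in this chapter) confirms them:

```python
# Sanity check of the expedited removal counts for April 1 - October 31, 1997,
# as reported in this chapter.
not_admitted = 395_335   # aliens not admitted at ports of entry in the period
processed = 27_774       # of those, processed under expedited removal
closed = 27_345          # of those, cases closed as of December 15, 1997

print(f"Processed share: {processed / not_admitted:.1%}")  # about 7%
print(f"Closed share:    {closed / processed:.1%}")        # about 98.5%
```

Since 99.6 percent of the closed cases ended with the alien being removed after receiving a removal order, the figures imply roughly 27,200 removals in the period.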
The following are some examples from our case file reviews at the ports we visited of reasons inspectors found aliens inadmissible and subject to the expedited removal process: the alien had previously overstayed his or her visa; the alien intended to work in the United States but did not have the proper documents to allow him or her to do so; and the alien had a counterfeit border crossing card or resident alien card. In addition to the national guidance, three approaches for implementing the expedited removal process were employed by the five ports of entry we visited. At the Miami airport, INS used a separate unit of inspectors to handle the expedited removal cases. If an alien was sent from primary inspection to secondary inspection and the inspector at secondary determined the alien was subject to expedited removal, the inspector was to refer the alien to the specific unit handling expedited removal cases. The expedited removal unit was staffed by inspectors and supervisors at the GS-11 level and above. These inspectors were to take the sworn statement and complete other paperwork related to the expedited removal case. At the San Ysidro port, INS used a three-step approach. First, when an alien admitted to the inspector at secondary that he or she presented a malafide (e.g., fraudulent) application for entry, the inspector was to send the alien to an enforcement team for processing. The enforcement team that handled the expedited removal cases was composed of inspectors and supervisors at the GS-7 to GS-12 level. Second, among other things, the team was to show the aliens a Spanish-language video tape explaining the expedited removal process. The sworn statements were not taken at the port of entry unless the aliens expressed a fear of returning to their home country. Third, the women were to be transported to a local motel that is used for temporary detention, and the men were to be transported to the El Centro Service Processing Center.
At these sites, an enforcement team member was to take the aliens’ sworn statements, complete the paperwork, and serve the aliens with the expedited removal order. The aliens were to be detained at these locations until their removal. The Niagara Falls land port (which consisted of three bridges), JFK airport, and Los Angeles airport did not establish a separate unit to process expedited removal cases. At these locations, an inspector was to send an alien from primary inspection to secondary inspection, where the inspector was to determine if the alien was subject to expedited removal and, if so, was to complete the case. We reviewed the case files on 434 aliens who attempted entry at the five locations between May 1, 1997, and July 31, 1997, and who were charged under the expedited removal provision but were not referred for a credible fear interview. For the Buffalo district, we reviewed all files and for the other four locations we randomly selected case files for review. Our review showed that the documentation in the case files at the five locations we visited indicated inconsistent compliance with the procedures. See appendix II for information on the case file review methodology and the calculation of the sampling error. As part of the case file review, we determined whether (1) the inspectors documented in the sworn statement that they asked the aliens the three required questions designed to identify a fear of returning to their home country, (2) the aliens signed the sworn statements, and (3) the supervisors reviewed the expedited removal orders. Documentation on compliance varied among the locations. Regarding asking the three required questions, our case file review of the documentation showed that inspectors at Miami airport documented that they asked the required questions an estimated 100 percent of the time. 
At the other four locations the results were less consistent: the case files indicated that inspectors did not document asking at least one of the three required questions, or some version thereof, between an estimated 1 and 18 percent of the time. For example, the documentation in the case files showed that inspectors did not record asking the required question “Why did you leave your home country or country of last residence?” (or some version thereof) an estimated 18 percent of the time at Los Angeles airport, 15 percent of the time in San Ysidro, 5 percent of the time in the Buffalo district, and 2 percent of the time at JFK airport. In addition, the case file documentation showed that the inspectors did not record asking the required question “Do you have a fear or concern about being returned to your home country or being removed from the United States?” (or some version thereof) an estimated 3 percent of the time at Los Angeles, 2 percent of the time in San Ysidro and at JFK airport, and 1 percent of the time in the Buffalo district. In the 434 files we reviewed, we found 6 cases involving 4 locations in which the inspector did not document asking any of the 3 required questions on fear. According to one of its members, INS’ Expedited Removal Working Group also has identified cases in which inspectors did not ask these required questions. She said that the failure to ask the questions generally occurred when the inspectors were using a draft version of the sworn statement, which had a different version of the required questions. As the Working Group became aware of this problem at specific ports of entry, the official said that she informed port officials of the importance of asking these questions and documenting that they were asked and sent the ports of entry the correct version of the sworn statement. In addition to our file reviews, we observed secondary inspectors’ handling of 16 cases of aliens who were subject to expedited removal. 
In 15 cases, the inspectors asked applicants the required fear of return questions. In one case the inspector asked two of the three required questions. In five cases the applicants expressed a fear of return. In three of the cases, the inspectors referred the aliens to an asylum office for a credible fear interview. In the other two cases, the aliens initially expressed a fear. In one of the two cases, the alien recanted his fear. In the second case, the alien expressed concern about not being able to see her boyfriend who lived in the United States. The inspector checked with the supervisor to make sure that she should not refer this alien for a credible fear interview. Furthermore, for almost all the cases we reviewed, the files contained sworn statements signed by the aliens. For the five locations, the files indicated that aliens signed the statements between an estimated 97 and 100 percent of the time. Lastly, in our case file review at five locations, the documentation showed that the range in which supervisors documented that they reviewed the expedited removal orders was from an estimated 80 to 100 percent. At two of the locations, documentation in the files showed that a supervisor reviewed all of the orders. In addition, INS’ Office of Internal Audit (OIA) conducted reviews of field unit operations, including expedited removal. Its first audit that included the expedited removal process covered the activities of the Newark District Office and was conducted between April 21 and May 2, 1997. OIA found that in 6 of the 27 cases, supervisors did not review and approve removal orders at the Newark International Airport. OIA recommended that the District Director require all removal orders issued by immigration officers be reviewed by a second-line supervisor and that an indication of the review be annotated on the form before its execution. 
A member of INS’ Working Group said that, through the group’s case file reviews, it has identified cases in which the documentation of supervisory reviews has been missing. She said that when the Working Group has identified this problem, it has informed relevant port officials of the problem. She also said that the Working Group has discussed the need for supervisory review and proper documentation of such review in its field visits and in written guidance distributed to the field. On the basis of our file reviews of cases where aliens were not referred for a credible fear interview, for three of the locations (Los Angeles airport, Miami airport, and Buffalo district) we estimated that at least 95 percent of the aliens who received expedited removal orders were removed either the day they attempted to enter the United States or the day after. At JFK airport, an estimated 84 percent of such aliens were removed either the same day or the day after they attempted to enter this country. We estimated that for most aliens (90 percent) who requested entry into this country through the San Ysidro land port of entry, removal took 2 or more days. INS does not maintain nationwide data on the cooperation of foreign countries and air carriers in accepting aliens who were removed under the expedited removal provision. We asked INS officials at the locations we visited if they had problems with air carriers or countries accepting such aliens since April 1, 1997. INS officials said that air carrier cooperation had not been a problem. They added that, generally, delays related to the air carriers have occurred only when there have been a limited number of available flights. Regarding country cooperation, INS officials at four locations said they have encountered problems returning aliens to certain countries. However, these problems also existed before April 1, 1997, and, therefore, were not unique to aliens who received expedited removal orders.
Buffalo district officials said that the United States has an agreement with Canada whereby Canada will accept aliens whom the United States denies entry to this country at the U.S.-Canada border. Therefore, the officials said that the Buffalo district did not have problems returning to Canada aliens who received expedited removal orders. Aliens attempting to enter the United States who express to an INS inspector a fear of being returned to their home country are to be referred to an asylum officer for a credible fear interview. The purpose of the interview is to determine if aliens have a credible fear of persecution. The asylum officers are to read information to the alien about the credible fear process. If the asylum officer determines that the alien has a credible fear, the alien is referred to an immigration judge for a removal hearing. If the asylum officer finds that the alien does not have a credible fear, the alien can request that an immigration judge review the asylum officer’s negative credible fear determination. Asylum officers determined that 79 percent of the aliens who attempted to enter the United States from April 1 to October 31, 1997, for whom the officer had completed the credible fear interview, had a credible fear of persecution. On the basis of the documentation in our nationwide case file review and nine observations, the asylum officers read most of the required information to the aliens during the credible fear interviews. INS estimated the amount of time needed to process a credible fear case ranged between about 6 to 10 hours for the Los Angeles, Miami, and New York asylum offices. Nationwide, immigration judges affirmed INS’ negative credible fear determinations about 83 percent of the time. EOIR estimated that the amount of time needed to complete a negative credible fear review was about 1 hour. 
As discussed in chapter 2, inspectors are to refer aliens who have expressed a fear of persecution to an asylum officer for a credible fear interview. Before holding the credible fear interview, asylum officers are required to inform aliens about the credible fear and asylum processes; to inform aliens of their option to obtain a consultant, who can be a lawyer, friend, relative, or anyone of the aliens’ choosing; and to provide a list of people and organizations that provide legal services. According to an INS official, at some locations, this information is provided during an orientation. The regulations require INS to provide interpreters in the credible fear interviews, when necessary. In a credible fear interview, the 1996 Act requires the asylum officer to decide whether there is a significant possibility that the alien could establish eligibility for asylum. To make this determination, INS requires the asylum officer to consider whether a significant possibility exists that (1) the alien’s statements are credible (i.e., that the alien’s testimony is consistent, plausible, and detailed); (2) the alien faced persecution in the past or could be harmed in the future; and (3) the alien’s fear is related to one of five bases for obtaining asylum—persecution because of race, religion, nationality, political opinion, or membership in a particular social group. In addition, the asylum officer is to read mandatory information about the process, about the right to appeal a negative credible fear determination to an immigration judge, and about fear of being tortured. The asylum officer is to read aloud the mandatory paragraphs from an INS form on which the officer also records the results of the credible fear interview. See appendix VI for a reprint of the Credible Fear Worksheet. For aliens referred to an asylum officer, INS states that the asylum officer is to consider the credible fear standard as a low threshold to screen for persons with promising asylum claims.
During the interview with an asylum officer, an alien can have a consultant present. The asylum officer is to record the results of the credible fear interview, including his or her determination of the alien’s ability to meet any of the five grounds for asylum. INS requires supervisory review of asylum officers’ credible fear determinations. If the asylum officer finds that the alien has a credible fear of persecution, the alien will be placed in removal proceedings before 1 of about 200 immigration judges during which the alien can make a formal application for asylum. During these proceedings, the immigration judge is to decide whether the alien’s asylum claim warrants his or her being granted asylum in the United States. If the asylum officer finds that the alien does not have a credible fear, the alien has a right to request that an immigration judge review the negative credible fear determination. If the alien does not request a review of the credible fear determination, the alien is subject to expedited removal. In cases where the alien requests a review of an asylum officer’s negative credible fear determination, the immigration judge is to review this determination. During this review, the immigration judge may receive into evidence any relevant written or oral statements. If the immigration judge agrees with the asylum officer’s negative credible fear decision, the alien cannot appeal the immigration judge’s decision and is to be removed through the expedited removal process. If the immigration judge disagrees with the asylum officer’s negative credible fear decision, the alien is to be placed in removal proceedings, during which he or she can apply for asylum. During the immigration judge’s review, at the discretion of the immigration judge, the alien may enlist the aid of a consultant in the review process. 
INS data for aliens who attempted to enter the United States between April 1, 1997, and October 31, 1997, show that inspectors referred 1,396 aliens to asylum officers for a credible fear interview. Of the aliens who were referred, 1,108 had completed their interviews as of November 13, 1997. Nationwide, asylum officers determined that 79 percent of these 1,108 aliens had a credible fear of persecution. According to an INS official, about 10 percent of the aliens referred for a credible fear interview have recanted their claim of a fear of persecution before an asylum officer. As shown in table 3.1, positive credible fear determination rates for the eight asylum offices ranged from 59 to 93 percent. We reviewed the files for all 84 negative credible fear determinations made for aliens who requested entry between May 1, 1997, and July 31, 1997. Not all of the 84 case files contained complete documentation and, therefore, some of our analysis was based on fewer than 84 cases. In most of these determinations (55 of 81) the asylum officer concluded that the aliens’ fears were not based on 1 of the 5 grounds for asylum. In 25 of 76 cases, the officer concluded that the aliens’ testimonies were not credible. Documentation in the 84 case files we reviewed nationwide indicated that INS generally followed its procedures for determining an alien’s credible fear, but did not consistently document whether asylum officers provided information regarding the alien’s fear of being tortured. Our review of the 84 negative credible fear case files showed that the asylum officers indicated by marking on the records of interview that they (1) read the required paragraph regarding the aliens’ fear of persecution in all but 1 case and (2) informed the aliens of their right to have an immigration judge review a negative credible fear determination in all but 3 cases.
However, our review of the case files showed no documentation on whether the asylum officers read the paragraph on torture in 19 of 83 cases. In addition to the 84 cases, we attended 9 credible fear interviews in which the asylum officers generally followed INS’ procedures regarding credible fear interviews, including reading the mandatory material. In eight out of nine cases we observed, all of the mandatory paragraphs were read or summarized. In the ninth case, the asylum officer did not read the required paragraph on torture. An INS official told us that the headquarters Asylum Office reviewed the cases for which the paragraph on torture was not checked off in the file and found other evidence in the file to indicate that questions related to torture were asked and, therefore, she believed that the problem was related to poor recordkeeping. The INS official also told us that INS has subsequently reiterated to its asylum officers that they are to read the paragraph on torture and ask the related questions in the credible fear interview and to record that the paragraph was read and questions were asked. The asylum officers are to record the results of the credible fear interview, including the alien’s ability to meet any of the five bases for asylum. As part of our observation, we compared asylum officers’ records of the credible fear interviews to our observations. We found the credible fear worksheets completed by the asylum officers to be consistent with our observations in all of the nine cases. Our case file review showed evidence that 69 of 75 cases had supervisory reviews. For the remaining six cases, the files did not have the signatures of supervisors indicating that they had reviewed the files. Furthermore, in the negative credible fear determination case files we reviewed, we also determined whether in credible fear interviews an interpreter was used and if the alien had a consultant. 
The case files indicated that, in 66 of 82 credible fear interviews, an interpreter was used. Aliens had consultants in 19 of the 84 cases. We requested that the three asylum offices we visited provide the estimated time required to process a credible fear case. The estimate for the asylum officer was to include the time needed to provide aliens with an orientation, prepare for and conduct the credible fear interview, and complete the associated paperwork. The estimate for the supervisor was to include the time spent discussing and reviewing the case and its related paperwork. The New York and Miami asylum offices provided average time estimates on a per case basis to complete these and other tasks associated with the credible fear process. We totaled these time estimates to get an overall average of the amount of time spent per case by the asylum officers and supervisors at each office. The Los Angeles asylum office estimated average time spent by asylum officers and supervisors on the basis of total hours spent for the time period October 1 to December 19, 1997, including travel time, and did not identify the time by specific tasks. To estimate the average time per case, the Los Angeles Office divided the total hours for the asylum officers and supervisors by the number of cases for that period. Therefore, estimates from the Los Angeles asylum office and the other two asylum offices are not comparable because of the different approaches used to develop their time estimates. The data we received are summarized in table 3.2. Aliens who have received a negative credible fear determination from asylum officers have the option of requesting a review of their case by an immigration judge. Between April 1, 1997, and October 31, 1997, EOIR received 198 cases for review of a negative credible fear determination. Of these 198 cases, immigration judges affirmed asylum officers’ negative credible fear determinations in about 83 percent of the cases.
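The percentages in this chapter imply the following approximate case counts; the rounded totals below are our own arithmetic on the reported figures, not counts published by INS or EOIR:

```python
# Approximate counts implied by the credible fear percentages cited above.
completed_interviews = 1_108                     # completed as of November 13, 1997
positive = round(completed_interviews * 0.79)    # 79% found to have a credible fear
print(f"Positive determinations: about {positive} of {completed_interviews}")

review_requests = 198                            # negative determinations sent to immigration judges
affirmed = round(review_requests * 0.83)         # about 83% affirmed on review
print(f"Affirmed on review: about {affirmed} of {review_requests}")

# Shares underlying the negative-determination file review.
print(f"No asylum ground: {55 / 81:.0%}")        # 55 of 81 cases
print(f"Not credible:     {25 / 76:.0%}")        # 25 of 76 cases
```

These checks show the 79 percent rate corresponds to roughly 875 positive determinations and the 83 percent rate to roughly 164 affirmed reviews.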
We reviewed the bases for immigration judges’ decisions made through August 31, 1997, in which they overturned (vacated) the asylum officers’ determinations. In 14 of the 18 cases we reviewed, the immigration judges found that the aliens had established a “significant possibility” of harm as required by the 1996 Act. The six nationalities with the largest numbers of aliens requesting review of their negative credible fear determinations were Haiti (51), China (24), Albania (15), Guatemala (14), Mexico (12), and El Salvador (8). The remaining 74 aliens were of 33 nationalities. The seven judges we interviewed differed on their court procedures for consultants’ roles. The consultants’ roles ranged from being permitted to speak to not being allowed to speak in the court at all. The position of the Office of the Chief Immigration Judge is that although an alien has no statutory right to consult with anyone during the immigration judge’s review of a negative credible fear finding, nonetheless, there are circumstances where the judge may find it extremely helpful to enlist the aid of the consultant during the review process. To ensure that his or her decision is based on all relevant material available, the immigration judge may permit the consultant to speak with the alien, may question the consultant, or may request a statement from the consultant. According to EOIR data, the average time to complete a negative credible fear review was about 1.5 hours. This time included about 1 hour spent by the immigration judge to prepare and conduct the interview and complete the paperwork. In addition, legal technicians spent about 30 minutes on the administrative process. Furthermore, the cost for interpreters was $71 per hour for Spanish and Creole and $95 per hour for other languages. According to EOIR, interpreters were used in about 85 percent of all of the cases.
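The nationality breakdown above can be cross-checked against EOIR's reported total, and the per-case time figures combine as stated. The interpreter cost shown at the end is our own illustration, on the assumption that an interpreter is present for the full review:

```python
# Cross-check: nationality breakdown of negative credible fear review requests.
by_nationality = {"Haiti": 51, "China": 24, "Albania": 15,
                  "Guatemala": 14, "Mexico": 12, "El Salvador": 8}
other_nationalities = 74                 # remaining aliens of 33 nationalities
total = sum(by_nationality.values()) + other_nationalities
print(f"Total review requests: {total}")  # matches the 198 cases EOIR received

# Per-case review time: judge time plus legal technicians' administrative time.
judge_hours, technician_hours = 1.0, 0.5
print(f"Total review time: {judge_hours + technician_hours} hours")  # about 1.5 hours

# Illustrative interpreter cost for a Spanish-language review (our assumption:
# interpreter billed for the full 1.5 hours at the $71-per-hour rate).
print(f"Illustrative interpreter cost: ${71 * 1.5:.2f}")
```

The six named nationalities account for 124 of the 198 review requests, consistent with the 74 remaining aliens of other nationalities.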
EOIR based its estimates on data it obtained from the Krome (Miami, FL), Elizabeth (NJ), and Wackenhut (New York, NY) immigration courts for the period April 1, 1997, to September 30, 1997. In addition to the procedures discussed in chapters 2 and 3, INS has developed or is in the process of developing mechanisms to monitor the expedited removal process, including the credible fear determinations. These mechanisms include establishing headquarters working groups and field experts, auditing and reviewing the process, training staff who are involved in the process, establishing procedures to be followed in carrying out the process, and getting input from nongovernmental organizations. INS has instituted activities to monitor and provide information on and identify potential changes to the expedited removal process. INS established the Expedited Removal Working Group to identify and address policy questions, procedural and logistical problems, and quality assurance concerns related to the expedited removal process. The group consists of representatives from the Offices of Inspections, International Affairs, Asylum, Detention and Deportation, Field Operations, and General Counsel. One way in which the group carries out its duties is through visits to INS field units where group members review case files and meet with management and staff involved in the expedited removal process to discuss such things as resource materials, the process, and policy issues. Among other things, the Working Group has provided additional written guidance on the taking of sworn statements and on when to permit aliens to withdraw their applications. INS established an Asylum Office quality assurance team at headquarters to review selected credible fear files. According to INS officials, the quality assurance team is to focus on credible fear determination issues, while the Expedited Removal Working Group is to focus on the entire expedited removal process, including asylum and inspection. 
This quality assurance team consists of four asylum officers who are to analyze decisions in individual cases, provide feedback to applicable asylum officers, and identify trends or patterns on the basis of the reviews. Initially, the feedback to asylum officers was informal and each member of the group used his or her own review method. Beginning January 2, 1998, the team is to use a checklist to standardize the monitoring and to review all negative credible fear determinations before the decision is served on the alien. The team is to prepare monthly reports that are to address problems faced by all offices and is to address serious problems immediately. INS’ Office of Internal Audit examines field unit functions and operations. The objectives of these reviews include evaluating units’ effectiveness and determining compliance with applicable laws, regulations, and procedures. Beginning in April 1997, the audits were to include reviewing the expedited removal process. In chapter 2, we discussed OIA’s first report that included the expedited removal process. As part of its efforts to communicate with outside entities that deal with immigration issues, INS has met periodically with nongovernmental organizations to discuss issues related to the expedited removal process, including the credible fear process. Some of the concerns of the nongovernmental organizations are discussed in the next section of this chapter. In addition to these previously mentioned mechanisms, INS established certain procedures to help ensure that the expedited removal process is implemented properly and consistently. First, INS stated it trained about 16,400 of its staff on the implementation process for the 1996 Act, including the expedited removal (and credible fear) provision. According to INS officials, asylum officers were to be given additional training on making credible fear determinations. 
Second, INS issued operating procedures that require inspectors and asylum officers to follow specific steps when considering issuing expedited removal orders and making credible fear determinations. Third, INS requires that all expedited removal orders and credible fear determinations be reviewed by a supervisor. Finally, for aliens who were determined not to have a credible fear of persecution, INS may, at its discretion, offer a second credible fear interview to an alien, even if the alien has not established a credible fear before an asylum officer or after an immigration judge review. Several nongovernmental organizations provided information about their concerns regarding the expedited removal process, including information on (1) issues related to the expedited removal process and credible fear determinations and (2) specific problems that aliens said they encountered when they arrived at ports of entry. In addition, these organizations provided data describing specific situations of INS’ handling of aliens who were subject to expedited removal. We did not verify the data they provided. These organizations’ concerns about the process included allegations that aliens did not understand the expedited removal process because, for example, the removal order was legalistic and incomprehensible; interpreters’ competency varied, which in some instances caused serious mistakes to be made in translation; consultants were denied access to documents in applicants’ case files, such as the sworn statements; and attorneys were not allowed to play a meaningful role at the credible fear interview (e.g., they were not permitted to make opening or closing statements or to ask questions, and they had no opportunity to consult with their clients before deciding whether to request a negative credible fear review by an immigration judge). The organizations also raised allegations of INS officers’ unprofessional treatment of aliens attempting to enter the United States.
The alleged actions of INS officers included (1) not explaining the expedited removal process, including applying for asylum; (2) not providing interpretation services; (3) verbally abusing the aliens; and (4) not providing physical amenities, such as food, water, bed and blankets, and bathroom facilities. In addition to the INS mechanisms discussed above that are related to these types of issues, INS officials told us that every alien and consultant has access to the relevant documents regarding the alien’s case (e.g., sworn statement). Regarding the consultant’s role, INS stated that the consultant and the asylum officer should share a cooperative role in developing and clarifying the merits of the alien’s claim. Furthermore, the consultant should generally be given the opportunity to make a statement at the end of the interview, comment on the evidence presented, and ask the alien additional questions. Concerning the competency of interpreters, INS procedures provide that the alien or the alien’s consultant has the right to request a different interpreter if he or she feels that the interpreter is not competent or neutral. Additionally, INS officials said that the alien or the alien’s consultant may request another interpreter for whatever reason. Regarding unprofessional behavior, INS stated that it will do more to ensure that aliens, including those who attempt to enter illegally, know their civil rights and how to register a complaint if abused by an INS officer. Furthermore, the Commissioner said that INS insists on proper, humane, and polite treatment of people who are entering the United States whether their documents are correct or not. We had no way to determine the validity of the issues that the organizations raised, including the specifics about any individual alien. Our limited observations, case file reviews, and discussions with INS officials did not identify problems similar to those raised by the organizations. 
Concerning our observations, our presence may have affected what took place during inspections, interviews, and reviews, but we have no way of knowing whether, how, or to what extent this happened. INS is in the process of changing aspects of the expedited removal procedures on the basis of input it has received from its internal groups and the nongovernmental organizations. These changes include the following: INS is revising some expedited removal forms that contain explanations to be read to the alien (e.g., Information about Credible Fear Interview). According to INS, as part of this revision process, it has asked some of the nongovernmental organizations to review the forms to make them easier for aliens to understand. INS assigned the responsibility of being an expedited removal expert to selected staff for each region and district to ensure that policy guidance is distributed, understood, and implemented. INS officials have completed training these staff on their new duties associated with being an expedited removal expert. INS said that it permits aliens to provide their own interpreters for credible fear interviews. | Pursuant to a legislative requirement, GAO reviewed the Illegal Immigration Reform and Immigrant Responsibility Act of 1996, focusing on: (1) how the expedited removal process and the Immigration and Naturalization Service (INS) procedures to implement it are different from the process and procedures used to exclude aliens before the act; (2) the implementation and results of the process for making credible fear determinations during the 7 months following April 1, 1997; and (3) the mechanisms that INS established to monitor expedited removals and credible fear determinations and to further improve these processes. 
GAO noted that: (1) two major differences between the exclusion process used before the act and its expedited removal process are INS inspectors' authority to issue the expedited removal order and the aliens' limited right of review of that order; (2) other changes included an increased penalty for inadmissible aliens, including those subject to expedited removal, and a more structured inspection process for expedited removal than for exclusion; (3) at the five locations GAO visited, INS estimated that the amount of time it took inspectors to complete the expedited removal process was greater than the amount of time used to complete the steps required of INS inspectors in the previous exclusion process; (4) this increased time by INS inspectors could be offset by reductions in time by immigration judges who no longer make these decisions; (5) during the first 7 months that the expedited removal process was in place, 29,170 aliens attempted to enter the country and were placed in expedited removal; (6) INS inspectors referred 1,396 of these aliens to asylum officers for credible fear interviews; (7) as of December 1997, almost all of the approximately 27,800 remaining aliens had been removed from the United States; (8) at the five locations it visited, GAO reviewed documentation in randomly selected case files of aliens subject to expedited removal; (9) the results of this review showed that between an estimated 80 percent and 100 percent of the time INS inspectors and supervisors documented that they followed certain INS procedures; (10) these documented procedures included activities such as supervisors' review of inspectors' removal orders and inspectors' asking aliens specific questions about their fear of being returned to their home country or country of last residence; (11) of the 1,396 aliens referred to asylum officers for credible fear determinations, asylum officers completed interviews with 1,108 and found that 79 percent had a credible fear; (12) 
immigration judges received 198 cases to review asylum officers' negative credible fear determinations between April 1 and October 31, 1997; (13) the judges affirmed the asylum officers' determinations in 83 percent of these cases; (14) INS has developed or is in the process of developing mechanisms to monitor the expedited removal procedures, including the credible fear determinations; and (15) INS has made changes to its processes on the basis of concerns raised by these internal reviewers and outside organizations. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The use of wireless phone service has grown rapidly in recent years. By the end of 2008, about 82 percent of adults lived in households with wireless phone service, up from 54 percent at the end of 2005. Furthermore, by the end of 2008, about 35 percent of households used wireless phones as their primary or only means of telephone service, of which about 20 percent had only wireless phones and the other 15 percent had landlines but received all or most calls on wireless phones. Consumers’ use of wireless phones for other purposes, such as text messaging, photography, and accessing the Internet, has also increased dramatically. For example, FCC reports that, while a subscriber’s average minutes of use per month grew from 584 to 769 from 2004 to 2007, the number of text messages grew more than tenfold during the same period. Within the wireless phone industry, four nationwide wireless phone service carriers—AT&T, Sprint, T-Mobile, and Verizon—operate alongside regional carriers of various size. The four major carriers serve more than 85 percent of wireless subscribers, but no single competitor has a dominant share of the market. As recently as 2007, more than 175 companies identified themselves as wireless phone service carriers. To subscribe to wireless phone service, a customer must select a wireless phone service carrier and either sign a contract and choose a service plan or purchase prepaid minutes and buy a phone that works with the prepaid service. Most customers sign contracts that specify the service plan and the number of minutes and text messages the customer is buying for a monthly fee. Also, new customers who sign contracts for wireless phone service sometimes pay up-front fees for “network activation” of their phones and usually agree to pay an “early termination fee” if they should quit the carrier’s network before the end of the contract period. 
In return for signing a contract, customers often receive wireless phones at a discount or no additional cost. In 1993, the Omnibus Budget Reconciliation Act (1993 Act) was enacted, creating a regulatory framework to treat wireless phone service carriers consistently and encourage the growth of a competitive marketplace. Specifically, the law required FCC to treat wireless carriers as common carriers but gave FCC authority to exempt wireless service carriers from specific regulations that apply to common carriers if FCC could demonstrate that doing so would promote competition, that the regulations were unnecessary to protect consumers, and that the exemption was consistent with the public interest. FCC has specific authority to regulate wireless phone service rates and market entry, while states are preempted from doing so; however, states may regulate the other “terms and conditions” of wireless phone service. The 1993 Act also directed FCC to require wireless carriers, like other common carriers, to provide service upon reasonable request and terms without unjust or unreasonable discrimination, as well as to adhere to procedures for responding to complaints submitted to FCC. Subsequently, the Telecommunications Act of 1996 authorized FCC to exempt wireless service carriers from these sections; however, in a 1998 proceeding to consider whether to exempt certain wireless phone service carriers from these requirements, FCC specifically stated that it would not do so, noting that these respective sections represented the “bedrock consumer protection obligations” of common carriers. FCC’s rules specify that the agency has both informal and formal complaint processes. FCC’s informal complaint process allows consumers to file complaints with FCC that the agency reviews and forwards to carriers for a response. The formal complaint process, which is similar to a court proceeding, requires a filing fee and is rarely used by consumers. 
State agencies also play a role in wireless phone service oversight. State utility commissions (sometimes called public utility commissions or public service commissions) regulate utilities, including telecommunications services such as wireless phone service and landline phone service. State commissions may also designate wireless phone service carriers as eligible telecommunications carriers (ETC)—a designation that allows carriers to receive universal service funds for serving consumers in high-cost areas. Through this process, state utility commissions may place conditions on how wireless carriers provide services in those high-cost areas in order for them to be eligible for such funds. State attorneys general broadly serve as the chief legal officers of states while also representing the public interest, and their work has included addressing wireless consumer protection issues. For example, in 2004, the attorneys general of 32 states entered into voluntary compliance agreements with Cingular Wireless (now AT&T), Sprint, and Verizon, under which the carriers agreed to disclose certain service terms at the point-of-sale and in their marketing and advertising, provide a service trial period, appropriately disclose certain taxes and surcharges on customers’ bills, and respond to consumers’ complaints and inquiries. According to our consumer survey, overall, wireless phone service consumers are satisfied with the service they receive. Specifically, we estimate that 84 percent of adult wireless users are very or somewhat satisfied with their wireless phone service and that approximately 10 percent are very or somewhat dissatisfied with their service (see fig. 2). Stakeholders we interviewed identified a number of aspects of wireless phone service that consumers have reported problems with in recent years. 
We identified five key areas of concern on the basis of these interviews and our review of related documents, and we subsequently focused our nationwide consumer survey on these areas (see table 1). Based on our survey results, we estimate that most wireless phone users are satisfied with these five specific aspects of service (see table 2). For example, we estimate that 85 percent of wireless phone users are very or somewhat satisfied with call quality, while the percentages of those very or somewhat satisfied with billing, contract terms, carrier’s explanation of key aspects of service at the point of sale, and customer service range from about 70 to 76 percent. Additionally, we estimate that most wireless phone users are satisfied with their wireless phone service coverage. For example, we estimate that 86 to 89 percent of wireless phone users are satisfied with their coverage when using their wireless phones at home, at work, or in their vehicle. While we estimate that about three-fourths or more of wireless phone service users are satisfied with specific aspects of their service, the percentages of those very or somewhat dissatisfied range from about 9 to 14 percent, depending on the specific aspect of service. For example, we estimate that 14 percent of wireless phone users are dissatisfied with the terms of their service contract or agreement. While the percentages of dissatisfied users appear to be small, they represent millions of people since, according to available estimates, the number of adult wireless phone service users is over 189 million. Other results of our survey suggest that some wireless phone consumers have experienced problems with billing, certain service contract terms, and customer service recently—that is, during 2008 and early 2009. Specifically, our survey results indicate the following: Billing. 
We estimate that during this time about 34 percent of wireless phone users responsible for paying for their service received unexpected charges and about 31 percent had difficulty understanding their bill at least some of the time. Also during this time, almost one-third of wireless users who contacted customer service about a problem did so because of problems related to billing. Service contract terms. Among wireless users who wanted to switch carriers during this time but did not do so, we estimate that 42 percent did not switch because they did not want to pay an early termination fee. Customer service. Among those users who contacted customer service, we estimate that 21 percent were very or somewhat dissatisfied with how the carrier handled the problem. Our analysis of FCC consumer complaint data also indicates that billing, terms of the service contract, and customer service are areas where wireless consumers have experienced problems in recent years. Furthermore, FCC complaint data indicate that call quality is an area of consumer concern. Specifically, our analysis of FCC data indicates that the top four categories of complaints from 2004 through 2008 regarding service provided by wireless carriers were billing and rates, call quality, early termination of contracts, and customer service, as shown in figure 3 (see app. II for additional discussion of FCC wireless consumer complaint data). Our survey of state utility commissions also found that billing, contract terms, and quality of service were the top categories of consumer complaints related to wireless phone service that commissions received in 2008. Specifically, among the 21 commissions that track wireless consumer complaints, 14 noted billing, 10 noted contract terms, and 10 noted quality of service as among the top three types of complaints commissions received in 2008. 
Additionally, 3 commissions specifically cited early termination fees as one of the top three categories of complaints they received in 2008. In response to the areas of consumer concern noted above, wireless carriers have taken a number of actions in recent years. For example, officials from the four major carriers—AT&T, Sprint, T-Mobile, and Verizon—reported taking actions such as prorating their early termination fees over the period of the contract, offering service options without contracts, and providing Web-based tools consumers can use to research a carrier’s coverage area, among other efforts. In addition, in 2003, the industry adopted a voluntary code with requirements for dealing with customers and, according to CTIA–The Wireless Association, the wireless industry spent an average of $24 billion annually between 2001 and 2007 on infrastructure and equipment to improve call quality and coverage. Also, carriers told us they use information from third-party tests and customer feedback to determine their network and service performance and identify needed improvements. (See app. III for additional information about industry actions to address consumer concerns.) Representatives of state agencies and various consumer and industry associations we interviewed expressed concern to us that many of the actions the industry has taken to address consumers’ concerns are voluntary and have not effectively addressed some major consumer concerns. For example, officials from some state public utility commissions indicated that there are no data to support the effectiveness of the wireless industry’s voluntary code and that this code lacks the level of oversight that state agencies can offer. Moreover, officials from state utility commissions and consumer associations we spoke with indicated that the industry’s actions to prorate early termination fees may be inadequate because the fees are not reduced to $0 over the course of the contract period.
Furthermore, some representatives of state agencies and consumer groups suggested that the industry has taken voluntary actions such as adopting the voluntary code and prorating early termination fees to avoid further regulation by FCC. Industry representatives, however, told us that the voluntary approach is more effective than regulation, since it gives the industry flexibility to address these concerns. FCC processes tens of thousands of wireless consumer complaints each year but has conducted little additional oversight of services provided by wireless phone service carriers because the agency has focused on promoting competition. The agency receives informal consumer complaints and forwards them to carriers for response; however, our consumer survey results suggest that most wireless consumers with problems would not complain to FCC and many do not know where they could complain. FCC has also not articulated goals and measures that clearly identify the intended outcomes of its complaint-processing effort. Consequently, if wireless consumers do not know where they can complain or what outcome to expect if they do, they may be confused about where to go for help or what assistance they can expect from FCC. Additionally, FCC cannot demonstrate how well it is achieving the intended outcomes of its efforts. While FCC monitors wireless consumer complaints by reviewing the top categories of complaints received, it has conducted few in-depth analyses to identify trends or emerging issues, impeding its ability to determine whether its rules have been violated or if new rules may be needed. FCC receives about 20,000 to 35,000 complaints each year related to services provided by wireless carriers, which the agency forwards to carriers for response. 
Given that our survey indicates that an estimated 21 percent of consumers who contact their carrier’s customer service about a problem are dissatisfied with the result, FCC’s efforts to process complaints are an important means for consumers to get assistance in resolving their problems. After reviewing a complaint received, FCC responds by sending the consumer a letter about the complaint’s status. If FCC determines that the complaint is valid, the agency sends the complaint to the carrier and asks the carrier to respond to FCC and the consumer within 30 days. Once FCC receives a response from the carrier, the agency reviews the response, and if it determines the response has addressed the consumer’s complaint, it marks the complaint as closed. According to FCC officials, if the response is not sufficient, FCC contacts the carrier again. FCC officials told us they consider a carrier’s response to be sufficient if it responds to the issue raised in the consumer’s complaint; however, such a response may not address the problem to the consumer’s satisfaction. When FCC considers a complaint to be closed, it sends another letter to the consumer, which states that the consumer can call FCC with further questions or, if not satisfied with the carrier’s response, can file a formal complaint. FCC officials also told us that if a consumer is not satisfied, the consumer can request that FCC mediate with the carrier on his or her behalf; however, the letter that FCC sends to a consumer whose complaint has been closed does not identify mediation as an option. FCC closes most wireless phone service complaints within 90 days of receiving them. Specifically, according to FCC’s complaint data, the agency closed 61 percent of complaints received in 2008 within 90 days (see fig. 4). FCC uses several methods to inform consumers that they may complain to the agency about their wireless phone service and has taken steps to improve its outreach. 
According to FCC officials, the agency provides information on how to complain to FCC on its Web site and in fact sheets that are distributed through various methods, including its Web site. Also, in response to a recommendation from its Consumer Advisory Committee in 2003 to improve outreach to consumers about the agency’s process for handling complaints, FCC switched from using one complaint form to having multiple forms for different types of complaints to make filing complaints easier for consumers. FCC also made its complaint forms and fact sheets available in Spanish and has distributed consumer fact sheets at outreach events and conferences. Furthermore, the agency created an e-mail distribution list for disseminating consumer information materials, which it used to inform consumers about the revised complaint forms. We have previously noted that it is important for an agency’s consumer protection efforts to inform the public effectively and efficiently about its role and how to seek redress. Additionally, we have reported on various ways an agency can communicate with the public about its efforts, including how exploring multiple methods for communicating with the public may improve public outreach. Such outreach methods can include making effective use of Web sites, e-mail listserves, or other Web-based technologies like Web forums, as well as requiring relevant companies to provide information to their customers. For example, many state utility commissions require landline carriers to include information on customers’ bills about how to contact the commission with a complaint. Despite FCC’s efforts to improve its outreach, these efforts may not be adequately informing the public about the agency’s role in handling consumer complaints.
Specifically, based on the results of our consumer survey, we estimate that 13 percent of adult wireless phone users would complain to FCC if they had a problem that their carrier did not resolve and that 34 percent do not know where they could complain. Therefore, many consumers that experience problems with their wireless phone service may not know to contact FCC for assistance or may not know at all whom they could contact for help. We reported these survey results in June 2009. In August 2009, noting our survey results, FCC sought public comment on whether there are measures the agency could take to ensure that consumers are aware of FCC’s complaint process, including whether FCC should require carriers to include information for consumers on their bills about how to contact FCC with a complaint. FCC’s goals and measures related to its efforts to process wireless consumer complaints do not clearly identify the intended outcomes of these efforts. The Government Performance and Results Act of 1993 (GPRA) requires an agency to establish outcome-related performance goals for its major functions. GPRA also requires an agency to develop performance indicators for measuring the relevant outcomes of each program activity in order for the agency to demonstrate how well it is achieving its goals. The key goal related to FCC’s consumer complaint efforts is to “work to inform American consumers about their rights and responsibilities in the competitive marketplace.” This key goal also has a subgoal to “facilitate informed choice in the competitive telecommunications marketplace.” According to FCC officials, “informed choice” means consumers are informed about how a particular telecommunications market works, what general services are offered, and what to expect when they buy a service. 
FCC’s measure related to its efforts to process wireless consumer complaints under this subgoal is to respond to consumers’ general complaints within 30 days, which reflects the time it takes FCC to initially respond to the consumer about the status of a complaint. The measure does not clearly or fully demonstrate FCC’s achievement of its goal to facilitate informed consumer choice. Instead, it is a measure of a program output, or activity, not of the outcome the agency is trying to achieve. Another subgoal is to “improve customer experience with FCC’s call centers and Web site.” While this subgoal does identify an intended outcome, FCC does not have a measure related to this outcome that pertains to consumers who complain about services provided by their wireless carrier. FCC officials told us that they do not measure customer experience with the agency’s call centers and Web sites but sometimes receive anecdotal information from customers about their experiences. We have previously reported that to better articulate results, agencies should create a set of performance goals and related measures that address important dimensions of program performance. FCC’s goals may not represent all of the important dimensions of FCC’s performance in addressing consumer complaints. A logical outcome of handling complaints is resolving problems or, if a problem cannot be resolved, helping the consumer understand why that is the case. However, it is not clear whether resolving problems is an intended outcome of FCC’s consumer complaint efforts. While FCC’s goals in this area indicate that informing consumers is a goal of the agency, some information from FCC implies that another intended outcome of these efforts is to resolve consumers’ problems. 
For example, FCC’s fact sheets state that consumers can file a complaint with FCC if they are unable to resolve a problem directly with their carrier, which may lead consumers to believe that FCC will assist them in obtaining a resolution. However, FCC officials told us that the agency’s role in addressing complaints, as outlined in the law, is to facilitate communication between the consumer and the carrier and that FCC lacks the authority to compel a carrier to take action to satisfy many consumer concerns. Thus, it is not clear if the intended outcome of FCC’s complaint-handling efforts is resolving consumer problems, fostering communication between consumers and carriers, or both. Furthermore, FCC has not established measures of its performance in either resolving consumer problems or fostering communication between consumers and carriers. For example, FCC does not measure consumer satisfaction with its complaint-handling efforts. Without clear outcome-related goals and measures linked to those goals, the purpose and effectiveness of these efforts are unclear, and the agency’s accountability for its performance is limited. As noted above, consumers may not know to contact FCC if they have a complaint about their wireless phone service. Additionally, because FCC has not clearly articulated the intended outcomes of its complaint-processing efforts, consumers may not know the extent to which FCC can aid them in obtaining a satisfactory resolution to their concerns, and since FCC’s letters to consumers do not indicate that mediation is available, consumers may not know that they can request this service from FCC. Consequently, consumers with wireless service problems may be confused about where to seek assistance and what kind of assistance to expect if they do know they can complain to FCC.
FCC has few rules that specifically address services consumers receive from wireless phone service carriers, and in general, the agency has refrained from regulating wireless phone service in order to promote competition in the market. FCC’s rules include general requirements for wireless carriers to provide services upon reasonable request and terms, in a nondiscriminatory manner, and to respond to both informal and formal complaints submitted to FCC by consumers. FCC also has specific rules requiring wireless carriers and other common carriers to present charges on customers’ bills that are clear and nonmisleading, known as truth-in-billing rules. Additionally, FCC’s rules establish other consumer protections, such as requiring wireless carriers to provide enhanced 911 and other emergency services, and number portability rules that allow customers to keep their phone numbers when switching between wireless carriers or between landline and wireless services. While FCC has rules that cover billing, the agency has not created specific rules governing other key areas of recent consumer concern that we identified (see table 3). According to FCC, the agency does not regulate issues such as carriers’ contract terms or call quality, since the competitive marketplace addresses these issues, leading carriers to compete on service quality and proactively respond to any related concerns from consumers. Additionally, having determined that exempting carriers from certain regulations will promote competition, FCC has used its authority under the 1993 Act to exempt wireless carriers from some rules that apply to other communications common carriers. For example, in 1994, FCC exempted wireless carriers from rate regulations that apply to other common carriers. FCC has stated that promoting competition was a principal goal of the 1993 Act under which Congress established the regulatory framework for wireless phone service oversight.
As required by the 1993 Act, when FCC exempts wireless phone service carriers from regulations in order to promote competition, as it has done, the agency must determine that such exemption is in the public interest and that the regulations are not necessary for the protection of consumers. FCC officials told us that the agency has taken a "light touch" in regulating the industry because it is competitive and noted that carriers compete with one another to provide better service. FCC proposed rules in 2005 for wireless carriers to address further regulation of billing practices and, in 2008, to address carriers' reporting of service quality information such as customer satisfaction and complaint data. FCC has received comments on both proposals but has taken no further action to date. In August 2009, as part of its effort to seek comment on a number of telecommunications consumer issues, FCC sought comment on the effectiveness of its truth-in-billing rules and whether changes in these rules are needed. FCC monitors informal complaints submitted by consumers to determine whether further regulation is needed and if the wireless industry is complying with the agency's rules, but such monitoring is limited. According to FCC officials, trends in consumer complaint data may alert them to the need for changes in regulation. Furthermore, FCC has acknowledged that when exempting telecommunications service providers, such as wireless carriers, from its regulations, the agency has a duty to ensure that consumer protection needs are still met. FCC's Consumer and Governmental Affairs Bureau reviews the top categories of complaints reported in the agency's quarterly reports of consumer complaints and looks for trends. FCC officials said that the agency does not routinely conduct more in-depth reviews of the nature of wireless consumer complaints unless they are needed to support an FCC decision-making effort, such as a rulemaking proceeding. 
FCC does not document its monitoring of consumer complaints and does not have written policies and procedures for routinely monitoring complaints. FCC has taken a number of actions to enforce its rules that apply to wireless phone service carriers, but the agency has conducted no enforcement of its truth-in-billing rules as they apply to wireless service. One of the agency's performance goals is to enforce FCC's rules for the benefit of consumers. According to representatives of FCC's Enforcement Bureau, trends in consumer complaints that identify potential violations of FCC rules may signal the need for FCC to conduct an investigation, which could lead to an enforcement action. For example, in reviewing complaint data, the bureau identified five wireless carriers that had not responded to consumer complaints, which, in 2008, led the agency to initiate enforcement actions against these carriers. However, Enforcement Bureau officials told us that they have not reviewed complaints to look for potential violations of the wireless truth-in-billing rules. Under the method it currently uses to categorize informal complaints, FCC cannot easily determine whether complaints may indicate a potential violation of FCC's truth-in-billing rules. For example, FCC officials told us that while the agency uses category codes to identify types of complaints related to billing, such as codes for rates, line items, and fees, FCC officials would have to review complaints individually to determine whether they revealed a potential violation of its truth-in-billing rules, an analysis FCC has not conducted. Furthermore, according to FCC officials, since the application of the agency's truth-in-billing rules to wireless carriers was expanded in 2005, the agency has conducted no formal investigations of wireless carriers' compliance with these rules because investigating other issues has been a priority and FCC has received no formal complaints in this area. 
Since our consumer survey indicates that about a third of consumers responsible for paying their wireless bills have had problems understanding their bill or received unexpected charges, the enforcement of truth-in-billing rules is important for the protection of consumers. Lacking in-depth analysis of its consumer complaints, FCC may not be aware of trends or emerging issues related to consumer problems, if specific rules—such as the truth-in-billing rules—are being violated, or if additional rules are needed to protect consumers. Our standards for internal control in the federal government state that agencies should have policies and procedures as an integral part of their efforts to achieve effective results. Without adequate policies and procedures for conducting such analyses of its consumer complaints, FCC may not be able to ensure that its decisions to exempt carriers from regulation promote competition and protect consumers. Results of our survey of state utility commissions show that while most commissions process wireless consumer complaints, most do not regulate wireless phone service. Representatives of state utility commissions and other stakeholders we interviewed told us that states’ authority under federal law to regulate wireless phone service is unclear, and this lack of clarity has, in some cases, led to costly legal proceedings and some states’ reluctance to provide oversight. Additionally, based on the results of our survey, communication between these commissions and FCC regarding oversight of wireless phone service is infrequent. In response to our survey of 51 state utility commissions, 33 commissions reported receiving complaints about wireless phone service, which they process in different ways. 
Specifically, 20 of these commissions work with the consumer and/or wireless carrier to resolve wireless complaints, while the other 13 commissions that accept complaints forward the complaint or refer the consumer to the relevant wireless carrier or another government entity. States that forwarded complaints or referred consumers to other government entities most frequently did so to FCC or a state attorney general, with some complaints also going to the Federal Trade Commission, a state consumer advocate, or another state agency. State utility commission officials we spoke with in California, Nebraska, and West Virginia, which all accept complaints and work with carriers and consumers to resolve them, told us that they have access to higher-ranking carrier representatives than consumers who call the carriers directly. This access, they said, helps them resolve wireless consumer complaints in an effective and timely manner. Twenty-one of the 33 commissions that accept complaints reported recording and tracking the number and types of wireless phone service complaints they receive. Based on the responses of commissions to our survey, they received a total of 8,314 wireless service complaints in 2008. Most commissions do not regulate wireless phone service. As noted previously, under federal law, states may regulate “terms and conditions” of wireless phone service, although they are preempted from regulating rates and entry. In response to our survey, 19 commissions reported having rules (or regulations) for wireless phone service, either for telecommunications services generally, including wireless service, or wireless services specifically (see fig. 5). Few commissions have rules within the following five main areas related to the terms and conditions of wireless service we asked about in our survey: service quality, billing practices, contract or agreement terms and conditions, advertising disclosures, and disclosure of service terms and conditions. 
Specifically, the number of commissions that have rules in these areas ranges from 3 that have rules about disclosure of service terms and conditions to 15 that have rules about service quality (see fig. 6). While fewer than half of the commissions have wireless rules, most designate wireless carriers as eligible telecommunication carriers (ETC) to receive universal service funds for serving high-cost areas. Although ETC status is not required for a wireless carrier to operate in a high-cost area, it is required if the carrier wants to receive universal service funding. We previously reported that wireless carriers often lack the economic incentive to install wireless towers in rural areas where they are unlikely to recover the installation and maintenance costs, but high-cost program support allows them to make these investments. Most commissions place conditions on receiving these funds related to various aspects of service. Specifically, 41 commissions in our survey reported having processes to designate wireless carriers as ETCs, and 31 reported placing such conditions on carriers to receive these funds. For example, the Nebraska state commission requires designated wireless ETCs to submit reports about coverage, service outages, complaints, and their use of universal service funding. For each of the five main areas related to the terms and conditions of service we asked about, more commissions reported having conditions for wireless ETCs than rules for wireless carriers (see fig. 6). Such conditions would not apply to wireless carriers generally—only to those carriers designated as ETCs to provide services in high-cost areas. Few state utility commissions—five—reported taking enforcement action against wireless phone service carriers since the beginning of 2004. According to national organizations representing state agencies, states’ concerns about the cost of pursuing these issues in court have created a reluctance to do so. 
State utility commissions generally cannot regulate wireless phone service unless they are granted authority to do so by state law. According to our survey of state utility commissions, many state commissions do not have authority to regulate wireless phone service, and most that do have authority indicated that it is limited. Specifically, 21 commissions reported having authority to regulate wireless phone service, with 5 commissions indicating they have authority to regulate in all areas related to the terms and conditions of service (excluding those aspects of service preempted by federal law) and 16 indicating they have authority to regulate in some areas. Twenty-one commissions reported that they do not have wireless regulatory authority and another 9 commissions would not assert whether they did or did not have wireless regulatory authority for various reasons (see fig. 7). As discussed in the next section, according to some state officials, the lack of authority or limited authority in many states to regulate wireless phone service may be due to concerns about the lack of clarity in federal law regarding states’ authority to regulate wireless phone service. State authority under federal law to regulate wireless phone service is not clear, based on the views of stakeholders we interviewed, court cases, FCC proceedings, a 2005 FCC task force report, and comments in our survey of state utility commissions. As discussed earlier, in 1993, Congress developed a wireless regulatory framework that expressly prohibited states from regulating the market entry or rates charged by wireless phone service carriers, while retaining states’ authority to regulate other “terms and conditions” of wireless service. In an accompanying report, Congress stated that “terms and conditions” was intended to include billing practices and disputes, as well as other consumer protection matters. 
The report further stated that the examples it provided of services that could fall within a state's lawful authority under "terms and conditions" were illustrative and not meant to preclude other matters generally understood to fall under "terms and conditions." Despite this guidance, whether specific aspects of service are considered "rates" or "terms and conditions" has been the subject of disputes at FCC, in state regulatory bodies, and in the courts. For example, courts have recently been grappling with cases about whether billing line items and early termination fees are defined as "rates," and are therefore not subject to state regulation, or as other "terms and conditions," which may be regulated by states. Such cases have not resolved the issue, as courts have reached different conclusions about the meaning of these terms or await action by FCC. (See app. IV for examples of legal proceedings that address states' authority to regulate terms and conditions of wireless phone service.) FCC has provided limited guidance about the meaning of "terms and conditions." The agency did offer preliminary observations in response to petitions states filed with FCC seeking to continue regulating wireless rates and in a few other proceedings. For example, in 1995, FCC noted that while states could not set or fix wireless rates in the future, they could process consumer complaints under state law because "terms and conditions" was flexible enough to allow states to continue in this role. FCC has also said that states may designate wireless carriers as ETCs and that states may impose consumer protection requirements on wireless carriers as a condition for ETC designation. In 1999, FCC concluded that billing information, practices, and disputes fall within these other terms and conditions. 
Subsequently, in 2005, as part of its truth-in-billing proceeding, FCC concluded that regulation of line items by states constituted rate regulation, thereby preempting state authority; however, this conclusion was rejected by the Eleventh Circuit Court of Appeals. In this proceeding, FCC also asked commenters to address the proper boundaries of “other terms and conditions” and to describe what they believe should be the roles of FCC and the states in defining carriers’ billing practices. However, this proceeding is still open, and FCC has taken no further action to define the proper role of states in regulating billing practices. The lack of clarity regarding states’ authority to regulate wireless service has led to delays in deciding some legal matters and some states’ reluctance to provide oversight. In some instances, when hearing cases involving early termination fees, courts have halted proceedings pending FCC’s resolution of its own proceedings examining whether such fees should be defined as “rates” or “terms and conditions.” For example, in 2008, rather than issue a ruling, a U.S. District Court in the state of Washington deferred to FCC a case against a wireless carrier involving early termination fees, citing FCC’s primary jurisdiction over the issue. According to FCC officials, when courts defer cases to FCC, the agency does not automatically address the issue, but requires that a party file a petition asking FCC to do so. Officials of national organizations representing state agencies and officials from state agencies we interviewed told us that some states are reluctant to regulate wireless phone service until their authority is clarified. This is due, in part, to the potential legal costs that could be incurred if their authority is challenged in court by the industry. Such reluctance may lead to less consumer protection in certain states that otherwise might issue regulations. 
As we have previously reported, to develop an efficient and effective regulatory framework, the appropriate roles of participants, including states, should be identified. Because of the lack of clarity noted above, various stakeholders have expressed a desire for clearer roles for FCC and the states in providing wireless phone service oversight. For example, officials of national organizations representing state agencies, as well as officials from state agencies we interviewed, told us that clarity from Congress or FCC about the scope of state authority in regulating wireless phone service is needed. Some industry representatives also told us that there should be better guidance on the respective roles of state and federal agencies. A report by the FCC Wireless Broadband Access Task Force in 2005 recommended that FCC further clarify states’ authority to regulate “terms and conditions,” saying ambiguity about this authority has resulted in several disputes at FCC, in state regulatory bodies, and in the courts, and has caused significant regulatory uncertainty that will adversely affect investment in and deployment of wireless networks and other services. In 2005, CTIA–The Wireless Association petitioned FCC to declare that early termination fees are rates, and FCC sought comment on the petition. Recently, when CTIA–The Wireless Association withdrew its petition, four consumer groups opposed its withdrawal, hoping that FCC would offer some clarity on whether early termination fees are subject to state laws and regulations in order to help resolve some pending state lawsuits. State, consumer, and industry stakeholders hold varying views about how the meaning of “terms and conditions” should be clarified, which would affect states’ authority to regulate wireless phone service. 
Industry representatives argue that “terms and conditions” should be defined narrowly, which would preempt states’ ability to regulate aspects of wireless phone service that fall outside the definition. For example, industry representatives have stated that early termination fees and billing line items should be considered “rates,” rather than “terms and conditions,” which would preclude state utility commissions from regulating these aspects of service. In general, industry representatives have supported regulation at only the federal level, which they claim would avoid inconsistent state regulatory requirements they say would add to their costs. In contrast, state agency representatives and some consumer organizations have supported clarifying the meaning of “terms and conditions” to broadly encompass various aspects of wireless phone service, since they oppose efforts to preempt states’ regulatory authority. For example, state consumer advocates and consumer organizations have argued that aspects of service such as early termination fees and billing line items should fall within the definition of “terms and conditions” of service that states have authority to regulate. These representatives argue that states should have authority to create and enforce wireless phone service regulations, since they claim states are better positioned to effectively address consumers’ problems. Based on the results of our survey of state utility commissions, communication between FCC and state commissions about wireless phone service oversight is infrequent. Eleven state commissions indicated they had communicated with FCC about wireless phone service oversight issues during the last 6 months of 2008, and 33 commissions reported they had no contact with FCC about wireless phone service oversight during that time. 
Four of the 11 state commissions reported having communication with FCC during that 6-month period about wireless phone service complaints the state commissions had received from consumers. State utility commission officials we interviewed in California, Nebraska, and West Virginia said there was a need for better communication between FCC and the states regarding wireless phone service oversight, and the National Association of Regulatory Utility Commissioners has called for more focused and routine dialogue between FCC and the states, including a formal process to discuss jurisdictional issues. While FCC officials told us they routinely coordinate with state utility commissions about the handling of wireless complaints, they have no written policies or procedures on how they communicate with the states about wireless phone service oversight issues. FCC officials do participate in monthly conference calls with state utility commissions and state attorneys general during which wireless phone service oversight issues can be discussed. However, the state utility commission organizer of this conference call told us that wireless issues are rarely discussed, in part because few states actively regulate wireless phone service. Communication between federal and state agencies that share oversight of a particular industry—such as between FCC and state utility commissions—can be useful for sharing expertise and information, such as data on consumer complaints that could be used to identify problems that may warrant regulatory oversight. As noted earlier, federal law provides that oversight of wireless phone service is a responsibility shared by FCC and the states. 
Also, FCC, in issuing its rules for implementing the wireless regulatory framework created by the 1993 Act, agreed with a suggestion by the National Association of Regulatory Utility Commissioners that state and federal regulators should cooperate in monitoring the provision of wireless services and share monitoring information. We previously reported that collaboration between agencies tasked with shared responsibilities produces more public value than independent actions by such agencies. Practices that support such collaboration include identifying and addressing needs by leveraging resources to support a common outcome and agreeing on roles and responsibilities in agency collaboration. Additionally, we have recently developed a framework with characteristics of an effective system for providing regulatory oversight. One characteristic of this framework is a systemwide focus, among both federal and state regulators, with mechanisms for identifying consumer concerns that may warrant regulatory intervention, while another characteristic is an efficient and effective system within which the appropriate role of the states has been considered, as well as how the federal and state roles can be better harmonized. Without effective communication between FCC and state regulators, FCC may not be able to ensure such focus and clear delineation of the federal and state roles. Without written policies and procedures for how FCC communicates with states about wireless phone service oversight, FCC may be missing opportunities to work with its state partners in conducting oversight, such as sharing complaint data that could be used for monitoring trends. This lack of communication may also limit FCC's awareness of issues the states are encountering in their oversight of wireless carriers. Additionally, without clear awareness of state-level efforts, FCC may not be aware of inconsistencies among state oversight efforts that could indicate a need for changes in its regulations. 
Although the percentages of consumers dissatisfied with various aspects of their wireless phone service are small, these small percentages represent millions of people. By emphasizing its responsibility under the law to foster a competitive marketplace for wireless service, FCC has contributed to the industry's growth and to innovative products and services that have benefited consumers. Nevertheless, FCC's responsibility to protect consumers from harm remains critical, particularly given the growing numbers of wireless service consumers and the limited number of requirements governing key aspects of service that are currently of concern to consumers. FCC's processing of consumers' informal complaints may be an important means for dissatisfied consumers to get help, but as long as FCC lacks clear outcome-related goals and measures for this process, consumers do not know what they can expect from it, and FCC cannot demonstrate its effectiveness in assisting consumers who need help. While most states accept wireless consumer complaints, many do not work with the carrier and the consumer to resolve those complaints, making FCC's efforts an important resource for consumers in those states that do not accept or work to resolve complaints. However, if, as our survey of wireless users suggests, most consumers are not aware they can complain to FCC, those with problems may not know how to seek a fair resolution. Furthermore, without policies and procedures to monitor consumers' concerns and thereby identify problems that may warrant regulatory or enforcement action, FCC cannot ensure that consumers are adequately protected under the competitive deregulatory framework the agency has fostered. 
Finally, without clear guidance for states on the extent of their regulatory authority under federal law, or policies and procedures for how to communicate with states about wireless phone service oversight, FCC could be missing opportunities to partner with state agencies in developing an effective regulatory system. The lack of clarity about states' authority may discourage some states from taking action to protect consumers. While FCC does have efforts to assist consumers, leveraging state resources by clarifying state authority would better ensure that identified problems can be addressed effectively at either the state or the federal level. Additionally, policies and procedures to guide how FCC and the states communicate would help ensure that FCC and the states are sharing information to guide their oversight. Improved communication between FCC and state regulators could help both parties ensure they are providing effective oversight with a systemwide focus and clearer roles enabling them to better identify trends in complaints and emerging consumer concerns that may warrant changes in regulation. We are making the following five recommendations to the Chairman of the Federal Communications Commission: To improve the effectiveness and accountability of FCC's efforts to oversee wireless phone service, direct the commission to 1. clearly inform consumers that they may complain to FCC about problems with wireless phone service and what they can expect as potential outcomes from this process, and expand FCC's outreach to consumers about these efforts; 2. develop goals and related measures for FCC's informal complaint-handling efforts that clearly articulate intended outcomes and address important dimensions of performance; and 3. 
develop and implement policies and procedures for conducting documented monitoring and analysis of consumer complaints in order to help the agency identify trends and emerging issues and determine whether carriers are complying with existing rules or whether new rules may be needed to protect consumers. To better ensure a systemwide focus in providing oversight of wireless phone service and improve FCC's partnership with state agencies that also oversee this service, direct the commission to 4. develop and issue guidance delineating federal and state authority to regulate wireless phone service, including pulling together prior rulings on this issue; addressing the related open proceedings on truth-in-billing and early termination fees; and, if needed, seeking appropriate statutory authority from Congress; and 5. develop and implement policies and procedures for communicating with states about wireless phone service oversight. We provided a draft of this report to FCC for its review and comment. FCC provided written comments, which appear in appendix V. FCC agreed with our recommendation on monitoring and had no position on the others, but noted it has started to take steps to address the issues we raise in our report. In particular, FCC noted that its August 2009 notice of inquiry sought comment on a number of issues related to the findings and recommendations in this report. The agency views this action as the first step in implementing several of the report's recommendations. Regarding clearly informing consumers about its complaint process and expanding outreach to consumers, FCC noted that its notice of inquiry sought comment on whether the agency should take measures to ensure that consumers are aware of its complaint process. Additionally, FCC noted that it intends to do more to inform consumers of its services, including making it clear that consumers can request that FCC mediate with their carrier on their behalf. 
Regarding developing goals and measures that clearly articulate the intended outcomes of its complaint-handling efforts, FCC noted that it already has some performance measures for these efforts and that, since the outcome of each complaint varies depending on its particular circumstances, the appropriate performance measures for this effort should measure its procedural aspects rather than its substantive outcomes. We note, however, that as we indicated in this report, it is not clear to consumers what they can expect from FCC's complaint process. Articulating the intended outcome of this process, whether it be to help consumers resolve their problems, facilitate communication between carriers and consumers, or both, would provide consumers with a better understanding of the purpose of this effort, as well as help the agency better demonstrate results. Regarding our recommendation to develop and implement documented monitoring of its consumer complaints, FCC noted that it has been working to make improvements to its complaint database, including its analytical tools, which will facilitate such monitoring. Regarding the development of guidance delineating federal and state authority to regulate wireless phone service, FCC noted that, in response to its August 2009 notice of inquiry, the agency is currently updating the public record regarding its truth-in-billing rules and carriers' early termination fees, and expects to use this as the basis for potential federal regulatory action, which could include delineating areas within the states' authority that the record indicates should be addressed. Regarding policies and procedures for communicating with states about wireless phone service oversight, FCC noted that it is always looking for new and better ways to communicate with its state partners and that its recent notice of inquiry also asks whether FCC can take further action to reach out to state, as well as federal, local, and tribal government entities. 
We also provided FCC a draft of this report's related e-supplement, GAO-10-35SP, containing additional results of our surveys of consumers and state utility commissions. FCC indicated it did not have any comments in response to the e-supplement. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix VI. This report examines (1) consumers' satisfaction with wireless phone service and problems they have experienced with this service, as well as the industry's response to these problems; (2) the Federal Communications Commission's (FCC) efforts to oversee services provided by wireless phone service carriers; and (3) state utility commissions' efforts to oversee services provided by wireless phone service carriers. To respond to the overall objectives of this report, we interviewed FCC officials and reviewed documents obtained from the agency. We also reviewed relevant laws and FCC regulations. Additionally, we interviewed individuals representing consumer organizations, state agencies, and the industry to obtain their views on wireless phone service consumer concerns and oversight efforts. Table 4 lists the organizations with whom we spoke. To obtain information about consumers' satisfaction with wireless phone service and problems they have experienced with this service, we conducted a telephone survey of the U.S. adult population of wireless phone service users. 
Our aim was to produce nationally representative estimates of adult wireless phone service users’ (1) satisfaction with wireless service overall and with specific aspects of service, including billing, terms of service, carriers’ explanation of key aspects of service, call quality and coverage, and customer service; (2) frequency of problems with billing and call quality; (3) desire to switch carriers and barriers to switching; and (4) knowledge of where to complain about problems. Percentage estimates have a margin of error of less than 5 percentage points, unless otherwise noted. We conducted this survey of the American public from February 23, 2009, through April 5, 2009. A total of 1,143 completed interviews were collected, and calls were made to all 50 states. Our sampling approach included randomly contacting potential respondents using both landline and cell phone telephone numbers. Using these two sampling frames provided us with a more comprehensive coverage of adult cell phone users than if we had sampled from only one frame. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. Each sampled adult was subsequently weighted in the analysis to account statistically for all of the adult cell phone users of the population. 
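The margin-of-error arithmetic behind these confidence statements can be sketched as follows. The 1,143 completed interviews come from the survey described above; the worst-case proportion of 0.5 and the design-effect parameter are illustrative assumptions, not figures from the survey's actual variance estimation.

```python
import math

def margin_of_error(p, n, z=1.96, deff=1.0):
    """Half-width of a confidence interval for a proportion.
    z = 1.96 corresponds to the 95 percent confidence level; deff is an
    assumed design effect that inflates the variance to account for
    weighting adjustments and other departures from simple random sampling."""
    return z * math.sqrt(deff * p * (1 - p) / n)

# n = 1,143 completed interviews; p = 0.5 is the worst-case proportion.
simple = margin_of_error(0.5, 1143)                 # about 0.029 (2.9 points)
adjusted = margin_of_error(0.5, 1143, deff=2.0)     # about 0.041 (4.1 points)
```

Even under a sizable assumed design effect of 2.0, the half-width stays below the 5-percentage-point bound the report cites for most estimates.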
The final weight applied to each responding adult cell phone user included an adjustment for the overlap in the two sampling frames, a raking adjustment to align the weighted sample to the known population distributions from the 2009 supplement of the U.S. Census Bureau’s Current Population Survey and the Centers for Disease Control and Prevention’s 2008 National Health Interview Survey, and an expansion weight to ensure the total number of weighted adults represent an estimated adult population eligible for this study. We conducted an analysis of the final weighted estimates from our survey designed to identify whether our results contain a significant level of bias because our results inherently do not reflect the experiences of those who did not respond to our survey—i.e., a nonresponse bias analysis. We compared unadjusted weighted estimates and final, nonresponse-adjusted weighted estimates of the proportion of U.S. adults’ cell phone usage to similar population estimates from the 2008 National Health Interview Survey, which also includes questions about household telephones and whether anyone in the household has a wireless phone. While we identified evidence of potential bias in the unadjusted weighted estimate, the final weighting adjustments appear to address this potential bias, and we did not observe the same level of bias when examining the final weighted estimates. Based on these findings, we chose to include final weighted estimates at the national level from our survey in the report. In addition, we identified all estimates in the report with margins of error that exceeded plus or minus 5 percentage points and we did not publish estimates with a margin of error greater than plus or minus 9 percentage points. Telephone surveys require assumptions about the disposition of noncontacted sample households that meet certain standards. These assumptions affect the response rate calculation. 
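The raking adjustment described above can be sketched as iterative proportional fitting: base weights are scaled alternately so that weighted margins match known population totals (here, the Census and NHIS benchmarks the report cites). The cell values and targets below are made up for illustration only; this is not GAO’s actual weighting procedure.

```python
def rake(weights, row_targets, col_targets, iters=50):
    """Iterative proportional fitting (raking) on a 2-D table of weights.
    Alternately rescales rows and columns so weighted margins match targets."""
    for _ in range(iters):
        for i, target in enumerate(row_targets):          # match row margins
            s = sum(weights[i])
            weights[i] = [w * target / s for w in weights[i]]
        for j, target in enumerate(col_targets):          # match column margins
            s = sum(row[j] for row in weights)
            for row in weights:
                row[j] *= target / s
    return weights

# Hypothetical example: cells are base weights by two demographic dimensions;
# targets are known population margins (both sum to the same total, 100).
sample = [[200.0, 300.0], [250.0, 393.0]]
raked = rake(sample, row_targets=[48.0, 52.0], col_targets=[55.0, 45.0])
print([round(sum(r), 2) for r in raked])  # row margins converge toward the targets
```

When the margin totals are consistent, the iteration converges quickly for positive tables, leaving cell weights that reproduce every benchmark margin at once.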
For this survey the response rate was calculated using the American Association of Public Opinion Research (AAPOR) Response Rate 3, which includes a set of assumptions. Based on these assumptions, the response rate for the survey was 32 percent; however, the response rate could have been lower if different assumptions had been made and might also be different if calculated using a different method. We used random digit dial (RDD) sampling frames that include both listed and unlisted landline numbers from working blocks of numbers in the United States. The RDD sampling frame approach cannot provide any coverage of the increasing number of cell-phone-only households and limited coverage of cell-phone-mostly households (i.e., households that receive most of their calls on cell phones in spite of having a landline). Because of the importance of reaching such households for this survey about wireless phone service, we also used an RDD cell phone sampling frame. The RDD cell phone sampling frame was randomly generated from blocks of phone numbers that are dedicated to cellular service. About 43 percent of the completed interviews were from the RDD cell phone sample. Because many households contain more than one potential respondent, obtaining an unbiased sample from an RDD frame of landline numbers requires interviewing a randomly selected respondent from among all potential respondents within the sampled household (as opposed to always interviewing the individual who initially answers the phone). We obtained an unbiased sample by using the most recent birthday method, in which the interviewer asks to speak to the household member aged 18 or older with a wireless phone who had the most recent birthday. If the respondent who was identified as the member of the household with the most recent birthday was unavailable to talk and asked to schedule a callback, the call representative recorded the person’s name and preferred telephone number for the callback. 
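AAPOR Response Rate 3 discounts cases of unknown eligibility by an estimated eligibility rate e, which is where the assumptions the report mentions enter the calculation. The sketch below follows AAPOR’s Standard Definitions formula; the case counts are illustrative, not GAO’s actual call dispositions.

```python
def aapor_rr3(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 3: completed interviews divided by all known
    eligible cases plus the estimated-eligible share of unknown cases.
    I=completes, P=partials, R=refusals, NC=non-contacts, O=other,
    UH/UO=unknown eligibility cases, e=estimated eligibility rate (0..1)."""
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

# Illustrative dispositions: a higher assumed eligibility rate e lowers the rate.
print(round(aapor_rr3(I=1143, P=60, R=900, NC=1200, O=100,
                      UH=2000, UO=500, e=0.30), 2))  # → 0.28
```

This is why the report hedges that the 32 percent rate “could have been lower if different assumptions had been made”: the result moves with the analyst’s choice of e.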
There were also cases when a respondent from the cell phone sample asked to be called back on his or her landline. These respondents, if they completed the survey, were counted as completed interviews from the cell phone sample. There were no respondent selection criteria for the cell phone sample; each number dialed from the cell phone sample was assumed to be a cell phone number, and each cell phone was assumed to have only one possible respondent to contact. The results of this survey reflect wireless phone users’ experience with their current or most recent wireless phone service from the beginning of 2008 through the time they were surveyed. Not all questions were asked of all respondents. For example, questions about the prevalence of billing problems were asked only of respondents who indicated they were solely or jointly responsible for paying for their service. Additionally, satisfaction with wireless coverage for particular locations (i.e., at home, at work, and in a vehicle) was calculated only among respondents who indicated they used their wireless phone service in those locations. The survey and a more complete tabulation of the results can be viewed by accessing GAO-10-35SP. To identify the type and nature of problems consumers have experienced in recent years with their wireless phone service, we interviewed officials from FCC, consumer organizations, national organizations that represent state agency officials, and state agency officials from three selected states—California, Nebraska, and West Virginia—representing utility commissions, offices of consumer advocates, and offices of attorneys general (see table 4). We selected these states based on their varying geography, populations, region, and approaches to overseeing wireless phone service, as indicated in part by information obtained from national organizations representing state agency officials.
We also interviewed officials from the four major wireless carriers, two selected smaller carriers that serve mostly rural areas, and wireless industry associations. In addition, we reviewed documents obtained from some of these sources. We also analyzed FCC’s wireless complaint data on complaints received from 2004 through 2008. We reviewed FCC’s processes for generating these data and checked the data for errors and inconsistencies. We determined that the data were sufficiently reliable for the purposes of this review. We also obtained the total number of wireless complaints received in 2008 by the 21 state utility commissions that record and track wireless phone service consumer complaints. While we did not assess the reliability of the state complaint data, we are providing the numbers of complaints states reported receiving for illustrative purposes. To identify major actions the industry has taken in recent years to address consumers’ concerns, we interviewed the industry organizations named above and reviewed related documentation (see table 4). We also requested service quality information from the four major carriers, including measures of network performance and the number and types of customer complaints. Carriers told us that this information is proprietary and sensitive, and as we did not obtain comparable information from all four carriers, we were not able to present any aggregate information based on these data. Additionally, we interviewed consumer, state, and federal stakeholders about the effectiveness of industry efforts to address consumers’ concerns (see table 4). To evaluate how FCC oversees wireless phone service, including the agency’s efforts to process complaints, monitor sources of information to inform policy decisions, and create and enforce rules, we interviewed FCC officials about these activities and reviewed related documentation obtained from these officials. 
We also reviewed relevant laws, regulations, and procedures, as well as FCC’s quarterly complaint reports, strategic plan, and budget with performance goals and measures. In addition, we reviewed requirements of the Government Performance and Results Act of 1993 and our prior recommendations on performance goals and measures and determined whether FCC’s efforts to measure the performance of its efforts to process consumer complaints are consistent with these requirements and recommendations. We also interviewed consumer, state, and industry stakeholders about their views on FCC’s efforts to provide oversight (see table 4). We focused our review on FCC’s oversight of wireless phone service issues that have been major areas of concern for consumers in recent years, specifically targeting consumer protection efforts and those actions related to how wireless carriers interact with and serve their customers. We did not assess how FCC oversees a number of other facets of the wireless industry, including competition, spectrum allocation, licensing, construction, technical issues such as interference, public safety, or the agency’s obligations under the Telephone Consumer Protection Act and the Controlling the Assault of Non-Solicited Pornography and Marketing Act. To describe state utility commissions’ efforts to oversee wireless phone service, we surveyed commissions in all 50 states and the District of Columbia. We conducted this survey from March 3, 2009, through April 1, 2009. We received responses from all 51 commissions, which we obtained through a Web-based survey we administered and subsequent follow-up with some states. The survey and a more complete tabulation of the results can be viewed by accessing GAO-10-35SP. To obtain illustrative information about these issues, we interviewed state officials in public utility commissions, consumer advocate offices, and offices of attorneys general in three selected states (California, Nebraska, and West Virginia). 
Although we met with the offices of the state attorneys general in the three selected states and a national organization representing state attorneys general, we did not attempt to assess the full breadth of involvement of state attorneys general in addressing wireless phone service consumer concerns. Overall, the number of informal consumer complaints FCC has received about the service provided by wireless phone carriers has decreased since 2004 (see table 5). FCC received 20,753 complaints about the service provided by wireless phone carriers in 2008, the second-lowest total since 2004. From our analysis of FCC data on complaints about the service provided by wireless phone carriers from 2004 through 2008, we identified specific problem areas that complaints cited within the major complaint categories: Billing and rates: Within this category, specific issues consumers complained about included problems obtaining credits, refunds, or adjustments to their bills; charges for minutes talking on a wireless phone; recurring charges on their bills; rates; and unauthorized or misleading charges. Of the nearly 55,000 billing complaints FCC received during this period, 28,000 focused on obtaining credits, refunds, or billing adjustments. FCC also received almost 9,000 billing complaints about charges for minutes talking on a wireless phone. Additionally, there were more than 5,500 complaints about recurring charges on consumers’ bills and more than 5,500 complaints about the rates they received from their wireless service providers. Finally, our analysis of FCC’s data identified more than 2,100 wireless complaints concerning unauthorized, misleading, or deceptive charges (known as “cramming”).
Call quality: Within this category, the majority of consumers complained about three issue areas: the quality of wireless phone service in their local service area, the premature termination of calls (i.e., “dropped calls”), and the inability to use their wireless phone because of service interruption by wireless phone service providers. Specifically, of the more than 14,000 call quality complaints FCC received during this period, more than 7,300 were about the quality of wireless phone service in the local service area. FCC also received more than 3,200 complaints about dropped calls and more than 2,000 complaints about interruption of service by wireless service providers. Contract early termination: This category includes termination of wireless phone service by the consumer or by the carrier. Nearly 12,000, or just under 90 percent, of all terms-of-service contract complaints FCC received were about termination by consumers prior to the end of a specified contract term, which would result in an early termination fee. Customer service: Customer service complaints were the fourth largest category of complaints; however, FCC did not report customer service complaints as a top category of complaints in its quarterly reports from 2004 through 2008. In comparison, FCC identified carrier marketing and advertising as a top category of complaint in each year from 2004 through 2008, even though there were more customer service complaints in 2005, 2006, and 2007. An FCC official told us they did not include customer service complaints in the quarterly reports because they fell within the “other” category, which FCC does not report. 
FCC also indicated that the large decrease in the number of customer service complaints from more than 3,500 in 2007 to fewer than 500 in 2008 was due in part to the agency’s redesign of its complaint forms, which allows for more accurate coding of complaints under specific topics rather than placing them in the “service treatment” category FCC uses to track customer service issues. The wireless phone service industry has taken some actions to address the types of consumer concerns we identified. Specifically, in 2003, the industry adopted a voluntary code, and since then, carriers have taken other measures. Table 6 outlines how elements of the industry code and examples of subsequent major actions we identified among the four largest carriers correspond to the key areas of consumer concern we identified. Federal law provides that while a state may not regulate a wireless carrier’s rates or entry, it may regulate the other terms and conditions of wireless phone service. Section 332(c)(3)(A) of title 47 of the U.S. Code does not define what constitutes rate and entry regulation or what comprises other terms and conditions of wireless phone service. This has left it up to FCC and courts to further define which specific aspects of service fall within the scope of these respective terms. Recently, two areas have garnered much attention at FCC and in the courts—the ability of states to regulate billing line items and the imposition of early termination fees. However, clarity has not yet been achieved. One area of disagreement is whether billing line items, such as surcharges and taxes that appear on consumers’ wireless bills, should be considered a rate or a term and condition of service. In 2005, under its truth-in-billing proceeding, FCC held that state regulations requiring or prohibiting the use of line items for wireless carriers constituted rate regulation and therefore were preempted. 
In the same proceeding, FCC solicited comments on the proper boundaries of “other terms and conditions” within the statute and asked commenters to delineate what they believe should be the relative roles of FCC and the states in defining carriers’ proper billing practices. The National Association of State Utility Consumer Advocates challenged FCC’s preemption finding in court, and the United States Court of Appeals for the Eleventh Circuit (Eleventh Circuit) found that FCC had exceeded its authority. Specifically, the court found that the presentation of a line item on a bill is not a “charge or payment” for service, but rather falls within the definition of “other terms and conditions” that states may regulate. Subsequent to the Eleventh Circuit’s ruling, the Western District Court of Washington rejected the Eleventh Circuit’s analysis and concluded that FCC did not exceed its statutory authority when it preempted line-item regulation and that line items are charges. However, the United States Court of Appeals for the Ninth Circuit (Ninth Circuit) reversed the district court, finding that the Eleventh Circuit decision is binding outside of the Eleventh Circuit. Furthermore, the Ninth Circuit stated that it agreed with the Eleventh Circuit’s determination that how line items are displayed or presented on wireless consumers’ bills does not fall within the definition of “rates.” FCC has not responded to these court decisions, nor has FCC concluded its truth-in-billing proceeding. While FCC has received comments on its 2005 truth-in-billing proposal, it has taken no further action in this proceeding. Accordingly, the issue of how states may regulate billing line items remains unclear. In August 2009, as part of its effort to seek comment on a number of telecommunications consumer issues, FCC sought comment on the effectiveness of its truth-in-billing rules and whether changes in these rules are needed. 
Early termination fees are another area where the distinction between “rates” and “terms and conditions” is not clear. Wireless carriers routinely offer customers discounts on cell phones in exchange for the customer’s commitment to a 1- or 2-year contract. If the contract is canceled before the end of the contract term, the customer is generally charged a fee, commonly referred to as an early termination fee. The Western District Court of Washington, in recently considering an early termination fees case, noted that it is not clear whether a wireless service carrier’s early termination fees are within the preemptive scope of “rates charged” under the statute. The court noted that federal courts that have considered the matter appear to be split on the issue, citing the examples of a district court that found early termination fees to fall under “terms and conditions” and another district court that found them to be “rates charged.” Because of the ongoing FCC efforts in this area, the Western District Court of Washington halted its proceeding pending a determination from FCC about this issue. In 2005, FCC was drawn into this debate at the request of a South Carolina court. In February 2005, SunCom, a wireless carrier, at the request of the court, filed a petition with FCC on whether early termination fees are rates charged. In May 2005, FCC released a public notice seeking comments on this matter. Subsequently, the parties to the litigation entered into a settlement agreement and jointly requested that FCC dismiss the matter without further review. FCC issued an order terminating the proceeding; however, the agency noted that it had a similar petition under review that it intended to address “in the near future.” The similar petition was filed by CTIA–The Wireless Association in March 2005, asking for an “expedited” ruling on whether early termination fees are rates. 
FCC sought comments on the matter from interested parties, who have submitted over 37,000 filings in this proceeding. In view of the growing concern over early termination fees and the number of complaints that FCC receives from consumers on this issue, FCC held a hearing in June 2008. At this hearing, expert panelists testified on the use of early termination fees by communications service providers. A year after the hearing, CTIA–The Wireless Association notified FCC that it was withdrawing its petition, citing the evolution of the competitive wireless marketplace as a reason for its withdrawal. However, the National Association of State Utility Consumer Advocates, the National Consumer Law Center, U.S. Public Interest Research Group, and Consumers Union filed a joint response in opposition to the petition’s withdrawal, arguing that a ruling from FCC would help clarify this issue and help resolve some pending lawsuits about it. FCC has not responded to CTIA–The Wireless Association’s notice or the consumer advocates’ joint response. Thus, this is another area that remains unresolved. In addition to the individual named above, Judy Guilliams-Tapia, Assistant Director; Eli Albagli; James Ashley; Scott Behen; Nancy Boardman; Bess Eisenstadt; Andrew Huddleston; Eric Hudson; Mitchell Karpman; Josh Ormond; George Quinn; Ophelia Robinson; Kelly Rubin; Andrew Stavisky; and Mindi Weisenbloom made key contributions to this report.

Americans increasingly rely on wireless phones, with 35 percent of households now primarily or solely using them. Under federal law, the Federal Communications Commission (FCC) is responsible for fostering a competitive wireless marketplace while ensuring that consumers are protected from harm. States also have authority to oversee some aspects of service. As requested, this report discusses consumers’ satisfaction and problems with wireless phone service and FCC’s and state utility commissions’ efforts to oversee this service.
To conduct this work, Government Accountability Office (GAO) surveyed 1,143 adult wireless phone users from a nationally representative, randomly selected sample; surveyed all state utility commissions; and interviewed and analyzed documents obtained from FCC and stakeholders representing consumers, state agencies and officials, and the industry. Based on a GAO survey of adult wireless phone users, an estimated 84 percent of users are very or somewhat satisfied with their wireless phone service. Stakeholders GAO interviewed cited billing, terms of the service contract, carriers' explanation of their service at the point of sale, call quality, and customer service as key aspects of wireless phone service with which consumers have experienced problems in recent years. The survey results indicate that most users are very or somewhat satisfied with each of these key aspects of service, but that the percentages of those very or somewhat dissatisfied with these aspects range from about 9 to 14 percent. GAO's survey results and analysis of FCC complaint data also indicate that some wireless phone service consumers have experienced problems with billing, certain contract terms, and customer service. While the percentages of dissatisfied users appear small, given the widespread use of wireless phones, these percentages represent millions of consumers. FCC receives tens of thousands of wireless consumer complaints each year and forwards them to carriers for response, but has conducted little other oversight of services provided by wireless phone service carriers because the agency has focused on promoting competition. However, GAO's survey results suggest that most wireless consumers with problems would not complain to FCC and many do not know where they could complain. FCC also lacks goals and measures that clearly identify the intended outcomes of its complaint processing efforts. Consequently, FCC cannot demonstrate the effectiveness of its efforts to process complaints. 
Additionally, without knowing to complain to FCC or what outcome to expect if they do, consumers with problems may be confused about where to get help and about what kind of help is available. FCC monitors wireless consumer complaints, but such efforts are limited. Lacking in-depth analysis of its consumer complaints, FCC may not be aware of emerging trends in consumer problems, if specific rules are being violated, or if additional rules are needed to protect consumers. FCC has rules regarding billing, but has conducted no enforcement of these rules as they apply to wireless carriers. This August, FCC sought public comment about ways to better protect and inform wireless consumers. In response to GAO's survey, most state commissions reported receiving and processing wireless phone service consumer complaints; however, fewer than half reported having rules that apply to wireless phone service. Stakeholders said that states' authority to regulate wireless service under federal law is unclear, leading, in some cases, to costly legal proceedings and reluctance in some states to provide oversight. FCC has provided some guidance on this issue but has not fully resolved disagreement over states' authority to regulate billing line items and fees charged for terminating service early. State commissions surveyed indicated that communication with FCC about wireless phone service oversight is infrequent. As such, FCC is missing opportunities to partner with state agencies in providing effective oversight and to share information on wireless phone service consumer concerns. |
In June 2015, DHS delivered its CBRNE Functions Review Report to Congress, which proposed consolidating the agency’s core CBRNE functions (see fig. 1) into a new Office of CBRNE Defense. According to DHS officials, the agency’s proposal to consolidate its CBRNE functions adopts the primary recommendation from a previous DHS study on CBRN consolidation conducted in 2013. At that time, DHS assembled a review team to evaluate CBRN alignment options and produced a report on its findings for the Secretary of Homeland Security. According to DHS officials, the alignment options from the 2013 report were updated in 2015 based on the Secretary’s Unity of Effort Initiative, to include transferring CBRNE threat and risk assessment functions from the DHS Science and Technology Directorate to the proposed CBRNE Office, as well as including the DHS Office for Bombing Prevention from the National Protection and Programs Directorate. In December 2015, legislation that would amend the Homeland Security Act of 2002 to establish within DHS a consolidated CBRNE Office was passed in the House and referred to the Senate for consideration. This legislation, if approved, would direct the agency to create a new CBRNE office led by an Assistant Secretary responsible for: (1) developing, coordinating, and maintaining DHS’s overall CBRNE strategy and policy; (2) developing, coordinating, and maintaining periodic CBRNE risk assessments; (3) coordinating DHS’s CBRNE activities with other federal agencies; (4) providing oversight for DHS’s preparedness for CBRNE threats; and (5) providing support for operations during CBRNE threats or incidents. As described in figure 2, the new CBRNE Office would be composed of Chemical, Biological, Nuclear, and Explosives mission support divisions. As of July 2016, this legislation had not been taken up by the Senate.
The CBRNE report and summaries provided some insight into the factors considered, but did not include associated underlying data or methodological information, such as how benefits and costs were compared or the extent to which stakeholders were consulted. According to DHS officials, DHS could not locate the underlying information associated with analyses that informed the consolidation proposal due to staff turnover. Without such underlying documentation, we could not fully determine the extent to which DHS considered the benefits and limitations of a CBRNE consolidation as part of its decision-making process. According to DHS’s report to Congress and the summary documents provided to us, the department developed decision-making criteria, identified as “desired outcomes” and “near-term goals” for its proposed reorganization, and consulted with DNDO, OHA, and S&T; leadership of other DHS components; the Office of Management and Budget (OMB); and National Security Council Staff. Also, according to an Office of Policy official, DHS consulted with the Executive Office of the President as well as Congressional staff on its consolidation plan. DHS considered five alignment options, as shown in figure 3, and provided a general assessment of the effects of reorganization on its CBRNE mission. In May 2012, we reported on key questions for agency officials to consider when evaluating an organizational change that involves consolidation. Table 1 summarizes the key questions for evaluating consolidation proposals from this previous work, along with our assessment of whether the documentation provided to us and our interviews with agency officials indicated that each question was addressed. DHS’s June 2015 report to Congress and the supporting documentation we reviewed included an evaluation of some, but not all, key questions listed above in Table 1.
As previously noted in our May 2012 report, these questions are important to consider when evaluating an organizational change that involves consolidation. Specifically, DHS’s consolidation proposal: Identified strategic outcomes and goals and considered problems to be solved, but did not fully assess and document potential problems that could result from consolidation. DHS’s proposal and supporting documents identified eight near-term goals to be achieved within two years of consolidation, such as providing appropriate CBRN focus and visibility within the department and preserving programs and activities that are currently operating effectively, as shown in figure 3. DHS officials also indicated, in documents provided to us, several problems that a CBRNE program consolidation may solve. For example, in a November 2014 letter from the Secretary of Homeland Security to a congressional committee chair, the Secretary states that consolidation will provide a clearer focal point for external and DHS component engagement on CBRNE issues, among other things. In addition, in a briefing to Congress, DHS officials defined challenges a CBRNE consolidation may address, such as inconsistent CBRN-related messaging to DHS stakeholders and confusion over a CBRN focal point within DHS and for external stakeholders. The proposal and supporting documents did not adequately address problems that consolidation may create. Component officials we interviewed provided several examples of potential problems due to consolidation. For example, officials told us that merging staff into one office could result in a need for additional support staff to manage day-to-day functions such as human resources, contracting, and financial management for a larger number of employees. Officials further stated that they may not have sufficient staff to meet these mission needs in a consolidated CBRNE unit.
Additionally, component officials expressed concern over the potential allocation of resources in the consolidated office. According to these officials, there is a difference between components whose missions focus on potential terrorism events that are more likely to occur but have limited consequences and components that focus on potential events that are less likely to occur but could be far more catastrophic. These officials added that consolidating these components may complicate resource allocation decisions due to the varying degree to which certain CBRNE activities are seen as a priority over others. According to a DHS official, Office of Policy officials met with two of the five affected CBRNE components to determine potential unintended problems and to develop mitigation measures. However, not all affected components were included in the discussions, and the problems and measures were not documented. According to our May 2012 report, the key to any consolidation initiative is validating specific goals that have been evaluated against a realistic assessment of how the consolidation can help achieve these goals. In our past work, we have also found that it is important for agencies to recognize that delays and disruptions are common during consolidations and can compromise these initiatives or introduce new problems. As such, it is key that agencies work to anticipate and mitigate these issues or they risk seeing costs increase. Did not conduct and document a comparison of benefits and costs. While committee report language directed DHS to include an assessment of whether consolidation could produce cost savings, as of May 2016, DHS had not documented a comparison of benefits and costs for its consolidation plan. DHS officials told us that in 2013 they developed a rough cost estimate for the consolidation option, but provided no documentation or analysis supporting the estimate.
According to DHS’s proposal, additional analysis is required to determine if budgetary efficiencies can be gained by the recommended consolidation option. An Office of Policy official told us that DHS has yet to conduct this additional analysis, noting that as a result of an appropriations act restriction, officials decided to take few concrete steps to plan for or move forward with the consolidation. Our May 2012 report highlights the importance of benefits and cost considerations as part of the decision-making process for potential organizational consolidations. More specifically, given the potential benefits and costs of consolidation, it is imperative that Congress and the executive branch have the information needed to help effectively evaluate consolidation proposals. Demonstrating that a consolidation proposal is based on a clearly presented business case or an analysis of benefits and costs can show decision-makers why the initiative is being considered. If agencies cannot reasonably conclude that benefits will outweigh costs, the agency may need to consider consolidation alternatives to meet its goals. Did not fully identify or document consideration of up-front costs. DHS considered potential up-front costs associated with a CBRNE consolidation but did not document these costs or how they were considered during the reorganization decision-making process. For example, an Office of Policy official told us that DHS considered some potential up-front costs associated with detailing 19 Office of Bombing Prevention staff to DNDO. However, documentation we reviewed did not describe the extent to which these up-front costs were considered in the decision-making process. Additionally, DHS officials stated they did not conduct an up-front cost estimate associated with changes to physical infrastructure for the consolidation proposal, because the agency intended to leverage existing plans to move to a new location. 
According to an Office of Policy official, to address the up-front costs associated with the consolidation, DHS plans to take advantage of plans to move staff and resources to the St. Elizabeth’s site in fiscal years 2017 and 2018 in an effort to reduce some of the expenses arising from the consolidation. Even if some of the up-front costs are expected to be covered through existing relocation plans, identifying and accounting for the full amount of up-front funding is important to fully evaluate or prepare for a consolidation. Our May 2012 report indicates that consolidation initiatives often have up-front costs, and agencies must pay them before they can realize any intended gains or savings. For example, agencies may need to pay for equipment and furniture moves or fund employee transfers and buyouts. Further, we also found that a lack of up-front funding can prevent a potentially beneficial initiative from getting off the ground or derail an initiative already underway. Our review of DHS’s proposal does not indicate that these potential expenses or any other up-front costs were fully considered in developing the proposal. Conducted limited external stakeholder consultations. DHS conducted limited external stakeholder outreach in developing the consolidation proposal, and thus the proposal may not sufficiently account for stakeholder concerns. According to an Office of Policy official, the review team consulted with OMB, National Security Council Staff, the Executive Office of the President (EOP) and Congressional staff. Among the six components involved in the proposed consolidation, DHS officials stated that two of these components, DNDO and OHA, have significant working relationships with a wide range of external stakeholders including the Departments of Defense, State, Energy, and Health and Human Services. 
However, while the impact of consolidation on external stakeholders was a consideration, agency officials did not solicit input directly from the full range of interagency stakeholders associated with each of the CBRNE components in developing the proposal. According to a DHS Office of Policy official, DHS’s assessment of its consolidation was that it was an internal reorganization with a goal to improve outward-facing messaging and collaboration. This official also indicated that both DNDO and OHA are considered useful sources for identifying potential positive or negative consolidation impacts for their stakeholders. DHS leadership was satisfied that discussions with the EOP in addition to DNDO and OHA’s engagement with their respective external stakeholders sufficiently accounted for the perspectives of interagency partners, according to the DHS Office of Policy official. However, DHS did not provide documentation of any external stakeholder consultations, including the outcome of any discussions related to the consolidation proposal or how this information was used in the decision-making process. In May 2012, we reported that consolidation success depends on a wide range of factors, including getting incentives right for those affected by the consolidation. External stakeholders often view a consolidation as working against their own interests. For example, agency clients and customers may have concerns about potential reduction in service or access to agency officials. Moreover, stakeholders frequently raise valid concerns on the basis of their familiarity with an agency’s operations, and the concerns need to be addressed openly and objectively. Failure to effectively engage with external stakeholders and understand and address their views can undermine or derail the initiative. 
We have found that, as a result, it is critical that agencies identify who the relevant external stakeholders are and develop a two-way communication strategy that both addresses their concerns and conveys the rationale for and overarching benefits associated with a consolidation initiative. According to Standards for Internal Control in the Federal Government, documenting management oversight of processes intended to improve the effectiveness and efficiency of operations provides reasonable assurance that the organization is addressing risks and being good stewards of government resources and achieving results. DHS officials acknowledged that without source documentation underlying the analysis behind the consolidation proposal, the full extent to which the reorganization options were considered is not discernable. By documenting its decision-making process, DHS would provide a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel, as well as communicate that knowledge as needed to affected parties. Additionally, attention to the key questions identified from our analysis of previous organizational consolidations would help provide DHS, Congress, and other stakeholders with assurance that important aspects of effective organizational change, including a consideration of the plan’s benefits and limitations, are addressed as part of the agency’s CBRNE reorganization decision-making process. Not fully considering these factors could make the agency’s decision to consolidate vulnerable to risk of failure, increased costs, or stakeholder skepticism. Should Congress approve its plan to consolidate, DHS could benefit from incorporating change management approaches such as the key practices and implementation steps derived from organizational transformations undertaken by large private and public sector organizations identified in our previous work. 
Doing so would help ensure that DHS’s consolidation initiative is results oriented, customer focused, and collaborative in nature. The Consolidated Appropriations Act, 2016, provides that none of the funds appropriated may be used to establish an Office of CBRNE Defense until Congress has authorized such establishment and, as of July 2016, Congress had not approved the proposed consolidation. As a result of this restriction, DHS officials told us they have taken few concrete steps to plan for or move forward with the consolidation. However, if Congress passes authorizing legislation, DHS intends to permanently establish the new CBRNE Office, transfer all requisite personnel, and announce a new leader for the office, according to DHS Office of Policy officials. As DHS was formed, we reported in July 2003 on key practices and implementation steps for mergers and organizational transformations. The factors listed in Table 2 were built on the lessons learned from the experiences of large private and public sector organizations. The practices outlined in our July 2003 report are intended to help agencies transform their cultures so that the federal government has the capacity to deliver its promises, meet current and emerging needs, maximize its performance, and ensure accountability. DHS has not evaluated each of these practices. According to DHS officials, the agency is awaiting congressional approval of the proposed consolidation before developing implementation steps. However, should DHS receive this approval to reorganize its CBRNE functions, consulting each of these practices would ensure that lessons learned from other organizations are considered. 
According to our prior work on organizational change, implementing large-scale change management initiatives, such as mergers and organizational transformations, is not a simple endeavor and requires the concentrated efforts of both leadership and employees to realize intended synergies and to accomplish new organizational goals. In addition, the practices will be helpful in a consolidated CBRNE environment. For example, overall employee morale differs among the components to be consolidated, as demonstrated by the difference in the 2015 employee satisfaction and commitment scores of DNDO and S&T, making employee involvement to gain their ownership for the transformation a key step to consider. Also, given the range of activities conducted by the consolidated entities, establishing a coherent mission and integrated strategic goals to guide the transformation will be important. Given the critical nature of DHS’s CBRNE mission, considering key factors from our previous work would help inform a consolidation effort should Congress approve it. The lessons learned by other organizations involved in substantial transformations could provide key insights for agency officials if they implement a reorganization, and attention to the factors we identified would improve the chances of a successful CBRNE consolidation. Preventing a terrorist attack in the United States remains the foundation of homeland security, especially when CBRNE threats continue to be enduring areas of concern. DHS’s CBRNE consolidation proposal is intended to centralize CBRNE functions within DHS headquarters while also becoming a focal point for CBRNE issues. However, limited information and analysis related to assessing the benefits and limitations of its consolidation plan prevent DHS from fully demonstrating how its consolidation will lead to an integrated, high-performance organization. 
Additionally, should Congress approve CBRNE consolidation at DHS, the department could improve the likelihood of a successful consolidation effort if lessons identified in our previous work are considered. To better provide Congress and affected stakeholders with assurance that important aspects of effective organizational change are addressed as part of the agency’s CBRNE reorganization decision-making process, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for the Office of Policy to complete, document, and make available analyses of key questions related to its consolidation proposal, including: what problems, if any, consolidation may create; a comparison of the benefits and costs the consolidation may entail; a broader range of external stakeholder input including a discussion of how it was obtained and considered. If DHS’s proposed CBRNE program consolidation is approved by Congress, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for the Office of Policy to use, where appropriate, the key mergers and organizational transformation practices identified in our previous work to help ensure that a CBRNE consolidated office benefits from lessons learned from other organizational transformations. We provided a draft of this report to DHS for comment. DHS provided technical comments, which we incorporated as appropriate. On July 14, 2016, DHS also provided written comments, reproduced in full in appendix I. DHS concurred with one of our two recommendations, and described actions planned to address it, but did not concur with the other. 
DHS did not concur with our first recommendation that the Secretary of Homeland Security direct the Assistant Secretary for the Office of Policy to complete, document, and make available analyses of key questions related to its consolidation proposal, including: what problems, if any, consolidation may create; a comparison of the benefits and costs the consolidation may entail; a broader range of external stakeholder input including a discussion of how it was obtained and considered. In its comments, DHS stated that completing a study to answer the questions raised in our report and inform a decision that has already been made is redundant. According to DHS, our recommendation does not acknowledge the extent to which these questions have been discussed both internally within DHS and externally with Congress. DHS indicated that it considered the costs and benefits of reorganization within the conduct of the 2013 study, the follow-on work in 2014, and senior leadership meetings as part of the decision-making process. According to DHS, the department reviewed its CBRNE programs and functions by analyzing organizational models and identified several alignment options, each with its own costs and benefits. As we stated in this report, committee report language directed DHS to include an assessment of whether consolidation could produce cost savings, and as of July 2016, DHS had not documented a comparison of benefits and costs for its consolidation plan. DHS officials told us that in 2013 they developed a rough cost estimate for the consolidation option, but provided no documentation or analysis supporting the estimate. Further, according to the CBRNE consolidation proposal DHS submitted to Congress in June 2015, additional analysis is required to determine if budgetary efficiencies can be gained by the recommended consolidation option. 
Based on our review of available CBRNE consolidation documentation and our prior work on evaluating consolidation proposals, we continue to believe that considering benefits and costs as part of the decision-making process for potential organizational consolidation is important as it would provide Congress and the executive branch the information needed to help effectively evaluate consolidation proposals. Also in its comments, DHS stated that both monetary and non-monetary costs associated with its proposed reorganization were considered. According to DHS, monetary costs of the proposed consolidation were within the current and planned budget of the affected organizations. DHS also indicated that non-monetary costs such as impact on appropriations and staff morale would likely result in increased benefits to operational effectiveness and efficiency and morale in the new office. Our report acknowledges that DHS considered potential up-front costs associated with a CBRNE consolidation; however, DHS did not document these costs or how they were considered during the reorganization decision-making process. We previously reported in May 2012 that consolidation initiatives often have up-front costs, and agencies must pay them before they can realize any intended gains or savings. For example, agencies may need to pay for equipment and furniture moves or fund employee transfers and buyouts. Based on our review of DHS’s proposal, the department did not fully consider similar potential expenses or up-front costs in developing its proposal. Our prior work has shown that a lack of up-front funding can prevent a potentially beneficial initiative from getting off the ground or derail an initiative already underway. Until DHS completes this analysis and documents its findings, we continue to believe that these potential challenges have yet to be mitigated. DHS commented that it consulted Congress on its proposed consolidation. 
Specifically, DHS commented that it provided briefings to the appropriate authorizing and appropriations committees on numerous occasions. Although the Department of Homeland Security CBRNE Defense Act of 2015 (H.R. 3875), which has passed the House, and the President’s budget submission for fiscal year 2017 include DHS’s proposed CBRNE reorganization, authorizing legislation has not been enacted. Implementing our recommendation to complete, document, and make available analyses of key questions related to DHS’s proposal would provide additional information to help decision-makers understand the basis and implications of the proposal. However, according to DHS, the passage of the Consolidated Appropriations Act, 2016 (P.L. 114-113) is a complicating factor. Specifically, DHS stated that the department is concerned that conducting any reorganization-related activities, including further study on the matter, may undermine the department’s original reorganization recommendation with Congress and disrupt ongoing authorizing legislation deliberations. Section 521 of the Consolidated Appropriations Act, 2016, provides that none of the funds appropriated may be used to “establish” an Office of CBRNE Defense until Congress has authorized such establishment. Although DHS cannot use appropriated funds to establish a CBRNE office without authorization, we believe that completing, documenting, and making available the analysis supporting the reorganization recommendation will not disrupt, but rather will assist in ongoing legislative deliberations by providing additional information to decision-makers. Also in its comments, DHS remarked that our report did not mention the department’s headquarters realignment that occurred between FY 2014 and FY 2015 as part of Secretary of Homeland Security Jeh Johnson’s Unity of Effort Initiative. 
According to DHS, we did not acknowledge how the proposed CBRNE consolidation would contribute to principal Unity of Effort objectives such as integrating broad and complete DHS mission spaces and empowering DHS components to effectively execute their missions. However, while the department’s Unity of Effort initiative was not the focus of our review, our report acknowledges that according to DHS officials, the CBRNE alignment options from the department’s 2013 report were updated in 2015 based on the Secretary’s Unity of Effort Initiative, to include transferring CBRNE threat and risk assessment functions from the DHS Science and Technology Directorate to the proposed CBRNE Office, as well as including the DHS Office for Bombing Prevention from the National Protection and Programs Directorate. Our report also recognizes that DHS’s CBRNE consolidation proposal is intended to centralize CBRNE functions within DHS headquarters while also becoming a focal point for CBRNE issues. We believe that the additional context provided by DHS, more closely tying its CBRNE consolidation to the department’s larger headquarters realignment efforts, further underscores the importance of our findings. As noted in our report, limited information and analysis related to assessing the benefits and limitations of its consolidation plan prevent DHS from fully demonstrating how its proposal will lead to an integrated, high-performance organization. DHS concurred with our second recommendation related to using, where appropriate, the key mergers and organizational transformation practices identified in our previous work to help ensure that a CBRNE consolidated office benefits from lessons learned from other organizational transformations. DHS stated that upon receiving congressional approval for its CBRNE consolidation plan, it will use GAO’s report on evaluating consolidation proposals as well as other resources to develop a detailed implementation plan as appropriate. 
These actions, if fully implemented, should address the intent of the recommendation. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security and selected congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions, please contact me at 404-679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the individual named above, Ben Atwater (Assistant Director) and Landis Lindsey (Analyst-in-Charge) managed this audit engagement. Chuck Bausell, Eric Hauswirth, Hayden Huang, Tracey King, Tovah Rom, Sarah Veale and Josiah Williams made significant contributions to this report.

Committee reports accompanying the Consolidated and Further Continuing Appropriations Act, 2013, directed DHS to undertake an in-depth review of the department's weapons of mass destruction programs, including potential consolidation of CBRNE mission functions. DHS conducted its review, and in June 2015 provided a report to Congress, including a proposal to consolidate the agency's core CBRNE functions. The Consolidated Appropriations Act, 2016, prohibits DHS from using funds to establish a CBRNE office until Congress approves it. GAO was asked to review the proposed consolidation of DHS's CBRNE programs. This report discusses: (1) the extent to which DHS's proposal assessed the benefits and limitations of consolidation and (2) GAO's key practices from past organizational transformations that could benefit DHS, should Congress approve the proposed consolidation. 
The Department of Homeland Security's (DHS) documentation related to its proposed consolidation of Chemical, Biological, Radiological, Nuclear and Explosives (CBRNE) programs offers some insights into benefits and limitations considered, but the information provided to GAO did not include several key factors to consider when evaluating an organizational consolidation. While developing its consolidation plan, DHS identified strategic goals, such as eight near-term goals to be achieved within the first two years. DHS also considered problems its consolidation is intended to solve, including providing a clearer focal point for external and DHS component engagement on CBRNE issues. However, DHS: Did not fully assess and document potential problems that could result from consolidation. Did not include a comparison of benefits and costs. Conducted limited external stakeholder outreach in developing the consolidation proposal and thus the proposal may not sufficiently account for stakeholder concerns. Attention to these key areas, identified from GAO's analysis of previous organizational consolidations, would help provide DHS, Congress, and other stakeholders with assurance that important aspects of effective organizational change are addressed as part of the agency's CBRNE reorganization decision-making process. Key mergers and organizational transformation practices identified in previous GAO work could benefit DHS if Congress approves the proposed CBRNE consolidation. GAO reported in July 2003 on key practices and implementation steps for mergers and organizational transformations that range from ensuring top leadership drives the transformation to involving employees in the implementation process to obtain their ideas and gain their ownership for the transformation. In addition, the practices would be helpful in a consolidated CBRNE environment. 
For example, overall employee morale differs among the components to be consolidated, making the key practice of employee involvement to gain their ownership for the transformation a crucial step. Also, given the wide range of activities conducted by the consolidated entities, the key practice of establishing a coherent mission and integrated strategic goals to guide the transformation will be important. The Consolidated Appropriations Act, 2016, prohibits DHS from using funds to establish a CBRNE office until Congress approves it, and, as of June 2016, Congress had not approved DHS's consolidation proposal. However, should DHS receive this approval, consulting GAO's key practices would help ensure that lessons learned from other organizations are considered. GAO recommends that DHS complete, document, and make available analyses associated with identifying: (1) unintended problems, if any, that consolidation may create; (2) a comparison of the consolidation's benefits and costs; and (3) a broader range of external stakeholder input. Although DHS did not concur, GAO continues to believe that findings documented in the report support the recommendation. DHS concurred with GAO's additional recommendation that should Congress approve DHS's plan, the department use key mergers and organizational transformation practices identified in previous GAO work. |
In their efforts to modernize their health information systems and share medical information, VA and DOD start from different positions. As shown in table 1, VA has one integrated medical information system—the Veterans Health Information Systems and Technology Architecture (VistA)—which uses all electronic records. All 128 VA medical sites thus have access to all VistA information. (Table 1 also shows, for completeness, VA’s planned modernized system and its associated data repository.) In contrast, DOD has multiple medical information systems (table 2 illustrates certain selected systems). DOD’s various systems are not integrated, and its 138 sites do not necessarily communicate with each other. In addition, not all of DOD’s medical information is electronic: some records are paper-based. For nearly a decade, VA and DOD have been undertaking initiatives to exchange data between their health information systems and create comprehensive electronic records. However, the departments have faced considerable challenges in project planning and management, leading to repeated changes in the focus and target completion dates of the initiatives. As shown in figure 1, the departments’ efforts have involved both long-term initiatives to modernize their health information systems and short-term initiatives to respond to more immediate information-sharing needs. The departments’ first initiative was the Government Computer-Based Patient Record (GCPR) project, which aimed to develop an electronic interface that would allow physicians and other authorized users at VA and DOD health facilities to access data from each other’s health information systems. The interface was expected to compile requested patient information in a virtual record (that is, electronic as opposed to paper) that could be displayed on a user’s computer screen. 
We reviewed the GCPR project in 2001 and 2002, noting disappointing progress exacerbated in large part by inadequate accountability and poor planning and oversight, which raised questions about the departments’ abilities to achieve a virtual medical record. We determined that the lack of a lead entity, clear mission, and detailed planning to achieve that mission made it difficult to monitor progress, identify project risks, and develop appropriate contingency plans. In both years, we recommended that the departments enhance the project’s overall management and accountability. In particular, we recommended that the departments designate a lead entity and a clear line of authority for the project; create comprehensive and coordinated plans that include an agreed-upon mission and clear goals, objectives, and performance measures; revise the project’s original goals and objectives to align with the current strategy; commit the executive support necessary to adequately manage the project; and ensure that it followed sound project management principles. In response, by July 2002, the two departments had revised their strategy, refocusing the project and dividing it into two initiatives. A short-term initiative, the Federal Health Information Exchange (FHIE), was to enable DOD to electronically transfer service members’ health information to VA when the members left active duty. VA was designated as the lead entity for implementing FHIE, which was completed in 2004. A longer-term initiative was to develop a common health information architecture that would allow a two-way exchange of health information. The common architecture is to include standardized, computable data, communications, security, and high-performance health information systems (these systems, DOD’s Composite Health Care System II and VA’s HealtheVet VistA, were already in development, as shown in the figure). 
The departments’ modernized systems are to store information (in standardized, computable form) in separate data repositories: DOD’s Clinical Data Repository (CDR) and VA’s Health Data Repository (HDR). The two repositories are to exchange information through an interface named CHDR. In March 2004, the departments began to develop the CHDR interface. They planned to begin implementation by October 2005; however, implementation of the first release of the interface (at one site) occurred in September 2006, almost a year beyond the target date. In a report in June 2004, we identified a number of management weaknesses that could have contributed to this delay and made a number of recommendations, including creation of a comprehensive and coordinated project management plan. The departments agreed with our recommendations and took steps to improve the management of the CHDR initiative, designating a lead entity with final decision-making authority and establishing a project management structure. However, as we noted in subsequent testimony, the initiative did not have a detailed project management plan that described the technical and managerial processes necessary to satisfy project requirements (including a work breakdown structure and schedule for all development, testing, and implementation tasks), as we had recommended. In October 2004, responding to a congressional mandate, the departments established two more short-term initiatives: the Laboratory Data Sharing Interface, aimed at allowing VA and DOD facilities to share laboratory resources, and the Bidirectional Health Information Exchange (BHIE), aimed at giving both departments’ clinicians access to records on shared patients (that is, those who receive care from both departments). As demonstration projects, these initiatives were limited in scope, with the intention of providing interim solutions to the departments’ needs for more immediate health information sharing. 
However, because BHIE provided access to up-to-date information, the departments’ clinicians expressed strong interest in expanding its use. As a result, the departments began planning to broaden this capability and expand its implementation considerably. Extending BHIE connectivity could provide each department with access to most data in the other’s legacy systems, until such time as the departments’ modernized systems are fully developed and implemented. According to a VA/DOD annual report and program officials, the departments now consider BHIE an interim step in their overall strategy to create a two-way exchange of electronic medical records. The departments’ reported costs for the various sharing initiatives and the modernization of their health information systems through fiscal year 2007 are shown in table 3. Beyond these initiatives, in January 2007, the departments announced a further change to their information-sharing strategy: their intention to jointly develop a new inpatient medical record system. On July 31, 2007, they awarded a contract for a feasibility study. According to the departments, adopting this joint solution is expected to facilitate the seamless transition of active-duty service members to veteran status, and make inpatient health care data on shared patients immediately accessible to both DOD and VA. In addition, the departments believe that a joint development effort could enable them to realize significant cost savings. We have not evaluated the departments’ plans or strategy for this new system. Throughout the history of these initiatives, evaluations besides our own have found deficiencies in the departments’ efforts, especially with regard to the lack of comprehensive planning. For example, a recent presidential task force identified the need for VA and DOD to improve their long-term planning. 
This task force, reporting on gaps in services provided to returning veterans, noted problems in sharing information on wounded service members, including the inability of VA providers to access paper DOD inpatient health records. The task force stated that although significant progress has been made towards sharing electronic information, more needs to be done, and recommended that VA and DOD continue to identify long-term initiatives and define the scope and elements of a joint inpatient electronic health record. In addition, in fiscal year 2006, Congress did not provide all the funding requested for HealtheVet VistA because it did not consider that the funding had been adequately justified. VA and DOD have made progress in both their long-term and short-term initiatives to share health information. In the long-term project to modernize their health information systems, the departments have begun, among other things, to implement the first release of the interface between their modernized data repositories. The departments have also made progress in their short-term projects to share information in existing systems, having completed two initiatives, and are making important progress on another. In addition, the departments have undertaken ad hoc activities to accelerate the transmission of health information on severely wounded patients from DOD to VA’s four polytrauma centers. However, despite the progress made and the sharing achieved, the tasks remaining to reach the goal of a shared electronic medical record are substantial. In their long-term effort to share health information, VA and DOD have completed the development of their modernized data repositories, agreed on standards for various types of data, and begun to populate the repositories with these data. In addition, they have now implemented the first release of the CHDR interface. 
According to the departments’ officials, all DOD sites can now access the interface, and it is expected to be available across VA when necessary software updates are released. (Currently 103 of 128 VA sites have received these updates.) At 7 sites, VA and DOD are now exchanging limited medical information for shared patients: specifically, computable outpatient pharmacy and drug allergy information. CHDR is the conduit for exchanging computable medical information between the departments. Data transmitted via the interface are permanently stored in each department’s new data repository (DOD’s CDR and VA’s HDR). Once in the repositories, these computable data can be used by DOD and VA at all sites through their existing systems. CHDR also provides terminology mediation (translation of one agency’s terminology into the other’s). The departments’ plans call for further developing the capability to exchange computable laboratory results data through the interface during fiscal year 2008. Although implementing this interface is an important accomplishment, the departments are still a long way from completing the modernized health information systems and comprehensive longitudinal health records. While DOD and VA had originally projected completion dates of 2011 and 2012, respectively, for their modernized systems, the departments’ officials told us that there is currently no scheduled completion date for either system. VA is evaluating a proposal that would result in completion of its system in 2015; DOD is evaluating the impact of the new study on a joint inpatient medical record and has not indicated a new completion date. Further, both departments still have to identify the next types of data to be stored in the repositories. The departments will then have to populate the repositories with the standardized data. This involves different tasks for each department.
Specifically, while VA’s medical records are already electronic, it must still convert them into the interoperable format appropriate for its repository. DOD, in addition to converting current records from its multiple systems, must also address medical records that are not automated. As pointed out by a recent Army Inspector General’s report, some DOD facilities are having problems with hard copy records. The report also identified inaccurate and incomplete health data as a problem to be addressed. Before the departments can achieve the long-term goal of seamless sharing of medical information, all of these tasks and challenges will have to be addressed. Accordingly, it is essential that the departments develop a comprehensive project plan to guide these efforts to completion, as we have previously recommended. In addition to the long-term effort previously described, the two departments have made some progress in meeting immediate needs to share information in their respective legacy systems through short-term projects which, as mentioned earlier, are in various stages of completion. They have also set up special processes to transfer data from DOD facilities to VA’s polytrauma centers in a further effort to more effectively treat traumatic brain injuries and other especially severe injuries. DOD has been using FHIE to transfer information to VA since 2002. According to DOD officials, 194 million clinical messages on more than 4 million veterans had been transferred to the FHIE data repository as of September 2007, including laboratory results, radiology results, outpatient pharmacy data, allergy information, consultation reports, elements of the standard ambulatory data record, and demographic data.
Further, since July 2005, FHIE has been used to transfer pre- and post-deployment health assessment and reassessment data; as of September 2007, VA had access to data for more than 793,000 separated service members and demobilized Reserve and National Guard members who had been deployed. Transfers are done in batches once a month, or weekly for veterans who have been referred to VA treatment facilities. According to a joint VA/DOD report, FHIE has made a significant contribution to the delivery and continuity of care of separated service members as they transition to veteran status, as well as to the adjudication of disability claims. One of the departments’ demonstration projects—the Laboratory Data Sharing Interface (LDSI)—is now fully operational and is deployed when local agencies have a business case for its use and sign an agreement. It requires customization for each locality and is currently deployed at nine locations. LDSI currently supports a variety of chemistry and hematology tests, and, at one of the nine locations, anatomic pathology and microbiology tests. Once LDSI is implemented at a facility, the only nonautomated action needed for a laboratory test is transporting the specimens. If a test is not performed at a VA or DOD doctor’s home facility, the doctor can order the test, the order is transmitted electronically to the appropriate lab (the other department’s facility or in some cases a local commercial lab), and the results are returned electronically. Among the benefits of the LDSI interface, according to VA and DOD, are increased speed in receiving laboratory results and decreased errors from manual entry of orders. The LDSI project manager in San Antonio stated that another benefit of the project is the time saved by eliminating the need to rekey orders at processing labs to input the information into the laboratories’ systems. 
Additionally, the San Antonio VA facility no longer has to contract out some of its laboratory work to private companies, but instead uses the DOD laboratory. Developed under a second demonstration project, the BHIE interface permits a medical care provider to query selected health information on patients from all VA and DOD sites and to view that data onscreen almost immediately. It not only allows the two departments to view each other’s information, but it also allows DOD sites to see previously inaccessible data at other DOD sites. VA and DOD have been making progress on expanding the BHIE interface. As initially developed, the interface provided access to information in VA’s VistA and DOD’s Composite Health Care System, but it is currently being expanded to query data in other DOD systems and databases. In particular, the interface has been expanded to three DOD systems: (1) the modernized data repository, CDR, which has enabled department-wide access to outpatient data for pharmacy and inpatient and outpatient allergy, radiology, chemistry, and hematology data since July 2007, and to microbiology data since September 2007; (2) the Clinical Information System (CIS), an inpatient system used by some DOD facilities, for which the interface enables bidirectional views of discharge summaries and is currently deployed at 13 large DOD sites; and (3) the Theater Medical Data Store, which became operational in October 2007, enabling access to inpatient and outpatient clinical information from combat theaters. The departments are also taking steps to make more data elements available through BHIE. VA and DOD staff told us that by the end of the first quarter of fiscal year 2008, they plan to add provider notes, procedures, and problem lists. Later in fiscal year 2008, they plan to add vital signs, scanned images and documents, family history, social history, and other history questionnaires.
In addition, a VA/DOD demonstration site in El Paso began sharing radiological images between the VA and DOD facilities in September 2007 using the BHIE/FHIE infrastructure. Although VA and DOD are sharing various types of health data, the type of data being shared has been limited and significant work remains to expand the data shared and integrate the various initiatives. Table 4 summarizes the types of health data currently shared via the long- and short-term initiatives we have described, as well as additional types of data that are currently planned for sharing. While this gives some indication of the scale of the tasks involved in sharing medical information, it does not depict the full extent of information that is currently being captured in the health information systems at VA and DOD. In addition to the information technology initiatives described, DOD and VA have set up special procedures to transfer medical information to VA’s four polytrauma centers, which treat active duty service members and veterans severely wounded in combat. Some examples of polytrauma include traumatic brain injury, amputations, and loss of hearing or vision. When service members are seriously injured in a combat theater overseas, they are first treated locally. They are then generally evacuated to Landstuhl Medical Center in Germany, after which they are transferred to a military treatment facility in the United States, usually Walter Reed Army Medical Center in Washington, D.C.; the National Naval Medical Center in Bethesda, Maryland; or Brooke Army Medical Center, at Fort Sam Houston, Texas. From these facilities, service members suffering from polytrauma may be transferred to one of VA’s four polytrauma centers for treatment. At each of these locations, the injured service members will accumulate medical records, in addition to medical records already in existence before they were injured. 
According to DOD officials, when patients are referred to VA for care, DOD sends copies of medical records documenting treatment provided by the referring DOD facility along with them. The DOD medical information is currently collected in several different systems: 1. In the combat theater, electronic medical information may be collected for a variety of reasons, including routine outpatient care, as well as serious injuries. These data are stored in the Theater Medical Data Store. As mentioned earlier, the BHIE interface to this database became operational in October 2007. 2. At Landstuhl, inpatient medical records are paper-based (except for discharge summaries). The paper records are sent with a patient as the individual is transferred for treatment in the United States. DOD officials told us that the paper record is the official DOD medical record, although AHLTA is used extensively to provide outpatient encounter information for medical records purposes. 3. At the DOD treatment facility (Walter Reed, Bethesda, or Brooke), additional inpatient information is recorded in CIS and outpatient pharmacy and drug information are stored in CDR; other health information continues to be stored in local CHCS databases. When service members are transferred to a VA polytrauma center, VA and DOD have several ad hoc processes in place to electronically transfer the patients’ medical information: DOD has set up secure links to enable a limited number of clinicians at the polytrauma centers to log directly into CIS at Walter Reed and Bethesda Naval Hospital to access patient data. Staff at Walter Reed, Brooke, and Bethesda medical centers collect paper records, print records from CIS, scan all these, and transmit the scanned data to the four polytrauma centers. DOD staff pointed out that this laborious process is feasible only because the number of polytrauma patients is small.
According to VA officials, 460 severe traumatic brain injury patients had been treated at the polytrauma centers through fiscal year 2007. According to DOD officials, the medical records for 81 patients planned for transfer or already at a VA polytrauma center were scanned and provided to VA between April 1 and October 11 of this year. Digital radiology images were also provided for 48 patients. Staff at Walter Reed and Bethesda are transmitting radiology images electronically to the four polytrauma centers. Access to radiology images is a high priority for polytrauma center doctors, but like scanning paper records, transmitting these images requires manual intervention: when each image is received at VA, it must be individually uploaded to VistA’s imagery viewing capability. This process would not be practical for large volumes of images. VA has access to outpatient data (via BHIE) from all DOD sites, including Landstuhl. These special efforts to transfer medical information on seriously wounded patients represent important additional steps to facilitate the sharing of information that is vital to providing polytrauma patients with quality health care. In summary, VA and DOD are exchanging health information via their long- and short-term initiatives and continue to expand sharing of medical information via BHIE. However, these exchanges have been limited, and significant work remains to fully achieve the goal of exchanging interoperable, computable data. Work still to be done includes agreeing to standards for the remaining categories of medical information; populating the data repositories with all this information; completing the development of HealtheVet VistA, and AHLTA; and transitioning from the legacy systems. 
To complete this work and achieve the departments’ ultimate goal of maintaining a lifelong electronic medical record that will follow service members as they transition from active to veteran status, a comprehensive and coordinated project management plan that defines the technical and managerial processes necessary to satisfy project requirements and to guide their activities continues to be of vital importance. We have previously recommended that the departments develop such a plan and that it include a work breakdown structure and schedule for all development, testing, and implementation tasks. Without such a detailed plan, VA and DOD increase the risk that the long-term project will not deliver the planned capabilities in the time and at the cost expected. Further, it is not clear how all the initiatives we have described today are to be incorporated into an overall strategy toward achieving the departments’ goal of a comprehensive, seamless exchange of health information. This concludes my statement. I would be pleased to respond to any questions that you may have. If you have any questions concerning this testimony, please contact Valerie C. Melvin, Director, Human Capital and Management Information Systems Issues, at (202) 512-6304 or [email protected]. Other individuals who made key contributions to this testimony are Barbara Oliver (Assistant Director), Nancy Glover, Glenn Spiegel, and Amos Tevelow. Computer-Based Patient Records: Better Planning and Oversight by VA, DOD, and IHS Would Enhance Health Data Sharing. GAO-01-459. Washington, D.C.: April 30, 2001. Veterans Affairs: Sustained Management Attention Is Key to Achieving Information Technology Results. GAO-02-703. Washington, D.C.: June 12, 2002. Computer-Based Patient Records: Short-Term Progress Made, but Much Work Remains to Achieve a Two-Way Data Exchange Between VA and DOD Health Systems. GAO-04-271T. Washington, D.C.: November 19, 2003.
Computer-Based Patient Records: Sound Planning and Project Management Are Needed to Achieve a Two-Way Exchange of VA and DOD Health Data. GAO-04-402T. Washington, D.C.: March 17, 2004. Computer-Based Patient Records: VA and DOD Efforts to Exchange Health Data Could Benefit from Improved Planning and Project Management. GAO-04-687. Washington, D.C.: June 7, 2004. Computer-Based Patient Records: VA and DOD Made Progress, but Much Work Remains to Fully Share Medical Information. GAO-05-1051T. Washington, D.C.: September 28, 2005. Information Technology: VA and DOD Face Challenges in Completing Key Efforts. GAO-06-905T. Washington, D.C.: June 22, 2006. DOD and VA Exchange of Computable Pharmacy Data. GAO-07-554R. Washington, D.C.: April 30, 2007. Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Are Far from Comprehensive Electronic Medical Records. GAO-07-852T. Washington, D.C.: May 8, 2007. Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Remain Far from Having Comprehensive Electronic Medical Records. GAO-07-1108T. Washington, D.C.: July 18, 2007. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Department of Veterans Affairs (VA) and the Department of Defense (DOD) are engaged in ongoing efforts to share medical information, which is important in helping to ensure high-quality health care for active-duty military personnel and veterans.
These efforts include a long-term program to develop modernized health information systems based on computable data: that is, data in a format that a computer application can act on (for example, to provide alerts to clinicians of drug allergies). In addition, the departments are engaged in short-term initiatives involving existing systems. GAO was asked to testify on the history and current status of the departments' efforts to share health information. To develop this testimony, GAO reviewed its previous work, analyzed documents about current status and future plans, and interviewed VA and DOD officials. For almost a decade, VA and DOD have been pursuing ways to share health information and to create comprehensive electronic medical records. However, they have faced considerable challenges in these efforts, leading to repeated changes in the focus of their initiatives and target completion dates. Currently, the two departments are pursuing both long- and short-term initiatives to share health information. Under their long-term initiative, the modern health information systems being developed by each department are to share standardized computable data through an interface between data repositories associated with each system. The repositories have now been developed, and the departments have begun to populate them with limited types of health information. In addition, the interface between the repositories has been implemented at seven VA and DOD sites, allowing computable outpatient pharmacy and drug allergy data to be exchanged. Implementing this interface is a milestone toward the departments' long-term goal, but more remains to be done.
Besides extending the current capability throughout VA and DOD, the departments must still agree to standards for the remaining categories of medical information, populate the data repositories with this information, complete the development of the two modernized health information systems, and transition from their existing systems. While pursuing their long-term effort to develop modernized systems, the two departments have also been working to share information in their existing systems. Among various short-term initiatives are a completed effort to allow the one-way transfer of health information from DOD to VA when service members leave the military, as well as ongoing demonstration projects to exchange limited data at selected sites. One of these projects, which builds on the one-way transfer capability, developed an interface between certain existing systems that allows a two-way view of current data on patients receiving care from both departments. VA and DOD are now expanding the sharing of additional medical information by using this interface to link other systems and databases. The departments have also established ad hoc processes to meet the immediate need to provide data on severely wounded service members to VA's polytrauma centers, which specialize in treating such patients. These processes include manual workarounds (such as scanning paper records) that are generally feasible only because the number of polytrauma patients is small. While these multiple initiatives and ad hoc processes have facilitated degrees of data sharing, they nonetheless highlight the need for continued efforts to integrate information systems and automate information exchange. At present, it is not clear how all the initiatives are to be incorporated into an overall strategy focused on achieving the departments' goal of comprehensive, seamless exchange of health information. |
DOD Instruction 5100.73, Major DOD Headquarters Activities, defines major headquarters activities as those headquarters (and the direct support integral to their operation) whose primary mission is to manage or command the programs and operations of DOD, its components, and their major military units, organizations, or agencies. The instruction provides an official list of the organizations that it covers, including OSD; the Joint Staff; the Offices of the Secretary of the Army and Army Staff; the Office of the Secretary of the Navy and Office of the Chief of Naval Operations; Headquarters, Marine Corps; and the Offices of the Secretary of the Air Force and Air Staff. These organizations have responsibilities that include developing guidance, reviewing performance, allocating resources, and conducting mid-to-long-range budgeting as they oversee, direct, and control subordinate organizations or units. In addition to OSD, the Joint Staff, and the secretariats and staffs of the military services, other headquarters organizations include portions of the defense agencies, DOD field activities, and the combatant commands, along with their subordinate unified commands and respective service component commands. OSD is responsible for assisting the Secretary of Defense in carrying out his or her duties and responsibilities for the management of DOD. These include policy development, planning, resource management, and fiscal and program evaluation responsibilities. The staff of OSD comprises military and civilian personnel and contracted services. While military personnel may be assigned to permanent duty in OSD, the Secretary may not establish a military staff organization within OSD.
The Joint Staff is responsible for assisting the Chairman of the Joint Chiefs of Staff, the military advisor to the President, in accomplishing his responsibilities for the unified strategic direction of the combatant forces; their operation under unified command; and their integration into a team of land, naval, and air forces. The Joint Staff is tasked to provide advice and support to the Chairman and the Joint Chiefs on matters including personnel, intelligence doctrine and architecture, operations and plans, logistics, strategy, policy, communications, cyberspace, joint training and education, and program evaluation. In addition to civilian personnel and contracted services, the Joint Staff comprises military personnel who represent, in approximately equal numbers, the Army, the Navy and Marine Corps, and the Air Force. The Office of the Secretary of the Army has sole responsibility within the Office of the Secretary and the Army Staff for the following functions: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. Additionally, there is an Army Staff, which is to furnish professional assistance to the Secretary and the Chief of Staff of the Army. Headquarters functions to be performed by the Army Staff include, among others, recruiting, organizing, training, and equipping of the Army. The staffs of the Office of the Secretary of the Army and the Army Staff comprise military and civilian personnel and contracted services. The Office of the Secretary of the Navy is solely responsible within the Office of the Secretary of the Navy, the Office of the Chief of Naval Operations, and the Headquarters, Marine Corps, for oversight of the following functions: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs.
The Office of the Chief of Naval Operations is to provide professional assistance to the Secretary and Chief of Naval Operations in preparing for the employment of the Navy in areas such as: recruiting, organizing, supplying, equipping, and training. The Marine Corps also operates under the authority, direction, and control of the Secretary of the Navy. Headquarters, Marine Corps, consists of the Commandant of the Marine Corps and staff who are to provide assistance in preparing for the employment of the Marine Corps in areas such as recruiting, organizing, supplying, equipping, and training. The staffs of the Office of the Secretary of the Navy, Office of the Chief of Naval Operations, and Headquarters, Marine Corps, comprise military and civilian personnel and contracted services. The Office of the Secretary of the Air Force has sole responsibility and oversight for the following functions across the Air Force: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. Additionally, there is an Air Staff, which is to furnish professional assistance to the Secretary and the Chief of Staff of the Air Force. The headquarters functions to be performed by the Air Staff include recruiting, organizing, training, and equipping of the Air Force, among others. The staffs of the Office of the Secretary of the Air Force and the Air Staff comprise military and civilian personnel and contracted services (10 U.S.C. § 8014). In 2013, the Secretary of Defense set a target for reducing DOD components’ total management headquarters budgets by 20 percent for fiscal years 2014 through 2019, including costs for civilian personnel and contracted services, while striving for a goal of 20 percent reductions to authorized military and civilian personnel. However, the department has not finalized its reduction plans.
OSD experienced an overall increase in its authorized military and civilian positions from fiscal years 2001 through 2013, representing a net increase of 20 percent from 2,205 authorized positions in fiscal year 2001 to 2,646 authorized positions in fiscal year 2013. Since fiscal year 2011, OSD’s authorized positions have slightly decreased from their peak levels. The number of authorized military and civilian positions within the Joint Staff remained relatively constant since fiscal year 2005, the first year we could obtain reliable data, at about 1,262 authorized positions, with an increase in fiscal year 2012 to 2,599 positions, which Joint Staff officials said was associated with the realignment of duties from U.S. Joint Forces Command after its disestablishment. These OSD and Joint Staff trends are illustrated in figure 1. The military service secretariats and staffs also experienced varied increases in their number of authorized military and civilian positions from fiscal years 2001 through 2013. These increases are attributed to increased mission responsibilities for the war and other directed missions such as business transformation, sexual assault response and prevention, and cyber. In addition, DOD officials said that converting functions performed by contracted services to civilian positions and the transfer of positions from other organizations also contributed to the increases. However, military service officials said that DOD-wide initiatives and service-specific actions since fiscal year 2010 have generally begun to slow these increases or resulted in declines, as illustrated in figure 3. DOD identified planned savings in its fiscal year 2015 budget submission, but it is unclear how the department will achieve those savings or how the reductions will affect the headquarters organizations in our review.
In 2013, the Secretary of Defense set a target for reducing the headquarters budgets by 20 percent, to include costs for civilian personnel, contracted services, facilities, information technology, and other costs that support headquarters functions. DOD budget documents project the reductions will yield the department a total savings of about $5.3 billion from fiscal years 2015 through 2019, with most savings coming in 2019; however, specific details of the reductions through fiscal year 2019 were not provided. Moreover, in June 2014, we found that the starting point for the reductions was not clearly defined, so it is difficult to assess whether these projected savings reflect meaningful savings when the reductions are a small portion of DOD’s budget. DOD was required by Section 904 of the National Defense Authorization Act for Fiscal Year 2014 to report its efforts to streamline management headquarters in June 2014. DOD provided Congress with an interim response stating that, due to the recent turnover of key staff, it would not develop its initial plan on streamlining until the end of summer 2014. As of December 2014, DOD’s plan had not been issued. Officials from the headquarters organizations in this review stated that they are using different processes to identify the 20 percent reductions to their operating budgets. DOD’s guidance called for components to achieve a 20 percent reduction to their headquarters operating budgets, while striving for a goal of 20 percent reductions to authorized military and civilian personnel. According to DOD officials, this flexibility allows DOD components to determine the most cost-effective workforce—retaining military and civilian personnel while reducing dollars spent on contracted services. For example, OSD officials stated that the Under Secretaries of Defense were asked to strive for a goal of reducing their operating budgets by 20 percent.
However, some OSD senior officials stated that it was unfair to smaller OSD offices, such as General Counsel, Public Affairs, and Legislative Affairs, to take the same reduction as the larger offices, and consequently OSD elected to take larger reductions from the larger offices of OSD Policy; Acquisitions, Technology and Logistics; Intelligence; and Personnel and Readiness. OSD officials added that they are in the process of determining how best to apply the budget reductions, preferably through attrition. Overall, DOD projected the reductions will result in at least $1 billion in savings for OSD’s headquarters over a 5-year period, but it is unclear what the size will ultimately be. The Joint Staff projects reductions of about $450,000 from fiscal year 2015 through fiscal year 2019. Joint Staff officials stated that they plan to reduce the number of authorized positions by about 150 civilian positions (about 14 percent of their fiscal year 2013 authorized civilian positions) and by about 160 military positions (about 11 percent of their fiscal year 2013 authorized military positions). Specifics about the plans for the military service secretariats and staffs were also in development, as of December 2014. Army officials estimate a reduction of about 560 civilian full-time equivalent positions in the Army Secretariat and Army Staff (about 21 percent of fiscal year 2013 authorized civilian positions); however, the officials said that the reductions in military positions will be determined through an Army review of military personnel in time for the fiscal year 2017 budget submission. Additionally, in July 2014, the Secretary of the Army announced plans for an additional review to determine the optimal organization and strength and, subsequently, any adjustment of programmed reductions in Headquarters, Department of the Army, that is to be completed by March 2015. 
Navy officials stated that the Navy will take 20 percent reductions in both civilian and military personnel, but the exact reductions through fiscal year 2019 would not be available before the issuance of the Section 904 report to Congress. A Marine Corps official stated that after submitting its fiscal year 2015 budget information, the Marine Corps conducted a structural review over a period of 6 to 8 months that identified a larger number of positions in Headquarters, Marine Corps, that should be subject to the reduction. The official further stated that these changes should better position the Marine Corps to more accurately report its headquarters structure for the fiscal year 2016 budget, but added that the actual reductions would likely be different than it originally estimated for fiscal year 2015. The revised Marine Corps data were not available as of January 2015. More specific information was available from the Air Force. In July 2014, the Air Force completed its management headquarters review and notified Congress of its reorganization plans, including a reduction of 300 authorized military and civilian positions (about 12 percent of fiscal year 2013 authorized positions) and a 20 percent reduction to the headquarters operating budgets for the Air Force Secretariat and Air Staff by fiscal year 2019. The headquarters organizations we reviewed—OSD, the Joint Staff, and the secretariats and staffs for the Army, Navy, and Air Force, and Headquarters, Marine Corps—do not determine their personnel requirements as part of a systematic requirements-determination process, nor do they have procedures in place to ensure that they periodically reassess them as outlined in DOD and other guidance. Current personnel levels for these headquarters organizations are traceable to statutory limits enacted during the 1980s and 1990s to force efficiencies and reduce duplication. 
However, these limits have been waived since fiscal year 2002 and have little practical utility because of statutory exceptions to certain categories of personnel and because the limits do not include personnel in supporting organizations that perform headquarters-related functions. OSD, the Navy, and the Marine Corps have recognized problems with their existing requirements-determination processes and are beginning to take steps to modify their processes, but their efforts are not yet complete. Without systematic determinations of personnel requirements and periodic reassessments of them using organizational and workforce analyses, DOD will not be well-positioned to proactively identify efficiencies and limit personnel growth within these headquarters organizations. Moreover, until such requirements are determined, Congress will not have the information needed to reexamine existing statutory limits. Most of the DOD headquarters organizations that we reviewed are subject to statutory limits on the number of authorized personnel, although these limits have been waived since fiscal year 2002 and are of limited utility due to statutory exceptions and exclusions of certain personnel. Congress placed statutory limits on authorized military and civilian personnel for the military departments’ secretariats and staffs in 1986, in part, to force a comprehensive management review of duplication and identify effective solutions to existing personnel duplication among the services. In 1996, Congress also established a statutory limit for OSD military and civilian personnel because it was concerned about the growth of OSD personnel despite a declining defense budget and military force structure. The military departments’ statutory limits were set at 85 percent of the total number of personnel in the secretariats and military staffs prior to 1986, while the OSD statutory limit represented a 15 percent reduction from 1994 personnel levels. 
The Joint Staff is not currently subject to a statutory limit. Although Congress placed statutory limits on the OSD and the military departments’ secretariats and military staffs, the President has declared a national emergency each year from fiscal years 2002 to 2014, which had the effect of waiving the limits for the military departments each year. While the limits have been waived, officials from the Army, Navy, and Air Force stated that they seek to keep their number of authorized military and civilian positions within or close to these limits because the waiver is valid only for 1 year at a time, and they are uncertain whether the waiver will be granted again. However, we found the secretariats and military staffs of the departments of the Army and Navy have totals for fiscal year 2013 that would exceed the existing statutory limits were they in effect. Table 1 shows the statutory limits of the headquarters organizations that we reviewed and the total number of authorized positions they reported in fiscal year 2013, including, where applicable, the percentage by which they vary from the statutory limits. In addition, the numbers of authorized military and civilian positions counted against the statutory limits may not accurately reflect or be inclusive of all personnel supporting the headquarters due to statutory exceptions and the exclusion of certain personnel in support organizations conducting headquarters-related functions. Beginning in fiscal year 2009, Congress provided exceptions to the limitations on personnel for certain categories of acquisition personnel and for those hired pursuant to a shortage category designated by the Secretary of Defense or the Director of the Office of Personnel Management. These exceptions to the limitations on personnel allow DOD to adjust its baseline personnel limitation or exclude certain personnel from the limitation.
For example, the Army reported for fiscal year 2015 that it has 1,530 military and civilian personnel that are subject to these exceptions and therefore do not count against its statutory limits. An official in OSD’s Office of the Under Secretary for Personnel and Readiness told us that the exceptions that were added to the statutory limits as of fiscal year 2009 make the statutory limits virtually obsolete. The statutory limits also do not apply to personnel in supporting organizations to the military service secretariats and staffs who do perform headquarters-related functions. For example, the Army and Air Force each have some personnel within their field operating agencies that support their military service secretariats or staffs in accomplishing their mission but which we found are not subject to the statutory limits. Organizations that support the Air Force Secretariat and Air Staff in conducting their mission include, but are not limited to, the U.S. Air Force Cost Analysis Agency, the U.S. Air Force Inspection Agency, the U.S. Air Force Personnel Center, and the U.S. Air Force Audit Agency, and include thousands of personnel. As illustrated in figure 4, in the case of the Army, the organizations and agencies that support the Army Secretariat and Army Staff are almost three times as large as the Secretariat and Staff, and include the U.S. Army Finance Command, the U.S. Army Manpower Analysis Agency, and the U.S. Army Force Management Support Agency, among others. By contrast, elements of the Washington Headquarters Services, a support organization for OSD, are included in OSD’s statutory limits. This means that some personnel in the Washington Headquarters Services who conduct management headquarters-related functions count toward OSD’s statutory limit. 
In addition, the applicable statute contains a provision limiting OSD’s ability to reassign functions; specifically, that DOD may not reassign functions solely in order to evade the personnel limitations required by the statute. The statutes governing personnel limitations for the military services’ secretariats and staffs do not contain similar limitations on the military services’ ability to reassign headquarters-related functions elsewhere. Military service officials have explained that the existing statutory limits preclude organizational efficiencies by causing them to move personnel performing headquarters-related functions elsewhere within the department, including the field operating agencies. In addition, DOD officials also stated the statutory limits may have unintended consequences, such as causing DOD to use contracted services to perform headquarters-related tasks when authorized military and civilian personnel are unavailable; this contractor workforce is not subject to the statutory limits. We also found that Headquarters, Marine Corps, plans to revise the number of military and civilian personnel it counts against the statutory limits to exclude certain personnel. Officials in Headquarters, Marine Corps, said that, unlike their counterparts in the other three services, their headquarters is not entirely a management headquarters activity, because it incorporates some nonheadquarters functions for organizational and efficiency reasons, and thus the limits should not apply to those personnel. However, this planned change seems in contradiction with the intent of the statute to establish a limit on personnel within the Navy Secretariat, Office of the Chief of Naval Operations, and Headquarters, Marine Corps. Also, DOD Instruction 5100.73, Major DOD Headquarters Activities, states that Headquarters, Marine Corps, is a management headquarters organization in its entirety, which would include all its personnel and operating costs.
Marine Corps officials told us that DOD plans to revise DOD Instruction 5100.73 to classify only certain functions within Headquarters, Marine Corps, as management headquarters activities. According to an official, Headquarters, Marine Corps,’ personnel totals in fiscal year 2013 do not reflect these changes and may account for the large percentage difference between the existing statutory limits and the number of Navy and Marine Corps authorized personnel in fiscal year 2013. An official from the Department of the Navy also noted that they have not reexamined the number of personnel who would fall under the statutory limits since the limit was first waived in September 2001. According to internal-control standards for the federal government, information should be recorded and communicated to others who need it in a form that enables them to carry out their responsibilities. An organization must have relevant, reliable, and timely communications as well as information needed to achieve the organization’s objectives. However, DOD’s headquarters reporting mechanism to Congress, the Defense Manpower Requirements Report, reflects a lack of key information. This annual report to Congress includes information on the number of military and civilian personnel assigned to major DOD headquarters activities in the preceding fiscal year and estimates of such numbers for the current and subsequent fiscal year, as well as the amount of any adjustment in personnel limits made by the Secretary of Defense or the secretary of a military department. However, in the most recent report for fiscal year 2015, only the Army reports information on the number of baseline personnel within the Army Secretariat and Army Staff that count against the statutory limits, along with the applicable adjustments to the limits. 
Similar information for OSD, the Air Force Secretariat and Air Staff, the Navy Secretariat, the Office of the Chief of Naval Operations, and Headquarters, Marine Corps, is not included because DOD’s reporting guidance does not require this information. Without information to identify what personnel in each organization are being counted against the statutory limits, it will be difficult for Congress to determine whether the existing statutory limits are effective in limiting personnel growth within the department or should be revised to reflect current requirements. While the organizations we reviewed are currently assessing their personnel requirements—driven by department-wide efforts to reduce management overhead in response to budget constraints—we found that none of the headquarters organizations within our review have determined their personnel requirements as part of a systematic requirements-determination process. Such systematic personnel-requirements processes are considered a good human-capital practice across government, including DOD, and these processes include certain key elements. Among these elements are that organizations should (1) identify an organization’s mission, functions, and tasks; and (2) determine the minimum number and type of personnel—military personnel, civilian personnel, and contracted services—needed to fulfill those missions, functions, and tasks by conducting a workforce analysis. Such a workforce analysis should identify mission-critical competencies as well as gaps and deficiencies, and systematically define the size of the total workforce needed to meet organizational goals. By contrast, the headquarters organizations we reviewed use authorized personnel levels from the previous year as a baseline from which to generate any new requirements, and these personnel levels are ultimately based not on a workforce analysis but on the statutory limits that were established by Congress in the 1980s and 1990s.
According to DOD officials, it is more difficult to determine personnel requirements for OSD, military service secretariats, or military staffs, whose tasks include developing policy or strategy, than it is for military services’ major commands or units that have distinct tasks, such as repairing aircraft or conducting ship maintenance. DOD officials stated that headquarters organizations’ workload is unpredictable and not only includes traditional policy and oversight responsibilities, but also managing unforeseen events and initiatives, such as the Fort Hood shooting, Secretary of Defense-directed reductions, and responding to congressionally mandated reviews or reports. However, systematically determining personnel requirements for the total force—military personnel, civilian personnel, and contracted services—by conducting a workforce analysis, rather than relying on historic personnel levels and existing statutory limits, would better position these headquarters organizations to respond to unforeseen events and initiatives by allowing them to identify critical mission requirements as well as mitigate risks to the organizations’ efficiency and effectiveness. Without such determination of personnel requirements for the total force, DOD headquarters organizations may not be well positioned to identify opportunities for efficiencies and reduce the potential for headquarters-related growth. In addition, submitting these personnel requirements to Congress would provide Congress with key information to determine whether the existing statutory limits on military and civilian personnel are effective in limiting headquarters personnel growth. In addition to not systematically determining their personnel requirements, we also found that the headquarters organizations do not have procedures in place to ensure that they periodically reassess these personnel requirements.
This is contrary to guidance from DOD and all of the military services suggesting that they conduct periodic reassessments of their personnel requirements. For example, DOD guidance states that existing policies, procedures, and structures should be periodically evaluated to ensure efficient and effective use of personnel resources, and that assigned missions should be accomplished using the least costly mix of military, civilian and contractor personnel. Moreover, the military services have more specific guidance indicating that personnel requirements should be established at the minimum essential level to accomplish the required workload and should be periodically reviewed. For example, the Air Force states that periodic reviews should occur at least every 2 years. In addition, systematic personnel requirements processes are considered a good human-capital practice across government, including in DOD. These practices call for organizations to have personnel requirements-determination processes that, among other things, reassess personnel requirements by conducting analysis on a periodic basis to determine the most efficient choices for workforce deployment. These reassessments should include analysis of organizational functions to determine appropriate structure, including identifying any excess organizational layers or redundant operations, and workforce analysis to determine the most effective workloads for efficient functioning. None of the headquarters organizations we reviewed have procedures in place to ensure that they periodically reassess their personnel requirements. This is unlike the military services’ major commands or units, for which officials within the military departments stated they do reassess personnel requirements. 
While Navy officials stated that the Navy may occasionally reassess the requirements for a particular organization within the Secretariat or Office of the Chief of Naval Operations, such reassessments are conducted infrequently and without the benefit of a standardized methodology. Officials at Headquarters, Marine Corps, stated that they are beginning to implement a new requirements-determination process, which requires commanders to conduct an annual analysis to determine their organizations’ personnel requirements. However, this process is not expected to be fully implemented until October 2015. Officials from headquarters organizations that we reviewed said that they do not periodically reassess personnel requirements because their organization’s requirements do not change much from year to year and they adjust requirements when new missions or tasks are assigned to their organization. DOD officials also maintained that the process of reassessing these personnel requirements would be lengthy and require an increase in personnel to conduct the analysis. Officials also stated that they believe the department’s recent efficiency efforts have allowed their organizations to reassess personnel requirements and identify opportunities for efficiencies. For example, officials stated that they conducted comprehensive reviews of their organizations’ personnel requirements as part of the effort to identify efficiencies as directed by former Secretary of Defense Robert Gates in 2010, as part of the OSD organizational review conducted by former Secretary of the Air Force Mike Donley in 2013, and most recently as part of Secretary of Defense Chuck Hagel’s effort to reduce management headquarters. However, these reviews have generally been ad hoc and done in response to internally driven or directed reductions, rather than as part of the organization’s systematic requirements-determination process. 
Conducting periodic reassessments as part of a systematic requirements-determination process, rather than in response to various DOD-directed efforts, would allow headquarters organizations to proactively identify any excess organizational layers or redundant operations and to inform decision making during any future efficiency efforts and budget reviews. In addition, reassessments of personnel requirements could occur periodically, not necessarily annually, thereby lessening the amount of time and labor that headquarters organizations devote to conducting reassessments. For example, Army guidance states that such reassessments should occur every 2 to 5 years. Without periodic reassessment of personnel requirements for the total force, the headquarters organizations in our review will not be well positioned to identify opportunities for efficiencies and limit personnel growth. All but one of the organizations we reviewed have recognized problems with requirements determination and some are beginning to take steps to modify their related processes, but these efforts are not yet complete. For example, OSD conducted a set of studies, directed by the Secretary of Defense in December 2013, aimed at further improving management and administration of personnel. According to OSD officials, the data and insights from these studies will inform DOD-wide business process and system reviews being directed by the Deputy Secretary of Defense. For example, officials stated that an OSD-wide process for determining and reassessing personnel requirements may replace the current process whereby each OSD office sets its personnel requirements individually. OSD officials also stated that the new process, if implemented, might include a standard methodology to help OSD conduct a headquarters workforce analysis and determine and periodically reassess its personnel requirements.
DOD did not provide a time frame for implementing the results of the studies and did not confirm whether implementation would include establishment of an OSD-wide personnel requirements-determination process. For its part, the Department of the Navy issued the Navy Shore Manpower Requirements Determination Final Report (revised July 17, 2013), which addressed a methodology for analyzing workload and determining and assessing personnel requirements. Based on this report, the Navy is conducting its own review of the shore personnel requirements-determination process, with the goal of establishing guidance for use in 2015. In 2011, the Marine Corps developed a standardized approach, known as the Strategic Total Force Management Planning process, for determining and reassessing headquarters personnel requirements on an annual basis. According to Marine Corps officials and guidance, this process requires commanders to annually assess their organization’s mission, analyze its current and future organizational structures, conduct a gap analysis, and develop, execute, and monitor a plan of action to address any gaps. The Marine Corps is currently revising its guidance to reflect this new process, and commanders are not required to develop their requirements and submit an action plan until October 2015. Despite these efforts, none of these processes have been fully implemented or reviewed. Therefore, it is too early to know whether the new processes will reflect the key elements of a personnel requirements-determination process by enabling the organizations to identify missions, systematically determine personnel requirements, and reassess them on a periodic basis using organizational and workforce analysis. Over the past decade, OSD, the Joint Staff, and the military service secretariats and staffs have grown to manage the increased workload and budgets associated with a military force engaged in conflict around the world.
Today, DOD is facing a constrained budget environment and has stated that it needs to reduce the size of its headquarters, to include all components of its workforce—military personnel, civilian personnel, and contracted services. DOD and the military services have undertaken reviews to reduce headquarters, but these budget-driven efforts have not been the result of systematic determinations of personnel needs. Statutory limits on these headquarters have been waived since 2002, but these limits would likely be counterproductive today were the waiver dropped, because they were set in the 1980s and 1990s and are inconsistently applied due to statutory exceptions and DOD’s exclusion of personnel conducting headquarters-related functions. Specifically, these limits omit personnel in supporting organizations to the military service secretariats and staffs that perform headquarters-related functions. Because of these exceptions and omissions, the statutory limits may be of limited utility in achieving Congress’s original aim of stemming the growth of headquarters personnel and reducing duplication of effort. The existing statutory limits encourage the headquarters organizations to manage the number of military and civilian personnel requirements at or near the limit, according to DOD officials, rather than using a systematic requirements-determination process that establishes the total force that is truly needed and whether any efficiencies can be identified. Headquarters organizations in our review have not systematically determined how many personnel they need to conduct their missions. While some organizations have begun to take such steps, their plans are not firm and their processes have not been finalized.
Unless the organizations conduct systematic analyses of their personnel needs for the total force and establish and implement procedures to ensure that they periodically reassess those requirements, the department will lack assurance that its headquarters are sized appropriately. Looking to the future, systematically determining personnel requirements and conducting periodic reassessments could inform decision making during any future efficiency efforts and support budget formulation. In addition, determining these personnel requirements and submitting the results to Congress as part of DOD’s Defense Manpower Requirements Report or through separate correspondence, along with any recommendations about adjustments needed to the statutory limits, could form a foundation upon which Congress could reexamine the statutory limits, as appropriate. To ensure that headquarters organizations are properly sized to meet their assigned missions and use the most cost-effective mix of personnel, and to better position DOD to identify opportunities for more efficient use of resources, we recommend that the Secretary of Defense direct the following three actions: conduct a systematic determination of personnel requirements for OSD, the Joint Staff, and the military services’ secretariats and staffs, which should include analysis of mission, functions, and tasks, and the minimum personnel needed to accomplish those missions, functions, and tasks; submit these personnel requirements, including information on the number of personnel within OSD and the military services’ secretariats and staffs that count against the statutory limits, along with any applicable adjustments to the statutory limits, in the next Defense Manpower Requirements Report to Congress or through separate correspondence, along with any recommendations needed to modify the existing statutory limits; and establish and implement procedures to conduct periodic reassessments of personnel requirements within OSD and the
military services’ secretariats and staffs. Congress should consider using the results of DOD’s review of headquarters personnel requirements to reexamine the statutory limits. Such an examination could consider whether supporting organizations that perform headquarters functions should be included in statutory limits and whether the statutes on personnel limitations within the military services’ secretariats and staffs should be amended to include a prohibition on reassigning headquarters-related functions elsewhere. We provided a draft of this report to DOD for review and comment. In written comments on a draft of this report, DOD partially concurred with the three recommendations and raised concerns regarding what it believes is a lack of appropriate context in the report. DOD’s comments are summarized below and reprinted in their entirety in appendix IX. In its comments, DOD raised concerns that the report lacks perspective when characterizing the department’s headquarters staff, stating that it is appropriate for the department to have a complex and multi-layered headquarters structure given the scope of its missions. We agree that DOD is one of the largest and most complex organizations in the world, and make note of its many broad and varied responsibilities in our report. Notwithstanding these complexities, the department itself has repeatedly recognized the need to streamline its headquarters structure. For example, in 2010, the Secretary of Defense expressed concerns about the dramatic growth in DOD’s headquarters and support organizations that had occurred since 2001, and initiated a series of efficiency initiatives aimed at stemming this growth. The Secretary of Defense specifically noted the growth in the bureaucracy that supports the military mission, especially the department’s military and civilian management layers, and called for an examination of these layers. 
In addition, in January 2012, the administration released defense strategic guidance that calls for DOD to continue to reduce the cost of doing business, which includes reducing the rate of growth in personnel costs and finding further efficiencies in overhead and headquarters, in its business practices, and in other support activities. Our report discusses some of the department’s efficiency-related efforts, and thus we believe it contains appropriate perspective. DOD also expressed concerns that the report lacks appropriate context when addressing the causes for workforce growth, stating that such growth was in response to rapid mission and workload increases, specific workforce-related initiatives, realignments, streamlining operations, and reducing redundancies and overhead. Our draft report noted some of these causes of headquarters workforce growth, but we have added additional information to the report on other causes, such as increased mission responsibilities for the war and other directed missions such as business transformation, intelligence, cyber, suicide prevention, sexual assault response and prevention, wounded warrior care, family support programs, transition assistance and veterans programs, to provide context and address DOD’s concerns. DOD partially concurred with the first recommendation that the Secretary of Defense direct a systematic determination of the personnel requirements of OSD, the Joint Staff, and the military services’ secretariats and staffs, which should include analysis of mission, functions, and tasks, and the minimum personnel needed to accomplish those missions, functions, and tasks. The department noted in its letter that it will continue to use the processes and prioritization that are part of the Planning, Programming, Budgeting, and Execution process, and will also investigate other methods for aligning personnel to missions and priorities.
DOD also stated that it is currently conducting Business Process and System Reviews of the OSD Principal Staff Assistants, defense agencies, and DOD field activities to aid in documenting mission responsibilities to resource requirements. However, the department did not provide any details specifying whether any of these actions would include a workforce analysis to systematically determine personnel requirements, rather than continuing to rely on historic personnel levels and existing statutory limits as the basis for those requirements, nor does the department acknowledge the need for such analysis. Moreover, according to DOD’s implementation guidance for the Business Process and Systems Review, which we reference in our report, this review is focused on business processes and supporting information technology systems within certain defense headquarters organizations, rather than a systematic determination of personnel requirements for those organizations. DOD also stated in its comments that headquarters staff provide knowledge continuity and subject matter expertise and that a significant portion of their workload is unpredictable. We agree, but believe that headquarters organizations would be better positioned to respond to unforeseen events and initiatives if their personnel requirements were based on workforce analysis, which would allow them to identify critical mission requirements as well as mitigate risks to the organizations’ efficiency and effectiveness while still responding to unpredictable workload. Without a systematic determination of personnel requirements, DOD headquarters organizations may not be well positioned to identify opportunities for efficiencies and reduce the potential for headquarters-related growth. Several headquarters organizations provided comments on their specific requirements determination processes in connection with this first recommendation. 
The Army noted that it has an established headquarters requirements determination process in the G-3, supported by the U.S. Army Manpower Analysis Agency. While the Army does have a requirements determination process, we note in our report that this process did not result in the systematic determination of requirements for the Army Secretariat and Staff; rather, the Army headquarters organizations we reviewed use authorized personnel levels from the previous year as a baseline from which to generate any new requirements, and these personnel levels are ultimately based not on a workforce analysis, but on the statutory limits that were established by Congress in the 1980s. In addition, while the Army's requirements determination process does call for reassessments of personnel requirements every 2 to 5 years, Army officials stated that they do not conduct these periodic reassessments of the personnel requirements for the Army headquarters organizations in our review, in part because the U.S. Army Manpower Analysis Agency lacks the authority to initiate such reassessments or enforce their results. In the letter, the Army also noted concerns that a statement in our draft report—namely, that the organizations that support the Army Secretariat and staff are almost three times as large but are excluded from the statutory limits—may be misleading and lack appropriate context. In response to the Army's concerns and to provide additional context, we have clarified the report's language to state that only some personnel in these organizations support their military service secretariats and staffs in accomplishing their mission and are not subject to the statutory limits. The Marine Corps noted that it conducted a full review of force structure in 2012, which included a Commandant-directed review of the functions of every headquarters and staff.
We state in our report that the Marine Corps and others in the department have previously conducted efficiency-related efforts, which officials believe have allowed their organizations to reassess personnel requirements and identify opportunities for efficiencies. However, these reviews have generally been ad hoc and done in response to internally driven or directed reductions, rather than as part of an organization's systematic requirements-determination process. Having workforce and organizational analyses as part of a systematic requirements-determination process, rather than in response to DOD-directed efficiency efforts, would allow headquarters organizations to proactively identify any excess organizational layers or redundant operations and inform decision making during future efficiency efforts and budget reviews. Finally, the Joint Staff stated that it utilizes its existing Joint Manpower Validation Process as a systematic requirements determination process when requesting permanent joint manpower requirements, adding that this process reviews mission drivers, capability gaps, impact assessments, and determines the correct size and characteristics of all new billets. However, as we found in May 2013, this process focuses on requests for additional positions or nominal changes in authorized positions, rather than evaluating whether authorized positions are still needed to support assigned missions. Moreover, we found that personnel levels for the headquarters organizations that we reviewed, including the Joint Staff, are ultimately not based on a workforce analysis that systematically defines the size of the total workforce needed to meet organizational goals. Rather, these organizations use authorized personnel levels from the previous year as a baseline and do not take steps to systematically determine and periodically reassess them.
Thus, we continue to believe that DOD should conduct a systematic determination of personnel requirements, including an analysis of missions, functions, and tasks to determine the minimum personnel needed to accomplish those missions, functions, and tasks. DOD partially concurred with the second recommendation that the Secretary of Defense direct the submission of these personnel requirements, including information on the number of personnel within OSD and the military services' secretariats and staffs that count against the statutory limits, along with any applicable adjustments to the statutory limit, in the next Defense Manpower Requirements Report to Congress or through separate correspondence, along with any recommendations needed to modify the existing statutory limits. DOD stated that it has ongoing efforts to refine and improve its reporting capabilities associated with these requirements, noting that the department has to update DOD Instruction 5100.73, Major DOD Headquarters Activities, before it can determine personnel requirements that count against the statutory limits. In March 2012, we recommended that DOD revise DOD Instruction 5100.73, Major DOD Headquarters Activities, but DOD has not provided an estimate of when this revised Instruction would be finalized. DOD also did not indicate in its letter whether the department would submit personnel requirements that count against the statutory limits in the Defense Manpower Requirements Report, as we recommend, once the Instruction is finalized. We believe that submitting these personnel requirements to Congress in this DOD report would provide Congress with key information to determine whether the existing statutory limits on military and civilian personnel are effective in limiting headquarters personnel growth.
In addition, the Marine Corps provided more specific comments in connection with the second recommendation, noting that in 2014 it had reviewed and validated all headquarters down to the individual billet level, identifying billets that should be coded as performing major DOD headquarters activities, resulting in a net increase of reported headquarters structure. The Marine Corps stated it planned to report this information as part of DOD's fiscal year 2016 budget and in the Defense Manpower Requirements Report. Our report specifically notes the review and the Marine Corps effort to more accurately report its headquarters structure for the fiscal year 2016 budget. However, until the department as a whole takes concrete steps to gather reliable information about headquarters requirements and report this information to Congress, neither the department nor Congress will have the information needed to oversee these headquarters organizations. DOD partially concurred with the third recommendation that the Secretary of Defense direct the establishment and implementation of procedures to conduct periodic reassessments of personnel requirements within OSD and the military service secretariats and staffs. DOD said that it supports the intent of the recommendation, but such periodic reassessments require additional resources and personnel, which would drive an increase in the number of personnel performing major DOD headquarters activities. Specifically, DOD stated it intends to examine the establishment of requirements determination processes across the department, to include the contractor workforce, but this will require a phased approach across a longer timeframe. However, DOD also did not provide any estimated timeframes for its examination of this process. As we noted in the report, reassessments of personnel requirements could occur periodically, not necessarily annually, thereby lessening the amount of time and labor that headquarters organizations devote to conducting reassessments.
Further, until a periodic reassessment of requirements takes place, the department will lack reasonable assurance that its headquarters are sized appropriately for its current missions, particularly in light of the drawdown from Iraq and Afghanistan and its additional mission responsibilities. In addition, the Marine Corps and the Joint Staff provided specific comments in connection with the third recommendation in DOD's letter. First, the Marine Corps noted that it conducts periodic reviews through the Quadrennial Defense Review and through force structure review boards that shape the Marine Corps to new missions and in response to combatant commander demands. However, these reviews are focused on forces as a whole and not specifically on headquarters. Second, the Joint Staff stated that it has set personnel requirements twice since 2008, and noted that it has taken reductions during various budget- or efficiency-related efforts, such as the Secretary of Defense's 2012 efficiency review and the Secretary of Defense's ongoing 20-percent reductions to headquarters budgets. However, conducting periodic reassessments as part of a systematic requirements-determination process, rather than in response to ad hoc, DOD-directed efficiency efforts, would allow headquarters organizations to proactively identify any excess organizational layers or redundant operations. This, in turn, would prepare the headquarters organizations to better inform decision-making during any future efficiency efforts and budget reviews. DOD stated that, although it appreciates our inclusion in the report of a matter calling for Congress to consider using the results of DOD's review of personnel requirements to re-examine the statutory limits, it believes any statutory limitations on headquarters personnel place artificial constraints on workforce sizing and shaping, thereby precluding total force management.
Therefore, DOD states that it opposes any legislative language that imposes restrictions on the size of the department's workforce. Both the Marine Corps and Joint Staff provided specific comments in regard to GAO's matter for congressional consideration, although these comments were directed toward the specific statutory limits for their organizations, not the GAO matter for congressional consideration itself. As we noted in our report, we believe that the statutory limits are of limited utility. The intent of this matter is not to prescribe specific modifications to the statutory limits on headquarters personnel to Congress but rather to suggest that Congress consider making those modifications that it considers most appropriate based on a review of personnel requirements provided by the department. Finally, the Army also provided input regarding the overall methodology behind the report, noting that tracking contract support of headquarters organizations solely through funding source may skew attempts at general trend analysis because funding source does not always correlate to a function being performed in the headquarters. Our report notes some of the challenges in tracking contract support of headquarters organizations, but to add context and address the Army's concerns, we have modified text in Appendix V, which focuses on the resources of the Headquarters, Department of the Army. Specifically, we have modified Figure 12 to note that, according to Army officials, the costs for contracted services provided from its financial accounting systems may not accurately reflect costs incurred by the headquarters because the accounting systems show the funding for contractors but not necessarily where the contracted work was performed, which is the information displayed in DOD's Inventory of Contracted Services. DOD also provided technical comments, which we have incorporated, as appropriate.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the Secretaries of the military departments. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X. We have issued several reports since 2012 on defense headquarters and on the department’s ability to determine the right number of personnel needed to perform headquarters functions. In March 2012, we found that while the Department of Defense (DOD) has taken some steps to examine its headquarters resources for efficiencies, additional opportunities for savings may exist by further consolidating organizations and centralizing functions. We also found that DOD’s data on its headquarters personnel lacked the completeness and reliability necessary for use in making efficiency assessments and decisions. In that report, we recommended that the Secretary of Defense direct the Secretaries of the military departments and the heads of the DOD components to continue to examine opportunities to consolidate commands and to centralize administrative and command support services, functions, or programs. 
Additionally, we recommended that the Secretary of Defense revise DOD Instruction 5100.73, Major DOD Headquarters Activities, to include all headquarters organizations; specify how contractors performing headquarters functions will be identified and included in headquarters reporting; clarify how components are to compile the information needed for headquarters-reporting requirements; and establish time frames for implementing actions to improve tracking and reporting of headquarters resources. DOD generally concurred with the findings and recommendations in our March 2012 report. DOD officials have stated that, since 2012, several efforts have been made to consolidate or eliminate commands and to centralize administrative and command support services, functions, or programs. For example, OSD officials said that DOD has begun efforts to assess which headquarters organizations are not currently included in its guiding instruction on headquarters, but as of July 2014, it has not completed its update of the instruction to include these organizations. DOD officials also identified further progress on including contractors performing major DOD headquarters activities in headquarters reporting. In May 2013, we found that authorized military and civilian positions at the geographic combatant commands—excluding U.S. Central Command—had increased by about 50 percent from fiscal year 2001 through fiscal year 2012, primarily due to the addition of new organizations, such as the establishment of U.S. Northern Command and U.S. Africa Command, and increased mission requirements for the theater special operations commands. We also found that DOD’s process for sizing its combatant commands had several weaknesses, including the absence of a comprehensive, periodic review of the existing size and structure of these commands and inconsistent use of personnel-management systems to identify and track assigned personnel. 
DOD did not concur with our recommendation that it conduct comprehensive and periodic reviews of the combatant commands' existing size, but we continue to believe that institutionalizing a periodic evaluation of all authorized positions would help to systematically align manpower with missions and add rigor to the requirements process. DOD concurred with our recommendation that it revise its guiding instruction on managing joint personnel requirements—Chairman of the Joint Chiefs of Staff Instruction 1001.01A, Joint Manpower and Personnel Program—to require the commands to improve DOD's visibility over all combatant command personnel. DOD has established a new manpower tracking system, the Fourth Estate Manpower Tracking System, that is to track all personnel data, including temporary personnel, and identify specific guidelines and timelines to input and review personnel data. Additionally, DOD concurred with our recommendation to develop and implement a formal process to gather information on authorized manpower and assigned personnel at the service component commands and to revise DOD's Financial Management Regulation. As of September 2014, the process outlined by DOD to gather information on authorized and assigned personnel at the service component commands is the same as the one identified during our prior work. DOD concurred with our recommendation to revise volume 2A, chapter 1 of DOD's Financial Management Regulation 7000.14R to require the military departments, in their annual budget documents for operation and maintenance, to identify the authorized military positions and civilian and contractor full-time equivalents at each combatant command and provide detailed information on funding required by each command for mission and headquarters support, such as civilian pay, contracted services, travel, and supplies.
As of September 2014, DOD plans to prepare an exhibit that reflects the funding and full-time equivalent information by combatant command and include it in an update to the DOD Financial Management Regulation prior to preparation of the fiscal year 2016 budget estimate submission. In June 2014, we found that DOD’s functional combatant commands have shown substantial increases in authorized positions and costs to support headquarters operations since fiscal year 2004, primarily to support recent and emerging missions, including military operations to combat terrorism and the emergence of cyberspace as a warfighting domain. Further, we found that DOD did not have a reliable way to determine the resources devoted to management headquarters as a starting point for DOD’s planned 20 percent reduction to headquarters budgets, and thus we concluded that actual savings would be difficult to track. We recommended that DOD reevaluate the decision to focus reductions on management headquarters to ensure meaningful savings and set a clearly defined and consistently applied baseline starting point for the reductions. Further, we recommended that DOD track the reductions against the baselines in order to provide reliable accounting of savings and reporting to Congress. DOD partially concurred with our recommendation to reevaluate its decision to focus reductions on management headquarters, questioning, in part, the recommendation’s scope. We agreed that the recommendation has implications beyond the functional combatant commands, which was the scope of our review, but the issue we identified is not limited to these commands. DOD generally concurred with our two other recommendations that it set a clearly defined and consistently applied baseline starting point and track reductions against the baselines. To address these two recommendations, DOD said that it planned to use the Future Years Defense Program data to set the baseline going forward. 
DOD stated that it was enhancing data elements within a DOD resource database to better identify management headquarters resources to facilitate tracking and reporting across the department. House Report 113-102 mandated GAO to review the military, civilian personnel, and contracted services resources devoted to the Office of the Secretary of Defense (OSD), the Joint Staff, and the military departments’ secretariats and military staffs from fiscal year 2001 through fiscal year 2013. This report (1) identifies past trends, if any, in personnel resources devoted to OSD, the Joint Staff, and the secretariats and staffs of the military services and any plans for reductions to these headquarters organizations; and (2) evaluates the extent to which the Department of Defense (DOD) determines and reassesses personnel requirements for these headquarters organizations. In addition to OSD, the Joint Staff, and the secretariats and staffs of the military departments, other headquarters organizations include portions of the defense agencies, DOD field activities, and the combatant commands, along with their subordinate unified commands and respective service component commands. Joint Staff J-2 (Intelligence), which receives its personnel and funding from the Defense Intelligence Agency, provided personnel data that it deemed sensitive but unclassified, so we excluded it from this report. The Navy was unable to provide complete personnel data prior to fiscal year 2005 due to a change in personnel management systems used by the Office of the Chief of Naval Operations. Similarly, Headquarters, Marine Corps, was unable to provide personnel data prior to fiscal year 2005 due to a change in personnel management systems. We requested available data on contracted services performing functions for the organizations within our review, but we were only able to obtain and analyze information from OSD and the Army. 
We compared these data to data we had obtained from OSD and the Army on authorized military and civilian positions. We present DOD data on contracted services for context as a comparison against authorized military and civilian positions. Because we did not use these data to support our findings, conclusions, or recommendations, we did not assess their reliability. DOD is still in the process of compiling complete data on contractor full-time equivalents. Our review also focused on operation and maintenance obligations—because these obligations reflect the primary costs to support the headquarters operations of OSD, the Joint Staff, and secretariats and staffs for the military services—including the costs for civilian personnel, contracted services, travel, and equipment, among others. Our review excluded obligations of operation and maintenance funding for DOD's overseas contingency operations that were not part of DOD's base budget. Unless otherwise noted, we reported all costs in this report in nominal dollars. Only the Air Force was able to provide historical data for the entire fiscal year 2001 through fiscal year 2013 time frame, so we provided an analysis of trends in operation and maintenance obligations at the individual organizations included in our review for the fiscal years for which data were available. OSD was unable to provide cost data prior to fiscal year 2008 because, per National Archives and Records Administration regulations, it does not maintain financial records older than 6 years and 3 months. The Joint Staff was unable to provide cost data prior to fiscal year 2003 due to a change in financial systems. The Army was unable to provide cost data for fiscal year 2001 in the time frame we requested for inclusion in this report. The Navy Secretariat was able to provide cost data for fiscal years 2001 through 2013.
However, the Office of the Chief of Naval Operations was only able to provide cost data for fiscal years 2009 through 2013 because the Office of the Chief of Naval Operations did not exist as an independent budget-submitting office until fiscal year 2009, and it would be difficult to separate out the Office of the Chief of Naval Operations’ data from other Navy data prior to fiscal year 2009 in the Navy’s historical data system. Headquarters, Marine Corps, was unable to provide cost data prior to fiscal year 2005 due to a change in financial systems. Our analyses are found in appendixes III through VIII. The availability of historical data limited our analyses of both authorized military and civilian positions and operation and maintenance obligations for the reasons identified by the individual included organizations. To assess the reliability of the data we collected, we interviewed DOD officials about the data they provided to us and analyzed relevant personnel and financial-management documentation to ensure that the data on authorized military and civilian positions and operation and maintenance obligations were tied to mission and headquarters support. We also incorporated data-reliability questions into our data-collection instruments and compared the multiple data sets received from the included organizations against each other to ensure that there was consistency in the data that they provided. We determined the data were sufficiently reliable for our purposes of identifying trends in the personnel resources and headquarters support costs of OSD, the Joint Staff, and secretariats and staffs for the military services. 
To identify DOD's plans for reductions to these headquarters organizations, we obtained and reviewed guidance and documentation on steps to implement DOD's 20 percent reductions to headquarters budgets starting in fiscal year 2015, the first full budget cycle for which DOD was able to include the reductions, such as the department-issued memorandum outlining the reductions and various DOD budget-related documents. We also obtained data, where available, on the number of positions at OSD, the Joint Staff, and the secretariats and staffs for the military services for fiscal year 2013 (the most recent fiscal year for which data were available during our review), as well as the number of positions deemed by these organizations to be performing headquarters functions and included in DOD's planned headquarters reductions for fiscal years 2015 through 2019, the time frame DOD identified in its reduction plans. We assessed the reliability of the personnel and cost data given these and other limitations by interviewing DOD officials about the data they provided to us and analyzing relevant personnel and financial-management documentation. We determined that the data were sufficiently reliable for our purposes of identifying trends in the personnel resources and headquarters support costs, and DOD's plans for reductions to OSD, the Joint Staff, and secretariats and staffs for the military services. To evaluate the extent to which DOD determines and reassesses personnel requirements for these headquarters organizations, we obtained and reviewed guidance from OSD, the Joint Staff, and the secretariats and staffs for the military services regarding each of their processes for determining and reassessing their respective personnel requirements.
For example, we reviewed the Chairman of the Joint Chiefs of Staff Instruction 1001.01A (Joint Manpower and Personnel Program); Air Force Instruction 38-201 (Manpower and Organization, Management of Manpower Requirements and Authorizations); Army Regulation 570-4 (Manpower and Equipment Control, Manpower Management); Office of the Chief of Naval Operations Instruction 1000.16K (Navy Total Force Manpower Policies and Procedures); and Marine Corps Order 5311.1D (Total Force Structure Process). We also interviewed officials from each of these organizations to determine how their processes are implemented, the results of any studies that were conducted on these processes, and any changes being made to these processes. We then compared the information we obtained on these processes to key elements called for in DOD Directive 1100.4 (Guidance for Manpower Management) and the military services' guidance we had previously obtained; specifically, that personnel requirements should be established at the minimum essential level to accomplish the required workload, and should be periodically reviewed. We also compared this information to key elements of a systematic personnel requirements-determination process, which we obtained from documents that address leading practices for workforce planning. Specifically, we reviewed prior GAO work on effective strategic workforce planning, DOD's guidance on manpower management, and workforce planning guidance issued by the Office of Personnel Management.
We then synthesized common themes from these documents and summarized these as key elements that should be included in organizations' personnel requirements-determination processes, namely, that an organization should have a requirements process that identifies the organization's mission, functions, and tasks; determines the minimum number and type of personnel needed to fulfill those missions, functions, and tasks by conducting a workforce analysis; and reassesses these requirements on a periodic basis to determine the most efficient choices for workforce deployment. We also reviewed DOD Instruction 5100.73 (Major DOD Headquarters Activities), which guides the identification and reporting of headquarters information. Finally, we identified a standard on information and communications from internal-control standards for the federal government and compared this standard to the headquarters-related information provided to Congress in the fiscal year 2015 Defense Manpower Requirements Report. We obtained and assessed data on the number of management headquarters personnel in the organizations in our review for fiscal year 2013 and on the Army's field operating agencies for fiscal years 2001 through 2013. We assessed the reliability of the personnel data through interviews with Army officials about the data they provided to us and by conducting data-reliability assessments of the Army personnel data and the information systems that produced them. We determined that the data were sufficiently reliable for our purposes. We also met with OSD and the military services to discuss how these organizations identify these headquarters personnel. Finally, we reviewed the legislative history of the statutory personnel limitations for OSD, the Joint Staff, and the services contained in sections 143, 155, 3014, 5014, and 8014 of Title 10 of the U.S. Code, and discussed these limits with knowledgeable officials in OSD, the Joint Staff, and the military services.
We interviewed officials or, where appropriate, obtained documentation from the organizations listed below:

Office of the Secretary of Defense: Office of the Director of Administration and Management; Office of Cost Assessment and Program Evaluation; and Washington Headquarters Services, Financial Management Directorate.

Joint Staff: Directorate of Management, Comptroller; Manpower and Personnel Directorate; and Intelligence Directorate.

Department of the Air Force: A1, Joint and Special Activities Manpower Programming Branch.

Department of the Army: Assistant Secretary of the Army for Manpower and Reserve Affairs; G8, Program Analysis and Evaluation; and Business Operations Directorate, Army Office of Business Transformation.

Department of the Navy: Assistant Secretary of the Navy for Manpower and Reserve Affairs; Assistant for Administration; Office of the Chief of Naval Operations, Deputy Chief of Naval Operations for Integration of Capabilities and Resources, Programming Division; Office of the Chief of Naval Operations, Manpower Management; Office of the Chief of Naval Operations, Assessment Division; and U.S. Fleet Forces Command.

Headquarters, U.S. Marine Corps: Marine Corps Combat Development Command, Combat Development and Integration / Total Force Structure Division; Budget and Execution Division, Programs and Resources; and Manpower and Reserve Affairs.

We conducted this performance audit from July 2013 to January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix III: Resources of the Office of the Secretary of Defense (OSD)

OSD is responsible for assisting the Secretary of Defense in carrying out his or her duties and responsibilities for the management of the Department of Defense (DOD). These include policy development, planning, resource management, and fiscal and program evaluation responsibilities. The staff of OSD comprises military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the OSD organization, as well as the changes in these resources from fiscal year 2001 through fiscal year 2013. Table 2 shows the organizational structure and composition of OSD for fiscal year 2013, including both authorized military and civilian positions, as well as estimated contractor full-time equivalents. Figure 5 illustrates annual changes in the number of authorized personnel positions since fiscal year 2001. According to DOD officials, both authorized military and civilian positions remained relatively unchanged until fiscal year 2010, when the number of authorized civilians increased mainly due to the conversion of contracted services to civilian positions and the conversion of military to civilian positions. This increase in authorized civilian positions, according to DOD officials, is the result of attempts to rebalance workload and become a cost-efficient workforce. Figure 6 shows the changes in headquarters support costs associated with OSD for fiscal year 2008 through fiscal year 2013. Headquarters costs have experienced an overall increase during the 5-year period, primarily due to costs for contracted services, but have recently begun to decline, according to OSD officials, because of sequestration and furloughs.
The Joint Staff is responsible for assisting the Chairman of the Joint Chiefs of Staff, military advisor to the President, in accomplishing his responsibilities for the unified strategic direction of the combatant forces; their operation under unified command; and their integration into a team of land, naval, and air forces. The Joint Staff is tasked to provide advice and support to the Chairman and the Joint Chiefs on matters including personnel, intelligence, doctrine and architecture, operations and plans, logistics, strategy, policy, communications, cyberspace, joint training and education, and program evaluation. In addition to civilian personnel and personnel performing contracted services, the Joint Staff comprises military personnel who represent, in approximately equal numbers, the Army, Navy and Marine Corps, and Air Force. This appendix shows how these resources are distributed in the Joint Staff, as well as the changes in these resources from fiscal year 2003 through fiscal year 2013. Table 3 shows the organizational structure and composition of the Joint Staff for fiscal year 2013, including both authorized military and civilian positions. Figure 7 illustrates annual changes in the overall number of authorized personnel positions since fiscal year 2005. Both military and civilian positions remained relatively unchanged until fiscal year 2012, when, according to Joint Staff officials, U.S. Joint Forces Command was disestablished and some of its responsibilities and personnel were moved to the Joint Staff. According to documentation and interviews with Joint Staff officials, of the positions acquired by the Joint Staff in fiscal year 2012 and retained in fiscal year 2013, most of the military positions (415 authorized positions) and civilian positions (690 authorized positions) are stationed at Hampton Roads, Virginia, to manage and support the Combatant Command Exercise Engagement and Training Transformation program reassigned to the Joint Staff when U.S. 
Joint Forces Command was disestablished. Figure 8 shows the changes in headquarters support costs for the Joint Staff for fiscal year 2003 through fiscal year 2013. The increase in overall headquarters support costs from fiscal years 2011 through 2013 was, according to Joint Staff officials, due to the previously mentioned influx of civilian personnel to the Joint Staff from U.S. Joint Forces Command following its disestablishment in fiscal year 2011. The Office of the Secretary of the Army has sole responsibility within the Office of the Secretary and the Army Staff for the following functions: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. Additionally, there is an Army Staff, which is to furnish professional assistance to the Secretary and the Chief of Staff of the Army. Headquarters functions to be performed by the Army Staff include, among others, recruiting, organizing, training, and equipping of the Army. The staffs of the Office of the Secretary of the Army and the Army Staff comprise military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the Army, as well as the changes in these resources from fiscal year 2001 through fiscal year 2013. Table 4 shows the organizational structure and composition of the Army Secretariat and Staff for fiscal year 2013, including both authorized military and civilian positions, as well as estimated contractor full-time equivalents. The Office of the Secretary of the Navy is solely responsible among the Office of the Secretary of the Navy, the Office of the Chief of Naval Operations, and the Headquarters, Marine Corps, for oversight of the following functions: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. 
The Office of the Chief of Naval Operations is to provide professional assistance to the Secretary and the Chief of Naval Operations in preparing for the employment of the Navy in areas such as recruiting, organizing, supplying, equipping, and training. The staffs of the Office of the Secretary of the Navy and the Office of the Chief of Naval Operations comprise military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the Navy, as well as the changes in these resources from fiscal year 2001 through fiscal year 2013. Table 5 shows the organizational structure and composition of the Navy Secretariat and Office of the Chief of Naval Operations for fiscal year 2013, including both authorized military and civilian positions. Figure 13 illustrates annual changes in the number of authorized military and civilian positions within the Navy Secretariat since fiscal year 2003. The total number of authorized positions within the secretariat decreased from fiscal year 2003 to fiscal year 2004 and then remained relatively constant through fiscal year 2008, due to reductions in its baseline budget, recalculation of civilian pay and benefits, and internal reorganizations within the Navy, according to officials within the Navy Secretariat. From fiscal years 2009 through 2013, authorized civilian positions within the Navy Secretariat steadily increased. Navy Secretariat officials attributed this increase primarily to reorganization of functions across the Department of the Navy that moved positions into the secretariat and the conversion of contracted services to civilian positions. Headquarters support costs for the Navy Secretariat have generally increased from fiscal years 2001 through 2013, as seen in the inset of figure 14. 
According to Navy officials, significant drivers of this overall increase include continued increases in civilian personnel costs, and additional contracted services costs to support both a 2005 DOD initiative and compliance in fiscal years 2011 and 2012 with congressional direction to improve the auditability of its financial statements. Figure 15 illustrates annual changes in the number of authorized military and civilian positions within the Office of the Chief of Naval Operations since fiscal year 2005. The Office of the Chief of Naval Operations has experienced some increase in authorized civilian positions over that period, which Navy officials attributed to conversion of contracted services to civilian positions and reorganizations of the Office of the Chief of Naval Operations under new Chiefs of Naval Operations. Our analysis shows that much of the overall increase in authorized civilian positions at the Office of the Chief of Naval Operations was offset by decreases in military positions since fiscal year 2010. Headquarters support costs for the Office of the Chief of Naval Operations have generally decreased from fiscal years 2009 through 2013, as seen in the inset of figure 16. According to Office of the Chief of Naval Operations’ officials, the decrease in costs in fiscal 2010 was the result of the removal of some centrally managed costs from the Office of the Chief of Naval Operations budget in 2010 and efforts to convert contracted services to civilian positions. As seen in figure 16, civilian personnel costs have increased over the period, which Office of the Chief of Naval Operations’ officials attributed to the conversion of contracted services to civilian positions and organizational restructuring that moved additional civilian positions to the Office of the Chief of Naval Operations headquarters staff, resulting in higher civilian personnel costs. 
The Marine Corps also operates under the authority, direction, and control of the Secretary of the Navy. Headquarters, Marine Corps, consists of the Commandant of the Marine Corps and staff who are to provide assistance in preparing for the employment of the Marine Corps in areas such as recruiting, organizing, supplying, equipping, and training. The staff of Headquarters, Marine Corps, comprises military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the Marine Corps, as well as the changes in these resources from fiscal year 2005 through fiscal year 2013. Table 6 shows the organizational structure and composition of Headquarters, Marine Corps, for fiscal year 2013, including both authorized military and civilian positions. Headquarters, Marine Corps, experienced an increase in its overall number of authorized military and civilian positions from fiscal years 2005 to 2013, as shown in figure 17, but there have been variations within those years. Headquarters, Marine Corps, officials attributed some of the increases in authorized positions to the conversion of military positions to civilian positions, and additional personnel requirements needed to support the Foreign Counterintelligence Program and National Intelligence Program and to stand up and operate the National Museum of the Marine Corps. Headquarters, Marine Corps, officials also explained that some of the decreases in authorized positions were due to a number of organizational realignments that transferred civilian positions from Headquarters, Marine Corps, to operational or field support organizations. From fiscal years 2005 through 2013, the total headquarters support costs for Headquarters, Marine Corps, have slightly increased, as seen in the inset in figure 18, but there has been variation in total costs year-to-year, and costs are down from their peak in fiscal year 2012. 
As seen in figure 18, there has been a consistent increase in costs for civilian personnel from fiscal year 2005 through fiscal year 2012, which the Marine Corps attributed to the conversion of military positions to civilian positions, organizational realignments that moved civilian positions to Headquarters, Marine Corps, and recalculation of civilian pay and benefits, all of which increased costs for civilian personnel. From fiscal years 2005 through 2013, other headquarters support costs generally decreased due to transfers and realignment of resources from Headquarters, Marine Corps, to other organizations and operating forces. The Office of the Secretary of the Air Force has sole responsibility and oversight for the following functions across the Air Force: acquisition, auditing, financial management, information management, inspector general, legislative affairs, and public affairs. Additionally, there is an Air Staff, which is to furnish professional assistance to the Secretary and the Chief of Staff of the Air Force. The headquarters functions to be performed by the Air Staff include recruiting, organizing, training, and equipping of the Air Force, among others. The staffs of the Office of the Secretary of the Air Force and the Air Staff comprise military and civilian personnel and personnel performing contracted services. This appendix shows how these resources are distributed in the Air Force, as well as the changes in these resources from fiscal year 2001 through fiscal year 2013. Table 7 shows the organizational structure and composition of the Air Force Secretariat and Staff for fiscal year 2013, including both authorized military and civilian positions. Figure 19 illustrates annual changes in the number of authorized positions in the Office of the Secretary of the Air Force since fiscal year 2001. 
The number of authorized military and civilian positions remained relatively unchanged until fiscal year 2010 when, according to Air Force officials, the conversion of contracted services to civilian positions and the conversion of military to civilian positions contributed to the increasing number of authorized civilian personnel. This increase in authorized civilian positions, according to DOD officials, is the result of attempts to rebalance workload and become a cost-efficient workforce. Air Force officials stated that authorized positions within the secretariat have gradually decreased from peak levels reached in fiscal year 2010 due to direction from the Secretary of Defense to hold the number of civilian positions at or below fiscal year 2010 levels and to cut civilian positions that had yet to be filled after they had converted contracted services to civilian positions in previous years. Figure 20 illustrates annual changes in the number of authorized positions in the Office of the Chief of Staff of the Air Force since fiscal year 2001. The total number of authorized military and civilian positions remained relatively stable until fiscal year 2006, when the number of authorized military personnel reached its peak level. Since then, the number of authorized civilian personnel has generally increased, which an Air Force official said was mainly due to the conversion of contracted services to civilian positions and the conversion of military to civilian positions, although these numbers have begun to decline since fiscal year 2011. This increase in authorized civilian positions, according to DOD officials, is the result of attempts to rebalance workload and become a cost-efficient workforce. Figure 21 shows the changes associated with Air Force Secretariat and Air Staff headquarters support costs for fiscal year 2001 through fiscal year 2013. 
According to Air Force officials, the dramatic increase in civilian personnel costs in fiscal year 2010 was driven by the conversion of contracted services to civilian positions, resulting in higher costs for civilian personnel. The subsequent drop in civilian personnel costs was primarily due to restraints placed on the growth in the number of civilian positions by Secretary Gates in fiscal year 2010 and the Budget Control Act of 2011. According to an Air Force official, the rapid spike in other support costs in fiscal year 2012 was primarily due to the costs for a civil engineering project billed to the Air Force Secretariat and Staff for renovating the Air Force Headquarters space in the Pentagon. In addition to the contact named above, Richard K. Geiger (Assistant Director), Tracy Barnes, Gabrielle A. Carrington, Neil Feldman, David Keefer, Carol D. Petersen, Bethann E. Ritter Snyder, Michael Silver, Amie Steele, and Cheryl Weissman made key contributions to this report. | Facing budget pressures, DOD is seeking to reduce headquarters activities of OSD, the Joint Staff, and the military services' secretariats and staffs, which primarily perform policy and management functions. GAO was mandated to review personnel resources devoted to these headquarters organizations from fiscal years 2001 through 2013. This report (1) identifies past trends in personnel resources for these organizations and any plans for reductions; and (2) evaluates the extent to which DOD determines and reassesses personnel requirements for the organizations. GAO analyzed data on authorized military and civilian positions and contracted services from fiscal years 2001 through 2013. GAO reviewed DOD's headquarters reductions plans and processes for determining and reassessing personnel requirements. 
Over the past decade, authorized military and civilian positions have increased within the Department of Defense (DOD) headquarters organizations GAO reviewed—the Office of the Secretary of Defense (OSD), the Joint Staff, and the Army, Navy, Marine Corps, and Air Force secretariats and staffs—but the size of these organizations has recently leveled off or begun to decline, and DOD's plans for future reductions are not finalized. The increases varied by organization, and DOD officials told GAO that the increases were due to increased mission responsibilities, conversion of functions performed by contracted services to civilian positions, and institutional reorganizations. For example, authorized military and civilian positions for the Army Secretariat and Army Staff increased by 60 percent, from 2,272 in fiscal year 2001 to 3,639 in fiscal year 2013, but levels have declined since their peak of 3,712 authorized positions in fiscal year 2011. In addition to civilian and military personnel, DOD also relies on personnel performing contracted services. Since DOD is still in the process of compiling complete data on personnel performing contracted services, trends in these data could not be identified. In 2013, the Secretary of Defense set a target to reduce DOD components' headquarters budgets by 20 percent through fiscal year 2019, including costs for contracted services, while striving for a similar reduction to military and civilian personnel. However, DOD has not finalized plans to achieve these reductions. DOD was required to report to Congress by June 2014 on efforts to streamline management headquarters, but needed an extension until late summer 2014 for the report due to staff turnover. As of December 2014, DOD's plan had not been issued. 
GAO found that DOD headquarters organizations it reviewed do not determine their personnel requirements as part of a systematic requirements-determination process, nor do they have procedures in place to ensure that they periodically reassess these requirements as outlined in DOD and other guidance. Current personnel levels for these headquarters organizations are traceable to statutory limits enacted in the 1980s and 1990s to force efficiencies and reduce duplication. However, these limits have been waived since fiscal year 2002. If the limits were in force in fiscal year 2013, the Army and Navy would exceed them by 17 percent and 74 percent, respectively. Moreover, the limits have little practical utility because of statutory exceptions for certain categories of personnel and because the limits exclude personnel in supporting organizations that perform headquarters-related functions. For example, the organizations that support the Army Secretariat and Army Staff are almost three times as large as the Secretariat and Staff, but personnel who perform headquarters-related functions in these organizations are excluded from the limits. All but one of the organizations GAO reviewed have recognized problems in their existing requirements-determination processes. The OSD, the Navy, and the Marine Corps are taking steps to modify their processes, but their efforts are not yet complete. Without a systematic determination of personnel requirements and periodic reassessment of them, DOD will not be well positioned to proactively identify efficiencies and limit personnel growth within these headquarters organizations. Moreover, until DOD determines personnel requirements, Congress will not have critical information needed to reexamine statutory limits enacted decades ago. 
GAO recommends that DOD (1) conduct a systematic determination of personnel requirements at these headquarters organizations; (2) submit the requirements to Congress with adjustments and recommended modifications to the statutory limits; and (3) periodically reassess personnel requirements within OSD and the military services' secretariats and staffs. Congress should consider using DOD's review of headquarters personnel requirements to reexamine existing statutory limits. DOD partially concurred, stating it will use its existing processes, but will investigate other methods to improve the determination and reporting of requirements. GAO believes the recommendations are still valid, as discussed in the report. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The nation’s long-term fiscal outlook is daunting under any realistic policy scenarios and assumptions. For over 14 years, GAO has periodically prepared various long-term budget simulations that seek to illustrate the likely fiscal consequences of our coming demographic challenges and rising health care costs. Indeed, the health care area is especially important because the long-term fiscal challenge is largely a health care challenge. While Social Security is important because of its size, health care spending is both large and projected to grow much more rapidly. Our most recent simulation results illustrate the importance of health care in the long-term fiscal outlook as well as the imperative to take action soon. These simulations show that over the long term we face large and growing structural deficits due primarily to known demographic trends and rising health care costs. These trends are compounded by the presence of near-term deficits arising from new discretionary and mandatory spending as well as lower federal revenues as a percentage of the economy. Continuing on this imprudent and unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Our current path will also increasingly constrain our ability to address emerging and unexpected budgetary needs and increase the burdens that will be faced by our children, grandchildren, and future generations of Americans. Figures 1 and 2 present our long-term simulations under two different sets of assumptions. For both simulations, Social Security and Medicare spending is based on the 2006 Trustees’ intermediate projections, and we assume that benefits continue to be paid in full after the trust funds are exhausted, although current law does not provide for such. Medicaid spending is based on the Congressional Budget Office’s (CBO) December 2005 long-term projections under its midrange assumptions. 
In figure 1, we start with CBO’s 10-year baseline, constructed according to the statutory requirements for that baseline. Consistent with these specific yet unrealistic requirements, discretionary spending is assumed to grow with inflation for the first 10 years and tax cuts scheduled to expire are assumed to expire. After 2016, discretionary spending and revenue are held constant as a share of gross domestic product (GDP) at the 2016 level. Under this fiscally restrained scenario, spending for Social Security and health care programs would grow to consume over three-quarters of federal revenues by 2040. In figure 2, two assumptions are changed: (1) discretionary spending is assumed to grow with the economy after 2006 rather than merely with inflation, and (2) all expiring tax provisions are extended. In this less restrained but possibly more realistic scenario, federal revenues will cover little more than interest on the large and growing federal debt by 2040. While many alternative scenarios could be developed incorporating different combinations of possible policy choices and economic assumptions, these two scenarios can be viewed as possible “bookends” to a range of possible outcomes. Budget flexibility—the ability to respond to unforeseen events—is key to being able to successfully deal with the nation’s and the world’s uncertainties. By their very nature, mandatory spending programs— entitlement programs like Medicare and Social Security—limit budget flexibility. They are governed by eligibility rules and benefit formulas, which means that funds are spent as required to provide benefits to those who are eligible and wish to participate. As figure 3 shows, mandatory spending has grown as a share of the total federal budget. For example, mandatory spending on programs (i.e., mandatory spending excluding interest) has grown from 27 percent in 1965—the year Medicare was created—to 42 percent in 1985 to 53 percent last year. 
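The divergence between these two "bookend" scenarios is driven largely by the single discretionary-growth assumption. A rough, purely illustrative sketch (the growth rates and the 2006 spending share below are assumptions chosen for illustration, not figures from the simulations, and each rate is applied uniformly over the whole period for simplicity):

```python
# Illustrative only: why discretionary spending that grows with inflation
# shrinks as a share of GDP, while spending that grows with the economy
# holds its share. All rates and the starting share are assumptions.
years = 34                    # 2006 -> 2040
inflation = 0.02              # assumed annual inflation
nominal_gdp_growth = 0.045    # assumed nominal GDP growth

start_share = 0.075           # assumed discretionary share of GDP in 2006

# Figure 1-style restraint: spending grows only with inflation
share_restrained = start_share * ((1 + inflation) / (1 + nominal_gdp_growth)) ** years
# Figure 2-style path: spending grows with the economy, so its share is flat
share_unrestrained = start_share

print(f"2040 share, inflation-only growth: {share_restrained:.1%}")   # ~3.3%
print(f"2040 share, grows with GDP:        {share_unrestrained:.1%}")  # 7.5%
```

Compounding a 2.5-point gap between spending growth and GDP growth for three decades roughly halves the restrained scenario's claim on the economy, which is why the two baselines bracket such different fiscal outcomes.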
(Total spending not subject to annual appropriations—mandatory spending and net interest—has grown from 34 percent in 1965 to 61 percent last year.) Under both the CBO baseline estimates and the President’s Budget, this spending would grow even further. Figure 3 illustrates that while it is important to control discretionary spending, the real challenge is mandatory spending. Accordingly, substantive reform of the major health programs and Social Security is critical to recapturing our future fiscal flexibility. The aging population and rising health care costs will have significant implications not only for the budget but also for our economy and competitive posture. Figure 4 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. Under the 2006 Trustees’ intermediate estimates and CBO’s 2005 midrange and long-term Medicaid estimates, spending for these entitlement programs combined will grow to over 15 percent of GDP in 2030 from today’s 8.9 percent. It is clear that taken together, Social Security, Medicare, and Medicaid represent an unsustainable burden on the federal budget and future generations. Ultimately, the nation will have to decide what level of federal benefits and spending it wants and how it will pay for these benefits. While Social Security, Medicare, and Medicaid are the major drivers of the long-term spending outlook in the aggregate, they are not the only promises the federal government has made for the future. The federal government undertakes a wide range of responsibilities, programs, and activities that may either obligate the government to future spending or create an expectation for such spending. Specific fiscal exposures vary widely as to source, likelihood of occurrence, magnitude, and strength of the government’s legal obligations. 
If we think of fiscal exposures as extending from explicit liabilities (like military and civilian pensions) to specific contingencies (like pension, flood, and other federal insurance programs) to the commitments implicit in current policy and/or public expectations (like the gap between the present value of future promised and funded Social Security and Medicare benefits), the federal government’s fiscal exposures totaled more than $46 trillion at the end of 2005, up from about $20 trillion in 2000. This translates into a burden of about $156,000 per American, or approximately $375,000 per full-time worker—more than double what it was in 2000. These amounts are growing every second of every minute of every day due to continuing deficits, known demographic trends, and compounding interest costs. Many are beginning to realize that difficult choices must be made, and soon. A crucial first step in acting to improve our long-term fiscal outlook will be to face facts and identify the many significant commitments already facing the federal government. If citizens and government officials come to better understand our nation’s various fiscal exposures and their implications for the future, they are more likely to insist on prudent policy choices today and sensible levels of fiscal risk in the future. How do we get started? Today you are focusing on budget process improvements. That’s a good start. While the process itself cannot solve the problem, it is important. It can help policymakers make tough but necessary choices today rather than defer them until tomorrow. Restoration of meaningful budget controls—budgetary caps and a pay-as-you-go (PAYGO) rule on both the tax and spending side of the ledger—is a start toward requiring that necessary trade-offs be made rather than delayed. Although the restoration of caps and a PAYGO rule are important, they are not enough. 
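The per-American and per-worker burdens cited above are straightforward division. A quick back-of-the-envelope check (the population and workforce counts are illustrative assumptions chosen to roughly match 2005, not numbers from this statement):

```python
# Back-of-the-envelope check of the per-capita burden figures. The
# population and worker counts below are illustrative assumptions
# (~2005 magnitudes), not numbers from the testimony itself.
total_exposures = 46e12        # ~$46 trillion in fiscal exposures, end of 2005
us_population = 296e6          # assumed U.S. population
full_time_workers = 122e6      # assumed full-time workers

per_person = total_exposures / us_population
per_worker = total_exposures / full_time_workers

print(f"per American:         ${per_person:,.0f}")   # ~ $155,000
print(f"per full-time worker: ${per_worker:,.0f}")   # ~ $377,000
```

These assumed denominators reproduce the statement's roughly $156,000-per-American and $375,000-per-worker figures to within a percent or two.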
Among the characteristics a budget process needs for that to happen are increased transparency and better incentives, signals, triggers and default mechanisms to address the fiscal exposures/commitments the federal government has already made and better transparency for and controls over the long-term fiscal exposures/commitments that the federal government is considering. Let me elaborate. There is broad consensus among observers and analysts who focus on the budget that the controls contained in the expired Budget Enforcement Act constrained spending for much of the 1990s. In fact, annual discretionary budget authority actually declined in real terms during the mid-1990s. I endorse the restoration of realistic discretionary caps and PAYGO discipline applied to both mandatory spending and revenue legislation. But the caps can only work if they are realistic; while caps may be seen as tighter than some would like, they are not likely to bind if they are seen as totally unreasonable given current conditions. While PAYGO discipline constrained the creation or legislative expansion of mandatory spending and tax cuts, it accepted the existing provisions of law as given. Looking ahead, the budget process will need to go beyond limiting expansions. Cost increases in existing mandatory programs cannot be ignored and the base of existing spending and tax programs must be reviewed and re-engineered to address our long-range fiscal gap. Specifically, as I have said before, I would like to see a process that forces examination of “the base” of the federal government—for major entitlements, for other mandatory spending, and for so-called “discretionary” spending (those activities funded through the appropriations process). Reexamining “the base” is something that should be done periodically regardless of fiscal condition—all of us have a stewardship obligation over taxpayer funds. 
As I have said before, we have programs still in existence today that were designed 20 or more years ago—and the world has changed. I would suggest that as constraints on discretionary spending continue to tighten, the need to reexamine existing programs and activities becomes greater. One of the questions this Congress is grappling with—earmarks—can be seen in this context. Whatever the agreed-upon level for discretionary spending, the allocation within that total will be important. How should that allocation be determined? What sort of rules will you want to impose on both the allocation across major areas (defense, education, etc.) and within those areas? By definition, earmarks specify how some funds will be used. How will the process manage them? After all, not all earmarks are bad but many are highly questionable. It is not surprising that in times of tight resources, the tension between earmarks and flexibility will likely rise. Although mandatory spending is not amenable to caps, such spending need not—and should not—be permitted to be on autopilot and grow to an unlimited extent. Since the spending for any given entitlement or other mandatory program is a function of the interaction between eligibility rules and the benefit formula—either or both of which may incorporate exogenous factors such as economic downturns—the way to change the path of spending for any of these programs is to change their rules or formulas. We recently issued a report on “triggers”—some measure that when reached or exceeded, would prompt a response connected to that program. By identifying significant increases in the spending path of a mandatory program relatively early and acting to constrain it, Congress may avert much larger and potentially disruptive financial challenges and program changes in the future. A trigger is a measure and a signal mechanism—like an alarm clock. 
It could trigger a “soft” response—one that calls attention to the growth rate of the level of spending and prompts special consideration when the threshold or target is breached. The Medicare program already contains a “soft” response trigger: the President is required to submit a proposal for action to Congress if the Medicare Trustees determine in 2 consecutive years that the general revenue share of Medicare spending is projected to exceed 45 percent during a 7-fiscal-year period. The most recent Trustees’ report to Congress for the first time found that the general revenue share of financing is projected to exceed that threshold in 2012. Thus, if next year’s report again concludes that it will exceed the threshold during the 7-fiscal-year period, the trigger will have been tripped and the President will be required to submit his proposal for action. Soft responses can help in alerting decision makers of potential problems, but they do not ensure that action to decrease spending or increase revenue is taken. In contrast, a trigger could lead to “hard” responses under which a predetermined, program-specific action would take place, such as changes in eligibility criteria and benefit formulas, automatic revenue increases, or automatic spending cuts. With hard responses, spending is automatically constrained, revenue is automatically increased, or both, unless Congress takes action to override—the default is the constraining action. For example, this year the President’s Budget proposes to change the Medicare trigger from solely “soft” to providing a “hard” (automatic) response if Congress fails to enact the President’s proposal. Any discussion to create triggered responses and their design must recognize that unlike controls on discretionary spending, there is some tension between the idea of triggers and the nature of entitlement and other mandatory spending programs. 
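The Medicare funding warning described above amounts to a simple rule: a breach is projected when the general revenue share exceeds 45 percent in any year of a 7-fiscal-year window, and the trigger trips when two consecutive Trustees' reports each project such a breach. A minimal sketch of that rule, with entirely made-up projection numbers:

```python
# Sketch of the "soft" Medicare funding-warning trigger described in the
# text. All projection numbers below are hypothetical.
THRESHOLD = 0.45  # general revenue share of Medicare spending

def breach_projected(shares_in_window):
    """True if any year in the 7-fiscal-year projection window exceeds 45%."""
    return any(share > THRESHOLD for share in shares_in_window)

def trigger_tripped(annual_reports):
    """The trigger trips when two consecutive Trustees' reports each
    project a breach somewhere in their 7-year window."""
    breaches = [breach_projected(report) for report in annual_reports]
    return any(a and b for a, b in zip(breaches, breaches[1:]))

# Hypothetical: a report projecting a breach (as the 2006 report did),
# followed by a second consecutive report projecting a breach.
report_a = [0.41, 0.42, 0.43, 0.44, 0.445, 0.452, 0.46]
report_b = [0.42, 0.43, 0.44, 0.45, 0.455, 0.46, 0.47]

print(trigger_tripped([report_a]))            # False: only one report so far
print(trigger_tripped([report_a, report_b]))  # True: two consecutive breaches
```

Whether the response on tripping is "soft" (a required presidential proposal) or "hard" (an automatic program change) is a policy choice layered on top of the same detection rule.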
These programs—as with tax provisions such as tax expenditures—were designed to provide benefits based on eligibility formulas or actions as opposed to an annual decision regarding spending. This tension makes it more challenging to constrain costs and to design both triggers and appropriate responses. At the same time, with less than 40 percent of the budget under the control of the annual appropriations process, considering ways to increase transparency, oversight, and control of mandatory programs must be part of addressing the nation’s long-term fiscal challenges. Besides triggers, transparency of existing commitments would be improved by requiring the Office of Management and Budget (OMB) to report annually on fiscal exposures—the more than $46 trillion figure I mentioned earlier—including a concise list, description, and cost estimates, where possible. OMB should also ensure that agencies focus on improving cost estimates for fiscal exposures. This should complement and support continued and improved reporting of long-range projections and analysis of the budget as a whole to assess fiscal sustainability and flexibility. Others have embraced this idea for better reporting of fiscal exposures. Senator Voinovich has proposed that the President report each January on the fiscal exposures of the federal government and their implications for its long-term financial health, and Senator Lieberman introduced legislation to require better information on liabilities and commitments. This year Representatives Cooper, Chocola, and Kirk have sponsored legislation also aimed at improving the attention paid to our growing federal commitments. And, in his last few budgets the President has proposed that reports be required for any proposals that would worsen the unfunded obligations of major entitlement programs. These proposals provide a good starting point for discussion.
Reporting is a critical first step—but, as I noted above, it must cover not only new proposals but also existing commitments, and it should be accompanied by some incentives and controls. We need both better information on existing commitments and promises and information on the long-term costs of any significant new proposed spending increase or tax cut. Ten-year budget projections have been available to decision makers for many years. We must build on that regime but also incorporate longer-term estimates of net present value (NPV) costs for major spending and tax commitments comprising longer-term exposures for the federal budget beyond the 10-year window. Current budget reporting does not always fully capture or require explicit consideration of some fiscal exposures. For example, when Medicare Part D was being debated, much of the debate focused on the 10-year cost estimate—not on the long-term commitment that was obviously much greater. While the budget was not designed to and does not provide complete information on long-term cost implications stemming from some of the government’s commitments when they are made, progress can and should be made on this front. For example, we should require NPV estimates for major proposals—whether on the tax side or the spending side—whose costs escalate outside the 10-year window. And these estimates should be disclosed and debated before the proposal is voted on. Regarding tax provisions, it is important to recognize that tax policies and programs financing the federal budget can be reviewed not only with an eye toward the overall level of revenue provided to fund federal operations and commitments, but also the mix of taxes and the extent to which the tax code is used to promote overall economic growth and broad-based societal objectives. In practice, some tax expenditures are very similar to mandatory spending programs even though they are not subject to the appropriations process or selected budget control mechanisms.
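The reason a 10-year window can understate a commitment whose costs escalate is straightforward present-value arithmetic. The sketch below illustrates it with a hypothetical cost stream and a hypothetical 3 percent discount rate; it is not an official scoring method:

```python
# Illustrative net present value (NPV) calculation for a proposal whose costs
# escalate outside the 10-year budget window. The cost stream, 7 percent
# growth rate, and 3 percent discount rate are all hypothetical.

def npv(costs, discount_rate):
    """Discount a stream of annual costs (year 1, year 2, ...) to the present."""
    return sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs, start=1))

discount_rate = 0.03
# A hypothetical benefit costing $10 billion in year 1, growing 7 percent a year.
costs_30yr = [10e9 * 1.07 ** t for t in range(30)]

ten_year = npv(costs_30yr[:10], discount_rate)
thirty_year = npv(costs_30yr, discount_rate)

# Because costs grow faster than the discount rate, most of the commitment
# lies outside the conventional 10-year window.
print(f"10-year NPV: ${ten_year / 1e9:,.0f} billion")
print(f"30-year NPV: ${thirty_year / 1e9:,.0f} billion")
```

In this stylized example the 30-year NPV is several times the 10-year figure, which is exactly the gap that debates focused solely on the 10-year estimate never see.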
Tax expenditures represent a significant commitment and are not typically subject to review or reexamination. This should not be allowed to continue, nor should they remain largely in the dark and on autopilot. Finally, the growing use of emergency supplemental appropriations raises concerns that an increasing portion of federal spending is exempt from the discipline and trade-offs of the regular budget process. Some have expressed concern that these “emergency” supplementals are not always used just to meet the needs of unforeseen emergencies but also include funding for activities that could be covered in regular appropriation acts. According to a recent Congressional Research Service report, after the expiration of discretionary limits and PAYGO requirements at the end of fiscal year 2002, supplemental appropriations net of rescissions increased the budget deficit by almost 25 percent per year. On average, the use of supplemental appropriations for all purposes has grown almost 60 percent each year, increasing from about $17 billion in fiscal year 2000 to about $160 billion in fiscal year 2005. Constraining emergency appropriations to those that are necessary (not merely useful or beneficial), sudden, urgent, unforeseen, and not permanent has been proposed in the past. The issue of what constitutes an emergency needs to be resolved and discipline exerted so that all appropriations for activities that are not true emergencies are considered during regular budget deliberations. We cannot grow our way out of our long-term fiscal challenge. We have to make tough choices and the sooner the better. A multi-pronged approach is necessary: (1) revise existing budget processes and financial reporting requirements, (2) restructure existing entitlement programs, (3) reexamine the base of discretionary and other spending, and (4) review and revise tax policy and enforcement programs. Everything must be on the table.
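The growth rate cited above can be checked with a simple compound-growth calculation; rising from about $17 billion to about $160 billion over the five years from fiscal year 2000 to fiscal year 2005 implies an annual growth rate in the high fifties, consistent with the "almost 60 percent" figure:

```python
# Back-of-the-envelope check of the supplemental appropriations growth cited
# above: from about $17 billion in FY2000 to about $160 billion in FY2005.

start, end, years = 17e9, 160e9, 5  # FY2000 -> FY2005 spans five annual steps

# Compound annual growth rate (CAGR): (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")  # roughly 57 percent a year
```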
Fundamentally, we need to undertake a top-to-bottom review of government activities to ensure their relevance and fit for the 21st century and their relative priority. Our report entitled 21st Century Challenges: Reexamining the Base of the Federal Government presents illustrative questions for policymakers to consider as they carry out their responsibilities. These questions look across major areas of the budget and federal operations, including discretionary and mandatory spending and tax policies and programs. We hope that this report, among other things, will be used by various congressional committees as they consider which areas of government need particular attention and reconsideration. The understanding and support of the American people will be critical in providing a foundation for action. The fiscal risks I have discussed, however, are a long-term problem whose full impact will not likely be felt for some time. At the same time, they are very real and time is currently working against us. The difficult but necessary choices we face will be facilitated if the public has the facts and comes to support serious and sustained action to address the nation’s fiscal challenges. That is why if an Entitlement and Tax Reform Commission is created to develop proposals to tackle our long-term fiscal imbalance, its charter may have to include educating the public as to the nature of the problem and the realistic solutions. While public education may be part of a Commission’s charge, we cannot wait for it to begin. As you may know, the Concord Coalition is leading a public education effort on this issue and I have been a regular participant. 
Along with Concord, the core group includes the Heritage Foundation, the Brookings Institution, and the Committee for Economic Development, but others are also actively supporting and participating in the effort—the state treasurers, auditors, and comptrollers; the American Institute of Certified Public Accountants; AARP; and the National Academy of Public Administration. I am pleased to take part in this national education and outreach effort to help the public understand the nature and magnitude of the long-term financial challenge facing this nation. This is important because while process reform can structure choices and help, broad understanding of the problem is also essential. After all, from a practical standpoint, the public needs to understand the nature and extent of our fiscal challenge before their elected representatives are likely to act. Thank you, Mr. Chairman. This concludes my prepared remarks. I would be happy to answer any questions you may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information on this testimony, please contact Susan J. Irving at (202) 512-9142 or [email protected]. Individuals making key contributions to this testimony include Christine Bonham, Assistant Director; Carlos Diz, Assistant General Counsel; and Melissa Wolf, Senior Analyst. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The nation's long-term fiscal outlook is daunting. While the budget process has not caused the problems we face, the absence of meaningful budget controls and other mechanisms has served to compound our fiscal challenge.
Conversely, a process that illuminates the looming fiscal pressures and provides appropriate incentives can at least help decision makers focus on the right questions. Meaningful budget controls and other mechanisms can also help to assure that difficult but necessary choices are made. The budget process needs to provide incentives and signals to address commitments the government has already made and better transparency for and controls on the long-term fiscal exposures being considered. Improvements would include the restoration of realistic discretionary caps; application of pay-as-you-go (PAYGO) discipline to both mandatory spending and revenue legislation; the use of "triggers" for some mandatory programs; and better reporting of fiscal exposures. Over the long term we face a large and growing structural deficit due primarily to known demographic trends and rising health care costs. Continuing on this imprudent and unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Our current path will also increasingly constrain our ability to address emerging and unexpected budgetary needs and increase the burdens that will be faced by our children, grandchildren, and future generations. The budget process itself cannot solve this problem, but it can help policymakers make tough but necessary choices. If citizens and government officials come to better understand various fiscal exposures and their implications for the future, they are more likely to insist on prudent policy choices today and sensible levels of fiscal risk in the future. We cannot grow our way out of our long-term fiscal challenge. We must make tough choices and the sooner the better. 
A multi-pronged approach is needed: (1) revise existing budget processes and financial reporting requirements, (2) restructure existing entitlement programs, (3) reexamine the base of discretionary and other spending, and (4) review and revise tax policy and enforcement programs—including tax expenditures. Everything must be on the table and a credible and effective Entitlement and Tax Reform Commission may be necessary. Fundamentally we need a top-to-bottom review of government activities to ensure their relevance and fit for the 21st century and their relative priority.
According to CMS documentation, the transition to value-based payment generally involves two major shifts from traditional fee-for-service payment. 1. Accountability for both quality and efficiency. Value-based payment models link payments to providers to the results of health care quality and efficiency measures. CMS uses a variety of measures to assess health care quality and efficiency and to hold physicians and other providers accountable for the health care they deliver. Quality measures include process and outcome measures. Process measures assess the extent to which providers effectively implement clinical practices (or treatments) that have been shown to result in high-quality or efficient care. Examples of process measures are those that measure care coordination, such as the percentage of patients with major depressive disorder whose medical records show that their physician is communicating with the patients’ other physicians who are treating comorbid conditions. Outcome measures track results of health care, such as mortality, infections, and patients’ experiences of that care. Efficiency measures may vary across models. For example, models may require that a minimum savings rate be achieved, which is established using a benchmark based on fee-for-service claims as well as other information such as patient characteristics, or that cost targets are achieved for various episodes of care. 2. Focus on population health management. Value-based payment models encourage physicians to focus on the overall health and well-being of their patients. Population health management includes provider activities such as coordination of patient care with other providers; identification and provision of care management strategies for patients at greatest risk, such as those with chronic conditions; promotion of health and wellness; tracking patient experience; and using health information technology (IT) to support population health.
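The minimum-savings-rate concept mentioned above—savings are shared only if actual spending comes in sufficiently below a claims-based benchmark—can be illustrated with hypothetical figures. CMS's actual benchmarking and risk-adjustment methodology is considerably more involved than this sketch:

```python
# Illustrative shared-savings test: savings count only if the savings rate
# (benchmark minus actual spending, as a share of the benchmark) meets or
# exceeds a minimum savings rate (MSR). All dollar figures are hypothetical.

def savings_rate(benchmark, actual):
    """Savings relative to the benchmark, as a fraction of the benchmark."""
    return (benchmark - actual) / benchmark

def meets_msr(benchmark, actual, msr):
    """True if the provider's savings rate clears the minimum savings rate."""
    return savings_rate(benchmark, actual) >= msr

benchmark = 100_000_000  # spending benchmark built from fee-for-service claims
msr = 0.03               # hypothetical 3 percent minimum savings rate

print(meets_msr(benchmark, actual=96_000_000, msr=msr))  # 4% savings -> True
print(meets_msr(benchmark, actual=98_000_000, msr=msr))  # 2% savings -> False
```

The MSR exists so that modest, possibly random year-to-year fluctuations in spending are not rewarded as if they were genuine efficiency gains.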
In value-based payment models, physicians and other providers are paid for the care of a beneficiary over a long period and are responsible and accountable for the quality and efficiency of the care provided. In contrast, Medicare fee-for-service payments to providers are tied only to volume, rewarding providers, for example, on the basis of the number of tests run, patients seen, or procedures performed, regardless of whether these services helped (or harmed) the patient. This shift in care delivery can require substantial investments by providers. For example, providers may need to invest in health IT to manage patients and record data necessary for quality and efficiency measurement and reporting. Providers may also need to hire additional staff to assist with population health management activities, such as care coordination. The CMS Innovation Center has developed and is testing a number of value-based payment models. The following are examples of Medicare value-based payment models in which physician practices can participate. These models are often referred to as alternative payment models. ACOs. As noted earlier, ACOs are groups of physicians—including independent physician practices—hospitals, and other health care providers who voluntarily work together to give coordinated care to the Medicare patients they serve. When an ACO succeeds in delivering high-quality care and spending health care dollars more efficiently, part of the savings generated goes to the ACO and part is kept by Medicare. ACOs participate in models with upside risk only or models with both upside and downside risk. Bundled payment models. Bundled payment models provide a “bundled” payment intended to cover the multiple services beneficiaries receive during an episode of care for certain health conditions, such as cardiac arrhythmia, hip fracture, and stroke.
If providers are able to treat patients with these conditions for less than the target bundled payment amount and can meet performance accountability standards, they can share in the resulting savings with Medicare. CMS’s initiative, Bundled Payments for Care Improvement (BPCI), tests four broadly defined models of care, under which organizations enter into payment arrangements that include financial and performance accountability for episodes of care. Comprehensive primary care models. Comprehensive primary care models are designed to strengthen primary care. CMS has collaborated with commercial and state health insurance plans to form the Comprehensive Primary Care (CPC) initiative. The CPC initiative provides participating primary care physician practices two forms of financial support: (1) a monthly non-visit-based care management payment and (2) the opportunity to share in any net savings to the Medicare program. In January 2017, CMS will build upon the CPC initiative, which ends December 31, 2016, by beginning CPC Plus, a comprehensive primary care model that includes downside risk. In November 2016, CMS published a final rule with comment period to implement a Quality Payment Program under MACRA, which established a new payment framework to encourage efficiency in the provision of health care and to reward health care providers for higher-quality care instead of a higher volume of care. The Quality Payment Program is based on eligible Medicare providers’ participation in one of two payment methods: (1) MIPS or (2) an advanced alternative payment model. Under MIPS, providers will be assigned a final score based on four performance categories: quality, cost, clinical practice improvement activities, and advancing care information through the meaningful use of EHR technology. This final score may be used to adjust providers’ Medicare payments positively or negatively. 
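The MIPS final score described above is, at bottom, a weighted composite of the category scores, with the cost category weighted at zero in the first performance year. The sketch below illustrates that structure; the weights and category scores shown are hypothetical, not a restatement of the final rule:

```python
# Illustrative MIPS-style composite: a weighted average of per-category
# scores, with cost weighted at zero in the first performance year.
# The weights and scores below are hypothetical, not CMS's published values.

def final_score(scores, weights):
    """Weighted composite of per-category scores (each on a 0-100 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(scores[cat] * w for cat, w in weights.items())

weights_year1 = {
    "quality": 0.60,
    "cost": 0.00,                      # not measured in the first year
    "improvement_activities": 0.15,
    "advancing_care_information": 0.25,
}
scores = {
    "quality": 80,
    "cost": 0,                          # ignored at zero weight
    "improvement_activities": 100,
    "advancing_care_information": 70,
}
print(final_score(scores, weights_year1))  # 0.6*80 + 0.15*100 + 0.25*70 = 80.5
```

The resulting score is then mapped to a positive, neutral, or negative payment adjustment, which is the mechanism by which MIPS ties Medicare payment to performance rather than volume.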
CMS will begin assessing providers’ performance in three of the four performance categories in 2017. Cost will not be measured in the first year. The first year that payments will be adjusted is 2019 (based on the 2017 performance year). Under the final rule, an alternative payment model will qualify as an advanced alternative payment model if it has downside risk, among other requirements. Providers with sufficient participation in advanced alternative payment models are excluded from MIPS and qualify to receive incentive payments beginning in 2019 (based on performance in 2017). Providers who participate in alternative payment models that do not include downside risk, such as some ACO models, will be included in MIPS. The final rule refers to these models as MIPS “alternative payment models.” To coincide with the final rule, CMS also issued a fact sheet with information on the supports available to providers participating in the Quality Payment Program. In the final rule, CMS stated that protection of small, independent practices was an important thematic objective and that in performance year 2017 many small practices will be excluded from the new MIPS requirements due to the low-volume threshold. CMS also stated that while it is not implementing “virtual groups” for 2017—which would allow small practices to be assessed as a group across the four MIPS performance categories—the agency looks forward to stakeholder engagement on how to structure and implement virtual groups in future years of the program. Further, CMS is reducing the number of clinical practice improvement activities that small and rural practices will have to conduct to receive full credit in this performance category in performance year 2017. CMS also announced in April 2016 that it intends to solicit and award multiple contracts to qualified contractors for MACRA quality improvement direct technical assistance. 
Direct technical assistance through this program will target providers in small group practices of 15 or fewer, and especially those in historically under-resourced areas, such as rural areas. CMS indicated that the purpose of the contracts is to provide a flexible and agile approach to customized direct technical assistance and support services to ensure success for providers who either participate in MIPS or want to transition to an alternative payment model, thereby easing the transition to a payment system based on performance and patient outcomes. In addition, CMS has been testing models aimed at helping small and rural providers participate in value-based payment models. For example, in 2016, CMS began the ACO Investment Model, which provides advanced up-front and monthly payments to providers so they can make important investments in their care coordination infrastructure. According to information on CMS’s website, the ACO Investment Model was developed in response to stakeholder concerns and available research suggesting that some providers lack access to the capital to invest in infrastructure that is necessary to successfully implement population care management. According to literature we reviewed and the 38 stakeholders we interviewed, small and rural physician practices face many challenges associated with deciding whether to participate, when to begin participating, or whether to continue participating in value-based payment models. We identified 14 challenges that can be classified into five key topic areas: (1) financial resources and risk management, (2) health IT and data, (3) population health management care delivery, (4) quality and efficiency performance measurement and reporting, and (5) effects of model participation and managing compliance with requirements. (See table 1.) These 14 challenges are discussed in detail in the sections that follow.
Small and rural practices need financial resources to make initial investments, such as those to make EHR systems interoperable, and need financial reserves or reinsurance to participate in models that have downside risk. Recouping investments may take years because the models must have a year of performance data, which then must be analyzed to determine any shared savings payment. Limited ability to take on financial risk because of having fewer financial resources/reserves compared with larger providers. Some stakeholders told us that small and rural practices have few financial resources and financial reserves. This limits their ability to take on the downside risk associated with some value-based payment models. In some value-based payment models, providers are financially responsible if their actual spending for treating Medicare beneficiaries exceeds the payment amount they receive from Medicare. In other models, a provider’s spending is compared to its historical spending, and if spending is higher than the historical benchmark, the provider has to repay a portion of the exceeded spending to Medicare. As a result, in order to participate, practices need either to have financial reserves to cover instances such as patients with unexpectedly costly medical events or to purchase reinsurance to cover such expenditures, according to some stakeholders we interviewed. Some stakeholders suggested that for reinsurance to help small and rural practices, it must be affordable, and the types of reinsurance currently available are costly. High costs of initial and ongoing investments needed for participation. Some stakeholders reported that significant investments are needed for participation in value-based payment models. Initial investments can cost practices thousands if not millions of dollars, and it can be difficult for small practices to pay for this out of their own pockets, according to some stakeholders. 
For example, one stakeholder told us that most small practices are on a month-to-month budget and have small profit margins. Some stakeholders told us that the cost of making EHR systems interoperable between providers can be high and often is the same regardless of practice size. A stakeholder from a physician practice told us that it cost about $20,000 for the group to connect two EHR systems, which would be the same cost for a small or large practice. Small practices have fewer physicians to spread these costs among. Additionally, some stakeholders reported that capital is needed to hire additional staff to help with the care coordination activities that are part of model participation. Difficulties with recovering investments in a timely manner. Small and rural practices often struggle with the amount of time it takes for them to recoup the investments they have made to participate in a model, according to some stakeholders we interviewed and literature we reviewed. After making initial investments, practices must wait for the completion and analysis of a performance year before they can receive a shared savings payment. Some stakeholders told us that it can take 2 or more years for this to occur. Furthermore, some stakeholders expressed concern about model sustainability and commented on the unpredictability of the models, which could affect physicians’ confidence in their ability to recuperate investments made if a model becomes obsolete or changes significantly. For example, at the beginning of calendar year 2017, CMS is making a significant change by replacing a 4-year-old model, the CPC initiative, with CPC Plus—a model in which practices must take on downside risk to participate. This change may prevent some small and rural practices from participating in the successor model, and consequently affect their ability to recoup the investments they made to participate in the CPC initiative.
Small and rural practices need to have access to data that is important for care management and cost control. Also, these practices need to hire and train staff, as well as develop experience using EHR systems and analyzing data needed for participation. Difficulties with data system interoperability and limited ability to access data outside the practices’ own systems. Some stakeholders reported that having access to other providers’ data through interoperable EHR systems is beneficial as it can provide information to help coordinate and determine the appropriate care for a patient; however, they also reported difficulties in constructing interoperable systems. One small physician practice stakeholder told us that the practice has had difficulties accessing the results of tests conducted in an outside lab because the lab scans rather than types the test results into its system. The stakeholder said that the practice is working with its EHR vendor to address the problem but that he suspected the vendor may be less concerned about the practice’s challenges because the practice is small. He stated that such challenges are common for many rural health care facilities. Separate from interoperability, some stakeholders also reported that providers and payers may not be willing to share information, such as claims and price data, that would aid analysis and help a practice manage patient care—such as tracking when patients visit specialists or fill prescriptions—as well as control costs. It may be especially challenging for small and rural physician practices to gain access to such data as they may not have the relationships with payers that larger practices may have, which is needed for data sharing. 
According to a publication from our literature review, physician practices reported that price data for services and supplies could be difficult to obtain, perhaps in part due to payer confidentiality and agreements with pharmaceutical and device companies regarding rebates or discounts. Difficulties with educating and training staff about EHR systems and the data entry, management, and analysis needed for participation. Some stakeholders reported that significant resources are needed for staff education and training to properly enter data required for model participation. These data are often needed for quality measurement associated with a specific value-based payment model, and physician practices need to ensure that staff have accurately and appropriately captured these data for patients to meet the model’s requirements. Additionally, some stakeholders stated that managing and analyzing data can be difficult and time-consuming, as small and rural practices often struggle with how to use their EHR systems to obtain data for analysis and timely decision making. For example, one stakeholder told us that practices often do not know how to use their EHR system to make a list of all patients with a certain disease, which could help the practice develop population health management strategies for that particular disease, among other activities. Further, another stakeholder told us that uniquely qualified staff are often needed to complete this work. Practices’ ability to manage care of their entire patient population is affected by patients’ geographic location and preferences, and this is especially true for rural physician practices whose patients may have to travel distances to receive regular wellness visits and seek specialists when recommended.
In addition, the transition to value-based care, which focuses on population health management, will require adjustment by some physician practices, such as rural practices, that are generally more experienced with a fee-for-service system, especially as the two systems may have incentives that are difficult to reconcile. Patient preferences and geographic location affect practices’ ability to implement population health management care delivery and account for total cost of care. Literature we reviewed and some stakeholders indicated that physician practices’ ability to succeed in value-based payment models can be hindered by the preference and location of patient populations. For example, one stakeholder stated that physicians may have difficulty getting patients to complete wellness visits or other activities necessary for them to stay healthy. This is especially relevant for rural physician practices, as some patients in rural areas may have to travel long distances for wellness care or care from specialists, which can influence how often they actually seek such care. If patients do not receive recommended care, this can affect the rural physician’s ability to effectively manage patients’ conditions. Patient behavior and location can also make it difficult for providers to control the total cost of patient care or know about all the costs. For example, one stakeholder said that under a bundled payment model, practices are responsible for costs during an entire episode of care, but practices cannot influence where the patient receives post-acute care, which could affect the total cost of patient care. Additionally, another stakeholder told us that it can be difficult to engage patients using technology. This ACO has tried to manage patients’ post-acute care by communicating with patients through a technology system. 
However, the effectiveness of the system has been limited because some patients do not want to use it, preferring to speak with their physician directly. Provider resistance to making adjustments needed for population health management care delivery. Small and rural physician practices are having difficulty adjusting to a value-based care system, which focuses on population health management, as opposed to being paid based on volume, according to some stakeholders. For example, because providers are paid for each service under Medicare fee-for-service, providers have an incentive to provide a high volume of services without consideration of the costs or value of such services. Rural practices have a larger percentage of their Medicare patients enrolled in fee-for-service compared to non-rural practices, which have a larger percentage of their Medicare patients enrolled in Medicare Advantage, the private plan alternative to Medicare fee-for-service. Therefore, rural practices may be more influenced than others by the incentives under Medicare fee-for-service. In contrast, under value-based payment models, population health is a major component that requires care coordination and consideration about whether certain services are necessary that might involve additional attention and time from physicians. According to a publication from our literature review, some practices experience conflicting incentives—to increase volume under their fee-for-service contracts while reducing costs under their risk-based contracts—and not knowing which patients will be included in the value-based payment model can also make managing care difficult. Additionally, some providers in small and rural practices may be concerned about relying on the care of the other providers over which they have little or no influence, according to some stakeholders.
One stakeholder we interviewed told us that this lack of trust in the ability of others to effectively coordinate and co-manage care spawns an unwillingness to enter into value-based payment models that require extensive care coordination across numerous providers to achieve shared savings. Value-based payment models require a full year of performance data, and the time lag between data submission and when a practice receives its performance report delays practices’ understanding of actions needed to improve care delivery and receive financial rewards. Further, the number and variation of quality measures required by Medicare and private payers are burdensome for small and rural practices, and practices with small patient populations face quality and efficiency measurement that may be more susceptible to being skewed by patients who require more care or more expensive care.

Difficulties with receiving timely performance feedback. Some stakeholders mentioned a variety of issues related to delays in performance assessments associated with value-based payment models. As noted previously, it takes a full year of performance in addition to the time it takes for data about that year to be analyzed before information is known about a physician practice’s performance within a model. According to some stakeholders, this time lag makes it difficult for the practices to efficiently identify the areas that are working well and those that need improvement. For example, one stakeholder told us that a physician may receive the results of his or her performance within a model in 2016 for care that was provided in 2014. This limits physicians’ ability to make meaningful and timely changes to the care they provide. Additionally, some stakeholders reported that practices may not understand how best to improve their performance due to the limited information they receive from CMS.

Misalignment of quality measures between various value-based payment models and payers.
Some stakeholders told us that physician practices can be overwhelmed and frustrated by the number of quality measures that they need to report on for participation in value-based payment models and that the measures used by Medicare value-based payment models are not well-aligned with those used by commercial payers. Even if payers have similar quality measures, there may be slight variations in their calculation, which makes reporting burdensome. One stakeholder who works within an ACO stated that there are 58 unique quality measures across all the payers he works with.

Performance measurement accuracy for practices with a small number of Medicare patients. Since small and rural physician practices often have fewer patients to measure, their performance may be more susceptible to being skewed by outliers, according to some stakeholders we interviewed. Even if these practices have only a few patients that require more comprehensive or expensive care, these few can disproportionately affect their performance negatively, and in turn the financial risk they bear, compared to practices with much larger patient populations. For at least one model type—ACOs—this challenge may be addressed by a requirement that an ACO have a minimum number of patients to participate, as well as by CMS adjusting the performance of some ACOs to account for their size. This patient size requirement and adjustment can help ensure statistical reliability when assessing an ACO’s performance against measures. However, some stakeholders told us that this requirement also has its challenges. For example, it can be particularly difficult for rural practices to find other practices to group with to meet this patient requirement. To participate in value-based payment models, small and rural physician practices may feel pressure to join with other practices.
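The small-panel skew described above can be sketched with a minimal numeric example. All figures here (panel sizes, a $5,000 routine cost, one $100,000 high-cost patient) are invented for illustration and are not drawn from GAO's data:

```python
# Hypothetical illustration: one very expensive patient shifts a small
# practice's average cost per patient far more than a large practice's,
# even though both panels contain exactly one such outlier.
def average_cost(routine_patients: int, routine_cost: float,
                 outlier_cost: float) -> float:
    """Mean annual cost per patient for a panel with one high-cost outlier."""
    total = routine_patients * routine_cost + outlier_cost
    return total / (routine_patients + 1)

small = average_cost(49, 5_000.0, 100_000.0)     # 50-patient rural panel
large = average_cost(4_999, 5_000.0, 100_000.0)  # 5,000-patient panel

print(f"small panel mean: ${small:,.0f}")  # well above the $5,000 routine cost
print(f"large panel mean: ${large:,.0f}")  # barely above the routine cost
```

Under these assumed numbers, the single outlier raises the small panel's mean by nearly 40 percent but the large panel's by well under 1 percent, which is the statistical-reliability concern behind the minimum-patient requirement for ACOs.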
Model participation may also mean that physician and other practice staff must take on additional administrative responsibilities to meet conditions of participation. Furthermore, practices must work to stay abreast of regulations and model requirements as the models evolve.

Difficulties with maintaining practice independence. Literature we reviewed and some stakeholders indicated that, in the movement toward value-based payment models, many small and rural practices feel pressure to join other practices or providers (such as a hospital or health system) to navigate these models even if the practices would prefer to remain independent.

Limited time of staff and physicians to complete administrative duties required for model participation. Some stakeholders reported that both physicians and practice staff had to juggle many administrative responsibilities as part of participating in value-based payment models, which may be especially challenging for small and rural practices that tend to have fewer staff. Administrative duties may conflict with time needed for patient care. For example, one stakeholder told us that physicians are often busy seeing patients throughout the day and are unable to complete administrative tasks, such as attending meetings. Small physician practices may have limited staff time to devote to other administrative duties, including completing required documentation or collecting and reporting data on quality measures needed for participation in value-based payment models. Practices that want to add staff may also face challenges, such as finding qualified staff that are experts within their field and that understand the requirements associated with value-based payment models.

Difficulties with understanding and managing compliance with the terms and conditions of waivers related to various fraud and abuse laws.
The Secretary of Health and Human Services is authorized to waive certain requirements as necessary to implement the Shared Savings Program to encourage the development of ACOs and to test innovative payment and service delivery models, such as BPCI. However, some stakeholders stated that understanding and navigating the terms and conditions of waivers can be difficult and overwhelming for practices to manage. This may be especially true for small and rural practices that have less time to develop the knowledge necessary to understand waiver options or the resources to hire assistance in doing so, such as legal counsel.

Difficulties with staying abreast of regulatory changes and managing compliance with multiple requirements of value-based payment models. Some stakeholders said that small and rural physician practices find it challenging to stay informed of and to incorporate regulation and requirement changes associated with value-based payment models. This may be due, in part, to small and rural practices often having fewer staff and resources to monitor changes.

We found that organizations that can help small and rural practices with challenges to participating in value-based payment models can be grouped into two categories: partner organizations and non-partner organizations. Partner organizations share in the financial risk associated with model participation and provide comprehensive services. Non-partner organizations do not share financial risk but provide specific services that can help mitigate certain challenges. However, not all small and rural physician practices have access to services provided by these organizations. Based on the 38 stakeholder interviews we conducted and the related documentation collected, we found that some organizations serve as partners to small and rural physician practices.
As partners, these organizations share in the financial risk associated with the models and provide comprehensive services that help with challenges in each of the five key topic areas affecting small and rural physician practices. Partner organizations can help with a variety of value-based payment models, including ACOs, comprehensive primary care models, and bundled payments. Certain partner organizations, known as awardee conveners, have binding agreements with CMS to assist providers with participation in BPCI, including helping them plan and implement care redesign strategies to improve the health care delivery structure. Other partner organizations may bring small and rural practices together to help form and facilitate an ACO. In this role, these partner organizations can help small and rural practices fulfill any requirements for an ACO to have a minimum number of patients and facilitate the reporting of performance measures as a larger group while still allowing practices to remain independent. This type of assistance can mitigate two of the challenges stakeholders have identified—performance measurement accuracy for practices with a small number of Medicare patients and maintaining practice independence. Depending on the arrangement between the practices and the partner organization, the partner organization may receive all or some of the savings generated by the ACO or bundled payment, as well as share in any financial losses incurred. For example, a partner organization stakeholder stated that the organization—which helps form ACOs—retained 40 percent of the shared savings, and the physician practices received the remaining 60 percent. Similarly, another partner organization stakeholder told us that the organization took on the entire share of any financial losses incurred and received a third of any gains.
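The two risk-sharing arrangements stakeholders described can be sketched numerically. The percentages (a 40/60 savings split; full loss absorption with one-third of gains) come from the stakeholder examples above, but the function names and the $1 million scenario are invented for illustration:

```python
# Sketch of two partner-organization risk-sharing arrangements.
def split_savings(shared_savings: float, partner_share: float) -> tuple:
    """Divide ACO shared savings between the partner organization and practices."""
    partner = shared_savings * partner_share
    return partner, shared_savings - partner

# Arrangement 1: partner retains 40 percent, practices receive 60 percent.
partner, practices = split_savings(1_000_000.0, 0.40)
print(f"partner: ${partner:,.0f}, practices: ${practices:,.0f}")

# Arrangement 2: partner absorbs all losses but takes only a third of gains.
def partner_outcome(net_result: float) -> float:
    return net_result if net_result < 0 else net_result / 3

print(partner_outcome(-250_000.0))  # partner bears the full loss
print(partner_outcome(300_000.0))   # partner takes one-third of the gain
```

As the text notes, under either arrangement a practice receives at most a portion of its shared savings, which can extend the time it takes the practice to realize financial gains.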
In some agreements, practices may receive different distributions of the financial savings based on their performance compared to set performance goals or to other practices in the group. In this type of arrangement with a partner organization, a practice will receive, at most, a portion of its shared savings, which could extend the time it takes practices to realize financial gains. See figure 1 for how sharing financial risk can mitigate a challenge faced by small and rural physician practices.

Comprehensive services provided by partner organizations can either directly or indirectly help to mitigate many of the participation challenges faced by small and rural physician practices. As a way of directly assisting, for example, partner organizations can aid small and rural physician practices with population health management by analyzing data to identify high-risk patients such as those with chronic conditions who need comprehensive care management. Conversely, one challenge identified for small and rural physician practices was their limited ability to take on financial risk because they have fewer financial reserves when compared to their larger counterparts. While partner organizations do not directly address that these practices have fewer financial reserves, they can indirectly assist by taking on part or all of the financial risk of model participation. A small physician practice stakeholder told us that without the services provided by a partner organization, the practice would not be able to participate in the model. While the services offered by partner organizations can vary, they generally include the following.

Provide or share resources. Partner organizations can support the cost of resources needed for model participation, such as health IT and care coordination resources, or help share resources across many practices to reduce costs for individual small and rural practices.
For example, an awardee convener stakeholder told us that the organization manages a care innovation center staffed with about 70 nurses who work with patients and providers to make appointments and coordinate services, among other population management activities. Another partner organization stakeholder told us that the organization had formed a pharmacy hub in which the pharmacist works directly with the practices on comprehensive medication management. Further, some stakeholders stated that partner organizations can help reduce the costs of EHR systems and data analytics for the practices by, for example, sharing the EHR system and data analytics staff across practices. One partner organization stakeholder told us that, in another type of arrangement, the partner organization provides up-front funding for technology and other resources in return for 40 percent of any shared savings generated by the ACO. This arrangement can be particularly helpful to small and rural practices that may not have a lot of capital to invest. See figure 2 for the challenges mitigated by partner organizations by providing or sharing resources.

Manage health IT systems and data. Partner organizations generally work with practices to enhance the interoperability of the practices’ data systems so that data can be shared and easily retrieved for analysis. For example, an awardee convener stakeholder told us that the organization had developed a way to connect providers’ EHR systems to its data system, as well as developed software that providers can use to more easily share data among themselves. Similarly, partner organizations can manage data and provide analytics. Some partner organization stakeholders stated that they conduct analysis and provide reports and data to physicians to help them with population management, such as identifying high-risk patients and practice improvement needs.
A partner organization stakeholder told us that the organization collects beneficiary level data from all payers—including those that the partner organization does not work with—to monitor quality improvements and identify where physicians missed opportunities to diagnose patients. See figure 3 for the challenges that are mitigated by partner organizations managing health IT systems and data.

Provide education and training related to population care management. Partner organizations can provide on-site training and mentoring for the practices’ staff related to population management care delivery. This can help small and rural physician practices transition their staff, who may be accustomed to being paid based on volume, to a value-based care system that focuses on population health management. It can also provide practices with tools on how to manage and engage patients, such as patients who are not accustomed to having regular wellness visits or using technology. For example, one partner organization stakeholder we interviewed said that the organization holds quality improvement workshops for physicians every quarter to work on implementing population health management activities, such as wellness visits. Another partner organization stakeholder said that the organization has practice transformation staff who spend about 4 hours each week working directly with each physician practice to implement a care management program. This stakeholder stated that it was important to provide physician practices with the tools, but it was just as important to provide in-practice support on how to use those tools and help to strengthen the practice. See figure 4 for the challenges that are mitigated by partner organizations providing education and training on population health management.

Provide population health management services.
Partner organizations can provide population health management activities, including identifying and tracking high-risk patients, scheduling wellness visits, and managing patients with chronic conditions. For example, an awardee convener stakeholder told us that the organization helps providers by checking on whether the patients have rides to their appointments, setting up patients’ appointments, and contacting other social services. Another partner organization stakeholder told us that the organization has care navigators, who work with physician practices to engage with patients and help those at high health risk, as well as patient care advocates, who identify patients with gaps in care or who need annual wellness visits. See figure 5 for the challenges that are mitigated by partner organizations providing population health management services.

Measure quality and efficiency performance. Partner organizations can conduct analyses and provide reports to physician practices to help them understand and track their performance. For example, some partner organization stakeholders we spoke with measured physician practice performance against a defined set of quality measures and compared practices with their peers. These reports can help physician practices identify opportunities for quality improvement and savings without waiting for performance feedback from CMS. For example, one partner organization stakeholder told us that the organization analyzes data at the patient and physician level looking for opportunities to help the physician practice gain efficiencies, as well as identify differences in quality among practices. This partner organization also uses the data to educate the physician practices about patient attribution and differences in quality.
According to another partner organization stakeholder, the analysis the organization conducts for their physician practices helps these practices manage the number and variety of performance measurements associated with value-based payment models. See figure 6 for the challenges that are mitigated by partner organizations helping physician practices measure their quality and efficiency performance.

Manage compliance with requirements of value-based payment models. Partner organizations can provide assistance with value-based payment model requirements, as small and rural physician practices may not be structured to handle this administration. For example, an awardee convener stakeholder stated that it liaises with CMS and prepares and submits all CMS-required documentation on behalf of providers. Another partner organization stakeholder stated that the organization’s legal counsel explains the various waivers relevant to the ACO, as well as the requirements of these waivers to providers in the ACO. See figure 7 for the challenges that are mitigated by partner organizations helping physician practices manage compliance with the rules and regulations of value-based payment models.

Based on the 38 stakeholder interviews we conducted and the related documentation collected, the other category of organizations we identified that help small and rural practices participate in value-based payment models are non-partner organizations. Non-partner organizations provide services that are generally not as comprehensive as partner organizations, and they do not share in the financial benefits or risks with the practices. The specific services they provide—primarily in the key topic areas of health IT and data, quality and efficiency performance measurement and reporting, and population health management care delivery—help with certain challenges. The source of funding for non-partners also varies.
For example, non-partner organizations might be hired by the practice itself or funded separately by government grants. The following are the types of non-partner organizations identified in our review and the types of services they can provide to small and rural physician practices.

Facilitator conveners. These organizations have arrangements with providers or awardee conveners to provide administrative and technical assistance to aid with participation in BPCI. Although facilitator conveners do not bear risk, they are similar to awardee conveners in that they can assist physician practices and other providers with quality measurement and performance activities. For example, a facilitator convener could help track quality measures for providers. They can also help physician practices transition toward population health management care delivery by providing education to physician practices through webinars, for example, and by helping providers develop processes to coordinate episodes of care across providers.

Health IT vendors. These technology companies are hired by physician practices to provide EHR systems, as well as data analytics software and services. Health IT vendors can assist practices with system interoperability challenges. For example, one health IT vendor stakeholder said that the vendor provides a connectivity engine so that physician practices’ EHR systems are interoperable with other providers and payers. Health IT vendors can also conduct analyses—such as using data to evaluate physician practices against performance measures to identify additional opportunities for improvement—or help develop population health management processes. Health IT vendors can help practices manage misalignment of quality measures between payers.
A health IT vendor stakeholder told us that the organization uses numerous codes within practices’ datasets to allow practices to produce reports for multiple payers whose quality measures do not align; however, the stakeholder added that this process is time intensive and could increase costs for the practices. Health IT vendors can also provide education and training for physician practices on best practices for EHR integration and optimization. A health IT vendor stakeholder told us that for small physician practices they generally provide EHR services; revenue and practice management services; and patient engagement services, which can include automatic check-in for patients, patient payment collection, and patient portals so practices can communicate electronically with patients.

Regional Extension Centers (REC). RECs provide on-the-ground technical assistance intended to support small and rural physician practices, among others, that lack the resources and expertise to select, implement, and maintain EHRs. According to Department of Health and Human Services’ (HHS) documentation, RECs stay involved with physician practices to provide consistent long-term support, even after the EHR system has been implemented. REC services include outreach and education on systems, EHR support (e.g., working with vendors, helping to choose a certified EHR system), and technical assistance in implementing health IT. Technical assistance in implementing health IT includes using it in a meaningful way to improve care, such as using systems to support quality improvement and population health management activities. Sixty-two RECs were funded through cooperative agreements by HHS’s National Learning Consortium. RECs include public and private universities and nonprofits.

Quality Innovation Network-Quality Improvement Organizations (QIN-QIO). QIN-QIOs work with small and rural physician practices, among others, to improve the quality of health care for targeted health conditions.
For example, if a QIN-QIO has an initiative related to a specific health condition, such as a heart condition, the QIN-QIO would help practices improve clinical quality measures for patients with this condition, such as measures for blood pressure, cholesterol, and smoking cessation. The assistance provided and work performed by QIN-QIOs can vary greatly. A QIN-QIO stakeholder we interviewed told us that the QIN-QIO helps providers learn how to produce a quality report, how to interpret quality measures, and how to improve those measures, as well as educates providers on various requirements of value-based payment models. Other activities the network performs include educating physician practices on how to capture and understand EHR data since, according to this same stakeholder, small and rural physician practices often struggle with proper documentation for quality and performance management. The 14 QIN-QIOs each cover a region of two to six states and are awarded contracts from CMS.

Practice Transformation Networks (PTN). PTNs are learning networks designed to coach, mentor, and assist clinicians in developing core competencies specific to population health management to prepare those providers that are not yet enrolled in value-based payment models. According to CMS officials, PTNs work with physician practice leadership to assist with patient engagement, use data to drive transformation of care toward population health management, and develop a comprehensive quality improvement strategy with set goals. The degree of help provided by the PTN depends on how far along the physician practice is in transforming to value-based care, according to CMS officials. PTNs provide technical assistance to physician practices on topics such as how to use data to manage care and move toward population health management.
For example, a PTN stakeholder told us that the PTN makes sure the physician practice creates a registry to track high-risk patients and then uses the registry to perform outreach to patients to initiate follow-up care appointments. Similarly, PTNs can help ensure that practices use a referral tracking system, such as a system to determine whether a patient that a practice referred for a mammogram actually had the mammogram. PTNs can also provide other educational resources such as live question-and-answer chat sessions, peer-to-peer webinars, and computer modules that cover topics including quality improvement and patient engagement. The 29 PTNs receive funding through CMS grants and are part of CMS’s Transforming Clinical Practice Initiative. The PTNs include public and private universities, health care systems, and group practices. The services of non-partner organizations could help assist with some challenges we identified for small and rural practices. (See fig. 8.)

Although we found that organizations can assist with many of the challenges identified for small and rural practices, not all such practices can access these services for a variety of reasons. First, some stakeholders we interviewed said that small or rural physician practices do not necessarily have access to an organization, such as an organization that forms ACOs. For example, some ACO stakeholders told us that they used criteria to determine which physician practices they would reach out to for inclusion in the ACO. One ACO stakeholder stated that the organization analyzes public data to identify the physician practices that look like good candidates for population health management and then talks to the practices about a possible partnership. Therefore, some small or rural physician practices struggling with changes needed to deliver population health management may not be contacted by an organization that forms ACOs.
Second, we heard from some stakeholders that the limited resources of many small and rural physician practices may hinder their access to services provided by organizations. For example, small and rural physician practices may not have the financial resources to hire organizations that could assist them with participation, such as health IT vendors. Also, according to some stakeholders, organizations’ ability to assist practices is hindered when the practices struggle to make the initial investments needed to participate, such as hiring new staff or developing necessary data systems. Last, even if practices have access to an organization, that organization may not offer the services that the practice needs since the services offered can vary by organization. For example, not all partner organizations that form ACOs have access to and use other payers’ data to aid in the management of patient care. When we asked one partner organization stakeholder how the organization received access to data, the stakeholder stated that it was because of long-standing relationships it had with payers. Other partners that form ACOs may not be able to provide similar data to share. Additionally, according to CMS officials, each facilitator convener and awardee convener has discretion in the services it provides, and the services can vary, as can the services provided by CMS and HHS grantees—RECs, QIN-QIOs, and PTNs.

We provided a draft of this report to CMS for comment. CMS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and the CMS administrator. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the contact named above, Greg Giusto, Assistant Director; Christie Enders, Analyst-in-Charge; Deirdre Gleeson Brown, Analyst-in-Charge; and Samantha Pawlak made key contributions to this report. Also contributing were George Bogart, Beth Morrison, and Vikki Porter.

Based on a review of literature and interviews with 38 stakeholders, GAO identified challenges faced by small and rural physician practices when participating in Medicare's new payment models. These models, known as value-based payment models, are intended to reward health care providers for resource use and quality, rather than volume, of services. The challenges identified are in five key topic areas.
IRS does not have internal controls over its financial reporting process adequate to provide reasonable assurance that its principal financial statements are reliable. As a result, IRS (1) was unable to prepare reliable statements of net cost, changes in net position, budgetary resources, and financing and (2) could not support material amounts reported on its balance sheet, including fund balance with Treasury, accounts payable, and net position. In addition, we found that property and equipment is likely to be materially understated. We found that (1) the custodial and administrative general ledger systems which support the principal financial statements are not in conformance with the U.S. Government Standard General Ledger (SGL) at the transaction level and do not provide a complete audit trail for recorded transactions, (2) material balances reported on IRS’ principal financial statements are not supported by detailed subsidiary records, and (3) IRS’ principal financial statements are not subject to management oversight adequate to provide reasonable assurance that significant errors and omissions are identified and corrected before the principal financial statements are issued. In an effort to overcome these deficiencies, IRS employs a costly, labor intensive, and time-consuming process involving extensive and complex analysis and ad hoc procedures to assist in preparing its principal financial statements. IRS continues to utilize specialized computer programs to extract information from databases underlying the administrative and custodial general ledgers to derive and/or support amounts to be reported in the principal financial statements. For example, IRS must use this process to identify the portion of its unpaid assessments that represent taxes receivable for financial reporting purposes.
However, as in fiscal year 1997, the amounts produced by this approach needed material audit adjustments totaling tens of billions of dollars to produce reliable balances for custodial activities. With respect to IRS’ administrative activities, this approach was unsuccessful in producing reliable balances. In addition, IRS’ basic approach was designed specifically for the narrowly defined purpose of preparing auditable balances at year-end only. This mechanism is not capable of producing reliable agencywide principal financial statements or financial performance information to measure results throughout the year as a management tool, which is standard practice in private industry and some federal entities. We also found that IRS’ previously separate financial reporting processes for its custodial and administrative activities have not been integrated under unified supervision at the operational level. This unnecessarily complicates IRS’ year-end financial reporting process and hampers efforts to provide interim IRS-wide financial information as a management tool. IRS’ complex and often manual financial reporting process requires extensive technical computer and accounting expertise and is highly vulnerable to human error. It is therefore critical that this process be adequately staffed and supervised and be subject to adequate management oversight at each stage as balances and disclosures are developed. However, IRS’ financial reporting process often lacked these basic controls. For example, during fiscal year 1998, key personnel with responsibilities for financial systems and reporting on IRS’ administrative activities left IRS and had not been replaced by year-end. Consequently, IRS was compelled to attempt to prepare its financial statements without the necessary staff. This occurred at the same time as the implementation of new federal accounting and reporting requirements that required IRS to prepare four new financial statements. 
In addition, throughout the process, we found numerous errors and omissions in financial reporting documentation as well as in the draft financial statements themselves, which likely would have been caught and corrected had these records been appropriately reviewed by management. In our previous audit, we reported that IRS’ custodial financial management systems did not substantially comply with Federal Financial Management Systems Requirements (FFMSR), federal accounting standards, and the SGL at the transaction level, which are the core requirements of FFMIA. During fiscal year 1998, we found that this condition continued and that IRS’ administrative financial management systems also had significant problems. IRS (1) cannot reliably prepare four of the six principal financial statements required by the Office of Management and Budget, which prescribes the form and content of federal financial statements, (2) does not have a general ledger(s) that conforms to the SGL, (3) lacks a subsidiary ledger for its unpaid assessments, accounts payable, and undelivered orders, and (4) lacks an effective audit trail from its general ledgers back to subsidiary detailed records and transaction source documents. In addition, IRS does not consistently capture costs as required by federal accounting standards to permit it to (1) routinely prepare reliable cost-based performance measures for inclusion in the management discussion and analysis that accompanies its principal financial statements or (2) prepare the information to be included in its annual performance plan as required by the Government Performance and Results Act (GPRA) of 1993. This deficiency also renders IRS unable to include reliable cost-based performance information in its budget submission to Congress. 
As we have previously reported, IRS does not have a subsidiary ledger which tracks and accumulates unpaid assessments and their status on an ongoing basis, the absence of which adversely affects its ability to effectively manage and accurately report unpaid assessments. To compensate for this, IRS runs computer programs against its master files—the only detailed record of taxpayer information it maintains—to identify, extract, and classify the universe of unpaid assessments for financial reporting purposes. However, this approach is only designed for the limited purpose of allowing IRS to report auditable financial statement totals at year-end and is not an adequate substitute for a reliable subsidiary ledger which provides an accurate outstanding balance for each taxpayer on an ongoing basis. Additionally, this approach still resulted in the need for tens of billions of dollars of audit adjustments to IRS’ principal financial statements to correct duplicate or otherwise misstated unpaid assessment balances identified by our testing. Without the information an effective subsidiary ledger should provide, IRS cannot ensure that payments and assessments are promptly posted to the appropriate taxpayer accounts. We found in our statistical sample of unpaid assessments that this problem resulted in inaccurate taxpayer account balances and led IRS to pursue collection efforts against taxpayers who had already paid their taxes in full. In addition, in our sample we found that IRS inappropriately issued refunds to taxpayers with outstanding tax assessment balances. We previously reported that IRS had significant problems locating supporting documentation for unpaid assessment transactions. To address this issue, we worked closely with IRS and identified various forms of documentation to support these items, and we requested these documents in performing our fiscal year 1998 testing. 
While we did note some improvement, we continued to find that IRS experienced difficulties in providing supporting documentation. The lack of adequate supporting documentation made it difficult to assess the classification and collectibility of unpaid assessments reported in the principal financial statements as federal taxes receivable and may make it difficult for IRS to readily identify and focus collection efforts. As in prior years, we continued to find that IRS does not have sufficient preventive controls over refunds to reduce to an acceptable level the risk that inappropriate payments for tax refunds will be disbursed. Inappropriate refund payments continued to be issued in fiscal year 1998 due to (1) IRS comparing the information on tax returns and third party data such as W-2s (Wage and Tax Statement) too late to identify and correct discrepancies between these documents, (2) significant levels of invalid Earned Income Tax Credit (EITC) claims, and (3) deficiencies in controls that allowed duplicate refunds to be issued. We also found instances of erroneous refunds being issued as a result of errors or delays in posting assessments to taxpayer accounts. Errors and posting delays such as these impair IRS’ ability to effectively offset refunds due taxpayers against amounts owed by the same taxpayers on another account. Although IRS has detective (post-refund) controls in place, the lack of sufficient preventive controls exposes the government to potentially significant losses due to inappropriate disbursements for refunds. According to IRS’ records, IRS investigators identified over $17 million in alleged fraudulent refunds that had been disbursed during the first 9 months of calendar year 1998 and prevented the disbursement of an additional $65 million in alleged fraudulent refund claims. 
During calendar year 1997, IRS’ records indicate that intervention by IRS investigators prevented the disbursement of additional alleged fraudulent refund claims totaling over $1.5 billion. However, the full magnitude of invalid refunds disbursed by IRS is unknown. In addition, rates of invalid EITC claims have historically been high. During fiscal year 1998, IRS reported that it processed EITC claims totaling over $29 billion, including over $23 billion (79 percent) in refunds. In an effort to minimize losses due to invalid EITC claims, IRS electronically screens tax returns claiming EITC to identify those exhibiting characteristics considered indicative of potentially questionable claims based on past experience and then selects those claims considered most likely to be invalid for detailed examination. During fiscal year 1998, IRS examiners reviewed over 290,000 tax returns claiming $662 million in EITC of which $448 million (68 percent) was found to be invalid. These examinations are an important control mechanism for detecting questionable claims and providing a deterrent to future invalid claims. However, because examinations are often performed after any related refunds are disbursed, they cannot substitute for effective preventive controls designed to identify invalid claims before refunds are disbursed. In fiscal year 1998, IRS began implementing a 5-year EITC compliance initiative intended to expand customer service to increase taxpayers’ awareness of their rights and responsibilities related to EITC, strengthen enforcement of EITC requirements, and enhance research into the sources of EITC noncompliance. However, most of IRS’ efforts under that initiative had not progressed far enough at the time we completed our audit for us to make any judgment about their effectiveness. 
While we were able to substantiate the amounts of refunds disbursed as reported on IRS’ fiscal year 1998 principal financial statements, IRS nevertheless lacks effective preventive controls to minimize its vulnerability to payment of inappropriate refunds. Once an inappropriate refund has been disbursed, IRS is compelled to expend both the time and expense to attempt to recover it, with dubious prospects of success. As we have previously reported, IRS’ controls over cash, checks, and related hardcopy taxpayer data it manually receives from taxpayers are not adequate to reduce to an acceptably low level the risk that these payments will not be properly credited to taxpayer accounts and deposited in the Treasury or that proprietary taxpayer information will not be properly safeguarded. Strong physical security is critical to ensure that receipts are not lost or stolen or that sensitive taxpayer data are not compromised, and is thus critical to IRS’ customer service goals. However, we found that (1) unattended checks and tax returns were often stored in open and easily accessible areas, (2) hundreds of millions of dollars of receipts in the form of checks, and in one case cash, were transported from IRS field offices to financial institutions by unarmed couriers who often used unmarked civilian vehicles including, in one instance, a bicycle, and (3) individuals were hired and entrusted with access to cash, checks, and sensitive taxpayer data before completion of background or fingerprint checks. This problem is particularly acute during peak filing season when IRS typically hires thousands of temporary employees. IRS’ investigations of 80 thefts at service centers between January 1995 and July 1997 found that 15 percent of these were committed by individuals who had previous arrest records or convictions that were not identified prior to their employment. 
At commercial lockbox banks IRS contracts with to process tax receipts, we found similar weaknesses, including the use of unarmed couriers and the hiring of temporary employees before background checks are completed. In fiscal years 1997 and 1998, IRS identified 56 actual or alleged cases of employee theft of receipts at IRS field offices and lockbox banks totaling about $1 million. An additional 100 cases were opened during the period in which the amount potentially stolen was not quantified. Further, the magnitude of thefts not identified by IRS is unknown. The weaknesses we identified also expose taxpayers to increased risk of losses due to financial crimes committed by individuals who inappropriately gain access to confidential information entrusted to IRS. For example, this information — which includes names, addresses, social security and bank account numbers, and details of financial holdings — may be used to commit identity fraud. Although receipts and taxpayer information will always be vulnerable to theft, IRS has a responsibility to protect the government and taxpayers from such losses. Throughout fiscal year 1998, IRS did not reconcile its administrative fund balance with Treasury accounts. Such reconciliations are required by Treasury policy and are analogous to companies or individuals reconciling their checkbooks to monthly bank statements. When, in January 1999, IRS’ contractor provided what it considered to be reconciliations of IRS’ Treasury fund balance for the 12 months of fiscal year 1998, we found that the Treasury and IRS balances shown on the reconciliations did not agree with Treasury and IRS records, and that reconciling items listed on the reconciliations were not investigated and resolved. Similarly, IRS has not been investigating and resolving amounts in its administrative suspense accounts. 
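The reconciliation Treasury policy requires works like balancing a checkbook: the agency's ledger balance and Treasury's reported balance should agree once known timing differences (reconciling items) are accounted for, and anything left over must be investigated and resolved. A minimal sketch of that arithmetic, using entirely hypothetical amounts (none of these figures come from IRS or Treasury records):

```python
# Checkbook-style fund-balance reconciliation (all amounts hypothetical).
# A reconciling item is a known timing difference, e.g., a deposit in
# transit or a payment Treasury recorded before the agency ledger did.

def reconcile(ledger_balance, treasury_balance, reconciling_items):
    """Return the unexplained difference between the two balances after
    applying known reconciling items; a nonzero result means further
    investigation is needed before the fund balance can be considered
    reliable."""
    explained = sum(amount for _description, amount in reconciling_items)
    return treasury_balance - ledger_balance - explained

# Hypothetical month-end figures, in dollars.
items = [
    ("deposit in transit, not yet credited by Treasury", -4_000),
    ("payment recorded by Treasury, not yet in the ledger", 22_500),
]
difference = reconcile(1_250_000, 1_268_500, items)
print(f"Unexplained difference: ${difference:,}")  # → Unexplained difference: $0
```

The control only works when the follow-up happens: every reconciling item and any residual difference must be traced to source records and resolved, which is precisely the step the contractor-provided reconciliations omitted.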
As of September 30, 1998, IRS had items totaling a net credit balance of over $100 million in its fund balance with Treasury suspense account, including some items dating back to 1989 appropriations. The lack of timely, thorough reconciliations makes it difficult if not impossible for IRS to determine if operating funds have been properly spent or if reported amounts for operating expenses, assets, and liabilities are reliable. Without performing such reconciliations, IRS has no assurance that its fund balance with Treasury is accurate. The lack of appropriate reconciliations also impacts IRS’ ability to ensure that it complies with the law governing the use of its budget authority. Because this fundamental internal control was not followed, we were unable to conclude whether IRS’ fund balance with Treasury account was reliable at September 30, 1998. Additionally, we were unable to test to determine whether IRS had complied with the Anti-Deficiency Act, as amended. As we have reported in prior year audits, IRS’ controls over its property and equipment (P&E) records are not adequate to ensure that these records provide a complete and reliable record of P&E assets. Without current and accurate records, IRS cannot ensure that the P&E items it owns are not lost or stolen, that new purchases of equipment are appropriately capitalized in its accounting records, or that related principal financial statement balances are reliable. IRS does not have policies and procedures in place to ensure that material P&E are recorded in IRS’ financial statements. For example, IRS’ computer systems information shows substantial funding available and used for computer systems, such as mainframe consolidation and a new receipts processing system. IRS’ computer systems information also shows evidence of contractor services related to design, plans, and specifications for computer hardware and software projects—costs required to be capitalized under federal accounting standards. 
Finally, IRS’ financial records show equipment-related expenses of $339 million in fiscal year 1998. Although this significant P&E activity occurred, only about $30 million was recognized as P&E additions in fiscal year 1998. We also saw evidence of substantial unrecorded capital expenditures in fiscal year 1997. These problems are compounded by IRS’ use of a $50,000 minimum financial statement cost capitalization threshold, which is permitted by Treasury policy. This amount far exceeds the cost of most of the P&E items IRS purchases and results in a material distortion of IRS’ reported P&E in its financial statements. Based on assets included in IRS’ property systems, we found that $1.2 billion, or 69 percent of IRS’ gross P&E, was not included as property and equipment in the financial statements because of the use of this threshold to capitalize P&E assets. Consequently, P&E balances are likely to be materially understated. In addition to the P&E completeness problem, IRS’ policies and procedures for recording P&E transactions impede its ability to reconcile the general ledger to related P&E subsidiary records. IRS’ field offices record individual property acquisitions and dispositions on site throughout the year. However, IRS’ accounting system expenses property purchases during the year, then records adjustments at year-end to reflect P&E dispositions and to move property purchases from expenses to P&E based on field office subsidiary records. As a result, IRS has no assurance that the amounts it records in its general ledger and underlying P&E subsidiary systems, respectively, are complete and agree with each other. IRS is compelled to manually adjust the general ledger at year-end to force it to agree with its P&E subsidiary records. However, the reliability of these subsidiary P&E records is highly questionable. 
In many cases, the items in the records that we selected for testing could not be located by IRS, including a Chevrolet Blazer motor vehicle and a laser printer costing over $300,000. Additionally, a significant number of items that we selected from the floor of IRS’ field offices were not included in IRS’ detailed property records. Physical inventories we observed being performed by IRS personnel at two IRS field offices produced similar results. We also found instances where different IRS field offices had recorded substantially identical items at significantly different costs. These discrepancies and reported problems reflect weaknesses in IRS property management controls that impair its ability to ensure that P&E are used only in accordance with IRS policy and that related records are accurate. It is important to note that IRS has itself reported deficiencies in its property management controls for the last 17 consecutive years. IRS places extensive reliance on its computer information systems to perform basic functions, such as processing tax returns, maintaining sensitive taxpayer data, calculating interest and penalties, and generating refunds. Consequently, weaknesses in controls over its computer information systems could render IRS unable to perform these vital functions or result in the unauthorized disclosure, modification, or destruction of taxpayer data. In December 1998, we reported that while significant weaknesses in computer information controls remain, IRS had made significant progress in improving its computer security. For example, IRS has centralized responsibility for its security and privacy issues in its Office of Systems Standards and Evaluation. This Office is implementing a servicewide security program to manage risk and has led IRS’ efforts in mitigating about 75 percent of the weaknesses identified in one of our previous reports. 
Serious weaknesses, however, continue to exist in (1) security program management, (2) access control, (3) application software development and change controls, (4) system software, (5) segregation of duties, and (6) service continuity. Continued weaknesses in these areas can allow unauthorized individuals access to critical hardware and software where they may intentionally or inadvertently add, alter, or delete sensitive data or programs. Such individuals can also obtain personal taxpayer information and use it to commit financial crimes in the taxpayers’ name (identity fraud), such as fraudulently establishing credit, running up debts, and taking over and depleting bank accounts. IRS continues to be plagued by serious internal control and systems deficiencies that hinder its ability to achieve lasting financial management improvements. IRS has acknowledged the issues and concerns identified in our fiscal year 1998 audit and the Commissioner and Deputy Commissioner of Operations have pledged their commitment to addressing these long-standing issues. IRS already has a number of initiatives underway to try to address continued weaknesses with respect to its unpaid assessments. Additionally, significant progress continues to be made on the serious computer security issues we have reported for several years. Most recently, IRS has established a corrective action team under the direction of the Chief Financial Officer to formulate a detailed plan for addressing the issues identified in our audit. IRS expects to complete the formulation of this plan by March 31, 1999. IRS also plans to bring in outside experts to assist its staff in resolving the issues relating to its administrative operations. IRS has stated that while its financial management systems were not designed to meet current systems and financial reporting standards, it is in the process of planning and implementing interim solutions until enhanced systems are available over the next several years. 
We have assisted IRS in formulating corrective actions to address its serious internal control and financial management issues by providing numerous recommendations over the years. We will continue to provide such assistance as necessary as IRS faces its significant financial and other management challenges. We recognize that IRS’ financial management systems were not designed to meet current systems and financial reporting standards, that these problems did not occur overnight, and that the task ahead of IRS to fully correct its systems-related deficiencies will take years to achieve. We do, however, believe that serious internal control issues can be addressed in the near term through a dedicated effort on the part of IRS management. We realize that IRS’ ability to successfully meet the financial management challenges it faces must be balanced with the competing demands placed on its resources by its customer service and tax law compliance responsibilities. However, it is critical that IRS rise to the challenges posed by these financial management issues, because its success in achieving all aspects of its strategic objectives depends in part upon reliable financial management information and effective internal controls. It is also important to recognize that several of the financial management issues we have raised in our financial audits directly or indirectly affect IRS’ ability to meet its customer service and tax law compliance responsibilities. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions. [Figure: components of unpaid assessments: Taxes Receivable - Collectible ($26), Taxes Receivable - Uncollectible ($55), Compliance Assessments ($22), Write-offs ($119)]
Pursuant to a congressional request, GAO discussed the results of its audit of the Internal Revenue Service's (IRS) fiscal year (FY) 1998 financial statements. GAO noted that: (1) serious internal control and financial management issues continue to plague the IRS; (2) pervasive weaknesses in the design and operation of IRS' financial management systems, accounting procedures, documentation, recordkeeping, and internal controls, including computer security controls, prevented IRS from reliably reporting on the results of its administrative activities; (3) in contrast, IRS was able to report reliably on the results of its custodial activities for FY 1998, including tax revenue received, tax refunds disbursed, and taxes receivable due from the public; (4) this was the second year GAO has been able to render an unqualified opinion with respect to IRS' financial reporting of its custodial activities; (5) this achievement, however, required extensive, costly, and time-consuming ad hoc procedures to overcome pervasive internal control and systems weaknesses; and (6) IRS' major accounting, reporting and internal control deficiencies include: (a) an inadequate financial reporting process that resulted in IRS' inability to reliably prepare several of the required principal financial statements, and financial management systems that do not 
comply with the requirements of the Federal Financial Management Improvement Act of 1996; (b) the lack of a subsidiary ledger to properly manage taxes receivable and other unpaid assessments, resulting in instances of both taxpayer burden and lost revenue to the government; (c) deficiencies in preventive controls over tax refunds that have permitted the disbursement of millions of dollars of fraudulent refunds; (d) vulnerabilities in controls over tax receipts and taxpayer data that increase the government's and taxpayers' risk of loss or inappropriate disclosure of sensitive taxpayer data; (e) a failure to reconcile its fund balance to Treasury records during FY 1998, and an inability to provide assurance that its budgetary resources are being properly accounted for, reported, and controlled; (f) the inability to properly safeguard or reliably report its property and equipment; and (g) vulnerabilities in computer security that may allow unauthorized individuals to access, alter, or abuse proprietary IRS programs and data, and taxpayer information. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Pension advances and pension investments are products that, while based on or related to pension benefits, are generally distinct from the pensions themselves. A pension advance is an up-front lump sum provided to a consumer in exchange for a certain number and dollar amount of the consumer’s future pension payments plus various fees. Pension investments, the related product, provide investors a future income stream when they make an up-front lump-sum investment in one or more pensioners’ incomes. Multiple parties can be involved in pension advance transactions, including consumers (pensioners), investors, and pension advance companies. After the pensioner signs the pension advance contract, the pension advance company gives the lump sum to the pensioner after deducting, if applicable, life-insurance premiums or other fees from the lump sum. Pension advance companies may also be involved in the related pension investment transaction. These companies can identify financing sources (investors) to provide the lump-sum monies to a specific pensioner or to multiple pensioners. The investor pays the lump-sum amount by depositing the funds into the bank or escrow account that was previously established. The investor receives periodic payments, such as on a monthly basis, over the agreed-upon period either from the pension advance company or through the escrow account. See figure 1 for an illustration of the parties that we identified as part of our June 2014 report in the multistep pension advance processes that we reviewed. Various state and federal laws could potentially apply to pension advances, depending on the structure of the product and transaction, among other things. For example, certain provisions that prohibit the assignment of benefits could apply to pension advances, depending on whether these advances involve directly transferring all or part of the pension benefit to a third party. 
In addition, potentially applicable state laws include each state’s consumer protection laws such as those governing Unfair and Deceptive Acts and Practices (UDAP) and usury laws that specify the maximum legal interest rate that can be charged on a loan. Depending on the overall structure of the products involved, state securities laws could also apply. Various state and federal agencies have oversight roles and responsibilities related to consumer and investor issues. CFPB, FTC, and SEC may have consumer and investor-related oversight roles related to pension advance transactions depending on a number of factors, including the structure of the pension advance product and transaction. Many other federal agencies may have pension oversight roles related to the pension itself depending on whether the pensioner was a private-sector or federal employee or a military veteran: EBSA, Treasury, and PBGC have oversight over private-sector pensions; OPM has oversight over federal civilian pensions; DOD has oversight over military pensions; and VA has oversight over a needs-based benefit program called a “pension.” States may also oversee and investigate pension advance transactions. As we describe later in this testimony, the state of New York worked with CFPB to file a lawsuit in August of 2015 against two of the firms that we referred to CFPB for review and investigative action. In June 2014, we reported on the number and characteristics of entities offering pension advances and the marketing practices that pension advance companies employ. During our review, we identified at least 38 companies that offered lump-sum advance products in exchange for pension payment streams. Eighteen of the 38 companies we identified were concentrated in one state and 17 of these 38 companies also offered lump-sum cash advances for a wide range of other income streams, in addition to pension advances, including lottery winnings, insurance settlements, and inheritances. 
Another 17 companies exclusively focused on offering pension advances. We also found that at least 30 out of 38 companies that we identified had a relationship or affiliation with each other, including working as a subsidiary or broker, or the companies were the same entity operating with more than one name. However, only 9 out of those 30 companies clearly disclosed these relationships to consumers on the companies’ websites. While affiliations among companies are not uncommon, the lack of transparency about whom a consumer is actually conducting business with can make it difficult for a dissatisfied pensioner to know whom to file a complaint against, or to research the reputability of the company before pursuing the business relationship. See figure 2 for an illustration of some of the relationships between companies that we identified during the June 2014 review. At least 34 out of 38 pension advance companies that we identified marketed and offered their services to customers nationwide, operating primarily as web-based companies and marketing through websites and other social-media outlets. Twenty-eight of the 38 companies that we identified used marketing materials or sales pitches designed to target consumers in need of cash to address an urgent need such as paying off credit-card debts, tuition costs, or medical bills, or appealed to consumers’ desire to have quick access to the cash value of the pension that they have earned. Eleven of the 38 companies that we identified used marketing materials or sales pitches designed to target consumers with poor or bad credit. These 11 companies encouraged those with poor credit to apply, stating that poor or bad credit was not a disqualifying factor. We also observed this type of marketing during our undercover investigative phone calls. 
For example, a representative from one company stated that the company uses a credit report to determine the maximum lump sum that it can provide to the pensioner, and stated that no application would likely be declined. Six pension advance companies provided our undercover investigator with quotes for pension advances with terms that did not compare favorably with other financial products such as loans and lump-sum payment options provided directly through private-sector pension plans. We compared the 99 offers provided to our undercover investigators by six pension advance companies in response to phone calls and online quote requests with those of other financial products. Specifically, we compared the terms with: (1) relevant state usury rates for loans and (2) lump-sum options offered through defined-benefit pension plans. As discussed below, we found that most of the six pension advance companies’ lump-sum offers (1) had effective interest rates that were significantly higher than equivalent regulated interest rates, and (2) were significantly smaller than the lump-sum amounts that would have to be offered in a private-sector pension plan that provided an equivalent lump-sum option. We determined that the effective interest rate for 97 out of 99 offers provided to our undercover investigator by six companies ranged from approximately 27 percent to 46 percent. Most of these interest rates were significantly higher than the legal limits set by some states on interest rates assessed for consumer credit, known as usury rates or usury ceilings. For example, in comparison to the usury rate for California of 12 percent, we determined that the quotes for lump-sum payments that our undercover investigator received from three pension advance companies for a resident of California had effective interest rates ranging from approximately 27 percent to 83 percent. 
The effective interest rates on some of these offers could be even higher than the rates we calculated to the extent some pension advance companies require the pensioner to purchase life insurance, and “collaterally assign” the life-insurance policy to the company, to protect the company in the event of the pensioner’s death during the term of the contract. For many of the quotes our undercover investigator received, it was unclear whether the pensioner would be responsible for any life-insurance premium payments. See table 1 for additional examples of usury-rate comparisons for states where our fictitious pensioners resided for our case studies. We compared pension advance offers that our undercover investigator received to lump-sum options that can be offered in pension plans, where a lump sum can be elected by plan participants in lieu of monthly pension payments. The amount of such a lump-sum option of a private-sector plan must comply with Employee Retirement Income Security Act of 1974 (ERISA) and Internal Revenue Code requirements that regulate the distribution of the present value of an annuity by defining a minimum benefit amount to be paid as a lump sum if the plan offers a lump-sum option and a private-sector pensioner chooses that option. We determined the minimum lump-sum amount under ERISA rules for private defined-benefit plan sponsors. On the basis of our analysis of 99 pension advances offered by six companies, we determined that the vast majority of the offers our undercover investigator received (97 out of 99) were for between approximately 46 and 55 percent of the minimum lump sum that would be required under ERISA regulations. This means that if these transactions were covered under ERISA regulations, the pensioners would receive about double the lump sum that they were offered by pension advance companies. 
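An effective interest rate of this kind is, in substance, an internal rate of return: the discount rate at which the monthly pension payments signed away are worth exactly the lump sum received. The sketch below illustrates both comparisons with a hypothetical contract and a hypothetical 5 percent regulated discount rate; GAO's actual methodology, and the ERISA calculation with its prescribed interest and mortality assumptions, are described in the June 2014 report and are more involved than this simplification:

```python
# Effective rate of a pension advance: solve for the monthly rate r with
#   lump_sum = payment * (1 - (1 + r) ** -n) / r
# (present value of n level monthly payments). All terms are hypothetical.

def present_value(payment, monthly_rate, n_months):
    if monthly_rate == 0:
        return payment * n_months
    return payment * (1 - (1 + monthly_rate) ** -n_months) / monthly_rate

def effective_annual_rate(lump_sum, payment, n_months):
    lo, hi = 1e-9, 1.0  # bracket for the monthly rate
    for _ in range(100):  # bisection works: PV strictly decreases in the rate
        mid = (lo + hi) / 2
        if present_value(payment, mid, n_months) > lump_sum:
            lo = mid  # PV still above the lump sum -> rate must be higher
        else:
            hi = mid
    return (1 + (lo + hi) / 2) ** 12 - 1

# Hypothetical offer: $20,000 today for $500/month of pension for 8 years.
rate = effective_annual_rate(20_000, 500, 96)
print(f"Effective annual rate: {rate:.1%}")  # roughly 30 percent

# The same payment stream discounted at a hypothetical regulated 5 percent
# annual rate is worth far more, illustrating how an advance can come to
# only about half of a regulated minimum lump sum.
regulated_pv = present_value(500, 1.05 ** (1 / 12) - 1, 96)
print(f"Value at 5% annual: ${regulated_pv:,.0f}")
print(f"Offer as a share of that value: {20_000 / regulated_pv:.0%}")
```

With these illustrative terms, the solved rate falls in the roughly 27 to 46 percent range reported above, and the offer comes to about half of the payment stream's value at the lower regulated rate, consistent with the 46 to 55 percent finding.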
Again, to the extent pension advance companies require the pensioner to pay for life insurance, the terms of the deal would be even more unfavorable than indicated by these lump-sum comparisons. Additional information on the basis for the ERISA calculations is included in our June 2014 report. In January 2015, we reported that pension plan participants potentially face a reduction in retirement income if they accept a lump sum offer. Since the time of our review, Treasury announced plans to amend regulations related to the use of lump-sum payments to replace lifetime income received by retirees under defined benefit pension plans. Specifically, these amendments generally would prohibit plans from replacing a pension currently being paid with a lump sum payment. As noted above, our June 2014 comparison observed that ERISA-regulated lump-sum payments from pension plan sponsors were considerably higher than the lump sum amounts offered by pension advance companies. In the future, pension advance offers may appear more appealing to some consumers who require money immediately and do not otherwise have the option to obtain an ERISA-regulated lump sum payment. Our June 2014 report identified questionable elements of pension advances, such as the lack of disclosure and unfavorable agreement terms. Whether certain disclosure laws apply to pension advance products depends partly on whether the product and its terms meet the definition of "credit" as set forth in the Truth in Lending Act (TILA), and whether pension advances are actually loans subject to TILA's requirements is a long-standing unsettled question. During our June 2014 review, we found that the costs of pension advances were not always clearly disclosed to the consumer and some companies were inconsistent about whether the product was actually a loan. For example, 31 out of the 38 companies we identified did not disclose to pensioners an effective interest rate or comparable terms on their websites.
For loans, under TILA, companies would be required to disclose an effective interest rate for the transaction. We also found that some of the offers provided to our undercover investigator by six pension advance companies were not clearly presented. Specifically, these companies provided a variety of offers based on differing numbers of years for the term as well as differing amounts of the monthly pension to be paid to the company. For example, one company provided a quote including 63 different offers with varying terms and monthly payment amounts to our fictitious federal pensioner. We considered this volume of information overwhelming, particularly because it did not include basic disclosures such as the effective interest rate or an explanation of the additional costs of life insurance. In addition, the full amount of additional fees such as life-insurance premiums was not always transparently disclosed in the written quotes that six pension advance companies provided to our undercover investigator. We also found that some of the 38 companies we reviewed were not consistent in identifying whether pension advances are loans. For example, while nine companies referred to these products as a loan or "pension loan" on their websites, six of these companies stated elsewhere on their websites that these products are not loans. During our review we found that there was limited federal oversight related to pension advances. Both CFPB and FTC are authorized to protect consumers and to regulate the types of financial and commercial practices that consumers should be protected against, some of which appear to be relevant to practices that we describe in our June 2014 report. However, at the time of our 2014 review, neither agency had undertaken any direct oversight or public enforcement actions regarding pension advances.
According to CFPB officials, they were concerned about the effect of pension advances on consumers, but stated that they had not taken an official position or issued any regulations regarding pension advance transactions or products, or taken any related enforcement actions. According to FTC officials, the agency had not taken any public law-enforcement action as they had not received many complaints regarding this issue. As noted in our 2014 report, conducting a review to identify whether some questionable practices—such as the ones highlighted in our report—are unfair or deceptive or are actually loans that should be subject to disclosure rules under TILA, and taking any necessary oversight or enforcement action, could help CFPB and FTC ensure that vulnerable pensioners are not harmed by companies trying to exploit them. Hence, we recommended that CFPB and FTC review pension advance practices and companies, and exercise oversight and enforcement as appropriate. CFPB agreed with this recommendation and took action by investigating pension advance companies with questionable business practices. We also referred the 38 companies that we identified in our review to CFPB for further review and investigative action, if warranted. In August 2015, CFPB filed suit against two of the companies included in our review for a variety of violations including, among others, unfair, deceptive, and abusive acts or practices in violation of the Consumer Financial Protection Act of 2010 and false and misleading advertising of loans. FTC also agreed with our recommendation and, according to FTC officials, the agency has also taken actions to review consumer complaints related to pension advances, pension advance advertising, and the pension advance industry overall. In our June 2014 report, we highlighted that consumer financial education can play a key role in helping consumers understand the advantages and disadvantages of financial products, such as pension advances. 
As we reported, it can be particularly important for older adults to be informed about potentially risky financial products, given that this population can be especially vulnerable to financial exploitation. The federal government plays a wide-ranging role in promoting financial literacy, with a number of agencies providing financial-education initiatives that seek to help consumers understand and choose among financial products and avoid fraudulent and abusive practices. CFPB plays a role in financial education, having been charged by statute to develop and implement initiatives to educate and empower consumers (in general) and specific target groups to make informed financial decisions. At the time of our 2014 review, we found that CFPB and four other agencies had taken some actions to provide consumer education on pension advances. However, several other federal agencies—including some that regularly communicate with pensioners as part of their mission—did not provide information about pension advance products and their associated risks and were not aware of CFPB publications at the time of our review. Also, these agencies reported that they had not identified many related complaints and some were just learning about pension advance products. We recommended that CFPB coordinate with the federal agencies that regularly communicate with pensioners on the dissemination of existing consumer-education materials on pension advances. CFPB agreed with this recommendation and released a consumer advisory about pension advances in March 2015. In addition, CFPB provided the Financial Literacy and Education Commission with material related to pension advances in April of 2015. Similarly, FTC—which educates consumers on consumer products and avoiding scams through multimedia resources— had not previously provided any specific consumer education about pension advances. 
However, in response to our review, in 2014, FTC also posted additional consumer-education information about pension advances on its agency website. In conclusion, some older Americans are both at greater risk of being in financial distress and of being financially exploited as they typically live off incomes below what they earned during their careers and assets that took a lifetime to accumulate. Some pension advance companies market their products as a quick and easy financial option that retirees may turn to when in financial distress from unexpected costly emergencies or when in need of immediate cash for other purposes. However, pension advances may come at a price that may not be well understood by retirees. As illustrated by examples in my statement and by related consumer complaints and lawsuits, the lack of transparency and disclosure about the terms and conditions of these transactions, and the questionable practices of some pension advance companies, could limit consumer knowledge in making informed decisions, put retirement security at risk, and make it more difficult for consumers to file complaints with federal agencies, if needed. CFPB and FTC have taken actions to implement the recommendations that we made to review pension advance practices and companies, and exercise oversight and enforcement as appropriate, as well as to disseminate consumer-education materials on pension advances. We believe their implementation of these recommendations will help to strengthen federal oversight or enforcement of pension advance products while ensuring that consumer-education materials on pension advances reach their target audiences, especially given that Treasury’s recent announcement restricting permitted benefit increases may make these products more desirable to pensioners. Chairman Collins, Ranking Member McCaskill, and Members of the Committee, this concludes my prepared remarks. I look forward to answering any questions that you may have at this time. 
For further information on this testimony, please contact Stephen Lord at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Latesha Love, Assistant Director; Gabrielle Fagan; John Ahern; and Nada Raoof. Also contributing to the report were Julia DiPonio, Charles Ford, Joseph Silvestri, and Frank Todisco. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Recent questions have been raised about companies attempting to take advantage of retirees using pension advances. In June 2014, GAO issued a report on pension advances. The report (1) described the number and characteristics of pension advance companies and marketing practices; (2) evaluated how pension advance terms compare with those of other products; and (3) evaluated the extent to which there is federal oversight. This testimony summarizes GAO's June 2014 report (GAO-14-420) and actions taken by CFPB and FTC in response to GAO's recommendations. In June 2014, GAO identified 38 pension advance companies and related marketing practices. GAO conducted a detailed nongeneralizable assessment of 19 of these companies. GAO used undercover investigative phone calls to identify additional marketing practices and obtain pension advance offers. This information was compared with the terms of other financial products, such as personal loans. GAO also examined the role of selected federal agencies with oversight of consumer protection and pension issues.
In a June 2014 report, GAO identified at least 38 companies that offered individuals lump-sum payments or “advances” in exchange for receiving part or all of their pension payment streams. The 38 companies used multistep pension advance processes that included various other parties. At least 21 of the 38 companies were affiliated with each other in ways that were not apparent to consumers. Some companies targeted financially vulnerable consumers with poor or bad credit nationwide. GAO undercover investigators received offers from 6 out of 19 pension advance companies. These offers did not compare favorably with other financial products or offerings, such as loans and lump-sum options through pension plans. For example, the effective interest rates on pension advances offered to GAO's investigators typically ranged from approximately 27 percent to 46 percent, which were at times close to two to three times higher than the legal limits set by the related states on the interest rates assessed for various types of personal credit. GAO identified questionable elements of pension advance transactions, including lack of disclosure of some rates or fees, and certain unfavorable terms of agreements. GAO recommended that the Bureau of Consumer Financial Protection (CFPB) and Federal Trade Commission (FTC)—the two agencies with oversight responsibility over certain acts and practices that may harm consumers—provide consumer education about these products, and that CFPB take appropriate action regarding identified questionable practices. Since the time of GAO's review, CFPB has investigated pension advance companies that GAO referred to the agency and disseminated additional consumer-education materials on pension advances. Similarly, FTC posted consumer education on pension advances on its website, and FTC officials report that they have reviewed consumer complaints related to pension advances, pension advance advertising, and the pension advance industry overall. 
CFPB's and FTC's actions are a positive step toward strengthening federal oversight or enforcement of pension advance products. In its June 2014 report, GAO recommended that CFPB and FTC review the pension advance practices identified in that report and exercise oversight or enforcement as appropriate. GAO also recommended that CFPB coordinate with relevant agencies to increase consumer education about pension advances. CFPB and FTC agreed with and have taken actions to address GAO's recommendations.
Rail transit is an important component of the nation’s transportation network, particularly in large metropolitan areas. Rail transit systems provide around 4.3 billion passenger trips annually. The five largest heavy rail systems carried 3.2 billion passengers in 2008, 90 percent of all heavy rail trips. The NYCT system surpassed all the other heavy rail systems by carrying almost 2.4 billion passengers—2.1 billion more than the next largest heavy rail system. Conversely, the five largest light rail systems are much smaller, collectively carrying 244 million passengers in 2008. The largest light rail system, operated by MBTA, carried 74 million passengers. Public transit is seen as an affordable mode of transportation and a means to alleviate roadway congestion and emissions. Increases in gasoline prices over the past decade also have resulted in higher ridership, which peaked in fall 2008. Although ridership declined in 2009 by about 4 percent, following the 2008 economic recession and a decrease in gasoline prices, transit ridership is expected to grow in years to come. Heavy and light rail transit systems have developed throughout the nation over the past 100 years. The oldest systems in cities such as Boston, New York, and Chicago, among others, were generally built by private companies which eventually went out of business, requiring the systems’ respective local governments to provide financial help to keep the systems operating. During the 1960s, Congress established a federal capital assistance program for mass transportation. With federal capital assistance, many other cities constructed rail transit systems, including heavy rail systems in Atlanta, San Francisco, and Washington, D.C. Heavy rail systems tend to be larger and carry many more passengers than light rail systems. 
While there are currently more than twice as many light rail systems as there are heavy rail systems, the heavy rail systems carry about seven times as many passengers and cover more than 50 percent more miles of track than light rail systems (see fig.1). The types of safety risks associated with each rail mode differ somewhat. For example, the higher volume of passengers, the higher speed of the trains, and the third rail on the track pose safety risks for heavy rail systems; the numerous interfaces between rail cars and vehicular traffic and pedestrians pose safety risks for light rail systems. Since the 1980s, newly constructed systems have been predominantly light rail systems. Rail transit systems are managed by public transit agencies accountable to their local government. However, rail transit agencies rely on a combination of local, state, and federal funds, in addition to system- generated revenues such as fares, to operate and maintain their systems. Some states and local governments provide a dedicated revenue source for transit, such as a percentage of the state or local sales tax, or issue bonds for public transportation. In 2008, about 57 percent of all funds for both operating expenses and capital investments were from local and state government. Other sources, such as farebox revenues, provided 26 percent. The federal government’s share was about 17 percent. Even though federal funding has predominantly been for capital investments, by 2008 local government replaced the federal government as the largest source of capital investment funds. However, in the past few years there have been decreases in the amounts of state and local funding available to transit agencies, especially for those agencies that depend on tax revenues, which have experienced decreases as a result of the general economic slowdown faced by the nation. As a result, many transit agencies have faced budget cutbacks. FTA uses many funding programs to support transit agencies. 
In particular, two FTA programs—the Urbanized Area Formula Program and the Fixed Guideway Modernization Program—provide funding that can be used by existing transit agencies in urbanized areas to modernize or improve their systems. Specifically, these funds can be used for purchasing and rehabilitating rail cars and preventive maintenance, among other things. In 2009, additional funds were made available through the American Recovery and Reinvestment Act (Recovery Act). Recovery Act funds are used primarily for capital projects, although some funds were made available for and have been used for operating expenses. In comparison with other modes of transportation, rail transit is relatively safe. For example, occupants of motor vehicles are more than 70 times more likely to die in accidents while traveling as are passengers of rail transit systems. However, several large rail transit agencies in recent years have had major accidents that resulted in fatalities, injuries, and significant property damage. NTSB has investigated a number of these accidents and has issued reports identifying the probable causes of and factors that contributed to them. Since 2004, NTSB has reported on eight rail transit accidents that, collectively, resulted in 13 fatalities, 297 injuries, and about $29 million in property damages. In five of these accident investigations, NTSB found the probable cause to involve employee errors, such as the failure of the train operator to comply with operating rules and of track inspectors to maintain an effective lookout for oncoming trains while working on the tracks. Of the remaining three accidents, NTSB found that problems with equipment were a probable cause of two accidents and that weaknesses in management of safety by the transit agency were a probable cause in all three accidents. 
In six of these investigations, NTSB reported that contributing factors involved deficiencies in safety management or oversight, such as weaknesses in transit agencies’ safety rules and procedures, lack of a safety culture within the transit agency, and lack of adequate oversight by the transit agency’s state safety oversight agency and FTA. See appendix I for further information on these accident investigations. Transit agencies are responsible for the operation, maintenance, and safety and security of their rail systems but are subject to a tiered state and federal safety oversight program. The Intermodal Surface Transportation Efficiency Act of 1991 mandated FTA to establish a State Safety Oversight Program for rail fixed guideway public transportation systems that are not subject to FRA regulation. Through this program, FTA monitors 27 state safety oversight agencies that oversee the safety of rail transit operations in 25 states, the District of Columbia, and Puerto Rico. While FTA has discretionary authority to investigate safety hazards at transit systems it funds, it does not have authority to directly oversee safety programs of rail transit agencies. FTA, however, does have the authority and responsibility for overseeing transit agencies’ workplace drug and alcohol testing programs. FTA also collects safety data, including data on types of accidents and causes, from the state safety oversight agencies and the transit agencies they oversee. Transit agencies provide safety data for FTA’s National Transit Database while the state safety oversight agencies provide safety data through annual reports to FTA. Under FTA regulations, state safety oversight agencies must develop a program standard that outlines transit agencies’ safety responsibilities. 
In particular, transit agencies are required to develop and implement safety programs that include, among other things, standards and processes for identifying safety concerns and hazards, and ensuring that they are addressed; a process to develop and ensure compliance with rules and procedures that have a safety impact; and a safety training and certification program for employees. Moreover, FTA requires state safety oversight agencies to perform safety audits of their transit agencies at least once every 3 years, investigate transit accidents, and ensure that deficiencies are corrected. FTA, however, does not fund state safety oversight agencies to carry out this work. Our earlier work found that many state safety oversight agencies lacked adequate staffing, employed varying practices, and applied FTA’s regulations differently. As noted earlier, FTA’s role in overseeing safety on rail transit systems is relatively limited, which is reflected in the number of staff that it employs to fill that role. FTA’s Office of Safety and Security has 15 to 17 staff members managing safety, security, and emergency management programs. They are supported by contractor staff. In December 2009, DOT proposed to Congress major changes in FTA’s role that would shift the balance of federal and state responsibilities for oversight of rail transit safety. DOT proposed the following: FTA, through legislation, would receive authority to establish and enforce minimum safety standards for rail transit systems not already regulated by FRA. A state may continue to have a state safety oversight program to oversee public transportation safety—by “opting in”—given that its program complies with the federal laws, regulations, and policies that FTA would implement if it receives expanded authority proposed in the legislation. DOT would provide federal assistance to states with FTA-approved state safety programs to enforce the federal minimum safety standards. 
Participating states could set more stringent safety standards if they chose to do so. In states that decided to “opt out” of participation or where FTA has found the program proposals inadequate, FTA would oversee compliance with and enforce federal safety regulations. Subsequently, during the 111th Congress, several bills including these changes were proposed. Instilling a safety culture agencywide is a challenge the largest transit agencies face that can impact their ability to ensure safe operations. The concept of safety culture can be defined in different ways and the level of safety culture in an organization can be difficult to measure. As we have previously reported, safety culture can include: organizational awareness of and commitment to the importance of safety, individual dedication and accountability for those engaged in any activity that has a bearing on safety in the workplace, and an environment in which employees can report safety events without fear of punishment. According to NTSB officials, in organizations with effective safety cultures, senior management demonstrates a commitment to safety and a concern for hazards that are shared by employees at all levels within the organization. Furthermore, such organizations have effective safety management systems that include appropriate safety rules and procedures, employee adherence to these rules and procedures, well- defined processes for identifying and addressing safety-related problems, and adequate safety training available for employees and management. FTA officials told us that it is difficult to define safety culture but noted that attributes of a strong safety culture include open communication about safety throughout the agency, nonpunitive safety reporting by employees, and the identification of safety trends based on agency- collected data. 
In addition, APTA officials told us that another attribute of safety culture is the accountability of individuals for how their actions and the actions of others affect safety. According to FTA, a strong safety culture can energize and motivate transit employees to improve safety performance. As we subsequently discuss, FTA currently has efforts underway that may more clearly communicate what a strong safety culture entails. All 12 of the rail transit experts we interviewed agreed that safety culture was important in helping transit agencies lower their accident rates. The experts we consulted offered several views about safety culture at large transit agencies. Seven experts noted that the extent of safety culture varies at large transit agencies across the country. Four experts stated that the extent of safety culture was generally low throughout the rail transit industry and needed to be improved. Some experts also noted that despite system differences, a major reason why certain systems have more or fewer incidents is the extent of safety culture present at the transit agency. One expert in particular said that all the other safety challenges transit agencies faced flow from safety culture issues. Some experts we interviewed identified the importance of training to help instill a safety culture at all levels of a transit agency. We have reported that training should support an agency’s goal of changing workplace culture to increase staff awareness of, commitment to, and involvement in safety. Thus, the challenge faced by the largest transit agencies in providing sufficient training for staff—discussed below—can increase the challenge of instilling a safety culture at those same agencies. FTA officials have identified the need to improve safety culture as a continuing problem for the transit industry as a whole, which requires changing behaviors and processes that have become engrained over decades of service. 
FTA has reported that, to get to the root of safety culture, transit agency management and employees need to understand the current state of their safety programs, how employees perceive management’s commitment to safety, how employees actively follow established safety rules and procedures and how they are held accountable for doing so, and how management monitors employees’ safety performance. FTA officials noted that limitations in transit agencies’ collection and analysis of safety data impede their ability to improve their safety culture, because these limitations affect their ability to identify and address safety hazards. Safety culture can have a significant impact on safety performance. In two of its reports on accidents since 2004, NTSB has noted that an inadequate safety culture contributed to the accidents. Probable causes in the accidents that the NTSB investigated included employee errors, such as failure to comply with operating rules, and inadequate safety management and oversight by transit agencies. Problems such as these may reflect a poor safety culture, as employees may not be motivated to follow operating rules and management may not be properly managing safety programs to ensure that hazards are identified and addressed. In its report on the 2008 accident on MBTA’s system that resulted in one fatality and eight injuries, NTSB found that the probable cause was the failure of the train operator to comply with a controlling signal resulting from an episode of micro-sleep, and noted an MBTA report of an internal audit that stated the success of any new safety plan was largely dependent on the safety culture that MBTA fostered within each agency department and work group. Additionally, NTSB cited this report as stating that MBTA management needed to define, understand, and integrate effective practices into day-to-day work activities to ensure that the safety of employees and passengers remained a top priority. 
In its report on CTA's 2006 derailment that resulted in 152 injuries, NTSB found that ineffective management and oversight of its track inspection and maintenance program was a probable cause. Specific problems included ineffective supervisory oversight of track inspections, lack of complete inspection records and follow-up to ensure defects were corrected, and insufficient training and qualification requirements for track inspectors. NTSB found that these identified problems were all part of a deficient safety culture that allowed the agency's track infrastructure to deteriorate to an unsafe condition. In its report on WMATA's June 2009 collision that resulted in nine fatalities and 52 injuries, NTSB identified the lack of an effective safety culture as a contributing factor to the accident. According to NTSB, shortcomings in WMATA's internal communications, recognition of hazards, assessment of risk from those hazards, and implementation of corrective actions were all evidence of an ineffective safety culture and were symptomatic of a general lack of importance assigned to safety management functions across the WMATA organization. NTSB made recommendations to WMATA to improve its safety culture. In response to NTSB's recommendations, WMATA is taking a number of actions, including: the development of procedures to ensure clear communication and distribution of safety-related information and the monthly review of data and trend analyses; the establishment of a safety hotline and email for employees to report safety concerns; an updated whistleblower policy to encourage employee participation and upper management review of identified safety concerns; an amended mission statement to reflect the agency's commitment to safety; and a newly formed committee of WMATA's Board of Directors that makes monthly recommendations on assuring safety at WMATA.
Some other transit agencies have also made efforts to increase the extent of safety culture present in their agencies. For example, officials from three transit agencies we spoke with stated that their transit agencies created and supported nonpunitive safety reporting programs such as whistleblower policies and anonymous tip hotlines to encourage employees to keep management aware of safety problems. One agency told us they have a close call reporting program. These programs can encourage employees to voluntarily and confidentially report close call incidents without fear of reprisal. We have previously reported that it is unlikely that employees would report safety events in organizations with punishment-oriented cultures in which employees are distrustful of management and each other. Blaming individuals for accidents not only fails to prevent accidents but also limits workers’ willingness to provide information about systemic problems. To promote reporting in such environments, systems can be designed with nonpunitive features to help alleviate employee concerns and encourage participation. In addition, some transit agencies we visited are reaching outside of the organization for support to further instill safety culture at their agencies. For example, officials at three transit agencies told us they had hired or planned to hire consultants to audit the system and make recommendations for improvements to increase the safety culture at all levels of the organization. According to APTA officials, the transit industry recognizes that labor organizations must be engaged in a visible partnership at all stages of safety culture development. In addition to instilling safety culture at transit agencies, maintaining an adequate level of skilled staff and ensuring that they receive needed safety training are also challenges the largest transit agencies face in ensuring safety. 
Staffing challenges involve recruiting and hiring qualified employees to fill positions with safety responsibilities—such as safety department staff, maintenance staff, track workers, and operation managers—and adequately planning for the loss of such staff through vacancies and retirements. For example, several transit agencies told us it has been difficult to hire maintenance employees with the necessary expertise and knowledge of both aging and new technology systems. Officials from two transit agencies noted the difficulty in hiring maintenance employees who have experience working with older electronic technology—some of which dates from the 1960s—and who are also knowledgeable about current computer technology. In addition, many transit agencies face an aging workforce and the potential for large numbers of upcoming retirements. For example, one transit agency we visited identified more than 50 percent of its staff as eligible for retirement within the next 5 years. FTA officials told us that staffing is a challenge facing transit agencies nationwide due to the large number of employees nearing retirement eligibility and the difficulty in retaining and replacing qualified employees. In addition, a recent APTA report found that the transit industry has an experienced but aging workforce, with a significant number of potential retirements expected in the next 10 years. The staffing challenge has been further exacerbated for transit agencies by recent budget cutbacks resulting from flat or decreased funding from state and local governments. Officials at six of the seven transit agencies we visited stated that their staffing levels have been or will be cut, including some safety staff at three of these agencies. For example, officials at one transit agency we visited stated that, due to its current budget shortfall, staffing levels would be reduced, including in the safety department, where 3 of 93 positions were cut.
In addition, at another transit agency we visited, one official said staffing levels are stretched to the point where it is difficult to conduct the rail car maintenance necessary to keep the system running. Training challenges for large transit agencies have included difficulties in ensuring that staff receive needed safety-related training—such as training in track safety, fire and evacuation, risk assessment, and the inspection and maintenance of track and equipment—due to financial constraints as well as the limited availability of technical training. Some experts identified ensuring adequate levels and frequency of training as key challenges for large transit agencies. Some cited training cuts as commonplace when budgets are cut, despite training’s importance and its link to safety. All the transit agencies we visited reported difficulty in having employees participate in safety training because their agencies cannot pay for the training, cannot cover employees’ positions while they attend it, or both. For example, some officials explained that if a train operator attends safety training, another operator must work an extra shift to cover for the operator attending training, and the transit agency pays overtime for that extra shift. Officials at some of the transit agencies we visited told us that these additional costs for training can be prohibitive. A recent APTA report identified safety training as well as supervisory and leadership training as top training needs for the industry. The large transit agencies we visited have different types of training programs available for their staff. For example, one transit agency has a large in-house training program that provides safety training and certification for its staff. Each department within the agency tracks employee training schedules, participation, and goals.
At another transit agency, officials explained that, while they do some of their training in-house, they rely to a great extent on on-the-job training. Officials from three transit agencies noted that the availability of apprenticeship programs and external technical training, such as training in how to inspect rail and signals, is limited. One transit agency official and one state safety oversight agency official mentioned that the transit industry often relies on on-the-job training. According to APTA officials, on-the-job training is a vital part of transit agencies’ training programs and can mitigate institutional knowledge loss as attrition occurs. However, they also noted that transit agencies often have not formalized their on-the-job training by documenting key elements to be covered and that this type of training is not carried out consistently among transit agencies. The transit agencies we visited also sent staff to training courses offered by DOT’s Transportation Safety Institute and FTA’s National Transit Institute. However, due to the high costs of traveling for training—including lodging and transportation costs—most of the transit agencies we visited cited difficulty in participating in such training opportunities. Transit agencies have attempted to find more cost-effective ways of addressing this problem. For example, officials from three transit agencies told us they have offered to host DOT and FTA training at their agencies to reduce the travel costs associated with staff attending safety training courses. Employees who have not had adequate safety-related training may be more likely to commit errors that can cause accidents.
For example, over a 3-year period, three transit riders fell onto the tracks at NYCT stations after defective platform edge boards broke under their weight. In a 2009 investigation of how NYCT inspectors identified and reported platform edge defects, the transit system’s Office of the Inspector General identified the lack of training on accurately and consistently identifying safety hazards at platform edges as a contributing factor in the accidents. The office recommended that NYCT provide intensive and continuing training for platform inspectors. In response, NYCT developed and implemented a training program in May 2009 on identifying platform edge defects for all station managers and supervisors. In addition, in five of the eight rail transit accident investigations conducted by the NTSB since 2004, employee errors, such as not following procedures, were identified as a probable cause of the accidents. According to one expert we interviewed, training can help prevent accidents by countering employee complacency and inattention with regard to safety rules and procedures. Some experts noted that attention to safety becomes more, not less, important as employees gain experience, as familiarity with the system leads some workers to drop their focus on safety. NTSB officials cited the importance of periodic refresher training for employees to ensure that staff maintain the skills needed to identify and resolve safety issues. Another benefit of adequate training is helping to prepare the transit workforce for pending retirements. Currently, no industry standards exist for what an adequate level of safety-related training should be for transit agency staff. According to APTA, the transit industry lacks a standard training curriculum for transit employees and, as a result, safety-related training at transit agencies lacks consistency and is not always of high quality. FTA officials have also identified a lack of consistent training throughout the transit industry.
According to one expert we interviewed, because of the lack of consistent training standards, the management of individual transit agencies has to determine on its own what safety training is needed for agency employees. According to NTSB officials, without minimum training requirements, the level of training available at each transit agency will vary, which can result in differing safety outcomes for each agency. Achieving a state of good repair is another challenge the largest transit agencies face, one that can affect their ability to ensure the safety of their heavy and light rail systems. In general, state of good repair is a term that transit officials use to refer to the condition of transit assets—for example, rail tracks, elevated and underground structures, rail cars, signals, ties, and cables (see fig. 2). In a study of the seven largest rail transit systems completed in 2009, FTA determined that more than a third of these agencies’ assets were in poor or marginal condition, indicating that they were near or had already surpassed their expected useful life. At six of the large transit agencies we visited, according to FTA estimates, the proportion of rail transit assets considered to be in poor or marginal condition ranged from zero percent, at LA Metro’s relatively new system, to 41 percent, at the much older and larger NYCT system. Efforts to achieve a state of good repair include maintaining, improving, rehabilitating, and replacing assets. Delaying some of these efforts can affect safety. Officials at one transit agency identified potential safety risks that could arise from delayed repairs, including worn tracks that could contribute to derailments, failures in the signal system that could allow collisions, and failures in the traction power cable that could cause fires in subway tunnels.
However, according to FTA and transit agency officials, transit agencies prioritize funding for state of good repair efforts to ensure that repairs important for safety are not delayed. All the transit systems we visited reported taking measures to ensure that their systems are safe in planning their state of good repair efforts. For example, one transit agency has reduced cleaning and other maintenance not critical for system safety as it continues to fund safety improvements. According to officials from this transit agency, less critical system safety items, such as escalator and elevator maintenance, have been put on a prolonged maintenance schedule. However, officials at this transit agency also stated that the agency had reached a point where further budget cuts would cause deterioration in system safety. In another example, one transit agency we visited has delayed the approximately $500 million replacement of subway fans, which would provide better ventilation, because the agency determined that this was not a high safety priority. Agencies have made efforts to maintain safe operation of their systems despite delays in addressing identified state of good repair maintenance or replacement needs. For example, officials at one transit agency we visited told us that they have implemented “slow zones” where trains run at lower speeds to help ensure safe operating conditions on aging track. In some cases, unaddressed poor asset conditions have contributed to accidents. For example, in its investigation of a 2006 derailment on the CTA system that injured 152 people, NTSB found that rail track problems that should have placed the tracks out of service were not identified and repaired, even though the problems were readily observable.
According to FTA officials, the transit industry has been slow to adopt asset management practices that would allow transit agencies to efficiently manage state of good repair needs. Officials noted that reasons for this slowness include the cost of development and implementation of asset management practices as well as the diversity of assets across and within transit systems. Transit asset management is a strategic approach for transit agencies to manage their transit assets and plan appropriately for rehabilitation and replacement. Asset management practices can help agencies decide how best to prioritize their investments, which can help ensure that safety needs are addressed. Such practices include tracking assets and their conditions and using this information to conduct long-term capital planning. However, no common standards for asset management practices exist and transit agencies use varying methods for determining the condition of their assets. A recent FTA study found that the use of these asset management practices at large transit agencies varied widely. Another component of asset management is the compilation of asset inventories by transit agencies. FTA defines an asset inventory as a current and comprehensive listing of all major assets used in the delivery of transit services, compiling the attributes of asset type, location, condition, age, and history, among other things. According to FTA, while some of the nation’s larger transit systems, among others, have developed asset inventories specifically to assist with capital planning purposes, not all have done so and currently no industry standard or preferred method for retaining asset inventory data exists. Furthermore, not all large transit agencies conduct comprehensive assessments of their asset conditions on a regular basis. Investments that transit agencies have made in previous years on state of good repair efforts have not kept pace with asset deterioration.
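FTA’s description of an asset inventory lends itself to a simple data model. The sketch below is purely illustrative: the field names, the 5-point condition scale, and the poor-or-marginal cutoff are assumptions for this example, not an FTA standard or any agency’s actual system.

```python
from dataclasses import dataclass, field

# Illustrative condition scale (assumption): 1 = poor ... 5 = excellent.
POOR_OR_MARGINAL = 2.5  # assets rated at or below this are flagged

@dataclass
class TransitAsset:
    asset_type: str        # e.g., "rail car", "track segment", "signal"
    location: str
    age_years: float
    useful_life_years: float
    condition: float       # 1 (poor) to 5 (excellent)
    history: list = field(default_factory=list)  # inspection/repair notes

    def exceeds_useful_life(self) -> bool:
        # An asset at or past its expected useful life is a candidate
        # for the state of good repair backlog.
        return self.age_years >= self.useful_life_years

def share_poor_or_marginal(inventory):
    """Fraction of assets in poor or marginal condition."""
    flagged = [a for a in inventory if a.condition <= POOR_OR_MARGINAL]
    return len(flagged) / len(inventory)

# Hypothetical inventory entries for illustration.
inventory = [
    TransitAsset("track segment", "Line A, mile 3", 45, 40, 2.0),
    TransitAsset("rail car", "Yard 1", 12, 30, 4.0),
    TransitAsset("signal", "Interlocking 7", 38, 35, 2.5),
]
print(share_poor_or_marginal(inventory))
```

Even a minimal inventory like this captures the attributes FTA names (type, location, condition, age, history) and supports the kind of condition summary FTA reported, such as the share of assets in poor or marginal condition.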
According to FTA’s 2009 study, an estimated $50 billion is needed to bring the seven largest rail transit systems into a state of good repair. FTA found that these agencies were investing $500 million less than the annual investment needed to prevent this state of good repair backlog from increasing. Based on FTA’s estimates, the proportion of these agencies’ assets exceeding their useful life would increase from 16 percent to more than 30 percent by 2028 if funding levels remain unchanged. The state of good repair backlog for six of the seven transit agencies that we visited varies, in part due to system characteristics such as age, size, and use of the system (see fig. 3). According to NTSB and FTA officials, having a large state of good repair backlog does not necessarily mean that a transit system is unsafe. NYCT has a considerably higher backlog than the other transit agencies we visited. For example, its backlog is more than five times that of CTA, which has the next largest backlog among the agencies we visited. The backlog for the five remaining transit agencies ranges from $5 million to about $5 billion. LA Metro’s state of good repair backlog is much smaller in comparison to the other transit agencies we visited, in part due to the young age of its heavy and light rail systems. These backlogs can be much larger than these agencies’ capital budgets. For example, the state of good repair backlog for NYCT is $27.31 billion while its 5-year capital budget is $12.32 billion. According to a 2010 FTA study of the transit industry as a whole, state of good repair investment backlogs are higher for heavy rail than light rail, reflecting the relatively young age of light rail assets in comparison to heavy rail assets. Recent budget cutbacks and budgetary shortfalls have negatively impacted transit agencies’ ability to sufficiently invest to prevent the worsening of their state of good repair backlogs and asset conditions.
All of the rail transit agencies we visited cited financial constraints as affecting their ability to achieve a state of good repair. FTA has various efforts underway that may help instill a more robust safety culture at transit agencies. Through the Transit Cooperative Research Program, FTA has recently begun a study on safety culture at transit agencies. Given the difficulty of defining safety culture, this effort has the potential to more clearly communicate what a strong safety culture at transit agencies entails. The project will look at the culture of the working environment in which serious accidents occur, elements of an effective safety culture in a transit agency, and best practices for transit organizations to implement an effective safety culture. DOT’s draft Strategic Plan also notes the importance of encouraging DOT, government partners, safety advocates, and industry leaders to adopt a strong and consistent safety culture that does not accept the inevitability of fatalities on the nation’s transportation systems. According to FTA officials, their safety guidance, outreach, and training provided by the National Transit Institute and Transportation Safety Institute have helped encourage transit agencies to discuss and examine institutional safety culture. An example of these efforts cited by FTA officials is an FTA-produced video, “A Knock at Your Door.” The video re-enacts fatal rail transit accidents to underscore the importance of safety procedures. FTA officials also mentioned that they have encouraged discussions about the importance of safety culture at roundtable meetings with transit agency management and other officials, teleconferences, and training classes. In addition, FTA has also sent letters to transit agencies following incidents to, among other things, bring incidents and safety culture trends to the attention of transit agency management.
FTA officials were uncertain how much transit agencies use such guidance and outreach, as well as what impact these efforts have on safety. FTA has distributed nearly 500 copies of its safety video to rail transit agencies, state safety oversight agencies, and others. More information on current and planned efforts by FTA to address safety culture challenges at transit agencies is available in appendix III. Proposed legislation would give FTA the authority to set and enforce safety standards, which could also strengthen transit agencies’ safety culture through increased oversight, in addition to assistance. If passed, this legislation would result in FTA receiving authority to directly regulate rail transit safety and, in cooperation with the states, to oversee and enforce rail transit systems’ compliance with these regulations. We testified in December 2009 that these changes in oversight would bring FTA’s authority more in line with that of some other modal administrations within DOT, such as FRA. Additionally, the DOT Secretary has testified that with such authority, FTA would become more proactive in setting safety thresholds that would result in greater consistency and uniformity across transit systems in the United States. In our testimony, we noted that providing FTA and participating states with such authority could help ensure compliance with standards and improved safety practices, and might prevent some accidents as a result. However, we also noted that Congress may need to consider a number of issues in deciding whether and how to implement such legislation. These include how best to balance federal versus state responsibilities, how to ensure that FTA has adequate qualified staff to carry out such a program, and what level of resources to devote to the program. In addition to these efforts, FTA has recently formed the Transit Rail Advisory Committee for Safety. 
The committee is expected to provide information, advice, and recommendations—including recommendations for instilling a safety culture at transit agencies—to the Secretary of Transportation and the FTA Administrator on all matters relating to the safety of U.S. public transportation systems and activities. Members of the committee include representatives with expertise in safety, transit operations, or maintenance; representatives of stakeholder interests that would be affected by transit safety requirements; persons with policy experience, leadership, or organizational skills; and regional representatives. The committee held its first meeting on September 9–10, 2010, and established two workgroups, one tasked with researching safety planning models for transit agencies and the other with identifying the best model for organizing a state safety oversight agency. Both workgroups were tasked with producing recommendations based on their work in May 2011. The safety planning model workgroup could help strengthen safety culture through its work to determine the best safety management system principles for transit agencies of any size to enhance rail transit safety, including policy practices, stakeholder relationships, and any desired changes to current law or regulations. NTSB officials, transit agency officials, experts we met with, and others have proposed that FTA take additional steps to help transit agencies address safety culture challenges. These have included: Develop nonpunitive safety reporting programs. As previously discussed, nonpunitive systems can alleviate employee concerns and encourage participation in safety reporting. Nonpunitive systems can include voluntary, anonymous reports by employees that are reviewed by an independent, external entity.
NTSB has recommended that FTA facilitate the development of nonpunitive safety reporting programs at all transit agencies that would collect safety reports and operations data from employees in all divisions. Safety department staff, representatives from operations, maintenance, and engineering departments, and representatives from labor organizations would regularly review these reports and share the results of those reviews across all divisions of their agencies. FRA is piloting a voluntary confidential reporting program for workers in the railroad industry consistent with NTSB’s recommendation and the Federal Aviation Administration has established such a program for air carrier employees, air traffic controllers, and others. FTA officials told us that identifying operating errors in a nonpunitive way is important and that they have begun research through the Transit Cooperative Research Program to examine ways to improve compliance with safety rules at transit agencies, including the use of nonpunitive reporting models. FTA plans to report on the results of this work by late 2011. Increase efforts to encourage a strong safety culture. In addition, APTA and some transit agency officials have called on FTA to do more to develop and share information on establishing a strong safety culture at transit agencies. One expert we met with noted that establishing and enforcing regulations will not necessarily bring about an improvement in safety culture in the rail transit industry. APTA officials and officials at one large transit agency noted that FRA pilot projects aimed at addressing accidents caused by human error and identifying ways to better manage safety have helped encourage a strong safety culture in the freight railroad industry and that FTA could foster positive changes in safety culture in the rail transit industry through such methods.
While FTA has various efforts underway to instill safety culture at transit agencies, these do not include pilot projects to evaluate or test safety culture concepts and ideas. FTA has provided some assistance to help transit agencies address staffing challenges, but its safety-related assistance has focused primarily on providing training. FTA has reported that it has a compelling interest in transit workforce development given its large investment in and oversight of transit. FTA has supported research on transit workforce challenges— including recruitment and retirement issues—through its Transit Cooperative Research Program. FTA’s Southern California Regional Transit Training Consortium has worked to establish a model mentor/internship program that can be used by transit agencies of any size. These programs run in conjunction with local community colleges, where a primary objective is to introduce students to transit work, particularly maintenance and other support. Ultimately, this program allows transit agencies to hire from a greater pool of transit-trained interns. FTA’s fiscal year 2011 budget request also described a proposed effort to design programs to help transit agencies build and develop a workforce with sufficient skills to fill transit jobs of the future. These efforts can help transit agencies recruit and hire qualified employees and address staffing challenges involving an aging workforce. To help address transit agency safety training challenges, FTA has provided funding to support a variety of training classes. Through programs managed by the National Transit Institute and the Transportation Safety Institute, FTA has supported training for transit agency employees. Both of these organizations offer safety classes attended by transit agency employees, as well as by state safety oversight agency staff. 
To avoid duplication, the National Transit Institute focuses on training for frontline employees, such as track workers and operators, while the Transportation Safety Institute provides classes for supervisory and management personnel. Classes have included current rail system safety principles and online fatigue awareness. In fiscal year 2010, the National Transit Institute and the Transportation Safety Institute held 220 training sessions related to safety and more than 6,700 transit agency staff took part in this training. FTA has also provided specialized training aimed at transit agencies that have experienced recent safety incidents. For example, FTA recently concluded training on rail incident investigation and system safety for WMATA staff. In all, FTA has delivered seven courses to assist WMATA staff in receiving critical safety training. In another example, through the Transit Technology Career Ladder Partnership Program, FTA has funded partnerships in four states aimed at training transit employees to become proficient in safety practices and procedures. Currently, FTA is drafting a 5-year safety and security strategic plan for training. The plan will cover safety technical training for staff working at FTA, state safety oversight agencies, and transit agencies. While one aim of the plan will be to prepare FTA and state staff to handle new responsibilities should legislation be enacted that would change their oversight role for rail transit safety, FTA also intends to use the plan to identify improvements needed in the training it provides to transit agencies. Potential improvements include re-evaluating the levels and types of training that FTA supports. FTA officials estimated the training plan would be completed in May 2011. Officials also told us that they are collaborating with officials at APTA, state safety oversight agencies, and FRA to obtain their views on how to better provide training to transit agencies. 
In its fiscal year 2011 budget request, FTA has proposed additional resources to provide training for transit agencies, state safety oversight agencies, and FTA officials. More information on current and planned efforts by FTA to address staffing and training challenges at transit agencies is available in appendix III. A legislative proposal, as well as some APTA officials and others, identified additional efforts that, if adopted, might improve transit agencies’ abilities to address their staffing and training challenges. These include: Formulate a national approach to staffing and training. In 2009, the House of Representatives Committee on Transportation and Infrastructure issued draft legislation to reauthorize surface transportation programs that would require FTA to form a national council to identify skill gaps in transit agency maintenance departments, develop programs to address the recruitment and retention of transit employees, and make recommendations to FTA and transit agencies on how to increase apprenticeship programs, among other things. Furthermore, this proposed legislation as well as APTA and the Transportation Learning Center called for a national curriculum or certification program that would establish some level of training standardization for transit agency employees. APTA and transit agency officials have noted that potential benefits include achieving a level of consistency in safety training across the country as well as minimum thresholds for transit agency staff. FTA has created curriculum development guidelines to help transit agencies establish their own training curricula. Due in part to differences in transit agencies’ operating environments and system technologies, FTA officials reported that in developing their upcoming safety and security strategic plan for training, they may examine whether setting standards for a national training curriculum would be appropriate. Increase technical training. 
NTSB officials and some of the experts and transit agency officials we met with stated that FTA should increase the technical components of the training for transit agency employees that it supports. Transit agency officials reported that training provided by the National Transit Institute and Transportation Safety Institute includes valuable safety information, but overall the training provided is introductory and does not cover enough technical aspects of safety. According to NTSB officials, transit agency safety staff need periodic, refresher training to continue to learn and more technical training to adequately understand and perform their job. Technical aspects could include the overall mechanics and engineering involved in rail transit operations, as well as how problems with equipment can lead to unsafe conditions. Some state safety oversight and transit agency officials we met with said that available technical training is limited and that FTA could create a training curriculum that other organizations, such as local community colleges, could use to teach safety-related classes. Similarly, APTA has reported the need to develop core curricula to be used at universities and community colleges and to enhance partnerships between transit agencies and higher education in order to provide additional training and educational opportunities for current and future transit workers. Increase federal support for training. In a past report, the Transportation Learning Center has noted that, of the billions of dollars the federal government provides to transit agencies annually, little is invested in human capital—that is, the people, knowledge, and skills necessary to provide reliable and safe service. In response, the center has recommended that federal funding provide support for transit agencies’ workforce training. 
In addition, officials at APTA and transit agencies, as well as some experts we met with, favored increasing federal support to cover training and related travel costs for transit agency employees. FTA has provided funding to state safety oversight agency staff to cover such costs to attend training offered by the National Transit Institute and the Transportation Safety Institute, but this support generally has not been extended to transit agency staff. FTA officials reported that they support training offered around the country and that demand is high. Transit agencies also have the option of hosting training to reduce travel and other costs. FTA’s assistance to transit agencies to help achieve a state of good repair—and therefore help ensure safe operations—has primarily consisted of providing grant funding, although FTA has also conducted studies and is taking steps to provide more guidance to agencies on asset management. The two major FTA grant programs transit agencies have used to help achieve a state of good repair are the Fixed Guideway Modernization Program and the Urbanized Area Formula Program. In fiscal year 2010, these FTA grants provided nearly $6 billion for transit agencies’ capital projects and related planning activities. This support has helped transit agencies maintain system facilities such as stations and other equipment. Funding also has assisted transit agencies in rehabilitating or purchasing rail vehicles and modernizing track and other infrastructure to improve operations. Besides supporting achieving a state of good repair, FTA’s grant funding programs can support other safety- related improvements, such as upgrading signal and communications systems. In its fiscal year 2011 budget request, FTA has proposed increasing assistance to transit agencies through a new $2.9 billion state of good repair program for bus and rail systems. 
This program would, for the first time, provide funding to transit agencies that exclusively focus on achieving a state of good repair. Besides providing funds, another activity FTA has recently engaged in involves helping transit agencies improve their asset management practices in order to enhance their ability to achieve a state of good repair and ensure safety. As previously discussed, FTA officials reported that the transit industry has been slow to adopt asset management practices that would allow efficient management of state of good repair and some related safety needs. As a result, transit agencies may have limited knowledge of asset conditions and how to best use scarce resources to ensure an efficient and safe operation. In DOT’s fiscal year 2010 appropriation, $5 million was made available to FTA to develop standards for asset management plans, provide assistance to grant recipients engaged in the development or implementation of an asset management system, improve data collection, and conduct a pilot program designed to identify best practices for asset management. FTA has begun to undertake these efforts. It has reviewed national and international asset management practices and concluded that major opportunities for improvements exist in the United States. FTA is also currently soliciting for projects with transit agencies of various modes and sizes to demonstrate different aspects of good asset management practices. According to FTA officials, improved asset management by transit agencies will include better approaches for prioritizing rehabilitation and replacement projects and will therefore allow agencies to better ensure safety. Other FTA technical assistance in this area includes the development of capital planning tools and asset inventory guidelines, research on integrating maintenance management with capital planning, training and guidance to educate transit agency staff on asset management, and enhanced asset data collection. 
As previously discussed, while no common standards exist for asset management, it can include tracking asset condition and use, as well as planning appropriately for rehabilitation and replacement. The National Surface Transportation Policy and Revenue Study Commission has reported that, to achieve a state of good repair, local governments, states, and other entities must develop, fund, and implement an asset management system to ensure the maximum effectiveness of federal capital support. We have previously reported that in some surface transportation programs, including transit programs, agencies often do not employ the best tools and approaches to ensure effective investment decisions, an area where asset management can help. See appendix III for other current and planned efforts by FTA to help transit agencies address state of good repair challenges. Legislative proposals, one FTA study, and several organizations we met with have identified additional efforts that, if adopted, might hold transit agencies accountable for improving the management of their assets and therefore better ensure safety. These included: Linking grant funding to the establishment of asset management systems. Congress has considered legislation that would direct DOT to establish and implement a national transit asset management system. This legislation would direct FTA to define a state of good repair and for the first time require transit agencies that receive federal funding to establish asset management systems. This would help transit agencies to prioritize which assets to maintain, rehabilitate, and replace to help ensure safe operating conditions. Separately, a report by the Senate Committee on Appropriations directs FTA to issue a notice of proposed rulemaking by September 30, 2011, to implement asset management standards requiring transit agencies that receive FTA funds to develop capital asset inventories and condition assessments. 
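As a rough illustration of the prioritization step such an asset management system might support, the sketch below ranks a hypothetical asset inventory for rehabilitation or replacement. The assets, the condition scale (1 = poor to 5 = excellent), and the weighting are assumptions for illustration, not a standard FTA has defined.

```python
# Hypothetical asset inventory; condition ratings use an assumed
# 1 (poor) to 5 (excellent) scale.
assets = [
    {"name": "signal system A",       "condition": 1.8, "safety_critical": True},
    {"name": "station escalator",     "condition": 2.4, "safety_critical": False},
    {"name": "mainline track seg. 7", "condition": 2.1, "safety_critical": True},
    {"name": "rail car fleet B",      "condition": 3.9, "safety_critical": True},
]

def priority(asset):
    # Worse condition -> higher priority; safety-critical assets get a
    # fixed bump so they rank ahead of others in comparable condition.
    bump = 1.0 if asset["safety_critical"] else 0.0
    return (5.0 - asset["condition"]) + bump

backlog = sorted(assets, key=priority, reverse=True)
for a in backlog:
    print(f"{a['name']}: priority {priority(a):.1f}")
```

A real system would also factor in asset age, use, and remaining useful life, but the basic idea of ranking a condition inventory to target scarce rehabilitation funds is the same.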
FTA officials told us that they have no plans to develop such a rulemaking at this time, but would do so if required by statute. FTA is to report to Congress in June 2011 on its investigations into asset management. We have previously identified principles that could help drive re-examination of federal surface transportation programs, including ensuring accountability for results by entities receiving federal funds and using the best tools and approaches, such as grant eligibility requirements, to emphasize return on targeted federal investment. Increasing available transit agency asset data. Another option that FTA has reported on for it and Congress to consider involves establishing a system that ensures regular reporting of transit agencies’ capital assets and a consistent structure and level for this reporting. FTA officials noted that they already collect transit vehicle data from agencies, but that they need more information to effectively report on transit agency assets. FTA is considering expanding transit agency reporting requirements to include data on local agency asset inventory, holdings, and conditions. FTA has reported that these data would support better national needs assessments and transit asset condition monitoring than is currently possible. Also, this would encourage transit agencies to develop and maintain their own asset inventory and condition monitoring systems. Besides the additional efforts outlined above, there are other proposals that would make more grant funding available to large transit agencies. FTA and transit agency officials reported that transit agencies maintaining older systems have received a smaller percentage of available federal funding as the number of transit systems competing for the same amount of funding has increased. 
For example, in its 2009 study on the state of good repair in the transit industry, FTA reported that the seven largest rail transit systems, which carry 80 percent of the nation’s rail transit riders and maintain 50 to 75 percent of the nation’s rail transit infrastructure, have received 23 percent of the total federal funding eligible for rail state of good repair investment. These agencies’ percentage share of total federal funding for achieving a state of good repair has declined. In short, while total federal support for transit infrastructure has increased, the share allocated to the nation’s oldest and largest systems has shrunk. To address this, FTA included an option in its 2009 study for it and Congress to consider modifying existing funding formulas to factor in system age, among other things. Congress is also considering new ways to potentially fund transit and other surface transportation projects, including the formation of a National Infrastructure Bank. The various proposals suggesting additional steps that FTA could take to provide safety-related assistance to transit agencies or strengthen their accountability for effectively managing their assets have the potential, if implemented, to enhance rail transit safety. Past reauthorizations of surface transportation programs have provided an avenue for Congress to identify programs to address problems, including those involving transportation safety. DOT is currently developing a surface transportation reauthorization proposal. As part of its effort to develop a surface transportation reauthorization proposal, DOT officials have conducted outreach events to collect input from experts on possible surface transportation initiatives to include in the proposal and have held internal discussions to develop the proposal. Additionally, DOT is considering options for improving transportation safety, including rail transit safety. 
Therefore, the proposal that DOT eventually puts forward may address some or all of the safety challenges that we cite. Furthermore, FTA’s 5-year safety and security training plan, when it is completed, may include improvements that help address the training challenges that transit agencies face. As FTA undertakes efforts to help transit agencies address their safety culture, staffing and training, and state of good repair challenges, setting performance goals and measures can help it target these efforts and track results. Performance goals can help organizations clearly identify the results they expect to achieve, prioritize their efforts, and make the best use of available resources. Performance measures can help organizations track the extent to which they are achieving intended results. In the case of FTA, such prioritization is essential, given the relatively small number of staff it has devoted to safety and state of good repair efforts. For example, while FTA has requested 30 additional staff in fiscal year 2011 in anticipation of receiving authority to strengthen its safety oversight role, it currently has 15 to 17 full-time employees working in its Office of Safety and Security, as well as staff from other FTA offices working on state of good repair efforts. The ability to prioritize efforts and track progress will become even more important in the event that Congress enacts legislation that would give FTA greater oversight authority of transit agencies and expand its transit safety responsibilities. Furthermore, as FTA is faced with proposals to assume even more responsibility for transit safety in the future—through, for example, setting asset management or training curriculum standards for transit agencies—it is even more essential that it clearly identify the specific results it is trying to achieve, target its efforts, and track progress toward achieving those results.
We have identified a number of leading practices agencies can implement to help set or enhance performance goals and measures. While FTA has created plans and other tools to help guide and manage its safety efforts, it has not fully adopted these practices. The next sections discuss these leading practices and the extent to which FTA has followed them. We have found that successful organizations try to link performance goals and measures to strategic goals and that, in developing these goals and measures, such organizations generally focus on the results that they expect their programs to achieve. DOT has identified an overall strategic safety goal of reducing transportation-related injuries and fatalities, including rail transit injuries and fatalities, and FTA has identified measures in its fiscal year 2011 budget request related to that goal. In its Annual Performance Plan for fiscal year 2011, FTA identified a general safety goal of further defining its leadership role in the area of surface transportation safety as well as some desired outcomes of its safety efforts, such as increased public confidence in the safety of public transportation and improved safety culture at transit agencies nationwide. It also identified some strategies for achieving this goal and these outcomes, such as establishing the Transit Rail Advisory Committee for Safety and assessing safety and security training. However, FTA has not identified specific performance goals that make clear the direct results its safety activities are trying to achieve and related measures that would enable the agency to track and demonstrate its progress in achieving those results. Without such specific goals and measures, it is not clear how FTA’s safety activities contribute toward DOT’s overall strategic goal of reducing transportation-related injuries and fatalities, including rail transit injuries and fatalities. 
In addition, in its fiscal year 2011 budget request FTA included the goal of improving the rail transit industry’s focus on safety vulnerabilities. FTA also identified some activities associated with this safety goal, such as submitting legislation to Congress. However, FTA did not clearly articulate the expected results associated with this goal and activities. Nor did FTA explain how such results would be measured and how they relate to DOT’s strategic goals. Linking FTA’s performance goals to departmental goals can provide a clear, direct understanding of how the achievement of annual goals will lead to the achievement of the agency’s strategic goals. We have previously reported that a clear relationship should exist between an agency’s annual performance goals and long-term strategic goals and mission. FTA officials told us that it can be difficult to set performance goals and measures for the agency’s safety efforts due to its limited authority over safety in the transit industry. In past work, we have reported that developing goals and measures for outcomes that are the result of phenomena outside of federal government control is a common challenge faced by many federal agencies. However, despite this challenge, measuring program results and reinforcing their connection to achieving long-term strategic goals can create a greater focus on results, help hold agencies and their staff accountable for the performance of their programs, and assist Congress in its oversight of agencies and their budgets. Performance goals and measures that successfully address important and varied aspects of program performance are key aspects of a results orientation. While FTA has identified various activities aimed at improving rail transit safety, it has not established clear results-oriented goals and measures that address key dimensions of the performance of its various efforts related to safety, such as its training and state of good repair programs. 
FTA could address important dimensions of program performance in different ways. For example, the agency could set goals and measures to address identified safety challenges, such as those identified in this report, or to capture results of its various safety-related efforts, such as its training programs or asset management initiatives. Alternatively, performance goals and measures could relate to the causes behind certain types of transit accidents, such as setting a goal of reducing the number of accidents where human error is a probable cause in a given year. Without goals related to various dimensions of program performance, FTA has not identified the intended results of its various safety-related efforts. Limited use of performance measures by FTA makes it difficult to determine the impact of these efforts on safety. While FTA has identified overall measures of transit safety—the number of transit injuries and fatalities per 100 million passenger-miles traveled—its annual performance plan lacks quantifiable, numerical targets related to specific goals, against which to measure the performance of its efforts. FTA’s fiscal year 2011 budget request did include a performance measure to track the percentage of federal formula funding that transit agencies used for replacement versus new capital purchases by the end of fiscal year 2011 and related this measure to its goal of improving the rail industry’s focus on safety vulnerabilities. However, this measure captures only one of the types of results FTA might expect to achieve from its various safety efforts. In the past, FTA safety planning documents have linked specific FTA performance goals and measures with DOT’s overall strategic safety goals; however, FTA is no longer using these documents. 
For example, FTA’s 2006 Rail Transit Safety Action Plan included safety goals and measures, such as reducing total derailments per 100 million passenger miles, major collisions per 100 million passenger trips, and total safety incidents per 10 million passenger trips. These goals and measures are clearly linked to DOT’s overall strategic goal of working toward the elimination of transportation-related injuries and fatalities, including rail transit injuries and fatalities. The plan also included a number of supporting priorities, such as reducing the impact of fatigue on transit workers, and how the agency planned to achieve them. The plan also included performance measures and target goals for FTA’s state safety oversight program, such as the number of dedicated state personnel and necessary levels of training and certification. FTA officials reported that the goals and measures captured in this and other past planning documents were no longer in use because of changes in safety environments. At present, FTA has no active strategic plan, and FTA officials estimated the new strategic plan would be completed in late 2011. Other agencies are presently making use of practices to enhance performance goals and measures for safety activities. For example, FRA has created a set of performance goals and measures that address important dimensions of program performance. In its proposed fiscal year 2011 budget, FRA included specific safety goals to reduce the rate of train accidents caused by various factors, including human errors and track defects. These goals are numeric, with a targeted accident rate per every million train miles. Collecting such accident data equips FRA with a clear way to measure whether or not those safety goals are met. FRA’s budget request has also linked FRA’s performance goals and measures with DOT’s strategic goals. 
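Rate-based measures like those above reduce to simple arithmetic: a raw incident count normalized by a measure of exposure and compared with a numeric target. The sketch below illustrates the calculation; the injury count, passenger-miles, and target value are hypothetical figures, not actual FTA or FRA data.

```python
# Illustrative calculation of a rate-based safety measure.
# All figures below are hypothetical.

def rate_per_exposure(incidents, exposure, unit):
    """Incidents per `unit` of exposure (e.g., per 100 million passenger-miles)."""
    return incidents / exposure * unit

injuries = 212                    # hypothetical annual injury count
passenger_miles = 1_850_000_000   # hypothetical annual passenger-miles

injury_rate = rate_per_exposure(injuries, passenger_miles, 100_000_000)
target = 12.0                     # hypothetical target per 100M passenger-miles

print(f"Injuries per 100M passenger-miles: {injury_rate:.2f}")  # 11.46
print("target met" if injury_rate <= target else "target missed")
```

The value of a numeric target is that the comparison in the last line is unambiguous; without one, an agency can report the rate but cannot say whether performance is on track.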
Another DOT agency, the Federal Motor Carrier Safety Administration, has a broad range of goals and related performance measures that it uses to provide direction to—and track the progress of—its enforcement programs, including measures of the impact of its enforcement programs on the level of compliance with safety regulations and on the frequency of crashes, injuries, and fatalities. The agency’s end goal—to reduce crashes, injuries, and fatalities through its reviews—aligns with and contributes to DOT’s overall strategic safety goals. While these leading practices are useful, problems with FTA’s rail transit safety data could hamper the agency’s ability to measure its safety performance. We have found that data contained in FTA’s State Safety Oversight Rail Accident Database—which is compiled from data provided by state safety oversight agencies and transit agencies—are unreliable. Specifically, we found unverified entries, duplicative entries, data discrepancies, and insufficient internal controls. Without reliable data, it is difficult for FTA to measure performance based on goals to be captured in annual performance plans or agency strategic plans. Baseline and trend data also provide context for drawing conclusions about whether performance goals are reasonable and appropriate. Establishing procedures that ensure reliable data is an important internal control necessary to validly measure performance based on numerical targets. Additionally, decision makers can use such data to gauge how a program’s anticipated performance level compares with past performance. FTA officials have acknowledged the important role that data play in making decisions regarding how to address challenges to rail transit safety.
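The deficiencies described above, duplicative entries and unverified or incomplete records, are the kind that simple automated checks can flag before data are used for performance measurement. A minimal sketch, with a hypothetical record layout and sample entries:

```python
from collections import Counter

# Hypothetical record layout for accident reports; the actual State Safety
# Oversight Rail Accident Database schema is not reproduced here.
REQUIRED = ("agency", "date", "event_type")

def screen_records(records):
    """Return (duplicate_keys, incomplete_records) for a list of report dicts."""
    keys = [tuple(r.get(f) for f in REQUIRED) for r in records]
    duplicate_keys = [k for k, n in Counter(keys).items() if n > 1]
    incomplete = [r for r in records if any(not r.get(f) for f in REQUIRED)]
    return duplicate_keys, incomplete

reports = [
    {"agency": "WMATA", "date": "2009-06-22", "event_type": "collision"},
    {"agency": "WMATA", "date": "2009-06-22", "event_type": "collision"},  # duplicate entry
    {"agency": "CTA", "date": "2006-07-11", "event_type": ""},             # incomplete entry
]

dups, bad = screen_records(reports)
print(len(dups), len(bad))  # prints: 1 1
```

Checks like these are only one piece of the internal controls GAO recommends; reconciling entries against state safety oversight agency records would still require review of the underlying reports.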
FTA has implemented changes to the data collection process to address some of the data problems we identified and plans to take additional actions to validate and correct discrepancies contained in its State Safety Oversight Rail Accident Database, but these plans do not identify specific efforts to establish procedures that would improve data reporting in the future. To ensure the accuracy and reliability of the State Safety Oversight Rail Accident Database, we have recommended that FTA develop and implement appropriate internal controls to ensure that data entered are accurate and incorporate an appropriate method for reviewing and reconciling data from state safety oversight agencies and other sources. Without clear, specific, and varied performance goals and related measures linked to DOT’s strategic goal of reducing transportation-related injuries and fatalities, including rail transit injuries and fatalities, the intended results of FTA’s safety efforts are unclear. Furthermore, the absence of clear goals and measures to guide and track progress limits FTA’s ability to make informed decisions about its safety strategy and its accountability for its safety performance. Finally, without reliable data, FTA cannot establish useful performance measures, making it difficult to determine whether safety programs are accomplishing their intended purpose and whether the resources dedicated to program efforts should be increased, used in other ways, or applied elsewhere. Rail transit systems will remain vital components of the nation’s transportation infrastructure and will need to continue to provide safe service for the millions of commuters that rely on them daily. Through its assistance efforts, FTA has worked with transit agencies to foster a safer operating environment for these passengers. 
Planned, new assistance efforts by FTA, as well as legislative proposals to enhance FTA’s regulatory authority over transit safety, have the potential to further enhance safety on rail transit systems. Some additional proposals concerning new steps FTA could take to address safety challenges facing transit agencies also have the potential to improve rail transit safety. For example, while FTA is already working to instill safety culture at transit agencies, creating pilot projects to examine new approaches for instilling a strong safety culture at these transit agencies may have merit. Setting standards for a national training curriculum for transit employees may also ensure that a minimum threshold of training is achieved across the transit industry, if such standards could account for differences in transit agencies’ environments and technologies. Asset management shows promise in both helping transit agencies and protecting federal investment. Similarly, holding agencies that receive federal funds accountable for using asset management practices could help ensure that federal funds aimed at addressing this problem are effectively used. DOT is uniquely positioned to examine various proposals to discern any worthwhile options for implementation going forward, given available resources and other competing priorities, and to propose in its draft surface transportation reauthorization legislation any options deemed worthwhile. We are not recommending at this time that DOT take actions on proposals for improving rail transit safety, as the department is considering various options for improving transportation safety, including rail transit safety, in developing its reauthorization proposal. 
As FTA helps transit agencies ensure safety, setting clear performance goals and related measures for its safety efforts, based on leading practices, will be vital to improve FTA’s ability to set priorities and determine progress—both in overseeing transit agencies and in helping them maintain safety on their systems. Setting clear performance goals will help FTA to communicate a direction for its safety efforts and establish benchmarks for performance. Tracking progress through performance measures will help FTA in planning its future efforts and will help hold the agency accountable for achieving results. However, FTA must take further actions to improve the reliability of its safety data before it can track its safety performance based on new measures and goals. To ensure that FTA targets its resources effectively as it increases its safety efforts and is able to track the results of these efforts, we recommend that the Secretary of Transportation direct the FTA Administrator to use leading practices as FTA develops its plans for fiscal year 2011 and in the future. In particular, the Administrator should create a set of clear and specific performance goals and measures that (1) are aligned with the department’s strategic safety goals and identify the intended results of FTA’s various safety efforts and (2) address important dimensions of program performance. We provided a draft of this report to DOT and NTSB for their review and comment. Both provided technical comments and clarifications, which we incorporated into the report as appropriate. DOT agreed to consider our recommendation. We are sending copies of this report to interested congressional committees, the Secretary of Transportation, and the Chair of the National Transportation Safety Board. In addition, this report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834, or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Of the 48 rail accident investigations that the National Transportation Safety Board (NTSB) has reported on since 2004, 7 were on heavy rail transit systems operated by the Chicago Transit Authority (CTA) and the Washington Metropolitan Area Transit Authority (WMATA), and one was on the light rail transit system operated by the Massachusetts Bay Transportation Authority (MBTA). As shown in table 2, these accidents collectively resulted in 13 fatalities, hundreds of injuries, and millions of dollars in property damage. In its reports, NTSB identified the probable causes of accidents as well as factors that contributed to these accidents. In five of these eight accident investigations, NTSB found the probable cause to involve employee errors, such as the failure of the train operator to comply with operating rules and of track inspectors to maintain an effective lookout for trains. Of the remaining three accidents, NTSB found that problems with equipment were a probable cause of two accidents and that weaknesses in management of safety by the transit agency, such as its management of maintenance and of equipment quality controls, were a probable cause of all three accidents. For six of these eight accidents, contributing factors identified involved deficiencies in safety management or oversight, including weaknesses in transit agencies’ safety rules and procedures and in their processes for ensuring employees’ adherence to these rules and procedures, lack of a safety culture within the transit agency, and lack of adequate oversight by the transit agency’s state safety oversight agency and the Federal Transit Administration (FTA).
In one accident report, NTSB found as a contributing factor the lack of safety equipment or technologies, such as a positive train control system that can prevent trains from colliding. In addition, as shown in table 2, NTSB has ongoing investigations on six accidents that occurred on heavy and light rail transit systems. To determine the challenges that the largest rail transit systems face in ensuring safety, we conducted site visits, examined documents, conducted interviews, and consulted relevant literature. We obtained documents from and interviewed officials at five large heavy rail transit systems and three large light rail transit systems operated by seven transit agencies. The five heavy rail systems are those operated by the Metropolitan Transportation Authority New York City Transit (NYCT), WMATA, CTA, MBTA, and the Bay Area Rapid Transit (BART). The three light rail systems are operated by MBTA, the San Francisco Municipal Transportation Agency (SF Muni), and the Los Angeles County Metropolitan Transportation Authority (LA Metro). We obtained budget documents, accident and audit reports, corrective action plans, and staffing and training information, among other information and documentation, from each system. Also, we interviewed representatives from these transit agencies and their respective state safety oversight agencies about the transit agencies’ challenges. We also analyzed published NTSB investigations of accidents on heavy and light rail transit systems since 2004 to help us determine the causes of and factors contributing to rail transit accidents in recent years. We used data from Federal Transit Administration’s (FTA) National Transit Database (NTD) to select these eight transit systems. The NTD data we used for our selection criteria were (1) annual ridership, as measured by unlinked passenger trips and passenger miles, (2) the number of rail transit vehicles in revenue service operations, and (3) total track mileage. 
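The multi-criteria screen described above can be sketched as a simple composite ranking; the systems and figures below are placeholders rather than actual NTD values.

```python
# Placeholder systems and figures, not actual NTD data.
systems = {
    "System A": {"trips_millions": 2300, "vehicles": 6400, "track_miles": 660},
    "System B": {"trips_millions": 290,  "vehicles": 1100, "track_miles": 220},
    "System C": {"trips_millions": 170,  "vehicles": 1000, "track_miles": 280},
}

def composite_rank(data):
    """Rank systems by summing their rank (1 = largest) on each criterion."""
    totals = {name: 0 for name in data}
    for metric in ("trips_millions", "vehicles", "track_miles"):
        ordered = sorted(data, key=lambda n: data[n][metric], reverse=True)
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return sorted(totals, key=totals.get)  # lowest total = largest overall

print(composite_rank(systems))  # ['System A', 'System B', 'System C']
```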
To determine whether these NTD data were reliable for our purposes, we interviewed FTA officials who are knowledgeable about the database and assessed the accuracy of these data elements. We determined that these specific data elements were sufficiently reliable to be used as selection criteria. To determine the extent to which FTA’s assistance addresses the safety challenges faced by the largest transit agencies, we reviewed FTA documents on funding, state of good repair initiatives, technical assistance programs, and guidance and outreach related to rail transit safety. We also obtained information on transit safety training from the National Transit Institute and the Transportation Safety Institute. We interviewed officials from FTA and NTSB and representatives of the American Public Transportation Association (APTA). We asked officials from the transit systems we visited and their respective state safety oversight agencies for their assessment of FTA’s assistance efforts. We reviewed applicable federal regulations, laws, and legislative proposals. In addition, we consulted our prior work on performance management and rail transit issues. We further contracted with the National Academies’ Transportation Research Board to identify rail transit safety experts from the transit industry, academia, labor unions, and the rail consulting community. We interviewed 12 experts on the challenges that large rail transit agencies face in ensuring safety, the factors that contribute to rail transit safety accidents, and potential ways that FTA could improve its safety assistance efforts (see table 3). We also interviewed officials from NTSB and representatives of APTA on these topics. In addition, as part of this review, we assessed FTA’s safety data to determine whether they were sufficiently reliable for us to use to report on trends in rail transit accidents as well as causes of those accidents.
During that assessment, we identified inaccuracies, discrepancies, and duplicative entries, and determined that these data were not sufficiently reliable for these purposes and decided to conduct a separate review of the data’s reliability. We are issuing a report on our findings and recommendations based on this review.

Appendix III: DOT Safety-Related Assistance Efforts That Address Transit Agencies’ Safety Culture, Staffing, and Training Challenges

.4 billion to fund public transportation throughout the country. Recovery Act funds have primarily supported grants in capital projects at transit agencies, although some funds have been used for operating expenses. As of August 25, 2010, approximately $190 million had been obligated for use as operating expenses.

In addition to the contact named above, Judy Guilliams-Tapia, Assistant Director; Catherine Bombico; Matthew Cail; Martha Chow; Antoine Clark; Colin Fallon; Kathleen Gilhooly; Brandon Haller; Hannah Laufe; Grant Mallie; Anna Maria Ortiz; and Kelly Rubin made significant contributions to this report.

Although transit service is generally safe, recent high-profile accidents on several large rail transit systems—notably the June 2009 collision in Washington, D.C., that resulted in nine fatalities and 52 injuries—have raised concerns. The Federal Transit Administration (FTA) oversees state agencies that directly oversee rail transit agencies' safety practices. FTA also provides assistance to transit agencies, such as funding and training, to enhance safety. GAO was asked to determine (1) the challenges the largest rail transit systems face in ensuring safety and (2) the extent to which assistance provided by FTA addresses these challenges. GAO visited eight large rail transit systems and their respective state oversight agencies, reviewed pertinent documents, and interviewed rail transit safety experts and officials from FTA and the National Transportation Safety Board (NTSB).
The largest rail transit agencies face several challenges in trying to ensure safety on their systems. First, according to some experts we interviewed, the level of safety culture--awareness of and organizational commitment to the importance of safety--varies across the transit industry and is low in some agencies. NTSB found that the lack of a safety culture contributed to the June 2009 fatal transit accident in Washington, D.C. Second, with many employees nearing retirement age, large transit agencies have found it difficult to recruit and hire qualified staff. It is also challenging for them to ensure that employees receive needed safety training because of financial constraints and the limited availability of technical training. Training helps ensure safe operations; NTSB has identified employee errors, such as not following procedures, as a probable cause in some significant rail transit accidents. Third, more than a third of the largest agencies' assets are in poor or marginal condition. While agencies have prioritized investments to ensure safety, delays in repairing some assets, such as signal systems, can pose safety risks. The transit industry has been slow to adopt asset management practices that can help agencies set investment priorities and better ensure safety. FTA has provided various types of assistance to transit agencies to help them address these challenges, including researching how to instill a strong safety culture at transit agencies, supporting a variety of safety-related training classes for transit agency staff, and providing funding to help agencies achieve a state of good repair. The Department of Transportation (DOT) has proposed legislation that would give FTA the authority to set and enforce rail transit safety standards, which could help improve safety culture in the industry. FTA is also planning improvements to its training program and the development of asset management guidance for transit agencies, among other things. 
Some legislative proposals, studies, experts, and agency officials have identified further steps that FTA could take to address transit agencies' safety challenges, such as requiring transit agencies to implement asset management practices. Some of these suggested further steps may have the potential, if implemented, to enhance rail transit safety. DOT is currently developing a legislative proposal for reauthorizing surface transportation programs and may include new rail transit safety initiatives in this proposal. In addition, clear and specific performance goals and measures could help FTA target its efforts to improve transit safety and track results. GAO has identified leading practices to establish such performance goals and measures, but FTA has not fully adopted these practices. For example, FTA has not identified specific performance goals that make clear the direct results its safety activities are trying to achieve and related measures that would enable the agency to track and demonstrate its progress in achieving those results. Without such specific goals and measures, it is not clear how FTA's safety activities contribute toward DOT's strategic goal of reducing transportation-related injuries and fatalities, including rail transit injuries and fatalities. Furthermore, problems with FTA's rail transit safety data could hamper the agency's ability to track its performance. GAO is making recommendations for improving these data in a separate report (GAO-11-217R). To guide and track the performance of FTA's rail transit safety efforts, DOT should direct FTA to use leading practices to set clear and specific goals and measures for these efforts. DOT and NTSB reviewed a draft of this report and provided technical comments and clarifications, which we incorporated as appropriate. DOT agreed to consider the recommendation. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Child Support Enforcement Program was established in 1975 to help strengthen families and reduce dependence on welfare by helping to ensure that the responsibility for supporting children was placed on parents. The states operate programs to locate noncustodial parents, establish paternity, and obtain support orders, along with enforcing actual collections of those court-ordered support payments. The federal government—through OCSE—funds 66 percent of state administrative and operating costs, including costs for automated systems, and up to 90 percent of expenses associated with planning, designing, developing, installing, and/or enhancing automated systems. The Family Support Act of 1988 required that statewide systems be developed to track determination of paternity and child support collections. To address that requirement, OCSE developed regulations and guidance for conducting certification reviews. In 1993, OCSE published a certification guide, which addresses the functional requirements for child support enforcement systems. In general, the certification guide requires that the systems be operational, statewide, comprehensive, and effective and efficient. The guide also provides 53 specific requirements, which are grouped into the following categories: case initiation, location of parents, establishment of paternity, case management, enforcement, financial management, reporting, and security and privacy. (See appendix I for the system regulations and appendix II for descriptions of the guide’s specific requirements by category.) The guide was developed to help OCSE’s analysts ensure that certification reviews are conducted consistently—using the same criteria and standards for documentation. The analysts use the certification guide in conducting certification reviews, and states refer to it in preparing for their certification reviews. 
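The two matching rates above determine the federal share of a state's program costs. A minimal sketch of that arithmetic, assuming illustrative dollar figures and a hypothetical function name (this is not OCSE's actual accounting method, only the percentages stated in the text):

```python
# Federal matching rates described in the text: 66 percent of state
# administrative/operating costs, and up to 90 percent of automated-systems
# planning, design, development, installation, and enhancement expenses.
ADMIN_MATCH_RATE = 0.66
SYSTEMS_MATCH_CAP = 0.90

def federal_share(admin_costs, systems_costs, systems_rate=SYSTEMS_MATCH_CAP):
    """Return the total federal reimbursement for a state's reported costs.

    `systems_rate` may be anything up to the 90 percent cap, since the text
    says "up to 90 percent" for systems expenses.
    """
    if not 0 <= systems_rate <= SYSTEMS_MATCH_CAP:
        raise ValueError("systems match rate may not exceed 90 percent")
    # Round to whole cents to avoid floating-point dust in the totals.
    return round(admin_costs * ADMIN_MATCH_RATE + systems_costs * systems_rate, 2)

# A state reporting $10M in operating costs and $2M in systems work:
total = federal_share(10_000_000, 2_000_000)
print(f"${total:,.0f}")  # → $8,400,000
```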
To ensure that states are meeting the functional requirements specified in the certification guide, OCSE also developed a certification questionnaire. The questionnaire provides a series of questions for analysts’ use in determining if the states’ systems address the functional requirements. The format and content of the questionnaire mirror those of the certification guide. In addition to the certification guide and questionnaire, OCSE has provided supplementary guidance to (1) aid in developing and testing specific areas such as financial requirements and (2) clarify and expand upon the requirements provided in the certification guide and questionnaire. OCSE uses its guidance to ensure that its staff consistently perform three types of certification reviews: functional, level 1, and level 2. A functional review occurs early in the development of a system before it is operational in a pilot site. During functional reviews, analysts evaluate parts of the system against the certification requirements to inform the state of the work that remains before its system can be certified. A level 1 review occurs when an automated system is installed and in operation in one or more pilot locations. (OCSE created this level of review in 1990 due to state requests for agency guidance prior to statewide implementation.) A level 2 review occurs when the system is considered by the state to be operational statewide. This review is required for final certification. Systems are granted full certification when they meet all functional requirements and conditional certification when the system needs only minor corrections that do not affect statewide operation. According to OCSE analysts, states whose systems receive either type of level 2 certification are exempt from penalties for failing to meet system requirements imposed by the Family Support Act. The Family Support Act of 1988 set a deadline of October 1, 1995, for implementation and federal certification of such systems. 
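The level 2 outcomes described above (full certification when every requirement is met, conditional certification when only minor corrections remain) can be sketched as a simple decision rule. This is an illustrative model of the text's description, not OCSE's actual procedure; the finding representation is assumed:

```python
from enum import Enum

class Outcome(Enum):
    FULL = "full certification"
    CONDITIONAL = "conditional certification"
    NOT_CERTIFIED = "not certified"

def level2_outcome(open_findings):
    """Decide a level 2 review outcome from the unmet requirements.

    `open_findings` is a list of (requirement_id, is_minor) tuples; a finding
    is "minor" when it does not affect statewide operation, per the text.
    """
    if not open_findings:
        return Outcome.FULL          # all functional requirements met
    if all(is_minor for _, is_minor in open_findings):
        return Outcome.CONDITIONAL   # only minor corrections needed
    return Outcome.NOT_CERTIFIED
```

Under the text's description, either level 2 outcome exempts the state from Family Support Act penalties, so a caller tracking penalty exposure would treat FULL and CONDITIONAL alike.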
However, when only a few states met the deadline, the Congress passed legislation extending it by 2 years, to October 1, 1997. Current law requires HHS to impose substantial financial penalties on states that did not have certified child support enforcement systems by October 1, 1997. The Congress is considering legislation (House bill HR 3130 with Senate modifications SP 2286) to reduce those penalties. OCSE certified 17 states by the extended deadline and another 8 states since the deadline (as of March 31, 1998). In June 1997 we made several recommendations designed to strengthen OCSE’s oversight of child support enforcement systems. Specifically, we reported that the certification reviews are conducted too late for effective oversight: because the reviews occur toward the end of system development projects, they come too late to allow timely redirection of systems development without significant costs being incurred. Our objectives were to determine (1) whether HHS’ certification guidance addresses the system provisions in the Family Support Act of 1988 and implementing regulations, (2) whether HHS has consistently administered the certification process, and (3) the certification status of the state systems. Our work was done to determine whether OCSE’s certification guidance completely addresses the system requirements in the act and supporting regulations; it does not determine the overall adequacy of OCSE’s certification review process. This issue was addressed in our June 1997 report, in which we identified weaknesses in HHS’ oversight of these systems, including the timeliness of the certification reviews. To document the certification process, we obtained and analyzed OCSE’s guidance for certifying child support enforcement systems. 
To determine whether this guidance addresses the legal and regulatory requirements for child support enforcement systems, we compared the certification guide and questionnaire to the child support enforcement system regulations. We also analyzed whether the regulations addressed the system provisions of the Family Support Act of 1988. To determine whether OCSE consistently administered the certification process, we obtained and reviewed all certification reports issued as of March 31, 1998, and assessed how OCSE officials at headquarters and in one regional office plan, administer, and report the results of certification reviews. While we discussed this review process with these officials, we did not visit states to observe OCSE conducting certification reviews or conduct independent work to verify the information presented in OCSE’s certification reports. We performed our work at HHS headquarters in Washington, D.C., and at the HHS regional office in Atlanta, Georgia. We conducted our work between December 1997 and April 1998, in accordance with generally accepted government auditing standards. HHS provided written comments on a draft of this report. These comments are highlighted in the “Agency Comments” section of this report and are reprinted in appendix IV. OCSE’s guidance for certification reviews generally complies with the system provisions in the Family Support Act of 1988 and the implementing regulations established by the Secretary of HHS. This guidance includes (1) the certification guide, which defines system functional requirements, and (2) the certification questionnaire, which, in essence, is the certification guide presented in a questionnaire format. Our analysis showed that the certification guide and questionnaire address key system elements of the law and implementing regulations. OCSE included references to system and program regulations in both the certification guide and questionnaire. 
We analyzed those references to determine whether the certification guidance addressed the regulations cited. The comparison in appendix III shows that each of the implementing regulations is addressed in OCSE’s certification objectives. For example, section 307.10(b)(1) of the regulation requires that child support enforcement systems maintain identifying information on individuals involved in child support cases. Seven different certification objectives in the certification guidance address this requirement. Two of those certification objectives, A-8 and D-4, demonstrate how the guide addresses this requirement; respectively, they state that, “the system must accept and maintain identifying information on all case participants,” and “the system must update and maintain in the automated case record, all information, facts, events, and transactions necessary to describe a case and all actions taken in a case.” OCSE has been consistent in the way it administers certification reviews. Specifically, it used the same types of teams, the same guidance that was discussed earlier, and the same method for certification reviews. Although the scope and length of functional, level 1, and level 2 certification reviews varied, OCSE has been generally consistent in the way that it conducted each type of review. OCSE’s review process is as follows. It begins preparing for a certification review when the state notifies it that the state system is compliant and ready for certification. When OCSE receives the request, it requires the state to submit consistent documentation, which includes the completed certification questionnaire. After OCSE receives the documentation, it assigns a team to review the information and develop issues for discussion during the certification review. These teams consistently included at least one supervisor and two systems analysts. In some cases, regional analysts also participated in the documentation review. 
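The comparison in appendix III amounts to a traceability check: every implementing regulation should be covered by at least one certification objective. A minimal sketch of that check follows. Only the 307.10(b)(1) row comes from the text (objectives A-8 and D-4); the second entry is hypothetical, added to show how an uncovered provision would surface:

```python
# Map each regulation cite to the certification objectives that address it.
coverage_matrix = {
    "307.10(b)(1)": ["A-8", "D-4"],  # identifying info on all case participants
    "307.10(b)(99)": [],             # hypothetical provision with no objective
}

def uncovered(matrix):
    """Return the regulation cites with no certification objective mapped."""
    return sorted(reg for reg, objectives in matrix.items() if not objectives)

print(uncovered(coverage_matrix))  # → ['307.10(b)(99)']
```

GAO's finding that "each of the implementing regulations is addressed" corresponds to this function returning an empty list for the full matrix.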
Following that review, certification teams are formed from staff who have similar background and expertise. For example, the certification team leaders are usually systems analysts from OCSE headquarters. These leaders are assisted by teams from the Administration for Children and Families regional offices responsible for the states being reviewed. The regional teams are usually a combination of staff with systems, policy, or audit expertise. In performing the certification reviews, these teams consistently use the certification questionnaire. OCSE used the same certification questionnaire for all of its level 1 and level 2 certification reviews except one. The first level 1 review was conducted before OCSE developed the certification questionnaire. OCSE analysts have also used a consistent method for conducting certification reviews. Certification review teams spend approximately 1 week on-site conducting certification reviews. (Because functional reviews and level 1 reviews are more limited in scope, those reviews do not always take a full week.) During the certification review, the review team usually holds an entrance conference at the state office and allows the state staff to provide an overview of the child support enforcement system. The next few days are spent reviewing the state’s responses to the certification questionnaire and observing how adequately system screens and functions address the federal requirements. This review at the state office is often performed using a test version of the system—one that does not include actual cases. To supplement the information obtained at the state office, the certification team usually spends at least one day visiting local offices to observe the system in operation. At the local offices, the team interviews staff about their use of the system and the systems training they have received. 
In addition, they have the staff process sample cases to ensure that the system will handle them correctly, observe the staff processing actual cases, and review reports and documents generated by the system. OCSE uses the certification guide and questionnaire in lieu of a manual to instruct its staff on how to conduct certification reviews and relies heavily upon on-the-job training to ensure that the reviews continue to be conducted consistently. In one instance, when a new certification staff member was added, that person was paired with experienced staff for the first two or three reviews to gain experience and learn how to consistently cover the issues addressed by the certification teams. OCSE began reporting on the results of its certification reviews in 1994. In general, the format and process for preparing certification reports have been standardized. However, we noted that several reports contained inconsistencies, such as including inaccurate descriptions of the criteria against which the systems’ financial components were measured. OCSE’s analysts used a standard template for preparing certification reports. As a result, we found the certification reports to be very similar in format and content. Even though the scope of the different reviews varied, the reports for functional, level 1, and level 2 reviews addressed similar topics. For example, they typically included a background section giving the history of the development of and funding for the system and describing the scope and methodology of the certification review. The reports also presented both certification findings and management findings. Certification findings are those system problems that must be addressed prior to system certification. Management findings are optional systems changes for management to consider. These findings often relate to the efficiency of the states’ systems. OCSE used a consistent process for reviewing the draft certification reports. 
According to an OCSE supervisor, division management reviewed all certification reports for consistency prior to their issuance. In addition, the office requested comments from states before publishing the final reports. According to OCSE officials, the nature, extent, and timeliness of the states’ comments varied and, when appropriate, states’ comments were incorporated into the final certification reports. While OCSE published many standardized certification reports on the results of its certification reviews, we noted three types of exceptions with the reporting process. First, OCSE certified two state systems in July and December 1997, respectively, by sending a brief letter to each state instead of issuing a complete standardized written report. The division director explained that standardized reports were not prepared for those systems because the certification team found no problems with them during the review. Second, according to officials, a report was not published for one state’s level 1 review because the level 2 review was requested before the earlier report was published. Finally, the reports for one level 1 and five functional reviews contained a qualifying statement not contained in the boilerplate language of the standardized reports. This qualifying statement said that, in order to even be conditionally certified, a system must process the financial component of all sample cases correctly, in accordance with predetermined results. In contrast, the other standardized reports’ paragraph on this subject did not contain this qualification. The division director told us that the boilerplate language in the standardized reports was appropriate and that the qualifying language in the six reports was incorrect. 
She said OCSE will conditionally certify a system even though it does not process all sample cases correctly, as long as the majority of the financial transactions are processed accurately and the state has reasonable explanations for any variances. She added that none of the systems was denied level 2 certification based on the qualifying statement, and that she was unaware of any other systems that were denied certification for failing to process all test cases correctly. The division director also noted that the problem was not widespread because only one lead analyst was responsible for the incorrect language. However, the review process did not prevent the incorrect language from being incorporated into the six published reports. Finally, she said that, until we brought this issue to her attention, she was not aware that any reports included this language, and that she would act to ensure that such qualifying language did not appear in future reports.

As of March 31, 1998, OCSE had either certified or conditionally certified 25 of 54 child support enforcement systems, representing approximately 38 percent of the reported average national caseload for fiscal year 1995. OCSE had conducted 67 certification reviews for the 54 state systems as of March 31, 1998; some states have had several levels of review. Figure 1 shows the highest level of certification for the 54 child support enforcement systems as of March 31, 1998. As figure 2 indicates, 25 state systems were level 2 certified as of that date; figure 2 shows the status of level 2 certification for each state. Since the October 1997 deadline, OCSE’s certification review workload has increased substantially, as shown by figure 3. OCSE conducted 13 level 2 certification reviews in the first quarter of calendar year 1998, equaling the number of level 2 reviews conducted in all of 1997, the most done in any previous year. 
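The division director's conditional-certification standard for the financial component, described above, amounts to a two-part test: a majority of the sampled financial transactions must process accurately, and every variance must have a reasonable explanation. A sketch of that predicate follows; the data shape is assumed, and in practice OCSE applies judgment rather than a mechanical rule:

```python
def meets_financial_standard(sample_results):
    """Apply the director's stated standard to sample-case results.

    `sample_results` is a list of dicts with two booleans: 'accurate' (the
    transaction matched the predetermined result) and 'explained' (the state
    reasonably explained the variance, relevant only when inaccurate).
    """
    if not sample_results:
        return False  # assumption: no sample cases means nothing was verified
    accurate = sum(1 for r in sample_results if r["accurate"])
    majority_accurate = accurate > len(sample_results) / 2
    variances_explained = all(
        r["explained"] for r in sample_results if not r["accurate"]
    )
    return majority_accurate and variances_explained
```

By contrast, the incorrect qualifying statement in the six reports would have demanded that every sample case process correctly, i.e., `accurate == len(sample_results)`.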
The first quarter of calendar year 1998 is the second quarter of OCSE’s fiscal year 1998. OCSE is currently documenting the results of and preparing certification reports for the certification reviews performed in 1998. The systems director said she expects the rate of certification reviews to decline sharply because, as of March 31, 1998, only one request for a certification review was pending. OCSE’s certification guidance addresses the system requirements of the Family Support Act of 1988 and HHS’ implementing regulations, and OCSE has administered the certification process consistently across states. Further, while OCSE, in general, used a standardized format and process in preparing certification reports on the results of its reviews, these reports were not always consistent. We recommend that the Assistant Secretary of the Administration for Children and Families increase OCSE’s oversight of the reporting process to ensure that the reports consistently address criteria for evaluating the financial components of state systems. The Assistant Secretary for Children and Families agreed with our recommendation to increase OCSE’s oversight of the reporting process. She stated that OCSE would increase its oversight and consistency of reporting by subjecting the functional and level 1 reports to the same degree of management review being provided to the level 2 reports. We will provide copies of this report to the Assistant Secretary, Administration for Children and Families, Department of Health and Human Services; the Director of the Office of Management and Budget; and appropriate congressional committees. We will also make copies available to others upon request. Please contact me at (202) 512-6253 or by e-mail at [email protected] if you have any questions concerning this report. Major contributors are listed in appendix V. 
At a minimum, each state’s computerized support enforcement system established under the title IV-D state plan at § 302.85 of this chapter must:

(a) Be planned, designed, developed, installed, or enhanced in accordance with an initial and annually updated APD approved under § 307.15; and

(b) Control, account for, and monitor all the factors in the support collection and paternity determination processes under the state plan. At a minimum this must include:

(1) Maintaining identifying information such as Social Security numbers, names, dates of birth, home addresses, and mailing addresses (including postal zip codes) on individuals against whom support obligations are sought to be established or enforced and on individuals to whom support obligations are owed, and other data as required by the Office;
(2) Periodically verifying the information on individuals referred to in paragraph (b)(1) of this section with federal, state, and local agencies, both intrastate and interstate;
(3) Maintaining data necessary to meet federal reporting requirements on a timely basis as prescribed by the Office;
(4) Maintaining information pertaining to (i) delinquency and enforcement activities; (ii) intrastate, interstate, and federal location of absent parents; (iii) the establishment of paternity; and (iv) the establishment of support obligations;
(5) Collecting and distributing both intrastate and interstate support payments;
(6) Computing and distributing incentive payments to political subdivisions which share in the cost of funding the program and to other political subdivisions based on efficiency and effectiveness if the state has chosen to pay such incentives;
(7) Maintaining accounts receivable on all amounts owed, collected, and distributed;
(8) Maintaining costs of all services rendered, either directly or by interfacing with state financial management and expenditure information;
(9) Accepting electronic case referrals and update information from the state’s title IV-A program and using that information to identify and manage support enforcement cases;
(10) Transmitting information electronically to provide data to the state’s AFDC [Aid to Families With Dependent Children; now Temporary Assistance for Needy Families (TANF)] system so that the IV-A agency can determine (and report back to the IV-D system) whether a collection of support causes a change in eligibility for, or the amount of aid under, the AFDC program;
(11) Providing security to prevent unauthorized access to, or use of, the data in the system;
(12) Providing management information on all IV-D cases under the state plan from initial referral or application through collection and enforcement;
(13) Providing electronic data exchange with the state Medicaid system to provide for case referral and the transfer of the medical support information specified in 45 C.F.R. 303.30 and 303.31;
(14) Providing electronic data exchange with the state IV-F program for purposes of assuring that services are furnished in an integrated manner unless the requirement is otherwise met through the exchange conducted under paragraph (b)(9) of this section;
(15) Using automated processes to assist the state in meeting state plan requirements under part 302 of this chapter and standards for program operations under part 303 of this chapter, including but not limited to: (i) the automated maintenance and monitoring of accurate records of support payments; (ii) providing automated maintenance of case records for purposes of the management and tracking requirements in § 303.2 of this chapter; (iii) providing title IV-D case workers with on-line access to automated sources of absent parent employer and wage information maintained by the state when available, by establishing an electronic link or by obtaining an extract of the data base and placing it on-line for access throughout the state; (iv) providing locate capability by automatically referring cases electronically to locate sources within the state (such as the state motor vehicle department, state department of revenue, and other state agencies) and to the Federal Parent Locator Service, and utilizing electronic linkages to receive return locate information and place the information on-line to title IV-D case workers throughout the state; (v) providing capability for electronic funds transfer for purposes of income withholding and interstate collections; and (vi) integrating all processing of interstate cases with the computerized support enforcement system, including the central registry; and
(16) Providing automated processes to enable the Office to monitor state operations and assess program performance through the audit conducted under section 452(a) of the Act.

The system must accept, maintain, and process information for non-AFDC services.
The system must automatically accept and process referrals from the State’s Title IV-A (AFDC) agency.
The system must accept and process referrals from the State’s Title IV-E (Foster Care) agency.
The system must automatically accept appropriate referrals from the State’s Title XIX (Medicaid) agency.
The system must automatically accept and process interstate referrals.
The system must uniquely identify and edit various case types.
The system must establish an automated case record for each application/referral.
The system must accept and maintain identifying information on all case participants.
The system must electronically interface with all appropriate sources to obtain and verify locate, asset, and other information on the non-custodial/putative parent or custodial parent.
The system must automatically generate any needed documents.
The system must record, maintain, and track locate activities to ensure compliance with program standards.
The system must automatically resubmit cases to locate sources.
The system must automatically submit cases to the Federal Parent Locator Service (FPLS).
The system must automatically track, monitor, and report on the status of paternity establishment and support Federal regulations and State laws and procedures for establishing paternity. The system must automatically record, track, and monitor information on obligations, and generate documents to establish support including medical support. The system must accept, maintain, and process information concerning established support orders. The system must accept, maintain, and process information concerning medical support services. If the State chooses to have case prioritization procedures, the system must automatically support them. The system must automatically direct cases to the appropriate case activity. The system must automatically accept and process case updates and provide information to other programs on a timely basis. The system must update and maintain in the automated case record all information, facts, events, and transactions necessary to describe a case and all actions taken in a case. The system must perform routine case functions, keep the caseworker informed of significant case events, monitor case activity, provide case status information, and ensure timely case action. The system must automatically support the review and adjustment of support obligations. The system must allow for case closure. The system must provide for management of all interstate cases. The system must manage Responding-State case actions. The system must manage initiating-State case actions. The system must automatically monitor compliance with support orders and initiate enforcement actions. The system must support income withholding activities. (continued) The system automatically must support Federal tax refund offset. The system must automatically support State tax refund offset. The system must automatically identify, initiate, and monitor enforcement actions using liens and bonds. 
Where action is appropriate under State guidelines, the system must support Unemployment Compensation Intercept (UCI). The system must be capable of forwarding arrearage information to credit reporting agencies. The system must support enforcement through Internal Revenue Service full collection services when previous enforcement attempts have failed. In cases where previous enforcement attempts have failed, the system must periodically re-initiate enforcement actions. The system must support the enforcement of spousal support. The system must automatically monitor compliance with and support the enforcement of medical insurance provisions contained within support orders. With the exception of those cases with income withholding in force, the system must automatically bill cases with obligations. The system must automatically process all payments received. The system must support the acceptance and disbursement of payments using electronic funds transfer (EFT) technology. The system’s accounting process must be uniform statewide, accept and maintain all financial information, and perform all calculations relevant to the IV-D program. The system must distribute collections in accordance with 45 C.F.R. 302.32, 302.51, 302.52, 303.72, and 303.102. The system must generate notices to AFDC and former AFDC recipients, continuing to receive IV-D services, about the amount of support collections; and must notify the IV-A agency about collections for AFDC recipients. The system must maintain information required to prepare Federal reports. The system must provide an automated daily on-line report/worklist to each caseworker to assist in case management and processing. The system must generate reports required to ensure and maintain the accuracy of data and to summarize accounting activities. The system must provide management reports for monitoring and evaluating both employee, office/unit, and program performance. 
The system must support the expeditious review and analysis of all data that is maintained, generated, and reported by the system. The State must have policies and procedures to evaluate the system for risk on a periodic basis. The system must be protected against unauthorized access to computer resources and data in order to reduce erroneous or fraudulent activities. The State must have procedures in place for the retrieval, maintenance, and control of the application software. The State must have procedures in place for the retrieval, maintenance, and control of program data. The system hardware, software, documentation, and communications must be protected and back-ups must be available. The certification guide is currently being revised to incorporate changes required by welfare reform. The new version will refer to Temporary Assistance for Needy Families, the program that replaced Aid to Families With Dependent Children. Child Support Systems Certification Objectives (A-H) Child Support Enforcement: Privatization: Challenges in Ensuring Accountability for Program Results (GAO/T-HEHS-98-22, Nov. 4, 1997). Child Support Enforcement: Leadership Essential to Implementing Effective Automated Systems (GAO/T-AIMD-97-162, Sept. 10, 1997). Child Support Enforcement: Strong Leadership Required to Maximize Benefits of Automated Systems (GAO/AIMD-97-72, June 30, 1997). Child Support Enforcement: Early Results on Comparability of Privatized and Public Offices (GAO/HEHS-97-4, Dec. 16, 1996). Child Support Enforcement: Reorienting Management Toward Achieving Better Program Results (GAO/HEHS/GGD-97-14, Oct. 25, 1996). Child Support Enforcement: States’ Experience with Private Agencies’ Collection of Support Payments (GAO/HEHS-97-11, Oct. 23, 1996). Child Support Enforcement: States and Localities Move to Privatized Services (GAO/HEHS-96-43FS, Nov. 20, 1995). 
Child Support Enforcement: Opportunity to Reduce Federal and State Costs (GAO/T-HEHS-95-181, June 13, 1995). | Pursuant to a congressional request, GAO reviewed the Department of Health and Human Services (HHS) certification process for state child support enforcement systems, its administration of the process and the certification status of the state systems. 
GAO noted that: (1) certification guidance issued by the Office of Child Support Enforcement (OCSE) addresses the system requirements of the Family Support Act of 1988 and HHS' implementing regulations; (2) analysis of the certification process shows that OCSE has administered this process consistently across states since it began certifying child support enforcement systems in 1993; (3) it has used the same guidance for certification reviews and conducted reviews that were similar in scope and length for each level of certification; (4) while OCSE published many certification reports on the results of its certification reviews, its reporting was not always consistent; (5) as of March 31, 1998, OCSE had either certified or conditionally certified 25 of the 54 child support enforcement systems; and (6) OCSE had also conducted 13 additional reviews and was preparing certification reports for those systems. |
The United States is home to more immigrants than any other country in the world. Census estimated that 41 million foreign-born individuals resided in the United States from 2010 through 2014, making up 13 percent of the population. According to the World Bank, the United States is also, by far, the largest source of remittances from foreign-born residents to their home countries, including Mexico, China, India, and the Philippines, among others (see fig. 1). Remittance funds can be used for basic consumption, housing, education, and small business formation and can promote financial development in cash-based economies. In a number of developing economies, remittances have become an important and stable source of funds that exceeds revenues from exports of goods and services and financial inflows from foreign direct investment. Remittances can be sent through formal transfer systems and informal methods. Formal systems typically include banks, credit unions, money transfer businesses such as wire services, and postal services. In the United States, providers of remittance transfer services (including bank and nonbank institutions) are subject to federal oversight and, depending on the state in which they operate, can be subject to supervision by states. According to CFPB, nonbank remittance transfer providers sent an estimated 150 million individual transfers from the United States in 2012. Informal remittance transfer methods include hand-carried cash, courier services, and agents known as hawalas. Individuals can transfer remittance funds in several ways, such as
1. cash payments to individuals and bank accounts;
2. prepaid debit or credit cards; and
3. online and through mobile devices.
Some international organizations publish global remittance estimates annually. 
IMF collects data on components of remittances submitted by its member countries, including the United States, as part of its annual publication of balance of payments statistics. IMF’s Balance of Payments and International Investment Position Manual provides a framework for identifying individual remittance flows that benefit households. According to IMF, this framework can be applied by all countries and should lead to some level of comparability among them. The World Bank uses IMF statistics to produce an annual Migration and Remittances Factbook and monthly and annual remittances data on its website. Other international organizations, such as the IDB through the Multilateral Investment Fund, also produce annual reports on remittance estimates. In the United States, BEA is responsible for compiling the official U.S. estimates. Other nations may delegate the official estimation of remittances to central banks or specific government agencies. In response to requests from policymakers, remittance data compilers, and other data users, IMF and the World Bank published a guide for compilers and users of remittances data. The purpose of the guide is to promote lasting improvements in remittances data, which it seeks to accomplish by summarizing the definitions and concepts related to the balance of payments framework and by providing practical compilation guidance. Two items in the guide that substantially relate to remittances are “personal transfers” and “compensation of employees,” both of which countries are required to report to IMF. Personal transfers are a measure of all transfers in cash or in kind made or received by resident households to or from nonresident individuals and households. Compensation of employees is a measure of the income of short-term workers in an economy where they are not resident and of the income of resident workers who are employed by a nonresident entity. 
The guide also defines additional measures related to remittances, which countries are encouraged but not required to report. For example, personal remittances represent the sum of personal transfers, net compensation of employees, and capital transfers between households, according to the guide. Institutions use different methodologies to produce estimates of remittances. For example, BEA uses demographic and household survey data and a model that calculates the remittance rates by demographic group to create the official estimate of remittances from the United States. The World Bank has developed its own methodology to create remittance estimates. Its research group produces country-specific development indicators and international development statistics. The World Bank then complements these data with information from the IMF’s Balance of Payments and International Investment Position Manual to create annual and semi-annual remittance estimates. Since 2010, researchers at the World Bank have also used United Nations population data to develop a bilateral migration matrix, which provides a second set of country-specific bilateral remittance estimates— that is, estimates between sending and receiving countries. These estimates are based on the number of migrants in different destination countries and estimates of how changes in the income of migrants influence the remittances they send. IDB’s Multilateral Investment Fund has a different methodology, using estimates reported by central banks to IMF as a baseline for individual country estimates. The Multilateral Investment Fund then works with the Center for Latin American Monetary Studies to help refine remittance estimates for selected countries in the Latin America and Caribbean region. Finally, some central banks use a combination of methods to estimate remittances. 
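Returning to the guide's definitions above, the personal remittances measure amounts to a simple sum. The sketch below is illustrative only; the function name and the dollar figures are ours, not IMF's.

```python
def personal_remittances(personal_transfers, net_compensation_of_employees,
                         household_capital_transfers):
    """Per the IMF/World Bank compilers' guide described above, personal
    remittances are the sum of personal transfers, net compensation of
    employees, and capital transfers between households."""
    return (personal_transfers
            + net_compensation_of_employees
            + household_capital_transfers)

# Illustrative figures only (billions of dollars), not reported data.
estimate = personal_remittances(38.0, 1.5, 0.5)  # 40.0
```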
The central bank of Mexico, known as Banco de México, tracks remittance flows to Mexico with the help of regulatory reporting requirements on money transmitters. Since 2003, Mexico’s methodology for estimating remittances has required firms that receive remittances to report, on a monthly basis, the amount of money received and the number of transactions conducted between the United States and Mexico. To track remittances through informal channels, such as couriers that fall outside this regulatory framework, Banco de México conducts a survey at the U.S.-Mexico border of Mexicans entering the country. The central bank of the Philippines, known as Bangko Sentral Ng Pilipinas, estimates remittances that are channeled through banks. The Philippine government also has established a formal program for registering and tracking overseas Filipino workers. This program provides data to the government on the type of employment these workers obtain as well as their salaries. The Bangko Sentral Ng Pilipinas also uses the Survey of Overseas Filipinos to supplement data from the program. Using these two approaches, Bangko Sentral Ng Pilipinas is able to identify remittance funds sent by Filipinos overseas through friends and relatives and amounts brought in when these workers return home. A proposed fine on immigrants unable to show proof of legal status who send money through remittance transfer providers covered under EFTA could raise money for border protection, but the potential amount of revenue to be generated is unknown. Net revenue from the fine—the total of all fines collected less CFPB’s administrative and enforcement costs— would depend on several key factors, namely the dollar amount of remittances sent by those without legal immigration status, changes in remitter behavior because of the fine, including a potential reduction in remittances through regulated providers, and the cost of enforcement. 
For example, the ability to raise money depends on a significant number of individuals without legal status using regulated remittance transfer providers and paying the fine. However, a fine could result in a decrease in remittances in the regulated market and an increase in remittances through informal methods of money transfer. The revenue raised by the proposed fine would first be used to pay CFPB for enforcement costs. We did not identify any estimates of the administrative and enforcement costs associated with the fine. Our hypothetical scenario analysis illustrates the sensitivity of potential net revenue estimates to these factors. CFPB and other federal regulators would enforce the requirements of the proposed legislation, and CFPB identified some implementation challenges. Lastly, providers told us the fine could have consequences for them, and one provider said that smaller providers would likely be affected the most. A fine could potentially generate net revenue for border control, but the following selected factors would influence the actual amount:

The dollar amount of remittances sent by those without legal immigration status. The revenue raised by the fine would depend on the dollar amount of remittances sent by those individuals in the United States without legal immigration status and, specifically, by those using regulated remittance transfer providers. According to three studies we identified during discussions with experts, estimates of unauthorized U.S. immigrants in 2012 ranged from 11.1 million to 11.4 million people. Of that number, only those who conduct transactions through providers that are subject to EFTA would actually pay a fine, should they continue to use such providers, and that number is unknown.

The response to the fine by individuals in the United States without legal status, including a reduction in remittances through regulated providers. 
If individuals without legal status respond to the fine by making money transfers that may not be subject to EFTA requirements, by remitting less, or leveraging connections with immigrants with legal status, the amount of revenue raised by the fine would be lower. Representatives from almost all of the organizations we spoke with, including providers, researchers, federal agencies, and community groups, stated that remitters without legal status may be deterred by the fine and the additional scrutiny around their immigration status. The amount of revenue generated by the fine would also depend on the extent to which those without legal immigration status continue to use regulated systems after the fine is imposed instead of switching to informal methods, such as hawalas. Two articles identified in our literature search noted that those without legal status may use methods that allow them to maintain a higher degree of anonymity. For example, those without legal immigration status may have relatives or friends who are authorized to be in the United States send remittances for them, potentially lowering the amount of revenue raised by a fine. Conversely, if most remitters continue to remit the same amount, and continue to remit through regulated channels, the total amount remitted may remain stable, and more revenue will be raised. Research experts, officials from industry, community groups, and some federal agencies with which we spoke suggested that some remitters unable to provide proof of legal status may send the remittance and pay the fine, but the exact percentage is unknown. While the effect of the fine depends heavily on the remitting behavior of individuals without legal immigration status and their response to the fine, limited information exists on how many of these individuals remit or the extent to which they rely on regulated methods. 
In the absence of definitive studies on remitting behavior, the extent to which immigrants use regulated or informal methods for remitting and how they will respond to price increases is unknown. If the costs of the fine, including costs for providers to implement the requirements of the proposed legislation, are passed on to the remitter in the form of a price increase, remitters might reduce the amounts or the frequency with which they remit. Information on price sensitivity—how senders respond to an increase in price—is limited. According to a CFPB report and a remittance transfer provider with whom we spoke, remitters’ response to higher prices may partly depend on knowledge of other available options, including access to information about fees charged by other providers. Behavioral changes could substantially limit the amount of revenue generated for border protection.

Administrative and enforcement costs associated with the fine. Although the regulatory costs associated with the proposed legislation are unknown, CFPB officials told us that the agency would incur expenses associated with implementing the legislation and ensuring compliance. According to CFPB, these expenses would include the costs of developing rules, examining remittance transfer providers, and cooperating with other federal agencies on enforcement actions against noncompliant institutions. Other federal regulators also enforce EFTA for their regulated entities, and state regulators also may play a role in oversight of remittance transfer providers. As the revenue for the fine would be used first to reimburse CFPB for administrative and enforcement costs to carry out the proposed legislation, high costs to CFPB for these activities would mean less net revenue available for border protection. Uncertainty in these costs would contribute to uncertainty in how much revenue remains for border protection. 
Given the uncertainty related to these important factors, we constructed a scenario analysis to illustrate how the revenue generated for border protection could vary based on the values we assume for the following, given our starting assumptions about the total volume of remittances and proportion of those in the formal sector:
1. the dollar amount of remittances sent by immigrants without legal status,
2. the reduction in remittances through regulated providers in response to the request to show proof of legal status or pay a fine, and
3. the magnitude of administrative and enforcement costs to CFPB.
The scenarios are hypothetical because the factors used to generate the results were selected solely to demonstrate the uncertainty in how much revenue would be collected. They are not supported by empirical research or evidence. The selected scenarios we illustrate are from a larger number that we analyzed to examine how sensitive net revenue from fines is to the factors. The three factors shown in figure 2 illustrate the potentially wide variation in net revenue from fines. In our analysis, we begin by assuming that the total volume of remittances is $50 billion and 50 percent of the total volume of remittances is sent through regulated providers. The scenario analysis varies the three factors above, thereby demonstrating the breadth of uncertainty in potential net revenue. As figure 2 demonstrates, when the factors vary, potential net revenue from fines can change significantly. For example, one scenario with no change in the amount of remittances and low administrative and enforcement costs could provide $0.41 billion in potential net revenue for border protection. In contrast, another scenario with a 75 percent reduction in remittances after the fine and high administrative and enforcement costs would generate potential net revenue of only $0.01 billion. In some cases, the cost incurred by CFPB could be more than the revenue from the fine. 
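The sensitivity described above can be sketched numerically. The function and every parameter value below are illustrative assumptions—the report and the bill specify neither a fine rate nor the share of remittances sent by those without legal status—and the inputs are chosen only so the two scenarios land near the $0.41 billion and $0.01 billion figures cited above.

```python
def net_revenue(total_remittances, regulated_share, unauthorized_share,
                reduction, fine_rate, admin_cost):
    """Fines collected on regulated remittances still sent by those
    without legal status after the fine takes effect, less CFPB's
    administrative and enforcement costs."""
    base = total_remittances * regulated_share * unauthorized_share
    return base * (1 - reduction) * fine_rate - admin_cost

# Starting assumptions from the scenario analysis: $50 billion total
# volume, 50 percent sent through regulated providers. The 40 percent
# unauthorized share, 5 percent fine rate, and cost figures are
# hypothetical parameters, not values from the bill or the report.
no_change = net_revenue(50e9, 0.50, 0.40, reduction=0.00,
                        fine_rate=0.05, admin_cost=0.09e9)    # ~$0.41 billion
large_drop = net_revenue(50e9, 0.50, 0.40, reduction=0.75,
                         fine_rate=0.05, admin_cost=0.115e9)  # ~$0.01 billion
```

With a large enough drop in regulated remittances or high enough costs, the same formula goes negative, mirroring the report's observation that CFPB's costs could exceed the fines collected.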
For example, a small dollar amount of remittances sent by immigrants without legal status, large reductions in remittances, and high administrative and enforcement costs could lead to negative net revenue. Obtaining reasonable estimates of net revenue would depend upon having accurate, reliable, and complete information on the amount immigrants without legal status remit and their response to a requirement for providers to request proof of legal status or assess a fine, as well as administrative and enforcement costs. In the absence of such information, the potential net revenue a fine would generate is unknown. Officials from CFPB noted that in addition to creating uncertainty about administrative and enforcement costs, the proposed legislation, if passed, would require CFPB to address other issues, including issuing new rules to define what constitutes proof of legal status and to establish procedures for submitting fines, as well as coordinating with other regulators. As noted earlier, CFPB would be required to define by rule what constitutes acceptable documentation in states that do not require proof of legal status to obtain a state-issued driver’s license or a federal passport. CFPB would need to coordinate with other financial regulators. For example, the proposed legislation calls for remittance transfer providers to submit the fines to CFPB and for CFPB to then transfer to Treasury any remaining funds after the payment of CFPB’s administrative and enforcement costs. However as noted previously, other federal regulators have the authority to enforce EFTA for the entities they supervise, including enforcing the remittance provisions against those supervised entities that are remittance transfer providers under the act. The proposed legislation would provide CFPB with rulemaking authority, but does not state how CFPB would coordinate with other agencies. 
CFPB staff told us that it might need to develop procedures with others on examination and enforcement efforts. CFPB would be required to issue rules establishing the form and manner in which fines would be submitted to CFPB. CFPB staff told us that CFPB does not currently levy fines on consumers. Instead, CFPB levies monetary sanctions and brings other enforcement actions against consumer finance businesses and other persons in connection with violations of Federal consumer financial law. But CFPB staff noted that collecting fines directly from institutions for noncompliance is different from a fine on remitters collected by remittance transfer providers that is then submitted to the agency. Finally, CFPB may have examination authority over nonbank remittance transfer providers that also may be overseen by state regulators. If the proposed legislation were to become law, CFPB might have to coordinate with state regulators. If remittances decrease because the number of transactions or amounts remitted decline, the fee revenue associated with remittance transactions that providers receive would decrease. Without any corresponding reduction in cost, the decrease in remittances might decrease profits for some providers, but by how much is uncertain. Prior experience with legislation passed in Oklahoma in 2009 may demonstrate effects similar to those that could result from the proposed legislation though there are some key differences between the two. The Oklahoma law imposed a $5 fee on each wire transfer from a nondepository institution, and 1 percent of the amount of the transaction, if any, in excess of $500. When making a transfer, all persons regardless of immigration status were required to pay the fee. Under the Oklahoma law, customers who paid the fee are entitled to an income tax credit equal to the amount paid when filing individual income taxes in Oklahoma with either a valid Social Security number or a valid taxpayer identification number. 
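The Oklahoma fee schedule just described can be written as a small function. This is our sketch of the statutory formula as summarized above, not official guidance.

```python
def oklahoma_wire_fee(amount):
    """Fee under the 2009 Oklahoma law as described above: $5 on each
    wire transfer from a nondepository institution, plus 1 percent of
    any portion of the transaction in excess of $500."""
    return 5.0 + 0.01 * max(0.0, amount - 500.0)

oklahoma_wire_fee(300.0)   # $5.00 (flat fee only)
oklahoma_wire_fee(1000.0)  # $5 + 1% of $500 = $10.00
```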
The tax credit in effect means that those customers without a Social Security or taxpayer identification number are not eligible for the state income tax credit and therefore will have paid the remittance fee without being able to obtain a credit or refund. Statements of four remittance transfer providers with operations in Oklahoma suggest that the law has had mixed effects. According to two providers, revenues decreased once the law was in place. Two providers told us that transaction activity in the state had fallen. One other provider stated that their company had still not recovered from the decline in revenue. This provider told us that the decreased number of transactions was the result of remittances that moved to out of state providers or from regulated to informal channels. The other two providers we interviewed noticed decreases in remittances, although they noted they did not have a large presence in Oklahoma. Also, one official from a state audit association noted that fee revenues for the State of Oklahoma continued to increase after the first year of the imposition of the fee. Remittance transfer providers, industry associations, research experts, and some federal agencies we met with said that they expected to see revenues decrease in the regulated market if the proposed law (S.79) were passed, as it would send remittances to the informal market. New proof-of-legal status requirements and fine collection could also increase remittance transfer providers’ costs. Such potential costs were noted by almost all providers, and representatives from industry associations we spoke to. Several providers noted that they might need to pay for new computer infrastructure and databases, staff training, and compliance. One provider pointed out that just to add a new variable listing information on customers to an existing information system was a 9-month process that would involve testing and validation. 
Representatives of an industry association and one remittance transfer provider cited potential costs related to maintaining databases used to verify legal status. Remittance transfer providers could also face increased compliance costs related to new requirements. In some cases, providers told us that compliance costs could be significant. For example, some providers said that they had made significant investments to comply with the fee and exchange rate disclosures and other requirements implemented through amendments to Regulation E after the passage of the Dodd-Frank Act, such as developing procedures to electronically disclose the fees charged by the provider. Another provider said that it had spent more than $3 million on technology enhancements and customer service teams to satisfy the requirements of the rule. Still another provider noted that the company spent about 3 percent to 4 percent of its revenue on the legal compliance budget. One representative of a transfer provider whom we interviewed said that the company might be able to incorporate compliance requirements into its Bank Secrecy Act (BSA)/anti-money-laundering (AML) efforts. BSA/AML requirements for institutions that provide money transfer services include, among other things, collecting sender identification for each transfer in the amount of $3,000 or more. Banks are also required to implement a customer identification program, under which they establish procedures specifying what identifying information they will collect when customers open accounts. However, other providers noted that collecting identification is not the same as verifying legal status. For example, several providers accepted the Matrícula Consular de Alta Seguridad, which is an official identity card that Mexican Consulates issue to nationals living outside Mexico. 
As previously discussed, under the Remittance Status Verification Act this card would not be an acceptable form of identification for proving legal status within the United States for purposes of the act. One provider we spoke with explained that not all states require proof of legal status before issuing a driver’s license or other form of identification. Forms of identification that demonstrate legal status may vary from state to state. It could be difficult for money transfer clerks to know what form of identification to collect, particularly when remitters may hold identification from other states. Some providers and one trade association also noted that the proposed legislation would require additional staff training. For example, one provider said that the company operated through many retail outlets, such as grocery stores and gas stations, and it would not be practical to train all store clerks to determine the appropriate form of identification to show legal residency status. Another provider stated that it would be a significant challenge to train all agents—retail outlets that conduct transactions for the provider—on the documentation they would be responsible for collecting for proof of legal status. A trade association noted the difficulty and potential expense of training staff on how to properly check for proof of legal status; calculate, disclose, and collect the fine; and put the transaction in a database. How much of the fine and added cost would be absorbed by the provider or retail outlet partly depends upon the competitiveness of the market. Remittance transfer providers stated that in competitive markets with a number of providers and a variety of methods for transmitting money, the demand for remittances is more sensitive to prices. For example, one provider indicated that it lost customers when its prices were only marginally higher than those charged by other providers. 
With the prospect of losing more customers and revenue, one provider with whom we spoke stated that it might choose to absorb some of the fine and added cost instead of passing it on. One provider we spoke with expected that the added costs would increase the costs passed on to consumers by 3 to 4 percentage points. If these costs were passed on in such a manner, all consumers, regardless of legal status, could experience an increase in the price of remittance transfers that are sent to a foreign country. In addition, certain providers might be disproportionately affected by the requirements of the proposed legislation. According to representatives from two providers and a research expert, smaller providers generally operate at lower profit margins compared with larger providers. Providers with lower margins would find it more difficult to absorb costs imposed due to the fine and may be more adversely affected with a reduction in revenues. BEA’s estimate of remittances from the United States totaled approximately $40 billion in 2014, and its estimates of remittances generally increased from 2006 to 2014. BEA changed its remittance estimation methodology in 2012 in order to incorporate new data on reported remittances. However, BEA’s methodology for estimating remittances is not consistent with government-wide policies and guidance on statistical practices or with BEA’s own best practices and thus produces unreliable estimates. For example, BEA did not follow the guidelines from the National Research Council (NRC) of the National Academies stating that data releases from a statistical program should be accompanied by appropriate methods for analysis that take account of variability and other sources of error in the data. In addition, we identified several errors in BEA’s analysis that led us to question the reliability of BEA’s estimates, including data that are censored, measurement and coding errors, and an estimation methodology that is subject to biases. 
Further, BEA calibrated its new model to match the estimates from BEA’s old model, whose accuracy we questioned in a March 2006 report on remittance estimates. On the basis of discussions with BEA officials, BEA’s failure to follow best practices appears to be due to the fact that the agency does not consider its remittance estimates to be “influential information” that is subject to a high degree of transparency. However, BEA’s estimate is cited by national and international organizations and in some cases is incorporated into the estimates of these organizations, including the World Bank. BEA’s estimate of remittances from the United States (which it reports as personal transfers) totaled approximately $40 billion in 2014. As figure 3 shows, BEA’s estimates of remittances generally increased from 2006 to 2014. BEA’s estimates of remittances from the United States are based on demographic and household survey data and a model that calculates the remittance rates by demographic group. BEA assumes that the foreign- born population represents the relevant population of remittance senders in the United States, because this population is most likely to have a personal link to foreign residents. The estimates of personal transfers include all current transfers from resident to nonresident households, regardless of the means of transfer. BEA changed its model for estimating remittances in 2012 by using new demographic variables and data on reported remittances from the August 2008 migration supplement to the Current Population Survey (CPS) conducted by Census. For its revised model, BEA employed a multiplicative model—that is, a model whose results are the product of the combined effects produced by the individual variables. It used a nonstandard iterative technique to estimate the remittance rates. These rates show the proportion of income that is remitted. 
To obtain total remittances, the remittance rates for different demographic categories can be multiplied by the number of individuals in those categories and their incomes. In its new methodology, BEA combined the new remittance rates from its revised model with ACS data on foreign-born residents and their income to estimate total remittances sent annually from the United States (see fig. 4). The availability in the CPS of nationally representative data with actually reported remittance amounts provided BEA with an opportunity to revise the model it created in 2005. BEA first tested its previous demographic variables against CPS data and found that its assumptions about family structure and time in the United States were weak indicators of how much people reported to remit in CPS data and that its previous country tiers did not match remitting behavior very well. Therefore, in 2012 BEA changed the model by

- removing U.S. citizens born abroad of American parents, assuming that this group’s remittance behavior would be similar to that of the U.S.-born population, which was not included in the study;
- replacing the “children/no children” category with “married, spouse absent/other marital,” because those in the latter category were more likely to send remittances to spouses abroad and were thus a better predictor of remittances;
- adding the category “living with roommates/other living arrangements,” assuming that people shared housing to save money and therefore could send more remittances;
- combining immigrants who had been in the United States for 16 to 30 years with those who had been in the country for longer than 30 years into one category, “15 plus years,” as CPS data showed that these two categories had similar remittance rates; and
- reallocating countries within pre-existing geographical tiers, as BEA found that its previous country allocations were not the best match for the CPS data.
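The rate-based aggregation described above — multiplying each demographic group's remittance rate by the number of individuals in the group and their income, then summing across groups — can be sketched as follows. All groups, rates, counts, and incomes here are hypothetical placeholders, not BEA data.

```python
# Hypothetical sketch of the rate-based aggregation: total remittances are the
# sum over demographic groups of (remittance rate x group population x income).
# Every figure below is an illustrative placeholder, not BEA data.
groups = [
    # (remittance rate, foreign-born individuals in group, mean annual income $)
    (0.085, 1_200_000, 35_000),
    (0.040, 3_500_000, 48_000),
    (0.010, 2_000_000, 62_000),
]

total_remittances = sum(rate * count * income for rate, count, income in groups)
print(f"Estimated total remittances: ${total_remittances:,.0f}")
```

In BEA's actual methodology, the rates come from the CPS-based model while the population counts and incomes come from ACS data on foreign-born residents.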
We found several issues with BEA’s methodology that resulted in unreliable remittance estimates. BEA also did not follow its own best practices and Office of Management and Budget (OMB) or NRC guidance on documentation and methods for analysis that could have ensured reliability of its methodology and limited the inaccuracy in its estimates. Despite OMB and agency guidance and best practices that would provide that BEA should document its procedures for developing its new model for estimating remittances, BEA did not prepare adequate, transparent documentation of its efforts to develop its new model. BEA also did not prepare adequate documentation of management review and approval of the new model. OMB’s Information Quality Act (IQA) guidelines, which are designed so that agencies will meet basic information quality standards, state that agencies should ensure that data and methods are documented and transparent enough to allow an independent reanalysis by a qualified member of the public. IQA guidelines also direct agencies to develop management procedures for reviewing and substantiating the quality of information before it is disseminated. According to BEA best practice guidance, all changes in either methodology or data sources should receive documented management approval. In its own internal guidelines, BEA notes that it strives for the highest level of transparency about data and methods for its estimates to support the development of high-quality data and facilitate efforts to reproduce the information. Additionally, BEA best practices guidelines are designed to ensure the accuracy of input data; provide high-quality, timely analyses that document how estimates are made; and provide estimates that satisfy both internal and external customer needs. One BEA best practice is to enhance both transparency and replicability by instructing BEA staff to document each step or change in the methodology and document the rationale behind each decision. 
Another BEA best practice states that written analyses of the estimate should include a discussion of changes and revisions as well as deviations from standard methods. However, based on our analysis, BEA did not follow these guidelines, as the following examples illustrate. Documentation showing how the final remittance estimate is calculated was not maintained. When asked to provide records of analysis that supported the calculations of 2012 and 2013 remittance estimates (the most recent estimates available at the time of our review), BEA staff told us that the documents were created only when each year’s estimate was produced and were not saved. Unable to produce its original documents, BEA recreated the documentation to fulfill our request. However, BEA staff told us that the file could be missing some information required to successfully run the computer program that calculates total remittance estimates; for example, certain variables had been renamed, some fields were missing, and the numbers had been multiplied by an arbitrary discount factor, whose use BEA explained only later as a step taken to avoid a break in the series. Changes and revisions were not sufficiently documented. When asked to provide documentation of the analyses completed to determine changes in the model, BEA provided a conference paper containing written descriptions of its regression analyses. BEA staff who completed these analyses told us that the regression files had not been saved in a way that would allow the staff to easily provide us with the files applicable to the model changes. The staff described saving them among many partially complete files and told us that it would be difficult to identify the files that led to the current version of the model. Unable to provide its original research, BEA attempted to recreate the steps that were used to create the model. Management review of estimation methodology was insufficiently documented.
BEA officials noted that staff adhered to internal guidance by obtaining both managerial and external reviews of the model’s revision but provided little documentation of them. BEA staff said the remittance model proposal was presented to the Modernization and Enhancement Steering Committee (MESC) for formal review. BEA provided minutes of the MESC meeting discussing the review of the model, but the minutes also indicated that BEA management was still considering changes. BEA staff could not provide documentation of additional management actions taken or of another MESC meeting held at a later date. BEA staff told us that the agency subjected the output of research that affected methodology changes to a full gamut of validity checks. However, the only documentation we received of a validity test was the MESC meeting minutes that contained a discussion of the model’s assumptions. BEA staff told us that the personal transfers model had been subjected to additional scrutiny by BEA senior management resulting from the authors’ conference presentations. However, BEA did not provide us with either documentation of the conference feedback or the results of senior management’s additional scrutiny. BEA officials stated that the decision to publish a Survey of Current Business article about the model’s revision constituted verification of management review. However, BEA could not provide any documentation of the approval process for publication to demonstrate what the management review entailed. The rationale and appropriateness of its methodology for estimating remittances were not documented. According to NRC guidelines for federal statistical agencies, data releases from a statistical program should be accompanied by the assumptions used for data collection and what is known about the quality and relevance of the data.
The guidelines also mention appropriate methods for analysis that take account of variability and other sources of error and the results of research on the methods and data. We found BEA did not follow these guidelines, as the following examples illustrate. Data. We were unable to verify the accuracy of the data because we were not provided with documents detailing the steps and analyses BEA undertook to convert CPS data to the dataset BEA actually used to estimate the model. A BEA best practice states that an analysis of the estimate should include a discussion of questionable aspects of the source data, including outliers. However, BEA could not provide us with documents showing analyses performed to deal with various problematic aspects of the data and treatment of outliers. In addition, BEA conducts an analysis to assign a portion of the household’s income to each individual in the household. The income amount attributed to each individual is a critical component of the model and has a substantial effect on the result, yet BEA could not provide any documents showing sensitivity analyses of this critical assumption to see how the attribution of income affects its results. Further, for some households that reported no family income in CPS data, BEA assigned incomes but could not tell us how it calculated them. BEA also assigned all households within a given range of family income the same income. This approach introduces measurement error to the extent that households within a given range of family income do not, in fact, have the same income. BEA could not provide any documentation explaining these details or what implications the assigned incomes could have on its results. Estimation Technique. As described earlier, BEA used a nonstandard iterative technique to estimate its model. BEA staff acknowledged that the method was unusual and may be hard to comprehend.
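The income-assignment approach discussed above — giving every household in an income range the same income — introduces measurement error in an explanatory variable, and such error classically attenuates estimated effects toward zero. A minimal simulation with made-up values (not BEA's data or model) illustrates the effect:

```python
import numpy as np

# Simulate the classical attenuation bias: measurement error in an explanatory
# variable pulls its estimated slope toward zero. All values are illustrative;
# this is not BEA's data or model.
rng = np.random.default_rng(42)
n = 20_000
true_slope = 2.0

x_true = rng.normal(0.0, 1.0, n)              # income measured correctly
y = true_slope * x_true + rng.normal(0.0, 0.5, n)

# Income attributed with error (variance 1), as when a single value is
# assigned to every household in an income range.
x_noisy = x_true + rng.normal(0.0, 1.0, n)

slope_clean = np.polyfit(x_true, y, 1)[0]     # recovers roughly 2.0
slope_noisy = np.polyfit(x_noisy, y, 1)[0]    # attenuated toward roughly 1.0
print(slope_clean, slope_noisy)
```

With equal variances for the true variable and the error, the expected attenuation factor is one-half, which is why the noisy slope lands near half the true slope.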
When we requested additional information on management review of the model, BEA staff stated that they had had the model reviewed by an outside expert. However, officials later said the review was informal and no written opinion was provided. Model specification. BEA did not follow IMF’s guide for compilers in two instances related to issues of model specification. First, the guide specifies that the variables used to explain and predict remittance rates may need to be converted to different forms to see if they generate a better model. BEA could provide no documentation showing it attempted to do this analysis. Second, the IMF guide states that statistical analyses are also needed to understand the relationship of different demographic variables to each other and to remittance rates in order to select the relevant variables. BEA could not provide us with any documentation showing they performed any tests on the relationship among different demographic variables. Goodness-of-fit. This term refers to how well a model represents the data. The IMF guide states that various statistics describing goodness-of-fit should be calculated to decide on the best model for determining the level of remittances. BEA presented the results of only one such test, the “R-squared (R2),” which measures how well the proposed model fits the data. A low R2 suggests that the model is a poor fit, while a high R2 suggests a good fit; BEA’s model had an R2 of 6.75 percent. In its publication, BEA mentions an R2 of 15.8 percent, a number BEA got when it ran the model again for us using a different dependent variable. Moreover, BEA does not report standard errors for its model’s coefficients estimated with its iterative method and does not document that this method would produce correct standard errors for those coefficients. In addition, the data are top-coded: the CPS assigns all households that remit more than $10,000 a remittance value of $27,199, and these households account for a substantial share of total reported remittances. OMB’s Statistical Policy Directive No.
1 states that, where appropriate, any known or potential data limitations or sources of error should be described to data users so they can evaluate the suitability of the data for a particular purpose. But BEA does not mention this aspect of the data in its publication even though it has significant influence on the results. BEA’s model was not consistent with the data. BEA’s model assumption that all individuals within a demographic category remit on average the same percentage of their income is inconsistent with its data, which show that 75 percent of households remit nothing at all. Moreover, BEA’s model generates remittance rates for certain categories of households that have no individuals in them. For example, the model calculates that individuals in households with married persons with absent spouses who have roommates, who remit to low-tier countries, and who have spent 6 to 15 years in the United States, remit 8.5 percent of their income. However, there are no such individuals in the data. BEA failed to point out these data deficiencies even though OMB’s Statistical Policy Directive No. 4 asks agencies to clearly point out limitations of the data to users. Failure to account for censored data leads to biased results. Because the value of reported remittances is only partially known, BEA’s remittance data are censored data. The remittance data are censored (at the bottom) because 75 percent of households remit $0 and all other households remit positive sums. The remittance data are censored (at the top) because, as noted earlier, the CPS assigns all households that remit over $10,000 a remittance value of $27,199. Estimating a model on censored data demands certain econometric techniques, which BEA has not adopted, in order to yield unbiased estimates. The NRC guidelines mentioned above specifically ask agencies to use appropriate methods for analysis that take account of variability and other sources of error.
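The censoring problem described above is commonly handled with a Tobit (censored-regression) likelihood. The sketch below, using simulated data with hypothetical parameter values, shows the general technique: ordinary least squares on data piled up at $0 understates the income effect, while the censored likelihood recovers it. It illustrates the class of econometric methods referred to, not BEA's model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulated left-censored remittance data (all parameter values hypothetical):
# a latent desired remittance is observed only when positive, as when most
# households report remitting $0.
rng = np.random.default_rng(0)
n = 5_000
income = rng.uniform(20, 80, n)                         # $ thousands
latent = -30 + 0.5 * income + rng.normal(0, 10, n)      # true slope 0.5
y = np.maximum(latent, 0.0)                             # censored at zero

X = np.column_stack([np.ones(n), income])

# Naive OLS on the censored data attenuates the income effect.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

def tobit_nll(params):
    """Negative log-likelihood of the Tobit model, censored at 0."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    xb = b0 + b1 * income
    censored = y <= 0.0
    # Censored points contribute P(latent <= 0); others the normal density.
    ll = np.where(censored,
                  norm.logcdf(-xb / sigma),
                  norm.logpdf((y - xb) / sigma) - np.log(sigma))
    return -ll.sum()

res = minimize(tobit_nll,
               x0=[beta_ols[0], beta_ols[1], np.log(y.std() + 1.0)],
               method="BFGS")
beta_tobit = res.x[:2]                                  # close to (-30, 0.5)
print(beta_tobit[1], beta_ols[1])
```

The OLS slope is biased toward zero by the mass of $0 observations; the censored likelihood models that mass explicitly and so recovers a slope near the true value.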
BEA’s model is incorrectly specified in its documentation, and the actual model specification may lead to biases. BEA’s documentation states that its model explains the total amount remitted by a household (in part) in terms of that household’s income level. However, BEA’s model assumes that the total amount remitted by a household depends only on the income of the foreign-born individuals in the household. To the extent that U.S.-born individuals in a household do remit, BEA’s model overestimates the fraction of income remitted by foreign-born individuals in that household. BEA officials said that they excluded U.S.-born individuals from a household on the basis that they remitted very little. But we found that 407 households with only U.S.-born individuals reported remitting almost 13 percent of total remittances, suggesting that the remittance rates of foreign-born individuals may be overestimated and, thus, biased. Measurement errors in a critical explanatory variable bias the results. As explained earlier, BEA’s assignment of household income to individuals within the household is critical to its analysis. Since this individual income variable is subject to measurement error, it biases the effect of this variable on the remittance rates, contributing to the unreliability of the remittance estimates calculated by BEA. Several coding and other errors also contributed to inaccuracy. BEA staff said that they considered the estimation of the personal transfers model a relatively straightforward task. As such, they did not consider independent programming of code by a reviewer necessary. We found several errors and unexplained adjustments in BEA’s code that might have been detected had a review been conducted. Calibration of this new model to match unreliable old estimates enhanced unreliability. BEA’s model predicts total remittance amounts that are substantially lower than those BEA has historically published. 
BEA handles this difference by multiplying the remittance rates from its model by an arbitrary calibration factor so that the total model’s estimated remittances equal those that BEA previously calculated for 2008. BEA calibrated the model because an analysis of CPS data determined that remittance rates may have been underestimated because many immigrants were reluctant to report their precise remittances. Because BEA calibrated the new model to old estimates, BEA estimated the same remittance amount for the year 2008 that the old model had produced. For the years following 2008, the remittance estimates differed slightly from the previous estimates because of the different demographic characteristics used in the new model (see table 2). In our March 2006 report on BEA remittance estimates, we questioned the accuracy of BEA estimates based on the model developed in 2005 after finding that the remittance rates BEA used were primarily based on its own judgment. We found shortcomings in BEA’s model, specifically with regard to the assumptions BEA made about the percentage of income remitted and the percentage of foreign-born persons who remit. We were unable to link the parameters that BEA used to capture the remitting behavior of foreign-born persons directly to the sources that BEA cited. We found that BEA used its own judgment to determine the proportion of the adult foreign-born population that sent remittances and the proportion of income they remitted. We concluded that the accuracy of these estimates was affected both by the quality of the underlying data as well as by these assumptions. Therefore, calibration of the new model—which may itself be unreliable—to the old estimates further affects the reliability of the final estimates. BEA officials told us that the personal transfers estimate was not a principal economic indicator. 
Therefore, BEA considered information related to the development of the estimate to be influential (as defined by OMB’s IQA guidelines) only in terms of the integrity of the estimate’s dissemination. Nonetheless, BEA’s Information Quality Guidelines state that at BEA the notion of data integrity goes beyond the maintaining of the security of its information. Integrity includes, among other things, transparency that is ensured by providing certain information, such as assumptions for missing source data and discussions of revisions. BEA officials also noted that personal remittances were a relatively small component of the U.S. current account. According to BEA officials, over the past 5 years personal transfers accounted for an average of 0.59 percent of gross current account transactions. Officials said that, as a result, resources devoted to improving the estimation of personal remittances had to be balanced with resources allocated to improving other estimates that could be more important to the balance of payments. However, a number of organizations use BEA’s estimates. BEA reports its personal transfer estimates to IMF, which publishes country estimates in its Balance of Payments Statistics Yearbook. In addition, the World Bank uses BEA estimates submitted to IMF as part of its calculations on remittances. IDB’s Multilateral Investment Fund also uses estimates published by IMF as a baseline for its calculations of individual country estimates. BEA officials also noted that OMB’s guidelines give agencies discretion in determining the level of quality to which information will be held. However, while the guidelines do afford agencies some discretion, the guidelines make it clear that agencies should not disseminate substantive information that does not meet a basic level of quality. As discussed earlier, by failing to follow its best practices, BEA has not met this basic quality level. 
BEA officials did not explain the reasons behind not following their own best practices or failing to maintain adequate documentation along the way. We have previously stated that appropriate documentation of a significant event or internal control, in a manner that allows it to be readily available for examination, is an example of a control activity that can be taken by federal program management. This type of control activity allows management to achieve objectives and respond to risks in its internal control system. Such events would include supervisory review of methodological changes to BEA’s estimation model. Moreover, BEA’s best practices require documentation of its methodology and data and supervisory and management review and approval of any changes. But BEA has not provided sufficient and transparent documentation of its procedures for developing its new personal remittance estimation model. The lack of documentation made our evaluation of BEA’s model and estimates difficult, and it was not possible for us to obtain reasonable assurance that BEA met federal guidelines and its own internal standards. Because the documentation provided to us by BEA is lacking in both clarity and completeness, we cannot say that BEA has met the goal of IQA to ensure and maximize the quality, objectivity, utility, and integrity of its remittance statistics, which are public information disseminated by federal agencies. However, based on the information we were able to obtain, we were still able to determine that the model produces unreliable annual estimates. BEA’s updated model for estimating remittances produces unreliable results due to underlying issues with the data, such as missing information and measurement problems. BEA did not satisfactorily explain why its methodology was appropriate, despite NRC’s guidance to do so. Moreover, BEA calibrated the new estimates to align with those from its old model, the accuracy of which we had previously called into question. 
Additionally, BEA could not provide us with sufficient documentation of the steps it took to test the model and ensure it received management review and approval—key quality assurance procedures. Documentation of BEA’s processes of analyzing, testing, and reviewing its model should not be simply an act of memorializing events. Documentation also provides evidence of an agency’s adherence to procedures and policies that are part of its quality assurance framework. BEA’s methodology for estimating remittances is not consistent with guidelines prescribed by BEA’s best practices standards, the standards of IQA, OMB statistical directives, and NRC guidance. Had BEA subjected its model to these standards, it would have taken important steps toward obtaining reasonable assurance that it had produced reliable annual estimates of remittances. Although BEA officials discount the importance of remittances as a component of international transactions statistics, the inability of BEA’s new model to produce more accurate remittances estimates is consequential, as BEA’s estimate is the official remittance estimate of the United States and is cited by both national and international organizations, and in some cases incorporated into the estimates of these organizations. We recommend that the Secretary of Commerce direct the BEA Director to take the following actions: To improve the reliability of the annual official U.S. estimate of remittances, conduct additional analyses of BEA’s estimates using estimation techniques appropriate for dealing with the shortcomings of the data. Analyses should also be conducted to understand the effect of various assumptions behind and limitations of the data on the estimates. 
To improve the transparency and quality of BEA’s international remittances estimate, follow established BEA best practices, OMB policies, and NRC guidance for documenting BEA’s methods and analyses used to revise its model for estimating remittances and for producing its annual estimates. We provided a draft of this report to the Secretaries of Commerce, Homeland Security, State, and the Treasury, the Chair of the Board of Governors of the Federal Reserve System, and the Director of the Consumer Financial Protection Bureau (CFPB). Commerce provided a letter, including written comments from the Bureau of Economic Analysis (BEA) on a draft of the report, which are reprinted in Appendix II. CFPB, Treasury, and State provided technical comments, which we incorporated as appropriate. In its comment letter, BEA stated that it intends to implement our two recommendations to the extent possible consistent with resource limitations as it continuously improves its remittance (personal transfer) estimate and other estimates. However, BEA stated that it did not agree with our report’s conclusions that its remittance estimates are unreliable or that its documentation of changes to its estimation model or annual estimates is inadequate. More specifically, BEA commented that it believes that its remittance estimates are valid and reasonable for the purpose for which they are prepared and that the documentation provided to GAO was fully adequate. We recognize BEA’s resource constraints. However, we maintain that our findings related to the reliability of BEA’s remittance estimates and documentation of the methodology to produce such estimates are valid and support the recommendations we made in the report. Regarding our conclusion that BEA’s remittance estimates are unreliable, in its comment letter BEA acknowledged the data limitations that GAO pointed out in the report but did not explain how these may affect its estimates.
The limitations described in BEA’s comment letter were not discussed in the documentation provided by BEA. Nor did BEA provide evidence showing that it conducted alternative analyses to conclude that these limitations did not affect the quality of its final estimates. For example, in its comment letter BEA mentions that the calculation of its income variable was problematic but during our review did not present us with analysis to show how sensitive its estimates were to various assumptions about income, including that of taking the midpoint of the range of income provided in its data. Even BEA’s choice of demographic variables included in its analysis depends on how it calculates individual income. BEA acknowledges that its data were censored—where the value of reported remittances for some households in its data set is only partially known—but during our review, it did not provide evidence that it conducted additional analyses using an alternative methodology to see how final estimates might be affected. BEA told us that these households were responsible for a substantial proportion of all remittances, and we found that this censoring had considerable influence on BEA’s estimates. Though these and other data limitations described in this report could have a substantial impact on the estimates, in its comments BEA dismisses the limitations, stating that they would have only a marginal effect on the estimates. However, BEA does not present evidence of having tested the magnitude of the effects on the estimates. Moreover, calibrating the estimates resulting from BEA’s revised estimation model to its previous estimates, the accuracy of which was deemed uncertain in a previous GAO report, further undermines our confidence in these estimates. As a result of data limitations, BEA’s choice of methodology in light of those limitations, and other errors and corrections BEA made, we maintain that BEA’s revised estimation model produces unreliable remittance estimates.
Regarding our conclusion that BEA did not follow the best practices, policies, and guidance to which it is subject for documenting its methods and analyses, BEA stated that the documentation provided to GAO was fully adequate. We disagree. As discussed in this report, we identified several instances where BEA did not follow best practices, policies, and guidance. For example, we requested files that provided documentation of the analyses BEA conducted to determine changes to its estimation methodology. BEA provided written descriptions of its regression analysis in a conference paper. BEA staff told us that its analysis files had been saved among many partially complete files and that it would be difficult to identify the files that led to the current version of the model. BEA’s best practice standards require that all methodological changes and the rationale for the changes be clearly documented. As we describe in the report, without documentation BEA could not effectively convey and support the rationale and appropriateness of its methodology. We were unable to verify, among other things, the accuracy of much of BEA’s data or fully understand the selection of its methodology. As we stated in the report, documentation of analysis, testing, and evaluations of models should show evidence of adherence to procedures and policies that are part of an effective quality assurance framework. BEA did not provide documentation that reflected such a framework. For example, BEA officials described conducting managerial and external reviews of the model’s revision but provided only the minutes to one management review meeting indicating that the model had been discussed but was still under consideration. Though we requested documentation of final approval of the model by the management committee, BEA told us that it had nothing further to provide. 
BEA also described an external review of its model revision that was done by an external econometrician for quality assurance purposes. When we asked for documentation of this review, however, BEA told us that it had been informal and that no written opinion had been provided. In addition, BEA stated that our ability to reproduce the agency’s estimates showed that its documentation was adequate. However, we did not attempt to reproduce BEA’s estimates. Rather, we ran the computer program that BEA provided on the data created by BEA to replicate a few intermediate steps in its methodology. By replicating these steps, we found inconsistencies between BEA’s description of the analysis and what was actually done, and other errors. We did not and would not have been able to reproduce the analysis, based on the documentation that BEA provided, that led to the final remittance estimates or even create the dataset used by BEA from its listed sources. BEA noted that it provided us with new summaries to help explain certain aspects of its methodology, but asserted that we conflated this additional effort with an inadequacy of internal control and initial documentation. However, we maintain that in some cases, BEA provided these summaries because it was unable to provide us with original documentation. For example, we asked for records of analysis that supported the calculations of 2012 and 2013 estimates. BEA told us that the documents were created only when each year’s estimate was produced and were not saved. BEA also was unable to provide original documentation of the analysis that led to the current version of the model and attempted to recreate its steps in new documentation. BEA also rejected the statement that it did not follow best practices because it did not consider remittances to be influential. 
During our review, BEA staff told us that information about BEA’s remittance estimates was designated as influential only to prevent their disclosure before they were officially released. BEA also told us orally and in writing that as personal remittances were a relatively small component of the U.S. current account, resources devoted to improving the estimate of remittances had to be balanced with resources allocated to improving other estimates that could be more important to the balance of payments. Finally, BEA stated its remittance estimate was not designed to measure the potential impact of the WIRE Act (proposed Remittance Status Verification Act of 2015), and it understood that we would use its estimates as a basis for understanding the magnitude of cross-border transfers. BEA’s comment inaccurately described the purpose and scope of our review. As we describe in this report, our review focused on two separate objectives which were to (1) discuss the potential effects of assessing a fine on remitters unable to provide proof of legal U.S. immigration status, and (2) examine BEA’s remittance estimate and the extent to which its revised estimation methodology met government-wide policies and agency best practices. We used information on BEA’s remittance estimates solely to help us answer the report’s second objective. We are sending copies of this report to interested congressional committees and the Secretaries of Commerce, Homeland Security, State, and Treasury, as well as to CFPB and the Federal Reserve Board. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
This report (1) discusses the potential effects of collecting information from and imposing a fine on remitters unable to provide proof of legal U.S. immigration status, and (2) examines the Bureau of Economic Analysis’ (BEA) remittance estimate and the extent to which the revised estimation methodology met government-wide policies and agency best practices. To discuss the potential effects of assessing a fine on remitters unable to provide proof of U.S. immigration status, we summarized estimates of the number of immigrants without legal status from federal agencies and research organizations, including the Department of Homeland Security (DHS), Pew Research Center, and Center for Migration Studies (CMS). Together, DHS, Pew Research, and CMS are primary sources for the estimates of immigrants without legal status in the United States, which we determined by asking experts from each organization to discuss all other similar estimates. Through interviews with immigration researchers, review of research articles, and comparison of the estimates, which ranged from 11.1 million to 11.4 million immigrants in the United States without legal status in 2012, we determined that the estimates were authoritative and sufficiently reliable for the purposes of this report. We used these sources to identify the size of the potentially affected group of immigrants without legal status. To acquire information on the effects of the proposed requirement to provide proof of legal status or pay a fine, we reviewed relevant academic and industry studies based on a literature search. We reviewed and summarized the literature for factors that could be associated with the proposed legislation, including the number and remitting behavior of immigrants without legal status, changes in remittance flows in response to a price increase, the effect of requiring proof of legal status on remittances, and market competition between remittance providers.
We determined the studies to be reliable for our purposes. To obtain perspectives on the potential effects of imposing a fine on remitters without proof of U.S. legal status, we interviewed researchers with expertise in remittances and immigration to the United States, financial institutions, remittance service providers, two industry trade associations, one state audit association, two community groups with knowledge of remitters’ concerns, and knowledgeable federal and international agencies. We judgmentally selected a cross-section of remittance transfer providers that included five nondepository remittance transfer providers and four depository institutions based on a number of factors, including the volume of remittances and diversity of countries serviced. We spoke with regulators, including the Consumer Financial Protection Bureau (CFPB) and the Financial Crimes Enforcement Network (FinCEN), to obtain their perspectives on compliance with requirements of the proposed Remittance Status Verification Act of 2015, should it become law. We also reviewed laws and regulations relevant to remittance transfer providers. Researchers with expertise in remittance transfers were selected by contacting two recognized experts and asking for referrals. We interviewed the recommended experts and continued asking for referrals until they began to repeat experts we had already interviewed. To select community groups, we asked others we interviewed for recommended groups. To highlight the uncertainty associated with the effects of the fine, we constructed a scenario analysis of several factors that may affect net revenue from the fine, which is the amount of fine collected that remains available for border protection after payment of CFPB’s administrative and enforcement costs.
We varied hypothetical amounts for the following three factors: dollar amount of remittances sent by immigrants without legal status, the percentage reduction in remittances in response to the fine, and the cost for administration and enforcement. We selected the three factors by analyzing them among other potential factors and we found that these three provided wide variability in net revenue from the fine. Other factors we considered included the volume of total remittances, the percentage transmitted through formal methods, and the percent of remittances sent by immigrants without legal status. Though we conducted a literature search for statistics for each factor in our analysis, any studies found were not generalizable or sufficient for our purposes. The data were limited to remittance flows between specific countries, for example remittances sent between the United States and Mexico, or were not recent. Therefore, the dollar amounts or percentages given to each factor in our scenario analysis are hypothetical and selected only to show the potential variability in net revenue from the fine. To obtain information on BEA’s estimate of remittances (personal transfers) from the United States, we met several times with BEA officials responsible for developing the estimate. They provided us with an estimate of the total volume of remittances from the United States to the rest of the world from 2006 to 2014 that they provided to the IMF for inclusion in balance of payments statistics. In this report, we further assess BEA’s estimation model and find that its results are unreliable. To understand BEA’s revised methodology for estimating remittances (personal transfers) we conducted multiple interviews with BEA staff responsible for developing the estimate. We obtained BEA documentation describing the agency’s approach to estimating remittances, including components of its model, related statistical program files, and its outputs. 
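The scenario analysis described above can be sketched as a simple calculation that varies the three factors. All dollar amounts, percentages, and the fine rate below are hypothetical placeholders chosen only to show variability; none come from the report, and the bill's actual fine rate is not given here.

```python
def net_revenue(remittances_usd, reduction_pct, admin_cost_usd, fine_rate=0.07):
    """Hypothetical net revenue remaining for border protection.

    remittances_usd: dollar amount sent by immigrants without legal status
    reduction_pct:   fraction by which remittances fall in response to the fine
    admin_cost_usd:  CFPB administrative and enforcement costs
    fine_rate:       illustrative placeholder rate (an assumption, not from the report)
    """
    remittances_after_fine = remittances_usd * (1 - reduction_pct)
    fine_collected = remittances_after_fine * fine_rate
    return fine_collected - admin_cost_usd

# Varying the three factors produces widely different outcomes (all illustrative):
scenarios = [
    (10e9, 0.10, 50e6),   # modest behavioral response, moderate costs
    (10e9, 0.50, 50e6),   # large shift to informal transfer methods
    (5e9,  0.10, 200e6),  # smaller base, high administrative costs
]
for remit, drop, cost in scenarios:
    print(f"net revenue: ${net_revenue(remit, drop, cost):,.0f}")
```

Under some combinations of factors (for example, a large enough behavioral response combined with high administrative costs), net revenue can even turn negative, which is precisely the uncertainty the scenario analysis is meant to highlight.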
We reviewed BEA’s presentation and description of the model and checked for consistency with its statistical program files and other calculations. We provided BEA with numerous follow-up questions about the methodology, and BEA provided us with written responses and attended additional meetings to provide more clarity. We also obtained documentation on the Census Bureau’s (Census) American Community Survey and Current Population Survey data to understand how they were used in BEA’s remittance estimation methodology and interviewed Census officials familiar with the survey. We also reviewed BEA’s best practices, Office of Management and Budget (OMB) statistical directives, and the National Research Council (NRC) of the National Academies of Sciences’ manual for statistical agencies to determine the extent to which BEA’s methodological changes conformed with guidance on statistical practices. To determine the extent to which BEA documented its changed methodology and its results and adhered to best practice standards, we met with BEA staff responsible for developing the estimate. BEA staff explained their documentation procedures to us. BEA staff also provided copies of BEA guidance on best practices regarding methodological changes. We also reviewed relevant law and regulations, as well as guidance from IMF, the Department of Commerce, OMB, and NRC. We reviewed documents provided by BEA for transparency and completeness. Additionally, we provided BEA with follow-up questions about the agency’s documentation processes and procedures, and BEA provided us with written responses. After receiving the responses, we again met with BEA staff to discuss these processes and procedures. To obtain a variety of views on remittance estimation, we met with officials from IMF, World Bank, Inter-American Development Bank and their external consultant, as well as the Mexican and Philippine central banks. 
We selected these two countries because they were among the top 10 recipient countries of U.S. annual outflows and both countries use a formal methodology to track inflows and outflows on at least an annual basis. In meetings with these entities, we gained an understanding of the methodologies used to estimate remittances and challenges in remittance estimation. We conducted this performance audit from October 2014 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Marshall Hamlett (Assistant Director); Julie Trinder-Clements (Analyst-in-Charge); Namita Bhatia- Sabharwal; Tarik Carter; Emily Chalmers; David Dornisch; Lawrance Evans, Jr.; Donald Hirasuna; Cheryl Jones; Madeline Messick; Patricia Moye; Jungjin Park; Oliver Richard; and Jena Sinkfield made key contributions to this report. | For many countries, remittances represent a large and stable source of foreign currency. Remittances have received increasing attention from policymakers as the volume of funds transferred has increased over the years. Despite the global significance of remittances, much remains unknown about the actual volume of remittances and the methods used to remit them. GAO was asked to study the potential effects of a fine on certain remitters and estimates of U.S. remittances. GAO examined (1) the potential effects of a fine on remitters unable to provide proof of legal immigration status, and (2) BEA's remittance estimate and the extent to which its revised estimation methodology met government-wide policies and best practices. 
GAO constructed a hypothetical scenario analysis to show the uncertainty associated with the effects of a fine. GAO interviewed, among others, BEA, International Monetary Fund and World Bank officials, and researchers. GAO also analyzed BEA's estimate of U.S. remittances and documentation of its methodologies. The Remittance Status Verification Act of 2015, S. 79, would require remittance transfer providers to request that all senders of remittances to recipients outside the United States provide proof of their legal status under U.S. immigration laws and impose a fine on those unable to provide such proof. The funds collected would be submitted to the Consumer Financial Protection Bureau (CFPB) to pay for its administrative and enforcement costs in carrying out the act, and any remaining funds would be used to pay expenses related to border protection. The fine may raise money for border protection, but the exact amount is unknown and would depend on several factors, including the dollar amount of remittances sent by those without legal status, changes in remitter behavior due to the fine, such as using unregulated transfer methods, and CFPB's administrative and enforcement costs to carry out the act. The first two factors above affect the volume of remittances that would be subject to a fine. The third factor affects the amount of net revenue from the fine remaining for border protection. Finally, remittance transfer providers told GAO that the fine could have consequences for them, including potentially disproportionate costs for small providers. The Bureau of Economic Analysis (BEA) estimated that remittances from the United States were approximately $40 billion in 2014. However, BEA's methodology for estimating remittances is not consistent with government-wide policies and guidance on statistical practices or with BEA's own best practices and thus produces unreliable estimates. 
GAO identified several weaknesses in BEA's estimation methodology, illustrated by the following examples. BEA failed to use appropriate methodology that addressed questionable aspects of the data, such as missing information and measurement problems. This is inconsistent with National Research Council of the National Academies of Science guidelines for federal statistical agencies and government-wide policies. BEA also calibrated the output of the new model to match the estimate produced by BEA's previous model. BEA did this because according to officials the new model produced substantially lower results than BEA had previously estimated. In a 2006 report GAO had questioned the reliability of BEA's previous model; as a result BEA's actions raise further concerns about the reliability of the new model's results. Moreover, BEA could not provide adequate, transparent documentation underlying its methodology or reviews of its methods and data. According to BEA officials, BEA did not adhere to its own best practices for changing its methodology because they did not consider the remittance estimate to be influential information. However, BEA's estimate is influential, as it is cited by national and international organizations and in some cases is incorporated into the estimates of these organizations, including the World Bank. GAO recommends that BEA conduct analyses to improve the reliability of its estimate and follow established policies for documenting its methods and analyses. BEA agreed to implement the recommendations but disagreed that its estimates are unreliable and not adequately documented. GAO disagrees and maintains that BEA's revised estimation model produces unreliable estimates and BEA could not provide adequate documentation of its methodology. |
In the United States, authority to regulate elections is shared by federal, state, and local officials. Congressional authority to regulate elections derives from various constitutional sources, depending upon the type of election, and Congress has passed legislation in major functional areas of the voting process, such as voter registration, as well as prohibitions against discriminatory voting practices. However, the responsibility for the administration of state and federal elections resides at the state level, and states regulate various aspects of elections including, for example, registration procedures, absentee and early voting requirements, and Election Day procedures. Within each state, responsibility for managing, planning, and conducting elections is largely a local process, residing with about 10,500 local election jurisdictions nationwide. Some states have mandated statewide election administration guidelines and procedures that foster uniformity in the way their local jurisdictions conduct elections, whereas other states have guidelines that generally permit local election jurisdictions considerable autonomy and discretion in the way they run elections. Along with the various ways that states and local election jurisdictions may share election policy responsibilities, there are a variety of cost-sharing arrangements between state and local election offices. The result is that elections can be administered differently across states and local jurisdictions. The offices that administer elections in states and local jurisdictions can be organized in different ways, and in some cases offices with primary responsibility for elections (referred throughout this report as election offices) may have responsibility for other areas of government as well. 
For example, in Rhode Island, the Secretary of State’s office oversees the Elections Division as well as other divisions and offices responsible for public records, business services, the state library, and the state archives. In contrast, in Delaware, the State Election Commissioner has a more singular focus of overseeing the Department of Elections. Similarly, local election offices may include a Board of Elections or Board of Canvassers that are specifically responsible for elections, or a county clerk’s office that may also have responsibility for public records, licenses, or other activities. As election officials manage voter registration processes and voter lists, they must balance two important goals. First, officials seek to minimize the burden on eligible people registering to vote. Additionally, they seek to ensure that the voter lists are accurate, a task that involves including the name of each eligible voter on the voter list, removing names of ineligible voters, and having safeguards in place so that names of voters are not removed in error from the list. States have established a variety of mechanisms for registering voters and confirming the identity and registration of those who seek to vote, whether at the polls on Election Day or by absentee ballot. Two key pieces of federal legislation require states to take certain measures addressing voter registration—the National Voter Registration Act of 1993 (NVRA) and the Help America Vote Act of 2002 (HAVA). In addition to any other method of voter registration provided for under state law, NVRA prescribes three methods of registering voters for federal elections: (1) when they obtain a driver’s license, (2) by mail using the federal voter registration form prescribed by the Election Assistance Commission (EAC), or (3) in person at offices that provide public assistance and services to persons with disabilities and other state agencies and offices.
Certain states are exempt from NVRA—specifically those states that allowed Election Day registration at polling places at the time that NVRA was enacted and North Dakota, which does not require registration to vote. This means that in those exempted states voters can register to vote and vote on Election Day pursuant to state requirements and the states are not required to provide the NVRA registration methods noted above. Lastly, NVRA also establishes requirements to ensure that state programs that identify and remove from voter registration rolls the names of individuals who are no longer eligible to vote are uniform, nondiscriminatory, and do not exclude a voter from the rolls solely because of his or her failure to vote. HAVA required states to each establish a single, uniform, statewide, computerized voter registration list for conducting elections for federal office. To assist with those and other elections efforts addressed in HAVA, Congress authorized more than $3 billion in funding to be distributed to the states to fund compliance with HAVA requirements, and to generally improve the administration of elections for federal office. According to researchers, HAVA, and the funding Congress provided to implement HAVA, played a major role in removing barriers associated with paperless registration. Increasingly, voters in many states can register or update their registration information online, in addition to other available registration options required by NVRA or established by the states. As shown in figure 1, Arizona was the first state in the nation to implement online voter registration, in 2002, the same year as the passage of HAVA. As of May 2016, 31 states and Washington, D.C., offer online voter registration. In some of these states, the online registration option is only available to citizens who have a driver’s license or state-issued identification (ID) card. 
In these states, individuals who do not have either of these forms of ID may fill out the registration form online, print, sign, and mail it to the election office. Furthermore, with increased access to information online, states have also developed elections websites that provide electronic customer service for voters. Among other things, voters can view their polling locations, apply for an absentee ballot, or access other information that can assist voters in casting their ballots, including registering online. States have also begun implementing data-sharing efforts within their states to support the work of maintaining accurate voter registration lists. For example, election offices in some states are collaborating with their state’s motor vehicles agencies—such as a Department of Motor Vehicles (DMV), and hereafter we refer to motor vehicles agencies as DMVs—to share data, such as addresses and identifying information, electronically between the agencies. These systems establish a connection between the DMV and the state’s voter registration database, enabling the electronic transmission of information to election offices when individuals register to vote or update their registration when visiting the DMV. Election officials then process the data received—for example, they may add a new registration record for an eligible individual who applied while obtaining a driver’s license or update an existing registrant’s address if the individual moved to a new residence and provided the DMV with an updated address. States also use multiple sources—including collaboration with other states to share voter registration information across multiple states—to maintain accurate registration lists given that individuals may move across state lines without cancelling their registrations at their previous addresses. 
For example, the Electronic Registration Information Center (ERIC), founded in 2012 as a project between the states and The Pew Charitable Trusts, was organized to address the challenge of incomplete and inaccurate voter registration lists. Since shortly after ERIC’s founding, state election officials have overseen and managed the program to organize the collection, analysis, and distribution of data among member states. The organization uses automated data-matching software to produce reports for member states, with the goal of helping state and local officials maintain accurate registration lists. Researchers calculate turnout using different methods, based on available data and the purpose of their research. Specifically, turnout is expressed as a percentage, but the numerator and denominator used may differ. For instance, the numerator may represent the number of votes for the highest office on the ballot or total ballots cast (regardless of whether or not individuals voted for the highest office). Similarly, the denominator may represent the voting-age population (everyone 18 years of age and older), the voting-eligible population (the voting-age population adjusted for segments of the population that are not eligible to vote, such as non-citizens), or registered voters. Additionally, data may come from official voter records or from surveys—which rely on self-reported information—and political scientists have found that surveys produce higher estimates of turnout than official records maintained by election administrators. Possible explanations for this discrepancy between survey responses and actual records include memory limitations and respondents indicating they had voted when they had not, because of positive social attitudes toward voting among some groups of respondents. 
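Because the numerator and denominator of the turnout rate can each be defined differently, the same election yields several different turnout figures. The counts below are hypothetical, chosen only to illustrate how the choice of denominator changes the result; they are not data from the report.

```python
def turnout_pct(votes, denominator):
    # Turnout expressed as a percentage of the chosen denominator.
    return 100.0 * votes / denominator

# Hypothetical counts for a single election:
ballots_cast        = 130_000_000  # total ballots, regardless of highest office
highest_office      = 126_000_000  # votes for the highest office on the ballot
voting_age_pop      = 250_000_000  # everyone 18 years of age and older
voting_eligible_pop = 230_000_000  # voting-age pop. minus ineligible groups
registered_voters   = 200_000_000  # names on the registration lists

# The same election produces three different turnout rates:
print(turnout_pct(ballots_cast, voting_age_pop))       # lowest rate
print(turnout_pct(ballots_cast, voting_eligible_pop))  # higher
print(turnout_pct(ballots_cast, registered_voters))    # highest
```

As the text notes, survey-based numerators would push each of these figures higher still because of over-reporting, while registration-list errors can bias the registered-voter denominator.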
However, weaknesses in how voter records are maintained can also cause error and can lead to an underestimation of turnout when calculated as a proportion of registered eligible voters. Measurements of voter turnout can vary based on the calculation and data used. According to CPS data for the voting-age population, national turnout rates in presidential and midterm elections have declined slightly over the past three-and-a-half decades (see fig. 2). Although states and local election jurisdictions have implemented policies that seek to make voting more convenient, and thus less costly to voters, broad academic research on voter turnout has concluded that individual differences among citizens—such as age and political interest—and the competitiveness of elections are more strongly and consistently associated with the decision to vote than interventions that seek to increase convenience. Demographic differences may be strongly and consistently associated with differences in turnout rates, and to illustrate this, we have included figures in appendix III that show differences in turnout over time related to age, race and ethnicity, and educational attainment. States have implemented efforts to collect and share voter registration information electronically—specifically through (1) online registration, (2) sharing voter registration information between DMVs and election offices, and (3) sharing registration data among multiple states. According to literature on these efforts and election officials we spoke with, these efforts involve initial investments and implementation challenges, but they can provide efficiencies, such as improved accuracy of voter registration records, cost savings, and improved voter experience. States that adopt online registration create a web-based system or portal that takes applicants through the registration process enabling them to register and make updates to their registration online. 
For individuals who are not already registered, the system collects the required information that individuals would have otherwise provided on a paper registration form. Those who have already registered in the state may make changes online to their registration, for example by updating their address or changing their party affiliation. As of May 2016, 31 states and Washington, D.C., offer online registration, including four of the five states we visited—Colorado, Delaware, Illinois, and Oregon. In three of these four states, online registration is an option for individuals who have a driver’s license or state-issued ID card. Other registrants in these states can access a form online to print, sign, and mail to the election office or register through other methods available in their state. Investments of time and money are involved in implementing online registration, and the process can include technological challenges, according to election officials we spoke with and our literature review. Generally, state election offices are responsible for designing and implementing online registration systems that voters in any local jurisdiction within the state can use, and thus state offices incur the costs of these investments. However, the upfront costs of online registration are generally modest and quickly surpassed by the savings generated after implementation. A 2015 review by The Pew Charitable Trusts found that while the creation of an online registration system involved some initial expenditure, the reported average cost to design, build, and implement a system was $249,005, based on survey results from 14 states that implemented online registration as of November 2014. Additionally, among the states we visited that have online registration systems— Colorado, Delaware, Illinois, and Oregon—most state officials we spoke with did not mention costs when asked what, if any, challenges they faced when implementing online registration. 
Officials from one state, Illinois, cited the lack of additional funding for designing their online registration system as a challenge of implementing the effort. The costs states incur result from activities such as building the online registration infrastructure and performing ongoing maintenance. State personnel or outside specialists under contract from the state may complete these activities. For example, in Illinois, state officials reported that the State Board of Elections Information Technology Department designed the online registration system. The state’s total costs for fiscal years 2013 and 2014, including the salaries of the individuals who designed the system, were reported to be approximately $600,000. Similarly, state officials we spoke with in Oregon noted that the state developed its online registration system in house, and thus there was no additional expense resulting from the upfront costs for implementation, beyond staff time for the Information Services Division of the Office of the Secretary of State and the DMV. However, Oregon officials reported that there are monthly and annual costs associated with vendors who provide continual maintenance of the system. Election officials from three states we visited also said they needed to overcome multiple technical challenges when implementing online registration in their respective states. In particular, developing an online registration system includes the creation of a secure application for collecting registration information and transferring the information to local election offices, and this technical capability can be challenging to design. Illinois election officials said that in designing their state’s online registration system, they faced technical challenges because the system needed to interface with various systems that local jurisdictions use for processing and maintaining registration records. 
Thus, state officials designing the online registration system had to work with multiple vendors for the local jurisdictions’ systems to ensure the state’s online form could transmit data to the local jurisdictions. Election officials in the states we visited also noted that designing the online registration system to capture a signature from registrants was a challenge. According to The National Research Council, state DMV databases generally provide the signature used for online registration. In Colorado, to verify the identity and obtain a digitized signature for first time registrants, the online system needs to connect to the DMV database in real time; the state had to overcome initial technical challenges with this connection when first implementing the system in 2010. In Delaware, state officials told us that a 2003 change in state law made online registration possible by permitting the election office to accept electronic signatures—a registrant can either access the system on a tablet and provide a signature using a stylus pen or upload a scan of his or her signature—if the registrant does not already have a signature on file in the elections or DMV databases. According to literature we reviewed and state and local election officials we spoke with, the benefits of implementing an online registration system include administrative efficiencies that can result in improved registration accuracy and cost savings, including cost savings to voters in the form of greater convenience. Online registration results in administrative efficiencies, in part, by reducing the amount of manual data entry required to input information from registrants into a computerized voter registration database. 
Although state officials are generally responsible for the initial investments to set up the online registration system, local election officials may reap more of the benefits of online registration because they are responsible for processing and certifying individual registration records, and thus the local election officials benefit from being able to process registrations more quickly. For example, in Illinois, officials said that having the information electronically transferred has reduced processing times to a few minutes, replacing a more time-consuming process that required staff to open the envelope(s), date stamp each application, and manually enter the data into their computer systems. Officials from all four states we visited with online registration noted improved accuracy of their registration rolls as a benefit of the system, and local officials in Delaware cited this as the greatest benefit of the new system. Local election officials in Colorado and Oregon noted that online registration reduces the need to decipher illegible handwriting, which can lead to errors when processing handwritten, paper registration forms. Additionally, in Illinois, election officials said that the registration information they receive is more complete because the online system identifies when individuals have left a required field blank and does not allow them to submit the application without completing all the required fields. In contrast, if individuals submit paper forms with incomplete or missing information, local officials processing the registrations would need to contact the individuals to obtain the information required to complete the registration process. After implementing online registration, the administrative efficiencies associated with processing registration forms can translate into cost savings for election offices. 
Twelve out of 13 states with online registration surveyed by Pew in 2013 reported cost savings as one of the key benefits of these systems. Officials in Maricopa County in Arizona, the first state to implement online registration, also reported that the cost of registration dropped significantly since the implementation of online registration, from $0.83 for a paper registration to $0.03 for an online registration—a total savings of approximately $1.4 million between 2008 and 2012. All local officials we spoke with in states with online registration noted that the administrative efficiencies from online registration reduced the monetary and time costs associated with managing their registration lists. In Delaware, election officials stated that staff now more efficiently process registration applications, whereas officials previously had to work 10- to 12-hour shifts to process all incoming registration forms by the official deadline. This has resulted in less use of overtime pay in the weeks leading up to the state’s registration deadline, according to officials. Delaware state election officials also noted that their staff spends less time responding to phone calls from voters with registration questions since the implementation of the state’s online registration system, which has allowed election officials more time to do other elections related tasks. Additionally, officials in one local jurisdiction reported they have reduced their overall costs because they have fewer requests to mail registration applications, which saves time, postage, and supplies. In addition to these benefits for election offices, the election officials we spoke with and the literature we reviewed noted that voters benefit from the added convenience online registration provides, and added convenience can translate to a decrease in the time cost to voters for participating in the voting process.
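The Maricopa County figures imply the scale of savings directly. A minimal arithmetic sketch, using only the per-registration costs and total savings reported above (the function name and the derived registration volume are ours, not the county's):

```python
def registration_savings(paper_cost, online_cost, n_registrations):
    """Savings from shifting n registrations from paper to online processing."""
    return (paper_cost - online_cost) * n_registrations

# Reported figures: $0.83 per paper registration, $0.03 per online registration.
per_registration = 0.83 - 0.03  # about $0.80 saved per online registration

# The reported ~$1.4 million in savings (2008-2012) therefore implies
# roughly 1.75 million online registrations over that period (our estimate).
implied_volume = 1_400_000 / per_registration

print(round(per_registration, 2))  # 0.8
print(round(implied_volume))       # 1750000
```

The implied volume is a back-of-the-envelope estimate; the report does not state the actual number of online registrations processed.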
Specifically, officials from all four states we visited with online registration noted that the system provides added convenience to voters, and other benefits, such as the ability to access other information related to an upcoming election. For example, officials from one local jurisdiction said that online registration along with their locally developed mobile application enables individuals to easily register, change their party affiliation, and access other information to participate in elections. Additionally, in the first year after implementation, a study of Washington residents found that nearly 70 percent of people who had used the system reported that it was “very easy” to do so and 95 percent of those most informed about online registration agreed with the statement, “if I had a son or daughter turning 18, I would encourage them to register to vote online.” State and local election officials use a variety of tools to maintain voter registration lists. We reviewed two methods for sharing voter registration data electronically: (1) data sharing between state DMVs and election offices and (2) data sharing among multiple states. Since the passage of NVRA, DMVs have played a critical role in the voter registration process. As a result, DMVs are sometimes able to provide more current and accurate data about registered or potentially eligible voters. Moreover, in an effort to improve the quality of voter registration lists, states may take additional steps to share registration information with other states, thus helping to identify duplicate and deceased registrants and update each state’s registration rolls. Officials implementing DMV and interstate data-sharing efforts, as well as the literature we reviewed, have noted that there are investments and challenges to implementation, but generally these efforts result in efficiencies and cost savings for voter registration activities.
NVRA requires that the DMV in every state give individuals applying for a driver’s license or state ID card the opportunity to register to vote or update their voter registration information. Some states have developed systems that electronically collect and share information between the DMV and election officials. As shown in table 1, all five states we visited had, or were in the process of implementing, data-sharing efforts between the DMV and election offices. According to officials we spoke with, as well as literature we reviewed, establishing a data-sharing program between the DMV and the election office involves up-front investment costs associated with technology, as well as ongoing costs associated with staff time. Furthermore, the implementation process can present technological challenges. The up-front costs for setting up a data-sharing program can include costs for upgrading technology and for staff time implementing technological and procedural changes. State election officials and DMV officials may have to coordinate to upgrade their databases, software, and hardware to facilitate data sharing. Software changes may require additional programming, which involves staff time from information technology staff or contractors, according to officials. Furthermore, following implementation of the program, officials who interact with applicants and process registrations—specifically DMV customer service representatives and local election officials—may need training on any new or changed procedures. According to a fiscal impact statement prepared by the Oregon State Elections Division, the projected costs for implementing the state’s new DMV data-sharing program will be $796,000 for July 1, 2015 through June 30, 2019, which includes initial implementation costs for software and hardware upgrades to Oregon’s voter registration database as well as hiring a project manager.
According to this fiscal impact statement, the Oregon Secretary of State anticipates using the state’s HAVA funds for these costs. This statement also notes anticipated costs to the state Department of Transportation of $33,200 for data system upgrades; however, it states that savings from the data-sharing process in the 2015-2017 biennial budget will offset the Department of Transportation’s costs. Implementation of Delaware’s data-sharing program did not require hiring additional staff; rather, existing staff from both the state election office and DMV made the necessary programming and procedural changes. According to state election officials, Delaware also used federal funds provided through HAVA for some of the implementation costs, specifically to make programming adjustments to automated kiosks at the DMV that customers used prior to data sharing to update information on their drivers’ licenses or state IDs. The state election office’s costs were primarily to pay a vendor to make programming adjustments to the proprietary software for the kiosks, to incorporate the voter registration features. In addition to the costs for technology and staff time, setting up connections to share DMV data can be technologically challenging, according to the literature and election officials we interviewed. For example, a Pew Charitable Trusts report noted that compatibility between data systems at election offices and DMVs is a technological challenge to implementing data-sharing programs. Similarly, Colorado officials reported challenges getting DMV and state elections systems to work together, and officials plan additional changes through 2017 to improve the compatibility of data shared between the agencies. In Delaware, DMV officials told us that creating a web server link between the screen and keypad devices that voters use to input their information and the DMV computer system was the most difficult technical challenge.
Lastly, Oregon DMV officials reported having to create an entirely new application in their system to share information with the election office. Following implementation of a data-sharing process, there may also be ongoing costs associated with processing an increased volume of registrations. Officials we spoke with, as well as studies analyzing the implementation of intrastate data-sharing efforts, note that the volume of voter registration applications can increase from implementing efforts such as DMV data sharing. Local election officials can face increased workload as they maintain responsibility for processing and certifying these registrations. For example, local officials in one state we visited told us more individuals were registering after implementation of DMV data sharing, and state officials in Delaware also reported increased registration rates, though neither reported that processing increased registrations presented a challenge. However, Oregon election officials and DMV officials we spoke with anticipate that the state’s data-sharing program—which registers DMV customers as of January 1, 2016, unless they specifically opt out—will increase registration rates and result in increased costs. Processing registrations includes the production and mailing of confirmation notices to eligible individuals informing them that election officials have certified their registration. Because of the expected workload increase for county officials, Oregon state election officials said the state plans to reimburse counties $0.15 per registered voter over a 6- to 8-year phase-in period for the program. The quality of registrants’ signatures collected at the DMV, and various constraints on sharing signatures across agencies, can pose a challenge for election officials trying to verify a voter’s identity by comparing a signature captured during the registration process with the signature provided when the voter casts a ballot.
While some state DMVs continue to collect a signature on a paper form as part of their registration process, others have installed new hardware to collect digital signatures, but an election official in Oregon cited challenges with the quality of these signatures, which can vary depending on the technology used. In Oregon, the signature provided to the DMV is crucial because it will become the official signature on file in the state’s voter registration system. As a vote-by-mail state, Oregon requires that the signature on file match the signature provided on the voter’s mail ballot. Oregon officials are considering installing signature pads at the DMV that will produce high-quality signatures, but as of May 2016 the DMV staff are scanning a paper copy of the customer’s signature and transferring it to the state elections office. In Delaware, officials implemented the data-sharing program to collect two signatures, one for DMV transactions and one for elections office transactions. Delaware officials explained that this was necessary because, according to state law, DMV customers, in conducting DMV transactions, did not consent to share their signature with the Department of Elections. According to literature we reviewed and DMV officials we spoke with, DMV data-sharing programs can lead to cost savings and other efficiencies for officials while also providing added convenience to voters. Electronic data transmission can result in cost savings to DMV and election officials because of administrative efficiencies—such as eliminating physical transport—and improved data quality. For example, in Delaware, officials reported that prior to implementation of the data-sharing program, election officials drove to their local DMVs every day to pick up voter registration forms, and electronic transmission eliminated these daily trips. In other states where the DMV previously mailed registration forms to election offices, the electronic data transfer saves mailing costs.
According to one report, Washington’s DMV data-sharing program saved $121,000 in mailing costs from January 2008 to July 2009. Furthermore, because electronic receipt of registration data replaces manual data entry from paper registration forms, DMV data sharing can reduce the amount of time elections officials spend processing registrations. In Delaware, the state election office returned full-time positions to the state because electronic application transmission increased efficiency, according to state election officials. Additionally, officials we spoke with stated that DMV data-sharing programs likely increase accuracy, as election officials are no longer deciphering illegible handwriting on paper forms. The literature also cites accuracy and cost savings as the predominant benefits of DMV data sharing. Among recommendations to improve states’ electoral systems and implementation of HAVA, a report by The Century Foundation Working Group on State Implementation of Election Reform encourages data sharing from DMV data systems and other state databases. The report cites examples from Kentucky and Michigan where data-sharing efforts ensured that states’ voter registration lists automatically reflected relevant updates, such as a change in address. Additionally, a Brennan Center report notes that the Washington Secretary of State’s office saved $126,000 in 2008 due to both online voter registration and DMV data sharing. In Delaware, officials reported reducing DMV transaction time by 1 minute per customer after the DMV customer service process incorporated registration questions, because customers no longer have to wait for representatives to print forms with their information in triplicate for customers to sign. DMV data sharing may also result in a more efficient experience for voters, because they are not required to update their voter registration records separately, as the DMV automatically forwards the information to the election office. 
The literature also indicates that shifting the burden of voter registration from the registrant to government agencies such as the DMV and the election office is especially helpful for mobile, low-income, and minority populations, who benefit from the added convenience, as well as young voters, who may be able to preregister when they apply to obtain a driver’s license. Rhode Island officials we spoke with also noted cost savings for voters because registration at the DMV eliminates the need for registrants to pay postage to mail a voter registration form to the elections office. Various interstate data-sharing efforts help state and local election offices maintain accurate voter registration lists, according to election officials and literature. These efforts include, among others, state participation in interstate exchanges—such as ERIC and the Interstate Voter Registration Crosscheck Program—in which states compare information from their voter registration lists, as well as individual states’ use of national databases—such as the U.S. Postal Service’s National Change of Address (NCOA) database or death records from the Social Security Administration—to identify registrants who have moved to another jurisdiction or state, or who have died. Researchers found that, in 2008 and 2010, approximately half of the states used checks against one or more external databases that contained information across multiple states to maintain the accuracy of their voter registration records. In a 2009 report, the National Research Council Committee on State Voter Registration Databases made multiple recommendations aimed at upgrading procedures to conduct data matching to enable election officials to identify potential duplicate registrations across states’ registration databases. 
Similarly, the Presidential Commission on Election Administration recommended that states should participate in interstate exchanges of voter registration information, such as ERIC and the Interstate Voter Registration Crosscheck Program, adding that such efforts could result in more accurate registration lists, among other benefits. Among such interstate data sharing efforts, we reviewed ERIC in more detail, because it provides an illustrative example of such interstate data-sharing efforts used by state and local election offices in maintaining voter registration lists. ERIC is a multistate partnership that uses data-matching technology to compare member states’ voter registration lists, DMV records, and nationally available lists from the U.S. Postal Service and the Social Security Administration. ERIC administrators stated that the goal of the partnership is to improve the accuracy and quality of voter registration rolls, adding that this can increase voter turnout and decrease costs associated with administering elections by enabling states to have more up-to-date registration lists. ERIC was organized in 2012 with seven states as founding members and has grown to include 19 member states and Washington, D.C., as of June 19, 2016, including all five states we visited. Participation in the ERIC partnership places a number of requirements on states to provide information to ERIC for data-matching purposes, and in response, ERIC administrators provide regular reports to the states that election officials may use to update their registration lists. At least bi-monthly, member states are required to provide ERIC with data from their voter registration lists and DMV records for individuals with licenses or state IDs. These data include identifiers/data elements such as name, address, date of birth, last four digits of a Social Security Number, driver’s license or state ID number, and citizenship, among others, when these data elements are available. 
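The data elements described above (name, date of birth, last four digits of a Social Security Number, and so on) are what make cross-state comparison possible. ERIC’s actual data-matching technology is far more sophisticated than exact-key comparison; the sketch below only illustrates the general idea of flagging registrants who appear on more than one member state’s list. All records, names, and the field schema here are hypothetical:

```python
from collections import defaultdict

def cross_state_matches(state_lists):
    """Flag registrants appearing on more than one state's registration list.

    state_lists: dict mapping a state code to a list of records, each a dict
    with hypothetical fields "name", "dob", and "ssn4" (last four SSN digits).
    Exact-key matching on normalized fields is used purely for illustration;
    a production system would use many more identifiers and fuzzy matching.
    """
    index = defaultdict(list)
    for state, records in state_lists.items():
        for rec in records:
            key = (rec["name"].strip().lower(), rec["dob"], rec["ssn4"])
            index[key].append(state)
    # Keep only keys seen in two or more distinct states.
    return {key: states for key, states in index.items() if len(set(states)) > 1}

# Hypothetical example: the same person registered in Delaware and Oregon.
lists = {
    "DE": [{"name": "Pat Doe", "dob": "1960-01-02", "ssn4": "1234"}],
    "OR": [{"name": "pat doe", "dob": "1960-01-02", "ssn4": "1234"},
           {"name": "Lee Roe", "dob": "1985-07-09", "ssn4": "5678"}],
}
matches = cross_state_matches(lists)
print(len(matches))  # 1 (the cross-state match for "pat doe")
```

In practice, a report like ERIC’s cross-state match list would then trigger the member states’ required voter contact and list-maintenance steps described below.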
At least once per year—or more frequently if the member state submits a request—ERIC administrators provide member states with lists of cross-state matches, in-state updates (where the DMV may have a more up-to-date address than the election office), duplicate registrations, and deceased voters. Within 90 days, states are required to initiate contact with 95 percent of the voters whose registration data ERIC’s data-matching process deemed to be inaccurate or out-of-date, to begin registration list maintenance activities. At least every other year—or more frequently if the member state submits a request—ERIC administrators provide states with a list of possibly eligible, unregistered individuals—specifically, individuals who have a driver’s license or state ID but have not registered to vote. Using this information, states are required to establish a plan for outreach to these individuals, such as by sending a mailing that provides information on how these individuals can register if they are eligible citizens, though the individual approaches and mailings may vary by state. Member states incur financial and staff-time costs to join ERIC, and may also experience challenges in leveraging the matched data, depending on the quality of their own state’s data. Participation in ERIC requires multiple fees, which can present a challenge according to state election officials and our literature review. Upon joining ERIC, states pay an initial $25,000 membership fee. States must also pay annual fees based on the number of registered voters in the state and the number of member states participating. State officials we spoke with reported annual participation fees ranging from $26,000 to $75,000. Officials from Delaware noted that they have used some of their remaining HAVA funds to cover the cost of their annual fees. Once states receive matched data from ERIC, election officials invest time and other resources to review and process the results in a timely manner.
However, when asked about the challenges they faced from joining ERIC, local officials from Delaware, Rhode Island, and Oregon, who are responsible for processing the results, did not indicate that the requirement to contact registered voters identified as having inaccurate or out-of-date records within 90 days specifically posed an issue. States can incur costs associated with mailings that are required by the ERIC bylaws, in particular, the bi-annual mailings to all identified possibly eligible, unregistered individuals. According to the state election director in Oregon, ERIC identified 795,678 possibly eligible, unregistered individuals in 2014. State officials reported that the total associated mailing cost was $123,767. However, new member states, or those interested in joining ERIC, can apply to The Pew Charitable Trusts for grants to help offset the costs associated with the required mailings. For example, grant funds from The Pew Charitable Trusts covered approximately three-quarters of Oregon’s $123,767 mailing to possibly eligible, unregistered individuals. Although the first batch of possibly eligible, unregistered individuals identified after a state joins ERIC can require a large mailing, states only need to attempt contact with those identified individuals once, according to the membership agreement. Because subsequent state mailings can focus only on individuals newly identified as possibly eligible and unregistered since the prior data provided by ERIC, these mailings are likely not to be as large or costly. States may also face challenges using information provided by ERIC, depending on the reliability of the underlying data and on the number and geographic proximity of member states. For example, Colorado election officials said that they are not confident in the quality of the state’s DMV address data and thus the in-state updates list that state election officials receive from ERIC is not always accurate.
Colorado DMV officials stated that, by 2017, they plan to complete upgrades to make their system more compatible with the state’s voter registration database, which election officials expect will make the DMV address data more reliable and the ERIC matching process more useful. Additionally, election officials in some of the states that we visited reported that the absence of ERIC participation among neighboring states limited ERIC’s ability to provide complete data to update registration lists. Officials in Delaware and Rhode Island noted that participation by states to which retirees commonly move (such as Florida) might result in particularly useful information for updating registration lists. According to election officials we spoke with, as well as officials’ views cited in literature we reviewed, a state’s participation in ERIC leads to more accurate voter registration lists and cost savings for state and local election offices. State officials noted that ERIC data improve the accuracy of voter registration lists by identifying registrants who elections administrators should remove for various reasons, such as having moved to another state or died. For example, local officials in Oregon noted that ERIC lists identified over 900 registrants who died in another state, enabling election officials to remove the majority of these registrants from Oregon’s registration list. From our literature review, studies that evaluated ERIC also identified increased accuracy as a benefit of the program. For example, an RTI International report noted that all officials interviewed for the study from states that had participated in ERIC were confident in ERIC’s matching process to increase the accuracy of their voter registration lists. 
State officials in all states we visited, as well as multiple sources in the literature we reviewed, reported that improved accuracy of registration lists translates into cost savings from decreased mailing costs as well as decreased staff time to maintain the voter registration lists. Vote-by-mail states, such as Colorado and Oregon, have a heightened interest in maintaining clean voter registration lists because of the costs associated with mailing a ballot to an incorrect address. Election officials in both states noted that the data provided by ERIC are among multiple tools they use to maintain accurate registration rolls. Additionally, according to a study by The Pew Charitable Trusts, King County, Washington, which conducts elections entirely by mail, saw a drop in undeliverable ballots from 17,911 in the 2013 primary to 11,174 in the 2014 primary, which county election officials attributed to Washington’s participation in ERIC. States without all vote-by-mail elections cited similar benefits. For example, according to election officials in Delaware, having more accurate voter registration lists from participation in ERIC has resulted in mailings that are more effective, because updates based on the information provided by ERIC increase the likelihood that voters will receive mailings while reducing the amount of undeliverable mail. Since making ERIC updates, officials reported receiving about three bins of returned postcards instead of eight bins from the state’s bi-annual mailing to verify voters’ addresses, resulting in less money wasted on printing and postage for mailings that do not reach the intended recipient. Elections officials in Delaware and Rhode Island also reported that their staff spend less time updating the voter registration lists in the months leading up to an election, with the work of cleaning the registration list more evenly distributed across the year. 
One local election official in Rhode Island, which joined ERIC in July 2015, indicated that ERIC participation should help to reduce the rate of duplicate registrations, resulting in reduced printing costs at the local level for poll books. According to a Pew Charitable Trusts report, the director of elections in Minnesota cited approximately $116,250 in savings to counties since the state joined ERIC in August 2014. State and local governing bodies and election officials are responsible for selecting and implementing various policies and practices (hereafter “policies”) to facilitate election administration. We systematically reviewed literature to identify which of these policies researchers have studied for potential effects on turnout and the findings from these studies. Through our review we identified 11 policies that were each studied in multiple publications. The research indicated these policies had varying effects on turnout. For instance, the majority of studies we reviewed that assessed the effect of same day registration and all vote-by-mail on voter turnout found that these policies increased turnout. Additionally, some studies on informational mailings and no-excuse absentee voting policies also found that these policies increased turnout, but other studies associated with these policies reported mixed evidence or no evidence of an effect. In appendix IV, we summarize the detailed results of our literature review and present contextual information related to each of the 11 policies. Broad academic research on voter turnout has generally shown that individual and demographic differences among populations—such as political interest and age—and the competitiveness of elections are more strongly and consistently associated with the decision to vote than interventions that seek to make voting more convenient, and thus less costly, to voters. 
Additionally, according to CPS data for the voting-age population, national turnout rates in presidential and midterm elections have declined slightly over the past three-and-a-half decades; at the same time, state and local governments have implemented various policies which, in many cases, have helped to expand options related to when, where, and how individuals may register and vote. Our review focused on policies that fall into three broad categories: Providing information: State and local strategies for providing information about registration and elections can vary in terms of the methods used (e.g., websites, mail, etc.) and content, format, and frequency of communications. Some informational policies are determined by state law, regulation, or policy, and others are determined by local jurisdictions. Registering individuals: States vary with regard to where, when, and how citizens may register to vote. For instance, some states have registration closing dates in advance of Election Day while other states allow citizens to register and vote on Election Day. Within state requirements, local jurisdictions may have some discretion, such as in selecting which locations may be available for citizens to register in person. Providing opportunities to vote: States also vary with regard to where, when, and how registered individuals may cast a ballot. For instance, states differ in the extent to which they allow voting prior to Election Day (either in-person or by mail). Within state requirements, local jurisdictions may have some discretion, such as in determining which specific days they will allow early in-person voting, or in setting polling hours. We identified and reviewed literature that assessed the effects of a variety of policies on voter turnout. Specifically, our literature search identified over 400 journal articles, reports, or books published from 2002 through 2015 relevant to the topic of voter turnout. 
We used a systematic process to conduct the review, which appendix II describes in more detail. We ultimately identified and reviewed 118 studies within 53 publications that (1) assessed policies that have been or could be implemented by a state or local government, (2) contained quantitative analyses of the effect of a given policy on turnout, and (3) used sufficiently sound methodologies for conducting such analyses. As used in this report, a “study” is an analysis or experiment with a unique sample of data. Our synthesis of the research literature provides a high-level summary of each policy’s general effect on turnout, as reported in recent research. Although we found the studies we reference in our report to have used sufficiently sound methods, the studies we reference were subject to limitations. For instance, many of the policies we reviewed cannot easily be evaluated using randomized controlled trials that often provide the most persuasive evidence of program effects, and thus many of the studies in our review used quasi-experimental approaches or statistical analysis of observational data to examine the impacts of such policies. With such designs, any observed differences in turnout across jurisdictions, time periods, or groups could be caused or influenced by the policy itself; by factors related to the jurisdiction’s decision to adopt the policy; by differing demographic factors across voters; by the contemporaneous implementation of other election policies; or by unobserved or unmeasured factors—such as mobilization campaigns, news media coverage, or social and psychological differences across voters. As a result, distinguishing the unique effects of a policy from the effects of other factors that affect turnout can be challenging. These vulnerabilities can be mitigated, in part, with attention to research design, including appropriate statistical analysis and interpretation. 
Nevertheless, no policy evaluation in a non-experimental setting can account with certainty for all unobserved factors that could bias or confound impact estimates. Our synthesis of the research literature also discusses additional contextual information that may be related to a specific policy’s effect on turnout. We recognize that variations in policy implementation exist—such as differences between the number and type (weekday versus weekend) of days early in-person voting may be available—and may have different effects on turnout. We provide examples of studies that assessed some of these variations in implementation, and their associated impacts on turnout, in the individual policy summaries in appendix IV. Moreover, the development and implementation of various election administration policies are informed by a variety of factors at the state and local level, and thus research findings on turnout may not be the only considerations for election officials in deciding whether to implement changes to election administration policies. We include a discussion of selected factors that administrators may consider in the individual policy summaries in appendix IV. We reviewed the research conducted on 11 policies that met the criteria for inclusion in our literature review. Each of these 11 policies falls within one of the three broad types of activities conducted by election administrators: providing information, registering individuals, or providing opportunities to vote. Figure 3 presents the total number of studies that examined each policy’s impact on turnout, and summarizes the findings of the studies. Some studies examined more than one policy and thus appear more than once in figure 3. Additionally, some studies reported more than one finding related to the effect of a given policy. For a given policy, we categorize the findings for each study as follows: Increased turnout: A study reported only statistically significant positive effects (one or more).
Mixed evidence: A study reported one or more statistically significant effects (positive or negative) and one or more findings that were not statistically significant. Alternatively, a study reported one or more statistically significant positive effects and one or more statistically significant negative effects, with or without additional findings that were not statistically significant.
No evidence of effect: A study reported no statistically significant effects.
Decreased turnout: A study reported only statistically significant negative effects (one or more).
As shown in figure 3, some policies have been studied more than others, and the research on some policies resulted in more consistent findings than on others. Taking both of these factors into consideration, we observe that:
The majority of studies we reviewed on same day registration (21 of 33 studies) and on all-vote-by-mail elections (11 of 21 studies) found that these policies increased turnout.
Vote centers (polling places where registrants can vote regardless of assigned precinct) and the sending of text messages to provide information about registration and elections have not been studied as much as some of the other policies, but almost all of the studies we reviewed on these policies (with the exception of one study on vote centers) reported increased turnout.
Some studies of mailings to provide information and no-excuse absentee voting policies also found that these policies increased turnout, while other studies associated with these policies reported mixed evidence or no evidence of an effect. In some cases, variations in how these policies were implemented and unique contextual factors associated with their implementation may, in part, account for this varied evidence.
Most studies that examined e-mail and robocalls used to provide information reported no evidence of an effect on turnout.
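The four-way categorization described above amounts to a simple decision rule over a study's set of findings. The following is a minimal sketch of that rule in Python; the function name and the (significant, direction) data representation are illustrative choices, not taken from the report.

```python
def categorize_study(findings):
    """Categorize a study's findings under the report's four-way scheme.

    `findings` is a list of (significant, direction) tuples, where
    `significant` is a bool and `direction` is +1 (positive effect on
    turnout) or -1 (negative effect). This representation is hypothetical.
    """
    sig_pos = any(s and d > 0 for s, d in findings)
    sig_neg = any(s and d < 0 for s, d in findings)
    nonsig = any(not s for s, d in findings)

    if not sig_pos and not sig_neg:
        # No statistically significant effects at all.
        return "No evidence of effect"
    if sig_pos and sig_neg:
        # Significant effects in both directions, with or without nulls.
        return "Mixed evidence"
    if nonsig:
        # One direction significant, plus at least one non-significant finding.
        return "Mixed evidence"
    # Only significant effects, all in one direction.
    return "Increased turnout" if sig_pos else "Decreased turnout"
```

Note that under this scheme a study with even one non-significant finding alongside a significant one is "mixed," which is why figure 3 can show mixed evidence for policies that most individual estimates favor.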
Most studies (15 of 20) associated with early in-person voting found that the policy either had no effect on turnout (7 studies) or decreased turnout (8 studies), and 5 studies reported mixed evidence. In appendix IV we present additional information specific to each of the 11 policies. For each policy, we present (1) a summary of findings from the literature related to the policy’s effects on voter turnout; (2) examples of specific studies; (3) descriptions of variations in how the policy may be implemented; and (4) information about the administrative costs of policy implementation, effects on voter convenience (costs to voters), and other considerations that election officials may wish to consider when deciding whether and how to implement the policy (e.g., technological or legal considerations). States and local election jurisdictions incur a variety of costs associated with administering elections, and the types and magnitude of costs can vary by state and jurisdiction. Further, quantifying the costs for all election activities is difficult for several reasons, including that multiple parties incur costs associated with elections and these parties may track costs differently. Although some parties’ costs can be easily identified in state and local budgets or other cost-tracking documents, other costs may be difficult to break out or attribute to election activities. Additionally, voters’ costs are difficult to quantify and monetize because individual voters’ circumstances differ. Election officials are responsible for providing information, registering individuals, and providing the opportunity to vote, but states and local jurisdictions differ in how they administer the activities within these areas of responsibility. The differences in election administration across jurisdictions result in variations in the types and magnitudes of costs that states and local jurisdictions incur for these activities. 
The following are some examples of variations in cost for different aspects of election administration. State and local jurisdictions have different ways of informing residents about registration requirements and the voting process, and the costs for these efforts can vary. For example, communication efforts could include speaking to civic groups, churches, unions, high schools, and other interested groups; providing registration and voting information at naturalization ceremonies; publishing information in newspapers, on websites, or on social media; or mailing each household a voter guide. The type and magnitude of costs for these outreach efforts can vary because of the different methods states and local election jurisdictions may use to provide information to residents. For example, speaking to interested groups involves a time cost for the officials who speak at such events, and this time cost may be considered part of an election official’s regular salary and work schedule, whereas mailing voter guides involves printing and postage costs. A state election official in Rhode Island noted that he visits high schools to inform students about registration, and the costs consist of staff time, which falls within his regular salary, plus transportation costs (mileage reimbursement). According to the chief election official in one local jurisdiction we visited, the election office spent about $7,000 to advertise elections information (e.g., polling locations, deadlines related to the election) in newspapers for the 2014 primary and general elections, in addition to about $12,000 for printing and mailing informational materials about state referenda on the ballot to registered voters.
Officials in another local election jurisdiction said they send some voter information by e-mail, such as reminders to update registration information or notices of election updates; although providing information by e-mail does not involve printing or postage costs, it requires that the election office have access to e-mail addresses. In some cases, election officials may be able to use e-mail addresses provided through other local government activities—for example, in one local jurisdiction we visited, the clerk includes election information in a general community e-mail newsletter to individuals who request the newsletter through the local government’s website. States and local jurisdictions may consider these costs, as well as other factors, such as the intended target audience or legal requirements, in selecting a combination of outreach efforts to inform residents about registration and voting processes. In particular, information about registration requirements and processes may need to be distributed in such a way as to reach individuals who are not registered or may need to update their registration. States and local election officials may choose outreach methods that address the general public to provide such information. In some cases, media may convey the information as part of a local news segment without charging an advertising fee. For example, Delaware election officials said that they provide information to the local TV news network to promote National Voter Registration Day, and officials in another local jurisdiction said they shared information with media outlets when the state introduced online registration. Officials may also use free-of-charge social media accounts to provide information. Additionally, as noted earlier in this report, states that participate in ERIC are required by the program to mail information to potentially eligible, unregistered individuals to provide information about opportunities to register.
The costs of these mailings can be affected by the format the state chooses for its mailing (e.g., postcard or letter) as well as the number of potentially eligible, unregistered individuals ERIC identifies in the state. Other outreach efforts may be targeted at registered voters to inform them about the particular details regarding an upcoming election. In some cases, states or local jurisdictions are required by state law to provide certain types of information to registered voters. For example, in Colorado, both the state and local jurisdictions are required to mail information to voters when there are ballot issues that affect debt or taxes. In Rhode Island, the Secretary of State must mail a voter information handbook that lists all state questions and explanations of the subjects of these questions to each residence, while local jurisdictions, prior to each local election at which public questions are on the ballot, may mail similar voter information handbooks listing public questions and explanations of the subjects of the questions to each residence in lieu of posting the information in public locations and publishing it in a local newspaper. One jurisdiction we visited spent about $75,000 for printing and mailing a state-required notice for an election in 2013. State election officials may also need to ensure that potential voters are informed about, and have access to, forms of ID required to vote in that state. In general, many states that require a government-issued photo ID for voting offer some form of ID free of charge. However, voters may incur costs—either monetary or time costs—for obtaining ID, as discussed later in this report. States may make IDs available free of charge for residents for voting purposes in a variety of ways—including providing them through the DMV or through the state election office. 
For example, in Rhode Island, the Secretary of State’s office purchased equipment to produce voting ID cards, and individuals can obtain these cards free of charge by visiting the Secretary of State’s office. The equipment is portable, and staff from the Secretary of State’s office also bring the equipment to various events to provide additional opportunities to obtain a voting ID card. Therefore, in addition to the costs for the ID equipment, the state incurs staff time cost for attending local events. The National Conference of State Legislatures reported that Indiana’s estimated production costs—including staff time, transaction time, and manufacturing—for providing 168,264 IDs to voters in 2010 exceeded $1.3 million. The Brennan Center reported that this estimate did not include costs such as training and voter education and outreach. Some states have prepared fiscal notes to accompany pending legislation that demonstrate how much providing voters with free IDs could cost. Although a proposed voter ID law did not pass in Minnesota, the state estimated that providing voter ID at 90 locations across the state would cost the state at least $250,000 in the first year of implementation, with recurring costs in future years. The state noted that individuals who lack ID tend to change residences more often than the average person, which may affect supply costs. Minnesota’s estimate also noted that county auditors would incur substantial expenses related to providing the IDs, including designating and housing locations where voter ID could be obtained, processing the applications for voter ID cards, issuing and producing the cards, as well as receiving returned cards when residents change their residence. 
Additionally, the state’s cost estimate noted that municipal governments would need to hire additional poll workers to accommodate the additional time needed to ask each voter for ID as well as to handle provisional ballots for individuals who did not bring ID with them to the polls. The state estimated that the local government costs for additional poll workers could range from $375,830 to $536,900 for each statewide election. The different registration methods offered within states can influence costs—the use of paper forms involves paper and printing costs, among others, whereas an online registration option involves information technology development and maintenance costs. The U.S. Public Interest Research Group, a coalition of state public interest research groups, released a study in 2009 of 100 counties of various sizes in 36 states that estimated that these counties’ cost to conduct registration and run error-correction programs on the voter registration information was $33,467,910 for the 2008 election. According to the report, in counties in the survey with populations under 50,000, total expenditures were estimated at $86,977 per county; in counties with population between 50,000 and 200,000 persons, the total expenditures were $248,091 per county; and, in counties with total populations greater than 200,000, the total expenditures per county were estimated to average $1,079,610. The report also noted that in a survey of a subset of the 100 counties (9 counties from each of the three population ranges), most counties reported that full-time registrar staff spent at least half their time on registration issues.
However, since the 2008 election, a total of 31 states and Washington, D.C., have implemented online voter registration, which, as discussed earlier in this report, may involve initial investments, but may later result in time and cost savings to local election officials who spend less time processing electronic registrations than paper registrations. As reported earlier, improved efficiencies in processing registration can reduce the number of staff needed to process registration or may free up staff to attend to other responsibilities. States and local jurisdictions can also incur costs for voter registration list maintenance activities, and these activities vary across states and local jurisdictions. For example, some states or local jurisdictions may send mailings to all registered voters and use any returned undeliverable mail as an indication that a voter is not currently residing at the address on the voter’s registration record. States may also participate in the data-sharing efforts mentioned earlier in this report or conduct checks against other data sources. For example, state and local election officials can compare their voter registration lists against databases such as the U.S. Postal Service NCOA database to determine whether an individual has moved to a new address or Social Security Administration records to determine if an individual is deceased. States provide opportunities to vote, such as voting in person on or before Election Day or voting by mail (absentee options in all states and vote-by-mail in three states). These different voting methods also result in different types of costs—for example:
Polling Places. Election officials in jurisdictions that offer in-person voting options locate and prepare polling places and organize and deliver voting equipment and supplies to polling places. The costs for establishing in-person polling locations can vary by state and local jurisdiction.
According to officials in one local jurisdiction we visited, their office pays a $125 daily rental fee for polling places that are in privately owned buildings and, for public buildings, the jurisdiction pays only the marginal costs of keeping the buildings open before or after regular hours for voting purposes. In contrast, in Delaware, the fiscal year 2016 state appropriations act requires election jurisdictions to pay owners of polling locations a $300 daily rental fee, regardless of whether the building is publicly or privately owned. The total costs for polling places can vary because of any fees to use the facility, the number of days the facility is used for voting (e.g., early voting in addition to Election Day), and the number of polling places in a given jurisdiction. Among the jurisdictions we visited that primarily offer in-person voting opportunities, according to election officials in those jurisdictions, the number of polling places ranged from 3 to about 1,800.
Election Workers. Costs for recruiting, training, and paying poll workers at polling places can also vary—for example, election officials in jurisdictions in the three states we visited that offer primarily in-person voting—Delaware, Illinois, and Rhode Island—cited poll worker compensation ranging from $100 to $235 per day, with variations across and within states and by level of responsibility. Regardless of whether states offer voting in person or by mail, election offices may need to hire temporary staff to assist with the additional workload in the weeks leading up to or following an election. For example, Colorado voters have the option of voting by mail or in person at voter service and polling centers, which can be partially staffed with permanent staff from election offices, but local jurisdictions also hire additional workers to assist at these polling locations or with other election activities.
Similarly, Oregon does not have in-person polling locations, yet local election officials we spoke with said that they hire temporary staff to assist with a range of elections responsibilities, including registration and ballot processing, during the peak workload period surrounding an election.
Vote-by-Mail and Absentee Ballots. Preparing ballots to be mailed to voters in vote-by-mail states, or to absentee voters in states that continue to offer in-person voting, involves printing costs for ballots and envelopes and postage costs for delivering the ballots to voters. For example, in one large, urban local jurisdiction we visited in a vote-by-mail state, the ballot printing costs for a 2013 statewide election were over $280,000, and according to officials, postage costs to mail these ballots were about $32,000. In Rhode Island, which conducts primarily in-person voting, the state assumes the costs for all absentee ballots, including printing the ballots, mailing them to voters, and processing the ballots. Rhode Island election officials said that the use of absentee voting has increased since the state broadened the allowed excuses for requesting an absentee ballot, and thus the costs for absentee ballots have increased. However, to ensure that no in-person polling place experiences a ballot shortage, the state has continued to print enough ballots for in-person voting for all registered voters in every precinct. The costs for mailed ballots may also include return postage—for example, Rhode Island state election officials said that the state pays return postage for absentee ballots for homebound voters. The magnitude of costs for any particular expense category can vary based on the voting opportunities offered. For example, the total costs for poll workers—who may be paid for each day of work—can increase if early voting is offered and poll workers are needed to staff polling places on days in addition to Election Day.
Similarly, the total costs for polling places can depend on the number and types of polling places within an election jurisdiction. A state or local jurisdiction’s costs can also depend on how many elections there are over a given period of time—although some states have standardized election calendars that consolidate federal, state, and local elections at the same times, other states may have multiple elections at different times for different levels of government. Special elections can also affect the total costs for conducting elections by increasing the number of elections. Quantifying the costs for all election activities is difficult for several reasons, including that multiple parties incur costs associated with elections and these parties may track costs differently. Although some parties’ costs can be easily identified in state and local budgets or other cost-tracking documents, other costs may be difficult to break out or attribute to election activities. Therefore, adding up the budgets for all election jurisdictions within a state together with the budget for the state election office is not a comprehensive or accurate means for determining the cost of elections within a given state. Such budget or cost-tracking documents also do not include the cost to voters, and voters’ costs are additionally difficult to quantify and monetize because individual voters’ circumstances differ. States and local election jurisdictions have developed their own methods of tracking election activities and associated costs through documents such as budgets, accounting systems, or spreadsheets. The budgets for state and local election offices are one way of identifying and tracking costs associated with elections. However, there is no standard budget scheme across all states or local election jurisdictions for categorizing the various elections activities and their associated costs. 
For example, state and local jurisdictions use different time frames for their budgeting process. According to the National Association of State Budget Officers, 30 states prepare annual budgets, while 20 states prepare biennial budgets, though the association reports that in practice, a number of states use a combination of annual and biennial budgeting. The months covered by budgets can also vary, which provides important context for elections budgets because of how many elections fall within the period covered by a budget. This can affect how costs for elections are distributed across different fiscal years’ budgets. For example, two local jurisdictions we visited in the same state have fiscal years that span different periods such that their fiscal years cover a different number of elections—for one jurisdiction, the presidential primary, general election primary, and general election will occur in fiscal year 2016, and for the other the presidential primary will occur in fiscal year 2016 and the general election primary and general election will occur in fiscal year 2017. Election offices may also maintain accounting records, spreadsheets, or other documents that provide varying levels of detail on elections costs. Within these cost-tracking documents, state or local jurisdictions can use different categories to organize their elections expenses. For example, across three local jurisdictions we visited in different states, all three local jurisdictions track costs in a postage category but use varying categories to capture the costs for supplies. Specifically, one jurisdiction has a single category for “signage, forms, and all other supplies,” whereas the other two jurisdictions have additional categories—one jurisdiction has categories for office supplies and toner cartridge and ribbons, and the other jurisdiction has categories for computer supplies, map supplies, office supplies, and supplies and equipment. 
As such, cost information may not be standardized across or within states, and thus it may not be possible to calculate the costs for a particular election activity or expense across jurisdictions because the information is captured or reported in different ways. Some states have implemented efforts to standardize cost tracking for elections across local election jurisdictions in their states. For example, the Oregon and Colorado state election offices collect election cost data from local jurisdictions within their states. Oregon state officials said that the data they collect are used to summarize information about the costs of an election—state officials compile the total costs for each county and calculate, per county and statewide, the average cost per eligible voter and the average cost per ballot cast for each statewide election and track these costs over time. Colorado state officials explained that the state developed a standardized cost tracking form to determine the costs of elections for local jurisdictions, particularly given that the state is statutorily required to reimburse local jurisdictions when there is a state measure on a ballot. Although both states have developed methods of collecting cost information from local jurisdictions in a standardized way, the categories these two states use in their cost-tracking forms are different. Other states collect information on certain expenses, but do not standardize broad cost information across jurisdictions to calculate the overall costs of an election—for example, state election officials in Illinois said that local jurisdictions incur the majority of election-related costs, but the state reimburses a portion of the poll worker cost and thus collects limited cost information on poll workers from local jurisdictions for reimbursement purposes. 
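The Oregon-style summary described above is straightforward arithmetic once county-level data are collected in a standardized form: total the reported costs, then divide by eligible voters and by ballots cast, per county and statewide. The sketch below illustrates that calculation; the county names and dollar figures are invented for illustration and do not come from the report.

```python
def summarize_costs(counties):
    """Compute per-county and statewide election cost averages.

    `counties` maps county name -> (total_cost, eligible_voters,
    ballots_cast). All figures passed in are illustrative.
    """
    per_county = {
        name: {
            "cost_per_eligible_voter": cost / eligible,
            "cost_per_ballot_cast": cost / ballots,
        }
        for name, (cost, eligible, ballots) in counties.items()
    }
    # Statewide averages are computed from statewide totals, not by
    # averaging the per-county rates (which would ignore county size).
    total_cost = sum(c for c, _, _ in counties.values())
    total_eligible = sum(e for _, e, _ in counties.values())
    total_ballots = sum(b for _, _, b in counties.values())
    statewide = {
        "cost_per_eligible_voter": total_cost / total_eligible,
        "cost_per_ballot_cast": total_cost / total_ballots,
    }
    return per_county, statewide

# Hypothetical example data for two counties.
per_county, statewide = summarize_costs({
    "County A": (100_000, 50_000, 30_000),
    "County B": (40_000, 10_000, 8_000),
})
```

Computing statewide rates from totals rather than from an unweighted mean of county rates is a deliberate design choice here: it keeps large and small counties weighted by their actual size, which matters when, as the report notes, per-county expenditures vary by an order of magnitude across population ranges.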
Identifying elections costs can also be difficult when the office that is tasked with administering elections responsibilities also has responsibility for activities other than elections. Specifically, the cost information tracked by such offices may include costs that are not related to elections, and thus it may be difficult to separate elections costs from costs incurred for other activities and responsibilities. For example, in some locations a county, city, or town clerk is responsible for overseeing elections as well as other functions. In one local jurisdiction we visited, the clerk’s office was responsible for administering elections as well as issuing licenses (marriage, dogs, yard sales, hunting, and fishing) and maintaining public records (birth certificates, death certificates, and probate records). The clerk for canvassing in this location is primarily responsible for administering elections, but all staff within the clerk’s office assist with elections activities such as answering phone calls from residents with registration and voting questions. Additionally, some elections-related activities may rely on support from other state or local offices that do not have primary responsibility for administering elections. For example, election offices may receive support from offices that provide legal or information technology support. In some cases, for these other offices, it may be possible to identify the costs associated with elections-related activities, but in other cases, it may be difficult to separate elections-related costs from regular operating costs. For example, in one local jurisdiction we visited, all county offices use the same accounting system, which enables offices other than the Elections Division to charge expenses to the Elections Division accounting code so those offices can be reimbursed for elections-related expenses. 
In contrast, the state elections director in Rhode Island said that during busy periods around Election Day, employees from divisions of the Secretary of State’s office other than the Elections Division assist state election officials, but the budgets for these other divisions do not separate staff time by activity to identify what proportion of time is spent on their primary activities in those other divisions and what proportion is spent on election-related activities. In addition to costs to state and local jurisdictions, voters also incur costs associated with elections. Some costs to voters are monetary, though not all voters will incur these costs to the same extent. Voters may incur postage costs for submitting forms or returning a mail ballot. The cost for a first-class stamp is $0.47, although additional postage may be required if a ballot has numerous pages that exceed 1 ounce in weight. However, not all voters rely on mail to submit their registration applications or cast a ballot, and in some cases even those that do may not incur the cost associated with postage. For example, in Rhode Island, absentee voters who certify that they fit particular criteria receive an absentee ballot that does not require return postage; rather, the state incurs the return postage cost for those ballots. Additionally, voters can incur costs for transportation to the designated registration and voting locations. This can include public transportation fares as well as the cost of fuel/mileage for the use of a private vehicle, and the amounts for these costs will vary by voter. Further, some states require voters to show specific forms of ID to be able to vote. Some voters may have the required ID for everyday purposes—such as drivers’ licenses—whereas others may need to obtain such ID specifically for the purposes of voting. 
For individuals who need to obtain ID for voting purposes, the costs and requirements to obtain certain forms of ID, including a driver’s license, nondriver state ID, or free state ID, vary by state. For example, a voter may be required to present documentation to obtain such IDs—including the free state IDs offered for voting purposes—and the underlying documents, such as a birth certificate, can result in costs to voters as well. However, voters also incur costs associated with time, for which it may be difficult to assign a dollar amount. For example, voters may spend time registering to vote, researching candidates and issues, obtaining required ID, traveling to a polling place, and casting a ballot. The time required for these activities can vary based on the options available to the voter—for example, voters who vote by mail (either as absentee voters or because they are in a vote-by-mail state) receive their ballots by mail and do not wait in line as voters who vote in person may have to do. The costs of spending time on these voting processes rather than some other activity result in an opportunity cost to voters, and these opportunity costs may vary by voter based on numerous factors, including each voter’s individual competing priorities as well as the range of options available for how to register or vote. For example, options that increase voter convenience reduce the amount of time a voter spends registering or voting instead of engaging in some other activity, thus reducing the opportunity cost of voting. Ultimately, an individual’s decision about whether to participate in the voting process—first, deciding whether to register to vote, then deciding whether to cast a ballot—can be seen as a consideration of the costs and benefits of voting for that individual, and not all individuals experience the same costs for participating. 
We provided a draft of this report to the EAC and the state and local election officials and DMV officials we met with in our five selected states. The EAC had no comments on the draft report, as noted in an e-mail received on June 23, 2016, from the commission’s Executive Director. We incorporated technical comments received from other parties in the report as appropriate. We are sending copies of this report to the EAC, appropriate congressional committees and members, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Section 302 of the Help America Vote Act (HAVA) established provisional voting requirements. Specifically, potential voters who declare that they are registered to vote in the jurisdiction in which they desire to vote must be permitted to cast provisional ballots in the event their names do not appear on the registration list or the voters’ eligibility is challenged by an election official. In general, the issuance of a provisional ballot can be described as a safety net or fail safe for the voter, in that (1) it maintains the person’s intent to vote and voting selections until election officials determine that the person does or does not have the right to cast a ballot in the election, and (2) it allows the determination of the voter’s eligibility to be made at a time when more complete information is available either from the voter or from the election jurisdiction. 
Election officials make the decision on whether to count provisional ballots based on voter eligibility standards established in state and federal law, including age, citizenship, and residence requirements. The policies and procedures for administering provisional voting vary across states. For example, in some states, a person can cast a provisional ballot in any precinct in the state regardless of where the person is registered. In other states, a person must cast a provisional ballot in the precinct in which the person is eligible to vote. Data on the overall number of provisional ballots cast are available through the Election Assistance Commission’s (EAC) Election Administration and Voting Survey (EAVS), which the EAC administers to states and U.S. territories after each general election. States report the data at the level of individual election jurisdictions. Table 2 below presents provisional ballots cast as a percentage of the total number of participating voters. In some cases, states do not provide data on the number of provisional ballots cast in some jurisdictions. To ensure the reliability of the data we present, table 2 omits data from any state where, in a given year, 20 percent or more of the local jurisdictions within the state did not provide data on provisional ballot use. To further assess the reliability of the 2008 through 2014 EAVS data, we interviewed EAC officials regarding their data collection and quality control processes. We found the data to be sufficiently reliable for the purposes of our review. This report addresses the following questions:
1. What are the reported benefits and challenges of efforts to collect and share voter registration information electronically?
2. What is known about the effect of selected policies and practices on voter turnout?
3. What is known about the costs of elections?
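The reliability screen applied to the table 2 data above reduces to a simple threshold rule: a state-year is dropped when 20 percent or more of its local jurisdictions failed to report provisional ballot use. A minimal sketch of that rule follows; the function name and inputs are illustrative, and the comparison is done on integer counts to avoid floating-point edge cases exactly at the 20 percent boundary.

```python
def passes_reliability_screen(jurisdictions_reported, jurisdictions_total):
    """Return True if a state-year's provisional ballot data are retained.

    Per the screening rule described in the report, a state-year is
    omitted when 20 percent or more of its local jurisdictions did not
    provide data on provisional ballot use.
    """
    missing = jurisdictions_total - jurisdictions_reported
    # Retain only when missing/total < 0.20, i.e., missing * 5 < total;
    # integer arithmetic keeps the boundary case (exactly 20%) exact.
    return missing * 5 < jurisdictions_total
```

For example, a state with 100 jurisdictions passes the screen if 81 report but fails it if only 80 do, since 20 missing jurisdictions is exactly the 20 percent cutoff.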
For all three questions, we (1) reviewed and analyzed relevant literature and (2) conducted interviews with state and local jurisdiction election officials from five selected states, Colorado, Delaware, Illinois, Oregon, and Rhode Island. For the question regarding voter registration efforts, we also analyzed data from the U.S. Census Bureau’s Current Population Survey (CPS) Voting and Registration Supplement for general elections occurring from 2008 through 2014 to determine the extent to which policies to collect and share voter information electronically may improve the accuracy of voter registration lists. We identified selected efforts and policies within the scope of each question to examine in detail in this review. Specifically, in examining efforts to collect and share voter registration information electronically, we limited the scope of our review to online voter registration, data-sharing efforts between the state election office and the state motor vehicle agency, and interstate exchanges of voter registration information, including states’ participation in the Electronic Registration Information Center (ERIC). We selected ERIC as an illustrative example of such interstate voter registration data-sharing efforts. We interviewed representatives of ERIC and reviewed documentation regarding requirements for participation. Regarding the effect of policies and practices on voter turnout, we limited our review to policies and practices that have been rigorously studied in academic and professional literature and that election officials have implemented or could potentially implement in their states or local jurisdictions. Additional information on how we identified these policies through a literature review is discussed below. In examining the costs of elections, we reviewed information about the costs to states and local election jurisdictions, as well as the cost to voters. We did not examine campaign or other third party costs. 
We conducted a literature review of research related to our three objectives. A GAO research librarian conducted searches of scholarly and peer-reviewed publications; government reports; dissertations; conference papers; books; association, think tank, and other nonprofit organizations’ publications; working papers; and general news articles published from 2002 through 2015 to identify publications that were potentially relevant to each objective. We also reviewed literature recommended by experts and researchers affiliated with organizations such as the Congressional Research Service, the National Conference of State Legislatures, the National Association of Counties, and the Bipartisan Policy Center. The literature search produced over 1,000 publications related to the topics in our three objective questions. GAO analysts worked in pairs to complete the following steps:
1. We reviewed each publication’s abstract and determined whether the publication was potentially relevant to one or more of our objectives.
2. For those publications we determined to be relevant, we reviewed the full text to determine whether the publication provided evidence that could be used to directly address one of our objectives. Each analyst reviewed the publication independently, then reached consensus within the pair.
For each objective, we analyzed the evidence presented in the relevant publications using a data collection instrument specific to each objective. Regarding efforts to collect and share voter registration information electronically, we used a data collection instrument to catalog the benefits and challenges of the efforts within the scope of our review. For every publication determined to be relevant to this objective, one analyst reviewed the full text version, highlighted the benefits and challenges that the article identified, and entered that information into the data collection instrument.
A second analyst compared the information entered in the data collection instrument against the original publication and noted any discrepancies. The pair of analysts discussed any discrepancies noted until they reached a consensus on the benefits and challenges identified within that publication. Research on voter turnout has examined a wide range of policies and practices (hereafter “policies”), and in reviewing the over 400 publications our search identified as related to voter turnout, we focused our review on policies that have been or could potentially be implemented by a state or local government. Thus, we excluded research on policies that could not reasonably or feasibly be implemented by a state or local government, including partisan policies—such as using partisan language in mailings to potential voters—and policies that would be resource-intensive, such as door-to-door canvassing. In order to provide a reasonable and useful synthesis of the literature, we further limited our scope by excluding research on policies that did not have a federal nexus to voting (such as how turnout in local elections is affected by consolidating local elections with state or federal elections), or examined alternative voting systems (e.g., ranked-choice or compulsory voting), or voter identification laws. The publications we reviewed often conducted multiple analyses or experiments. As used in this report, a “study” is an analysis or experiment with a unique sample. In some cases, authors presented their findings broken down by type of election (e.g., presidential vs. midterm) or election year (e.g., 2002 and 2004). In these instances, we considered the findings related to the separate types of elections or time periods as resulting from separate samples (thus, separate “studies,” as we use the term in this report). 
In reviewing the results from our literature review, as discussed in greater detail below, we excluded studies that assessed the combined effect of two or more policies (because such studies would not enable us to determine the effect of each policy independent of the other or others), analyzed policies using data for elections outside of the United States, or assessed a policy’s effect on over or under votes.
1. Cataloguing publications based on policies within our scope: A GAO analyst reviewed each publication and recorded (a) what policy or policies the publication addressed that were within our scope, (b) whether the publication used one or more systematic quantitative methodologies, and (c) whether or not the publication used original data analysis in at least one analysis. A second GAO analyst verified these determinations and worked with the first analyst to ensure both analysts were in agreement. Based on these reviews, we identified publications that analyzed one or more policies within our scope, used one or more systematic quantitative methodologies, and contained original analysis.
2. Identifying specific studies that used sufficiently rigorous methods: A GAO social scientist reviewed the publications identified in the first step to identify studies within these publications for which the design, implementation, and analyses were sufficiently sound to support the results and conclusions, based on generally accepted social science principles. Specifically, the social scientist examined such factors as whether data were analyzed before and after policy changes were made; how the effects of policy changes were isolated (i.e., the use of groups or states not receiving the change, or statistical controls); the appropriateness of sampling, if used; outcome measures; and the statistical analyses used. A second GAO social scientist verified these determinations and worked with the first social scientist to ensure both were in agreement.
A statistician reviewed studies when additional expertise was necessary to interpret findings from studies that used advanced statistical techniques or to ensure that researchers who analyzed complex survey data employed appropriate sample weights when reporting findings. To ensure that there was a sufficient body of research on each policy we selected, we excluded policies that were not examined in at least two publications. As a result of this process, from more than 400 publications we initially identified related to voter turnout, we found 53 that studied policies within the scope of our review and used sufficiently sound methodologies. Within these publications, 118 studies examined a total of 11 policies. The studies we reviewed used various quantitative approaches and data, and covered different types of elections and time periods. Some studies used randomized experiments or quasi-experimental research designs, and some studies used non-experimental designs, such as statistical analysis of observational data. Studies used both longitudinal and cross-sectional comparisons. Similarly, some studies used data obtained directly from official state or local voter records (or from vendors or others that compiled official voter records), and some used survey responses, such as from the CPS Voting and Registration Supplement. The studies we reviewed also covered different types of elections (e.g., presidential, midterm, primary, statewide, local, or various combinations of these) and time periods (with studies addressing from one election to multiple elections, spanning 1920 to 2014). Further, some studies examined the separate effects of more than one policy on voter turnout, and some studies reported more than one finding related to the effect of a given policy on turnout. For instance, some studies analyzed turnout data using multiple statistical models, resulting in multiple findings (one from each model).
Additionally, other studies reported more than one finding because they broke down their results by subsamples, such as by race or by treatment groups associated with variations in policy implementation. Where studies reported one or more effects on turnout for a given policy, we reported a range of effects. Moreover, not all studies reported findings that were statistically significant (at least at the 0.10 level). Many studies did not detect a statistically significant effect, or reported a finding that was not statistically significant along with a statistically significant effect. When a study reported one or more findings that were not statistically significant, this did not mean that the policy examined did not have an effect on turnout, only that the study could not affirmatively reject the possibility that the policy had no effect on turnout. For each of the 118 studies, a GAO social scientist reviewed each of the study’s findings related to voter turnout and recorded key information on each finding. If a study examined more than one policy, these findings were recorded separately for each policy included in the study. For each policy examined within a particular study, the social scientist categorized the findings related to that policy as follows:
Increased turnout: Only statistically significant positive effects (one or more).
Mixed evidence: One or more statistically significant effects (positive or negative) and one or more findings that were not statistically significant; or, one or more statistically significant positive effects and one or more statistically significant negative effects, with or without additional findings that were not statistically significant.
No evidence of effect: No statistically significant effects.
Decreased turnout: Only statistically significant negative effects (one or more).
For each significant effect, the social scientist also recorded the associated percentage point increase or decrease in voter turnout, when possible.
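The categorization rules above amount to a small decision procedure. A minimal sketch follows, assuming each finding is recorded as a (statistically significant?, direction) pair; this data representation is ours, not GAO's.

```python
def categorize_findings(findings):
    """Classify a policy's findings within one study.

    findings: list of (is_significant, direction) pairs, where
    direction is +1 for a positive effect and -1 for a negative one.
    """
    sig_pos = any(sig and d > 0 for sig, d in findings)
    sig_neg = any(sig and d < 0 for sig, d in findings)
    any_nonsig = any(not sig for sig, _ in findings)
    if not (sig_pos or sig_neg):
        # No statistically significant effects at all.
        return "No evidence of effect"
    if (sig_pos and sig_neg) or any_nonsig:
        # Significant effects in both directions, or a significant
        # effect alongside a non-significant finding.
        return "Mixed evidence"
    return "Increased turnout" if sig_pos else "Decreased turnout"
```

For instance, a study with one significant positive effect and one non-significant finding would be classified as "Mixed evidence".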
However, study authors oftentimes did not report one or more of their effects in terms of a percentage point increase or decrease in turnout; for instance, in some cases, authors reported effects in statistical terms such as coefficients from a statistical model. A second GAO social scientist verified these determinations and worked with the first social scientist to ensure both were in agreement. Regarding election costs, we used a data collection instrument to catalog information in each of the relevant publications regarding (1) the types of costs associated with election activities and (2) examples of amounts corresponding to these activities, where available. Of the over 150 publications from our search that identified costs associated with one or more election activities, none of the publications we reviewed comprehensively addressed all areas of election-related costs, and oftentimes the publications identified by our search only identified costs associated with one particular aspect of elections. To obtain the perspectives of state and local election officials regarding the policies, practices, and efforts in use in their respective states and jurisdictions that corresponded with our objectives, we selected five states to visit—Colorado, Delaware, Illinois, Oregon, and Rhode Island. We selected these states primarily based on the statewide implementation of the registration and turnout policies in the scope of our review, prioritizing states that had more policies in place than others. Specifically, we considered states that had implemented online voter registration, data-sharing efforts between the election office and the state motor vehicle agency or through interstate data-sharing efforts, Election Day or same day registration, vote-by-mail as their selected voting method, and requirements for informational mailings to voters.
Because all states and local jurisdictions incur election-related costs and could provide perspectives on the topic, considerations regarding election-related costs did not significantly affect our site selection decisions. Finally, we considered geographic diversity (by selecting states from various regions of the country), when possible, when making state selection decisions in order to capture possible regional differences in election administration practices. Within the five states identified above, we selected two local election jurisdictions to visit in order to obtain different perspectives at the local level within a state. We selected jurisdictions based on (1) recommendations from introductory teleconference meetings with state election officials and (2) demographic factors, specifically population size and density. The following is a list of the election jurisdictions we visited in our five selected states:
Colorado: Denver City and County, Grand County
Delaware: Kent County, New Castle County, and Sussex County
Illinois: City of Chicago, Sangamon County
Oregon: Multnomah County, Yamhill County
Rhode Island: City of Pawtucket, Town of Scituate
While our selected states and local jurisdictions are not representative of all states and jurisdictions nationwide and their responses cannot be generalized to other states and local election jurisdictions, officials in these locations provided a range of perspectives on efforts to collect and share voter registration information electronically, the effect of selected policies and practices on voter turnout, and elections-related costs. During our visits, we met with state and local election officials, including the state election director (or equivalent) and the chief election official in each local jurisdiction.
We also met with officials from state motor vehicle agencies in Colorado, Delaware, Oregon, and Rhode Island to get their perspectives on voter registration data-sharing programs with the state election office. We corroborated the information we gathered through these interviews by reviewing relevant state statutes and documentation that these states and local election jurisdictions provided to us, such as cost data. We conducted these interviews between October and December 2015. For examples of election costs provided in this report based on literature we reviewed or documents provided to us by state and local election officials, a GAO economist reviewed the source material to assess data reliability. To the extent that the source documentation included information about how cost estimates were derived, the economist reviewed the methodology to ensure reliability, but we did not independently assess the internal controls associated with state or local financial systems or other means for calculating such costs. We determined that these data were sufficiently reliable for providing illustrative examples of the costs for election activities. For the objective regarding voter registration, we analyzed data to determine whether policies to collect and share voter information electronically—specifically online voter registration and ERIC—improved the quality of voter registration lists. We focused our data analysis on these two efforts to collect and share voter information electronically because, among the efforts within the scope of our objective on registration, these efforts are fairly standardized in their implementation. Specifically, it was possible for us to identify which states have implemented online voter registration and ERIC and when they did so. In contrast, states may have varying levels of data sharing between their election offices and motor vehicles agencies. 
Therefore we could not group states into definitive groups of those that had similarly implemented DMV data sharing and those that had not. We analyzed data to determine whether these two policies—online voter registration and ERIC—affected the proportion of individuals surveyed who did not vote because there was a problem with their registration, as reported in the biennial CPS Voting and Registration Supplement from 2008 through 2014. Specifically, the CPS Voting and Registration Supplement asks respondents who indicated that they were registered but did not vote the main reason why they did not vote. These respondents are presented with 11 possible choices, one of which is “Registration problems (i.e. didn’t receive absentee ballot, not registered in current location).” We considered this measure—the proportion of registered non-voting individuals who responded to this question by selecting the choice for registration problems—to be a proxy-indicator of registration list quality because problems with registration can indicate that registration data are inaccurate. We reviewed documentation describing steps taken by the CPS data managers to ensure data reliability and tested the data for anomalies that could indicate reliability concerns. We determined that the CPS data were sufficiently reliable for the purposes of this analysis. For our analysis, we used the difference-in-difference modeling approach to attempt to identify what effect, if any, states’ adoption of online voter registration or ERIC had on our proxy measure of the quality of state voter registration lists. The difference-in-difference estimation strategy compares the difference in the average outcome between two time periods among a “treatment” group (in this case, states that adopted a given policy in-between the two time periods) and a “control” group (states whose policies did not change between the two time periods). 
The approach is designed to account for both pre-existing differences between treatment and control groups, as well as changes over time that affect states in both groups. In order to make appropriate comparisons, we modeled presidential (2008 and 2012) and midterm (2010 and 2014) years separately. The policy “treatments” of interest are states’ adoption of online voter registration and ERIC. As noted in our report, our analysis did not find statistically significant reductions in reported registration problems in states that had implemented online voter registration between the two presidential elections or the two midterm elections, compared to those states that had not. Similarly, our analysis did not find statistically significant reductions in reported registration problems in states that joined ERIC compared to states that had not. Thus, we cannot conclude based on the evidence from this analysis that states that adopted online voter registration or ERIC saw changes in our proxy measure of registration list quality. However, despite the advantages of our estimation approach, a number of limitations are associated with the data and methods we employed for this analysis. First, our outcome variable was not a direct measure of registration list accuracy, and there could have been other factors responsible for respondents reporting registration problems besides the quality of a state’s registration list. Second, our analysis did not control for any variables that may have been associated with the adoption of online registration or ERIC, and this could have affected our results. Finally, given the type of analysis we conducted, the number of states that had online registration during this time period as well as the relatively small size of our analysis sample may have affected our ability to detect a statistically significant relationship. 
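The difference-in-difference comparison described above reduces to a double subtraction of group means. A minimal sketch follows; the grouping keys and data layout are illustrative assumptions, not the variables used in the actual analysis.

```python
from statistics import mean

def did_estimate(outcome):
    """Difference-in-difference estimate from state-level outcomes.

    outcome: dict keyed by (group, period) -> list of values of the
    proxy measure (share of registered nonvoters citing registration
    problems). "treat" states adopted the policy between the two
    elections; "control" states did not.
    """
    m = {k: mean(v) for k, v in outcome.items()}
    change_treat = m[("treat", "post")] - m[("treat", "pre")]
    change_control = m[("control", "post")] - m[("control", "pre")]
    # Subtracting the control-group change nets out trends that
    # affected states in both groups between the two elections.
    return change_treat - change_control
```

If, say, treatment states' mean dropped from 6 to 4 percent while control states' mean dropped from 5 to 4.5 percent, the estimated policy effect would be a 1.5 percentage point reduction.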
We conducted this performance audit from April 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Broad academic research on voter turnout has generally found that individual differences among citizens may be strongly and consistently associated with differences in turnout rates. To illustrate, we have included figures 4 through 6 that show differences in turnout over time related to age, race and ethnicity, and educational attainment. Presidential and midterm elections are presented separately, since as noted earlier in this report, nationwide turnout has been consistently higher in presidential elections than midterm elections since 1972. The figures below report voter turnout estimates based on the percentage of individuals in a given demographic that voted among the total voting-age U.S. population in that demographic group, as reported by the U.S. Census Bureau, from the Current Population Survey (CPS) Voting and Registration Supplement. From more than 400 publications we initially identified related to voter turnout, we identified and reviewed 118 studies within 53 publications that (1) assessed policies that have been or could be implemented by a state or local government (11 such policies in total across the 118 studies), (2) contained quantitative analyses of the effect of a given policy on turnout, and (3) used sufficiently sound methodologies for conducting such analyses. This appendix presents additional information specific to each of these 11 policies. Each policy summary contains the following sections: Literature review results. 
This section includes a summary of findings from the literature related to a policy’s effects on turnout. It also includes a figure showing the specific findings reported by each study that examined the policy. As previously discussed, each study may contain more than one finding related to a given policy’s effect on voter turnout, such as when findings were broken down by race or treatment groups associated with variations in policy implementation. Where studies reported more than one statistically significant effect on turnout for a given policy, we reported the range of effects. Where studies reported effects in terms of percentage point differences— which allow for comparisons of effects on the same scale—we report those differences. However, not all studies reported statistically significant effects—and studies that reported such effects did not always do so in units of percentage point differences. We use symbols in each figure to communicate these various types of findings, as shown in figure 7. Examples of specific studies. This section includes a description of selected individual studies, including the specific findings, data analyzed, the population studied, any variations in policy implementation that were examined, and other contextual information—such as what specific election or elections were studied, among other things. Variations in implementation. This section includes descriptions of variations in how a policy may be implemented. For instance, jurisdictions may implement same-day registration at all polling places or at a limited number of them, or jurisdictions may send mailings in different formats, including as postcards or voter guides, among others. Observations on cost, voter convenience, and other considerations. 
This section includes information about the administrative costs of policy implementation, effects on voter convenience (costs to voters), and other considerations that election officials may wish to consider when deciding whether and how to implement these policies (e.g., technological or legal considerations). Some of these observations come from election officials we met with during our state and local visits.
Third party groups and local governments sometimes send e-mail messages to inform potential voters about dates or other aspects of upcoming elections and encourage them to register or vote.
Variations in implementation
The nature of e-mail would allow for variations in:
Content. E-mails may vary in the type of information provided and the tone of the message, and may or may not include website links to access other content. Additionally, e-mail messages may include a variety of presentation styles (e.g., font, color, pictures, and organization). They may also communicate information in one or more languages. Lastly, they may contain appeals for individuals to take action, such as to register or to vote.
We reviewed 18 studies in 4 publications; 13 of these studies were from the same publication and had the same author. In total, 15 of the 18 studies found no evidence of an effect on turnout and 3 studies reported mixed evidence—all 3 reported no evidence of an effect for one treatment group and statistically significant positive effects for another treatment group. Studies 1, 2, and 3 (from one publication and with the same authors) attempted to assess whether varying the source of unsolicited e-mail messages (official governmental source versus fictional voter mobilization organization) had differing effects on turnout. All three studies were conducted in cooperation with the San Mateo County, California, Registrar, in connection with a 2009 local election, the June 2010 statewide primary election, and the November 2010 general election, respectively.
All three studies found statistically significant, albeit small, effects on turnout (0.7, 0.5, 0.5 percentage point increases) for e-mail messages sent from the Registrar's office, as compared to the control groups (which did not receive e-mail messages). Using separate treatment groups, all three studies also assessed the effect of identical e-mail messages sent from a fictional voter mobilization organization, and each study found no statistically significant effect on turnout resulting from these messages—even though the content and the timing of the messages were the same as those sent from the Registrar's office. The authors noted that participants in their studies only included individuals who had provided an e-mail address at the time of registration (about 20 percent of all registered individuals). These individuals tended to be younger and less likely to have voted in previous elections than other registrants in the county. Studies 4 through 16 (from one publication and with the same author) found no evidence of an effect on turnout for e-mail messages, despite using different strategies to recruit study participants. The author partnered with three nonpartisan organizations to design, implement, and analyze the results of 13 studies examining the effect of e-mail communication on turnout.
o One organization conducted five studies (studies 4 through 8) at five different universities (across four states) during the 2002 congressional election. Student names and e-mail addresses were purchased from or provided by the five universities. For each university, students with valid e-mail addresses were randomly divided into a treatment group (which received e-mail messages) and a control group (which did not receive e-mail messages). Students in each treatment group received a series of unsolicited e-mails encouraging registration and voter turnout.
The author noted that the percentage of e-mails that were actually opened averaged only 20 percent across the five universities.
o Another organization conducted a study (study 9) among registered individuals under the age of 26 in the city of Houston, Texas, during a 2003 mayoral race. This study targeted those registrants who had indicated that they were willing to receive e-mail messages from third parties and did not opt out after receiving an e-mail invitation to participate in the study. The study included a treatment group of 6,386 that received a brief welcome e-mail and three additional e-mail messages leading up to the election, and a control group of the same size that only received the welcome e-mail. E-mails (other than the welcome e-mail) typically began with a short quiz and an invitation to explore a website, and concluded with a brief blandishment to vote. The percentage of e-mails that were opened was not available.
o A third organization conducted the remaining seven studies (studies 10 through 16) in connection with the 2004 presidential election. These studies differed from the other six studies because e-mail messages were only sent to individuals who came to the organization's website and specifically requested to join the e-mailing list reminding individuals to register and vote. Individuals who signed up were assigned to either a treatment or a control group in each of the seven study locations (California; Colorado; Michigan; Minnesota; Missouri; North Carolina; and Clark County, Nevada). Study participants in the treatment groups were sent a series of e-mail messages encouraging registration and turnout. E-mail message open rates were not recorded for these studies.
Observations on cost, voter convenience, and other considerations
One publication we reviewed noted that e-mail communication has low transaction costs and large economies of scale, compared to direct mail.
However, there may be costs associated with obtaining e-mail addresses for the target population. The voter registration forms of the five states we visited all asked for, but did not require, e-mail addresses. As a result, in these states e-mail contact information for registered individuals is dependent on whether a registrant chooses to provide this information. States or localities may have privacy laws or regulations associated with the storage and use of voter information, including e-mail addresses. Officials in one local jurisdiction said that, upon receiving a voter registration application that includes an e-mail address, they retain the e-mail address for potential outreach purposes, but they do not import the e-mail address into their state’s voter registration database along with the other required registration information (such as name and address). Officials said they do this to protect the privacy of these individuals—because once an e-mail address is imported into the state’s official voter registration database, it becomes part of the public record, and campaigns and other parties can use it to contact registered individuals.
State or local governments sometimes send mailings to inform potential voters about upcoming elections or encourage them to register or vote. Potential variations in mailings fall into five broad categories:
Format. Mailings may be sent as postcards, brochures, pamphlets, booklets, or in other formats. Additionally, they may include a variety of presentation styles (e.g., font, color, pictures, and organization), and they may communicate information in one or more languages.
We reviewed 36 studies in 16 publications. We found that the results of the studies were mixed, with most studies reporting no evidence of an effect (13 studies), an increase in turnout (9 studies), or a combination of the two (12 studies), as shown in figure 9 below.
Study 1 found mixed results for mailings among the groups studied, and reported that mailings may have differing effects on turnout among registered individuals, based on their propensity to vote. The study reported the results of a field experiment conducted in Brownsville, Texas, prior to the 2004 presidential election. Specifically, study authors obtained a list of all registered individuals from the Cameron County Elections Department, and randomly assigned participants to a mailing group and a control group. Postcards were sent to 3,794 registered individuals 8 days prior to the election encouraging them to vote. After the election, the Cameron County Elections Department provided validated voter turnout information for those in the mailing and control groups. The authors reported an increase in turnout of 5.5 percentage points among “regular” and “occasional” voters and 5.6 percentage points among “rare” voters, compared to similarly classified individuals in the control group, who did not receive postcards. The authors also reported no statistically significant effects for mailings among other subgroups (habitual voters and registered nonvoters). The authors suggested that the unusually high turnout effects they found might have been due to the lack of other mobilization efforts, which made their mailer more visible.

Study 5 tested the effects on voter turnout of five different variations of postcard mailings, and found mixed results among the groups studied. The postcards contained information about either (1) an individual’s past voting record, (2) past turnout rates in their local community, or (3) some combination of the two. Postcards were sent to 5,000 registered individuals in Hawthorne, California, approximately 5 days prior to a local election in November 2011.
After the election, the authors obtained validated voter turnout data from the official Hawthorne, California voter file, and compared it across the five treatment groups and the control group (which did not receive a postcard). The authors found that only messages that included information about the subjects’ own voting histories (whether or not they voted in the 2006 and 2008 general elections) effectively mobilized them to vote. Specifically, three of the five variations in postcard content contained individuals’ personal voting history, and for these variations, the authors reported turnout effects ranging from 1.4 to 3.1 percentage point increases, compared to the control group; whereas the other two variations in postcards, which did not include personal voter histories, resulted in no statistically significant effects on turnout. Studies 6 and 17 (from one publication and with the same authors) found that sending postcards to eligible but unregistered individuals encouraging them to register resulted in increases in turnout for all groups studied. Study authors conducted their studies in partnership with Delaware state election officials in connection with the 2012 general election (study 6), and in partnership with Oregon state election officials in connection with the 2014 general election (study 17). Upon receiving contact information for eligible but unregistered individuals in each state from the ERIC program, authors assigned these individuals to treatment groups (who received different variations of postcards) and a control group (who did not receive postcards) in each state. Both states used a similar set of variations in their communication—stressing either urgency, civic identity or obligation, or plan making (prompting individuals to think about the process of registering). Study 36 assessed the effect of mailing Easy Voter Guides to 27,108 registered Asian-Americans in Orange County, California, prior to the November 2006 general election. 
According to the authors of the study, Easy Voter Guides included user-friendly information about ballot items, including for and against arguments for ballot propositions, and information about the candidates for office. These guides were sent only to registered individuals with a “low-propensity” for voting. Study authors reported statistically significant effects in two of six groups studied, compared to the control group (which did not receive a mailing). An organization serving Asian-Americans mailed the guides prior to the November 2006 general election in English to registered individuals who were U.S. born or were foreign born and 35 years old or younger, and in Chinese, Korean, or Vietnamese to registered individuals who were foreign born and older than 35 years. The study found that mailing Easy Voter Guides in English to Chinese-American and Vietnamese-American registered individuals decreased turnout by 4.1 and 3.1 percentage points, respectively. However, the guides had no statistically significant effect on turnout among other language/ethnicity combinations (Chinese-Americans who were sent a guide in Chinese, Korean-Americans who were sent a guide in Korean or English, and Vietnamese-Americans who were sent a guide in Vietnamese).

Observations on cost, voter convenience, and other considerations

To save on costs, state and local election officials may send eligible mailings using U.S. Postal Service bulk mail, and in some instances may use discounted nonprofit rates. In some instances, local jurisdictions consolidate their mailings with state mailings to save on costs. For example, the Colorado Constitution allows local jurisdictions to combine the mailing of required notices related to debt or tax ballot measures with the mailing of the required ballot information booklet from the state in order to save mailing costs. Competing requirements can limit resource availability for informational mailing efforts.
For example, election officials in one local jurisdiction we visited said they are already required to send mailings for multiple purposes—such as to correct misinformation or obtain additional information as part of the registration process. These officials said they would like to send more mailings to inform people of their options regarding how they may register and vote, but said they are unable to do so because of the cost of the mailings they are already required to send. State laws may require that jurisdictions send mailings for various reasons. For instance, Colorado’s Constitution requires that (1) the state distribute an information booklet to registered voters prior to each election in which a statewide issue will appear on the ballot, and (2) local jurisdictions mail notices to inform registered voters about upcoming ballot measures related to debt or taxes. Similarly, Rhode Island state law requires that prior to each general election the state mail each residence basic information pertaining to all ballot questions, and specific information pertaining to questions involving bonds, indebtedness, or any other long term financial obligations.

Returning to studies 6 and 17, all variations of postcards—for both Delaware and Oregon—included the same basic information on the back of the postcards encouraging registration. However, the pictures and wording on the front of the postcards varied. Delaware postcards included four variations, which the authors referred to as “visualization/plan making,” “urgency,” “national identity,” and “state identity;” Oregon postcards also included four variations, which the authors referred to as “visualization/plan making,” “civic duty,” “state identity,” and “placebo.” The “placebo” postcard variation in Oregon included a picture of the same sticky note that is on the front of all other Oregon postcard variations and reads “IMPORTANT!
Don’t forget to register to vote before the deadline.” The placebo postcard, however, had no other pictures on the front of the postcard (as the other variations did). The lack of difference between the placebo variation and the other variations in Oregon indicated that none of the postcard variations led to increases in turnout beyond the common elements shared across all of the mailings.

Third party groups sometimes make “robocalls”—automated telephone calls that deliver prerecorded messages—to inform potential voters about dates or other aspects of upcoming elections and encourage them to register or vote. State and local governments could also consider this method.

Variations in implementation

The nature of robocalls would allow for variations in:

Content. Robocalls may vary in the type of information provided, the tone of the message, the language used (e.g., English, Spanish, etc.), and may or may not include information specific to the recipient of the call—such as a recipient’s polling place location. They may also contain appeals for individuals to take action, such as to register or to vote.

Source. Robocalls may include prerecorded messages from different people, such as state or local election officials or other recognizable individuals. In the studies we reviewed, robocalls were placed by organizations other than election jurisdictions.

We reviewed six studies in two publications, and five of the six studies came from one publication and had the same author. All six studies reported no statistically significant effects of robocalls on turnout in at least one of the elections examined, although one study also found a statistically significant effect on turnout ranging from 2.2 to 3.4 percentage points (as shown in figure 10).

Study 1 assessed whether robocalls applying “social pressure” made prior to the August 2008 Michigan primary election had turnout effects in that election and had persisting effects in the subsequent November 2008 presidential election.
Specifically, registered individuals in the treatment group who had not voted in the 2006 primary election received robocalls saying: “We are calling to remind you to vote in tomorrow’s election. Primary elections are important, but many people forget to vote in them. According to public records, you did vote in both November 2004 and 2006, but you missed the 2006 August primary. Please remember to vote tomorrow, August 5th.” A control group of registered individuals with the same voting history received a robocall encouraging them to recycle. Using public records collected by registrars of voters to compare groups, study authors reported that the treatment group turned out to vote at a rate 2.2 percentage points higher than the comparison group among one-voter households and 3.4 percentage points higher than the comparison group among two-voter households. Again using public records, the authors followed up to see if these turnout effects persisted in the next election—the 2008 presidential election in November. They reported that the differences between the treatment and control groups found in the 2008 primary election did not persist in the 2008 presidential election in November.

Studies 2 through 6 (from one publication and with the same author) each found no effect on turnout from the use of robocalls. The studies were all conducted in connection with the November 2002 midterm election, and each study targeted registered individuals who resided in precincts where Latinos made up at least 70 percent of those registered to vote, and where turnout among registered individuals was less than half the national average in the 2000 presidential election. Each study was conducted in precincts in one of five locations nationwide: Los Angeles County, California; Orange County, California; Harris County, Texas; the Denver, Colorado metropolitan area; and the state of New Mexico.
In each location, registered individuals were randomly assigned to either a location-specific treatment group (which received robocalls) or a location-specific control group (which did not receive robocalls). In the two California locations, the Texas location, and the New Mexico location, those in the treatment groups received two calls in Spanish, read by a prominent Spanish-language newscaster, encouraging them to vote and saying “Your vote strengthens our communities and families with better schools, better jobs and safer communities.” In the Denver metropolitan area, a Denver city council member read a similar script in English to those in the treatment group. The author reported no statistically significant effects resulting from the robocalls in each location (comparing turnout among those in the treatment group in each location to turnout among those in the control group in each location). The author highlighted challenges with robocalling, including the inability to know with certainty the primary language of the household, and difficulty obtaining current phone numbers. The author suggested that future studies might find stronger effects of robocalls on turnout by trying new or different messages, such as messages that include each respondent’s polling location.

Observations on cost, voter convenience, and other considerations

There may be costs associated with (1) obtaining phone numbers for the target population, (2) determining which phone numbers are current and which numbers connect to landlines versus mobile phones, and (3) placing robocalls. In one publication we reviewed, the author reported that the total cost of a robocall campaign that targeted 240,951 people was $23,725, making the cost per participant about 10 cents. Federal law does not prohibit non-commercial, informational robocalls to landlines without prior consent.
However, by regulation, prior consent must be obtained, either orally or in writing, for similar calls to wireless phones. States or localities may have privacy laws or regulations associated with the storage and use of voter information, including phone numbers. The voter registration forms of the five states we visited all asked for, but did not require, a phone number. As a result, in these states phone contact information for registered individuals is dependent on whether a registrant chooses to provide this information.

Third party groups sometimes send text messages to inform potential voters about dates of upcoming elections and encourage them to vote. State and local governments could also consider this method.

Variations in implementation

The nature of text messaging would allow for variations in:

Content. Text messages may vary in the type of information provided, the tone of the message, the language used (e.g., English, Spanish, etc.), and may or may not include website links to access other content. They may also contain appeals for individuals to take action, such as to register or to vote.

Source. In the studies we reviewed, text messages were sent by organizations other than election jurisdictions.

Study 1 compared turnout among those who received text messages sent a day prior to the 2006 midterm election to turnout among those who received no text messages. The authors reported the results of a nationwide field experiment wherein study participants were randomly divided into 4 treatment groups of about 1,000 registered individuals each, and a control group of about 4,000 registered individuals (control group participants did not receive a text message). Individuals in all four treatment groups received text messages that said: “A friendly reminder that TOMORROW is Election Day.” Following this sentence in the text message, treatment groups received different combinations of information.
Specifically, treatment groups one and two received a “civic duty” message (“Democracy depends on citizens like you—so please vote!”), with treatment group two also receiving the phone number for a national voter assistance hotline; and treatment groups three and four received a “close election” message (“Elections often come down to a few votes—so please vote!”), with treatment group four also receiving the phone number for the hotline. The text messages closed with the name of the organization that initially registered the individual, as well as the name of the organization responsible for sending the text message. The authors reported that the results indicated no significant difference between the turnout effects attributed to the two message appeals (civic duty vs. close elections). They also concluded that adding a polling place hotline number did not have an additional effect on turnout (beyond the effects found for text messages generally).

Studies 2 and 3 (from one publication, sharing the same authors as each other but different from the authors of study 1) used a text message similar to study 1’s. Study 2 reported an increase of 0.9 percentage points in turnout among the study participants who received text messages a day prior to the June 2010 statewide elections in California, compared to those who did not. Specifically, individuals were divided randomly into a treatment group and a control group. In the treatment group, text messages were sent to 14,844 registered individuals in San Mateo County who had provided valid cell phone numbers at the time of registration; the control group included 14,829 registered individuals who did not receive a text message, but who also provided valid cell phone numbers at the time of registration. The text messages were sent by a third party firm and included the same text as the civic duty message in study 1 noted above: “A friendly reminder that TOMORROW is Election Day.
Democracy depends on citizens like you—so please vote!” The authors noted that in study 1 (discussed above), people received texts from organizations that registered them and in some cases gave permission to be contacted via text message in the future. The authors distinguished their study from the previous study by noting that their experiment sent “cold” text messages—that is, the recipients did not receive text messages from an organization that registered them in person and were not asked for permission to receive text messages prior to Election Day. In study 3 the same text messages were sent to a similar population in San Mateo County, California, with a control group identified in the same way as in study 2—but this study was conducted in connection with a November 2009 local election. The authors reported a 0.8 percentage point increase in turnout associated with sending text messages in connection with this election.

Observations on cost, voter convenience, and other considerations

There may be costs associated with (1) obtaining phone numbers for the target population, (2) determining which phone numbers are current, and which are landline versus mobile numbers, and (3) sending bulk text messages. The three studies we reviewed used phone numbers provided during voter registration. Two of these studies used an outside firm to determine which numbers were connected to mobile phones, and one study noted that this task was performed, but did not say who performed it. All three studies used third party groups to send the text messages. Federal law prohibits text messages sent to mobile phones using an automatic telephone dialing system unless the recipient previously provided consent. For non-commercial, informational text messages, prior consent may be given orally or in writing. Additionally, the message must include a convenient way to “opt-out” of future messages.
Additionally, states or localities may have privacy laws or regulations associated with the storage and use of voter information, including phone numbers. The voter registration forms of the five states we visited all asked for, but did not require, a phone number. As a result, in these states phone contact information for registered individuals is dependent on whether a registrant chooses to provide this information.

Some states’ laws require that individuals wishing to vote submit their voter registration applications by a particular deadline before an upcoming election. We reviewed six studies from four publications. Four of the six studies assessed the effect on turnout of moving the registration closing date closer to Election Day, and two of the six studies assessed the extent to which a requirement to register at least 28 days prior to an election affected turnout. The results are shown in figure 12 below. In California, for example, individuals must register at least 15 days before Election Day; however, California residents who move to a new county after this deadline can vote at the polling place of their old address (if they were already registered).

One study examined data covering election years from 1920 through 2000. This study found that for all states combined, as the period of time between registration closing and Election Day decreased, turnout increased. However, when examining states on the basis of geography, the effect of registration closing dates on turnout depended on the group of states included in the comparison. Study authors reported statistically significant effects for all states combined and also for southern states, but not for non-southern states. For all states combined, the turnout increase was estimated to be 0.021 percentage points for each one-day decrease in the time between the registration deadline and Election Day, and for southern states the turnout increase was 0.045 percentage points per one-day decrease.
So, for all states combined, a 10-day decrease in the time between the registration deadline and Election Day resulted in a 0.2 percentage point increase in turnout; and for southern states, a 10-day decrease resulted in a 0.45 percentage point increase in turnout. Studies 5 and 6 (from one publication) assessed the effect of state registration deadlines that required individuals to register at least 28 days before an election (compared to the effect of state registration deadlines that were less than 28 days). Using the U.S. Census Bureau’s Current Population Survey (CPS) data, study 5 found no statistically significant effect on turnout from having a deadline at least 28 days before the 2008 presidential election, across four different models (the models differed in the extent to which they controlled for different early voting laws and other state-level reforms). Also using CPS data, study 6 conducted a similar analysis in relation to the 2012 midterm election and found a statistically significant decrease in turnout in one model (not reported as a percentage point estimate) and no statistically significant effects in the other three models.

Observations on cost, voter convenience, and other considerations

All five of the states we visited have registration deadlines prior to Election Day; however, three of them (Colorado, Illinois, and Rhode Island) also have some form of same day registration. State election officials in four of these states generally said that having a deadline provides time for local election officials to complete various election-related tasks, such as processing remaining registration applications and updating registration rolls, preparing and printing poll books, preparing ballots, and mailing ballots.
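The 10-day figures reported earlier for this study follow directly from the per-day estimates; a minimal arithmetic check (note that 0.021 × 10 is 0.21, which the text rounds to roughly 0.2):

```python
# Reproduce the reported 10-day turnout effects from the study's per-day estimates.
per_day_all_states = 0.021  # pp increase per one-day decrease, all states combined
per_day_southern = 0.045    # pp increase per one-day decrease, southern states
days_shortened = 10

print(round(per_day_all_states * days_shortened, 2))  # 0.21 (reported as ~0.2)
print(round(per_day_southern * days_shortened, 2))    # 0.45
```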
One publication we reviewed argued that voters’ interest in an election builds progressively until Election Day, and so registration closing dates can limit whether some interested citizens, who would otherwise be eligible, register to vote in time and are then able to vote. Similarly, a local election official we spoke with noted that despite the jurisdiction’s efforts to announce and publicize upcoming elections, many voters do not become interested in the elections until a few weeks before Election Day, which is too late to register in that jurisdiction’s state. States may have different deadlines and rules related to third party registration drives. For instance, in addition to having a separate deadline for individuals who register through voter registration drives, Colorado law requires that third party groups deliver completed applications to the appropriate local jurisdiction no later than 15 business days after the application was signed (or postmarked by that date, if mailed). Researchers at one organization we spoke with said that third party registration groups often turn in many paper registration forms at the same time close to the registration deadline, increasing the burden on election officials who need to input the information from the forms into their state’s voter registration database. Federal law requires that individuals who move to a new state within 30 days of a presidential election must be allowed to vote for President and Vice President in their former state, either in-person or by absentee ballot. Additionally, individuals who move within the same local jurisdiction must be given the opportunity to update their registration and vote even if they never notified the registrar of their move. Some state laws allow eligible individuals to register (or update their registration) and vote on the same day. 
Variations in implementation

Depending on the state, same day registration may be offered on Election Day, during an early voting period, or both. Some states refer to this policy as Election Day registration, and the two terms (same day registration and Election Day registration) are sometimes used interchangeably. When referring to findings in publications and studies, we use the terms used by the authors.

States also vary in where same day registration must be offered. In one state, for example, larger counties must offer it at the county election office and all permanent early voting and Election Day polling places, while smaller counties that use printed poll books are only required to offer it at the main election office and in any polling place that serves 20 percent or more of the county’s residents and is located in a different municipality than the main election office. Some states limit the type of ballot available to individuals who register and vote on the same day. For example, in Rhode Island, an individual who has not registered to vote can cast a ballot for President and Vice President on Election Day at his or her city or town hall or an alternate location designated by the local board of canvassers, and the casting of the ballot begins the process of voter registration.

Several of the studies we reviewed assessed how the proximity of registration deadlines to Election Day affects voter turnout, with Election Day registration being a registration deadline of 0 days before Election Day. Study 1 analyzed CPS data at the county level for all presidential and midterm elections from 1992 through 2004 in which there was at least one statewide office on the ballot. The author analyzed the effect of the length of registration deadlines (how far in advance an individual must register before an election) on turnout and estimated that a change from a registration deadline 30 days in advance of the election to 0 days (the equivalent of Election Day registration) would increase turnout by 8.7 percentage points.
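Study 1's 8.7 percentage point estimate for a 30-day-to-0-day change can be read as a per-day effect; a minimal sketch of the implied arithmetic (treating the effect as linear in the deadline length is an assumption of this illustration, not a claim by the study authors):

```python
# Study 1 estimated an 8.7 pp turnout increase when a registration deadline
# moves from 30 days before Election Day to 0 days (Election Day registration).
total_effect_pp = 8.7  # percentage points, for the full 30-day-to-0-day change
baseline_days = 30

# Implied effect per one-day reduction, assuming linearity (illustrative only).
per_day_pp = total_effect_pp / baseline_days
print(round(per_day_pp, 2))  # 0.29 pp per one-day reduction

# e.g., shortening a 30-day deadline to a 10-day deadline under this assumption:
print(round(per_day_pp * (30 - 10), 1))  # 5.8 pp implied increase
```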
Study 19 used individual-level CPS data from presidential and midterm elections from 1994 through 2006 and estimated that changing a registration deadline from 30 days before Election Day to 0 days (Election Day registration) would increase an individual’s probability of voting by an estimated 7.3 percent. Two studies (Studies 7 and 13) in another publication assessed the effect of Election Day registration in presidential and midterm elections, respectively, from 1980 through 2006. Using one model, Study 7 found that the policy increased turnout by 4.5 percentage points, on average, in presidential election years, and Study 13 found that the policy increased turnout by 2 percentage points, on average, in midterm election years. The authors also presented alternative models in each study that controlled for the interaction between Election Day registration and the competitiveness of key races (presidential races in the presidential election years and state gubernatorial races in midterm election years) and found no statistically significant effects of Election Day registration alone in these models. However, study 13 found that states with Election Day registration that had competitive gubernatorial elections had significantly higher midterm turnout than did states with only Election Day registration or just competitive gubernatorial races. The authors concluded that with less information generally available for midterm elections, electoral competition along with Election Day registration may combine to increase turnout. Study 9 examined Wisconsin’s initial implementation of Election Day registration as a natural experiment because the policy was implemented in different municipalities throughout the state at different times. Specifically, municipalities that previously required voters to register before voting were required to implement the policy, whereas municipalities that did not have a registration requirement did not change their procedures. 
As a result, a county could have some municipalities that implemented Election Day registration and some that continued without a registration requirement. Using county-level data from the Wisconsin Election Board for the number of votes cast in presidential elections from 1972 through 1980 and the proportion of a county’s population affected by the adoption of Election Day registration, the authors estimated that if a county were to move from no population covered by Election Day registration to having all of its population covered by Election Day registration, turnout would increase by about 3.0 percentage points.

Observations on cost, voter convenience, and other considerations

Implementing same day registration can have cost implications. To avoid the potential for longer lines at polling places resulting from the additional step to register and possible confusion over the process, some jurisdictions may hire additional poll workers to assist those who need to register as well as vote. For example, the Office of Policy Analysis within the Maryland Department of Legislative Services reported that the main additional cost of implementing Election Day registration, as it was being considered in proposed legislation, would be hiring additional staff to process registrations at polling places. Additionally, officials from one large urban jurisdiction told us they had purchased electronic poll books in anticipation of offering same day registration, which was a cost to the jurisdiction. However, electronic poll books may reduce costs in other areas, specifically eliminating the need to print paper poll books for each polling place. Literature that we reviewed indicated that same day registration can provide convenience for voters, including those who missed a registration deadline as well as particular subsets of voters, such as young voters or those who move frequently.
The Pew Center on the States reported that allowing for registration on Election Day can provide a failsafe option for eligible individuals whose registration information is inaccurately listed despite their own best efforts. Similarly, the Government Accountability Board of Wisconsin—a state that has had Election Day registration since 1976—noted that Election Day registration offers an eligible individual the opportunity to correct administrative mistakes made by the individual or election officials. The National Conference of State Legislatures reported that same day registration can result in administrative burdens for election officials, given that eligibility must be verified at the same time that the registrant provides his or her information. In general, states with same day registration require applicants to provide some form of identification and proof of residency. One case study across multiple states noted that the Election Day registration process and requirements can result in confusion for poll workers and long lines. Literature we reviewed cited concerns about potential fraud associated with registering individuals and immediately allowing them to cast a ballot, because there may be challenges to verifying an individual’s eligibility at the polling place. For example, polling places that use printed precinct-specific poll books may not have resources available for poll workers to be able to confirm that the individual is not already registered in another location. However, the literature and officials we spoke with noted that there are methods for limiting the potential for fraud. 
In Colorado, all jurisdictions in the state use electronic poll books that are connected in real-time via the internet to the state’s voter registration system, and the Colorado election director said that this technology helps officials or poll workers at the polling place determine if a person is already registered elsewhere in the state, which can mitigate fraud concerns. In Illinois, which recently began to implement same day registration, state officials said that local jurisdictions will be responsible for determining how to verify eligibility and prevent fraud—they noted that some may use tools connected to the internet to verify potential registrants against the state’s registration system, while others may require voters to sign an affidavit that they will not vote again at another location. Some states and local jurisdictions conduct elections by mail, such that all registered individuals receive ballots in the mail prior to Election Day and may return the ballots by mailing them or dropping them off at one or more designated locations. We reviewed 21 studies in 12 publications. We found that 11 studies reported an increase in turnout, 3 studies reported no evidence of an effect on turnout, 4 studies reported a combination of these (increases and non-significant findings), 1 study reported a combination of a decrease in turnout and no evidence of an effect on turnout, and 2 studies reported decreases in turnout. Study 1 examined turnout patterns in all statewide elections in Oregon from 1960 through 2010 to test the extent to which elections conducted using all vote-by-mail procedures affected turnout compared to elections that used polling places and absentee vote-by-mail. This was possible because Oregon adopted all vote-by-mail as a method of voting in all elections in 1998.
Using Oregon’s official turnout data, the authors examined the extent to which all vote-by-mail had turnout effects on different types of elections and assessed how long any effects persisted over time. They reported no statistically significant effects on turnout associated with conducting all vote-by-mail elections beyond the first three elections in which it was used and beyond certain types of low-interest special elections. However, they reported effects ranging from an increase of 8.4 percentage points to an increase of 15.5 percentage points associated with the first three elections conducted using all vote-by-mail. Specifically, the 8.4 percentage point increase was associated with all types of elections. It was calculated using the voting-eligible population as the denominator for turnout. The 15.5 percentage point increase was associated only with the special and primary elections that were among the first three elections using all vote-by-mail. It used registered individuals as the denominator for turnout. Additionally, the authors reported that using all vote-by-mail resulted in an 11 percentage point increase in turnout in low-profile, low-interest elections—which they defined as special elections that included only ballot initiatives and referenda, and did not include candidate races. The authors concluded that the introduction of all vote-by-mail in Oregon led to a novelty effect that dissipated after three subsequent elections. Additionally, they concluded that any lasting turnout effects of all vote-by-mail are most likely limited to subfederal contests, precisely where voter interest is lowest and the relative impact of lowering the costs of voting would be greatest. Study 7 assessed the extent to which Colorado’s adoption of all vote-by-mail in 2013 affected turnout in the 2014 midterm election.
The authors noted that when Colorado adopted all vote-by-mail, it retained all previously available voting options (i.e., voting in-person prior to or on Election Day). Thus, the authors asserted, their results reflect the effect of adding vote-by-mail as an option, rather than switching to vote by mail as the only option. Using county-level data from the Election Assistance Commission for all presidential and midterm elections from 2004 through 2014 and multivariate linear regression models, the authors estimated the effect of an all vote-by-mail election on turnout among registered individuals. Because same day registration was adopted at the same time as the all vote-by-mail policy, the authors excluded all individuals who voted using same day registration from their analysis. They reported that the all vote-by-mail system implemented in the 2014 election was associated with increases in turnout across all four models they used, ranging from 2.5 percentage points to 5.1 percentage points. Studies 20 and 21 (from one publication) found that turnout was lower in California precincts that conducted elections exclusively by mail compared to California precincts that used polling places in combination with the opportunity to cast absentee ballots. The authors took advantage of a “natural experiment” created by a state law in California that allows county registrars to designate any precinct with fewer than 250 registrants as an all vote-by-mail precinct. The authors noted that since this natural experiment did not perfectly mimic random assignment, they used matching methods to pair each all vote-by-mail precinct with precincts where elections were conducted using polling places and absentee ballots, and contained populations with similar demographic and political attributes (e.g., similar partisanship balances among the electorate and similar margins of victory between the top two contestants). 
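The matching step described above can be sketched in a few lines: each all vote-by-mail precinct is paired with the most similar polling-place precinct on observed attributes. The attribute names, the toy data, and the simple Euclidean distance below are hypothetical simplifications for illustration, not the authors' actual procedure.

```python
# Minimal nearest-neighbor matching sketch (hypothetical data and features;
# the published studies used richer demographic and political attributes).
def match_precincts(treated, controls):
    """Pair each all vote-by-mail precinct with its closest control precinct."""
    def distance(a, b):
        # Euclidean distance over the shared numeric attributes.
        return sum((a[k] - b[k]) ** 2 for k in ("dem_share", "margin")) ** 0.5

    pairs = []
    for t in treated:
        best = min(controls, key=lambda c: distance(t, c))
        pairs.append((t["name"], best["name"]))
    return pairs

treated = [{"name": "VBM-1", "dem_share": 0.48, "margin": 0.05}]
controls = [
    {"name": "Poll-A", "dem_share": 0.47, "margin": 0.06},
    {"name": "Poll-B", "dem_share": 0.60, "margin": 0.20},
]
print(match_precincts(treated, controls))  # [('VBM-1', 'Poll-A')]
```

Turnout in each matched pair can then be compared directly, which is the comparison the studies report.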
Study 20 assessed turnout differences between the two groups across 9 counties in the 2002 midterm election. Study 21 assessed differences between the two groups across 18 counties in connection with the 2000 presidential election. Using precinct-level data, the authors reported that turnout among voters in all vote-by-mail precincts was 1.5 and 2.7 percentage points lower than turnout among voters in polling place precincts for the 2002 midterm election and 2000 presidential election, respectively. The authors concluded that shifting elections from the polling place to the mailbox risks producing a decline in turnout for regularly scheduled general elections.

Observations on cost, voter convenience, and other considerations

State and local election officials we met with in Colorado and Oregon generally agreed that all vote-by-mail elections save on costs and are easier to administer than elections where votes are cast at polling places. For instance, according to the Colorado Director of Elections, many Colorado counties reported that the change to all vote-by-mail (which Colorado did in 2013) saved their jurisdictions money in the administration of elections. Further, this official said that counties reported that the primary area of cost savings was personnel costs, due to not having to staff as many locations. Additionally, officials in one local jurisdiction said that they believed that the all vote-by-mail system they use saves money because they do not have to maintain as much voting equipment, hire as many poll workers, or rent polling places. Chief election officials from three local jurisdictions stated that voting by mail allows voters whatever time they need to research the relevant issues and cast a ballot. On the other hand, the chief local election official from one local jurisdiction noted that filling out a ballot in one’s own home may compromise privacy or allow for coercion in the voting process (for example, from other family members).
The Presidential Commission on Election Administration reported several challenges associated with conducting elections by mail. For instance, the Commission reported that voting by mail requires successful delivery and receipt of the ballot at many stages in the voting pipeline. Thus, ballots can be lost in the mail or can be mailed out or received too late for timely voting. They also reported that voters occasionally make mistakes in complying with various signature and other requirements in order for their mail ballots to be counted. Nevertheless, the Commission reported that appropriate safeguards could mitigate these risks—such as establishing communication with the local Postmaster and implementing ballot tracking mechanisms. The state election directors in both Colorado and Oregon—which conduct all vote-by-mail elections—reported having little concern with the security and integrity of administering all vote-by-mail elections. These officials noted that their states require one or more local election officials to compare signatures on the ballots with signatures in their states’ respective voter registration databases and validate each signature individually. Additionally, one local jurisdiction we visited implemented a ballot tracking system that uses postal barcodes to track mail ballot envelopes from the time they are printed through every stage of the postal process, up to the time when they have been delivered to the local jurisdiction. This system provides reports to local election officials about the status of all mail ballots and can be used by individual voters to obtain status updates, including notifications that their ballots have been accepted for counting (or rejected due to a signature discrepancy). Some states or jurisdictions allow voters to cast their vote in-person without an excuse before Election Day.

Variations in implementation

States use different terms for this policy, such as early voting or in-person absentee voting, among others.
Among states that allow for early voting, the specific circumstances for in-person early voting—such as the dates, times, and locations—are based on state and local requirements. Implementation and characteristics of early voting vary among states and, in some cases, among the jurisdictions within a state. States vary in terms of the days and locations provided for early in-person voting, including the extent to which voting is available on one or more weekends prior to Election Day. Additionally, states differ in terms of the hours that polling locations are open for early in-person voting. For example, some states specify a particular location or minimum number of early voting locations, such as the election office, and allow local election officials discretion to expand the locations or number of early voting sites. Some states’ laws dictate the number of early voting locations based on the population of the relevant jurisdiction. We reviewed 20 studies from 12 publications, and these studies had varied findings. Seven studies found no statistically significant effect, 8 studies found that the policy decreased turnout, and 5 studies reported mixed evidence. Reported effects from these studies ranged from a 3.8 percentage point decrease in turnout to a 3.1 percentage point increase. Study 1 used CPS and state turnout data from presidential elections from 1972 through 2008 to assess the effect of the length of the early voting period on turnout. The study estimated that early in-person voting had no statistically significant effect on turnout if the early voting period was less than 27 days, and that a voting period of at least 27 days would be necessary to see any positive effect of early in-person voting on turnout. Further, the authors estimated that a 45-day early voting period would lead to a 3.1 percentage point increase in turnout. However, the authors note that 45 days is a longer voting period than most states allow.
Study 2 found that early in-person voting had mixed effects on turnout, depending on how long the policy had been in place. Using aggregate turnout data from 500 randomly selected counties nationwide, the authors analyzed turnout in these counties for presidential elections from 1972 through 2004 and examined the change in turnout in each county before and after the county implemented early in-person voting. The study found that the first time early in-person voting was offered in a presidential election, it increased county-level turnout by 1.5 percentage points compared to the previous presidential election, but the second time early in-person voting was offered in a presidential election, it decreased county-level turnout by 2.4 percentage points (also compared to the previous presidential election); the study found no statistically significant effect of this policy in the third presidential election after it was introduced (again, compared to the previous presidential election). Study 10 used CPS data covering presidential and midterm elections from 1980 through 2010 and found no statistically significant effects of early in-person voting on turnout among Whites and Blacks, but found evidence that early in-person voting decreased turnout among Latinos in states that offered this policy compared to states that did not, though the effect was not stated in terms of a percentage point difference. Studies 19 and 20, which are from one publication, analyzed the effect of early in-person voting on county-level turnout across multiple states (collected from several sources) in the 2012 and 2008 presidential elections, respectively. The studies found that the statewide availability of this policy decreased turnout by 3.8 percentage points in the 2008 election and by 3.5 percentage points in the 2012 election in counties that offered it in each election compared to counties where it was not offered. 
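Several of the county-level designs above compare the turnout change in adopting counties against the change in non-adopting counties over the same elections. On made-up numbers (illustrative only, not drawn from any of the studies), the basic difference-in-differences calculation is:

```python
# Difference-in-differences on hypothetical county turnout rates.
# "Treated" counties adopted early in-person voting between the two elections.
control_pre, control_post = 0.55, 0.56   # hypothetical control-county averages
treated_pre, treated_post = 0.53, 0.57   # hypothetical treated-county averages

# Change in treated counties, net of the change observed in control counties.
did_effect = (treated_post - treated_pre) - (control_post - control_pre)
print(round(did_effect, 3))  # 0.03, i.e., a 3 percentage point difference
```

Netting out the control-county change is what distinguishes this design from a simple before-and-after comparison within adopting counties.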
The author suggested that early in-person voting depresses turnout by decreasing the civic energy traditionally associated with Election Day as well as get-out-the-vote (GOTV) efforts and media attention in the weeks before Election Day. However, the author also included in his analysis a variable that accounted for the number of early voting locations (per 1,000 voting-age residents) and found that a greater number of early voting locations was associated with higher turnout. The author concluded that while adopting early in-person voting at the state level fails to increase turnout, adopting early voting and also providing an ample number of locations could lower voters’ costs enough to increase turnout.

Observations on cost, voter convenience, and other considerations

Although there can be costs associated with opening polling places for additional days, the Presidential Commission on Election Administration reported that these costs could be offset because fewer polling places could be needed on Election Day, although the Commission noted that adequate facilities should remain available on Election Day to ensure reasonable wait times. Officials in one local jurisdiction we visited said they were able to reduce Election Day polling places by 20 percent because of early voting opportunities, including early in-person voting and no-excuse absentee voting. Officials from one local jurisdiction we visited said that early in-person voting increases convenience for voters and reduces wait times on Election Day. These officials noted that early in-person voting can also reduce the effect of any problems that arise on Election Day—for example, if there is inclement weather, individuals who vote early are unlikely to be affected. In a nationwide survey we conducted following the November 2012 election, officials in 23 states reported that the availability of alternative voting options, including early in-person voting, can affect wait times on Election Day.
In some cases, officials reported that they believed that no or limited opportunities for voting outside of Election Day were a contributing factor to long wait times on Election Day. States and local jurisdictions determine the hours available for voting on Election Day or any available early voting days, and the number of hours available for voting can be longer in some locations than others.

Variations in implementation

Current federal law does not dictate the hours that polling places are required to be open on Election Day. Some states establish statewide requirements for the times that voting will be available, while other states may allow for local discretion regarding the opening and closing times for the polls. For example, Delaware state law requires polling places to open at 7:00 a.m. and close at 8:00 p.m. Kansas state law establishes poll hours from 7:00 a.m. to 7:00 p.m., unless different hours are set and publicly announced by the county election officer, and if different hours are set state law requires the polls to be open at least 12 continuous hours (opening no earlier than 6:00 a.m., closing no earlier than 7:00 p.m. and no later than 8:00 p.m.). In Vermont, each town sets polling place hours, though state law requires that the polls open no earlier than 5:00 a.m. and no later than 10:00 a.m. and that polls close at 7:00 p.m. The literature we reviewed addressed the hours for voting on Election Day, but states and local jurisdictions may also vary the hours available for voting during early in-person voting. For example, polling places can be open different hours on weekdays compared to weekends during the early voting period. One study we reviewed examined the effects of states having polling places open before 7:00 a.m. (compared to states that did not have polling places open before that time) and the effects of having polling places open after 7:00 p.m. (compared to states that did not have polling places open after that time).
The study reported an estimated 1.7 percentage point increase in turnout from opening polling places before 7:00 a.m. and a 1.0 percentage point increase in turnout from keeping polling places open after 7:00 p.m. The authors of study 2 compiled an original data set of state turnout rates for presidential elections from 1920 through 2000 using vote totals provided by the Clerk of the U.S. House of Representatives and population data from the U.S. Census Bureau’s Statistical Abstracts of the United States. The study used three multivariate regression models—one for all 50 states, one for 11 southern states, and one for 39 non-southern states—to look at the effect of the varying number of hours the polls are open, by state, on turnout. The study found that the number of hours that polls were open did not have a statistically significant effect in the models for all 50 states or southern states, but the model for non-southern states indicated a statistically significant relationship between the total number of hours that the polls are open on Election Day and voter turnout, particularly between non-southern states that had shorter polling days (4 to 5 hours) and those that had longer polling days (9 to 10 hours). The authors noted a possible decrease in turnout from having the polls open for more than 11 hours, and suggested that there is a point (around 10 hours) at which state turnout rates are maximized.

Observations on cost, voter convenience, and other considerations

Keeping polling locations open for additional hours may have cost implications. For example, election officials in one local jurisdiction we visited said the election office must pay for the marginal cost of having the facility open for the additional voting hours, which can include costs such as utilities or building security. Similarly, states or local jurisdictions that pay poll workers by an hourly rate would incur additional costs from having polling places open additional hours.
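The Kansas alternative-hours rule described earlier lends itself to a compact check. The function below is a hypothetical encoding of that single rule as summarized in the text (at least 12 continuous hours, opening no earlier than 6:00 a.m., closing no earlier than 7:00 p.m. and no later than 8:00 p.m.); it is a sketch, not drawn from any state's actual election systems.

```python
# Hypothetical check of Kansas-style alternative poll hours, as described in
# the text. Hours are expressed on a 24-hour clock (e.g., 19 = 7:00 p.m.).
def hours_allowed(open_hour, close_hour):
    return (
        close_hour - open_hour >= 12   # at least 12 continuous hours
        and open_hour >= 6             # open no earlier than 6:00 a.m.
        and 19 <= close_hour <= 20     # close between 7:00 and 8:00 p.m.
    )

print(hours_allowed(7, 19))    # True  (7:00 a.m. to 7:00 p.m., 12 hours)
print(hours_allowed(8, 19))    # False (only 11 hours)
print(hours_allowed(6, 20.5))  # False (closes later than 8:00 p.m.)
```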
Additional poll hours may provide additional convenience to some voters. According to CPS data for general elections from 2000 through 2014, the most commonly reported reason among registered voters for not voting was being too busy or having a conflicting schedule. Extending voting hours may provide additional options for some voters whose schedules conflict with existing available hours for voting. Election offices can face challenges in recruiting poll workers, partly because of the hours workers are required to be at polling locations, and election offices may face increased challenges in recruiting poll workers for longer voting days. The EAC has reported that many people find the long hours required of poll workers to be a deterrent to serving as a poll worker. According to the EAC, to address the issue that some individuals may not want to work long hours on Election Day, some jurisdictions allow poll workers to work split shifts. However, the EAC notes that such split shifts are sometimes controversial because they can create administrative difficulties; for example, poll workers for later shifts may not show up to replace those who are scheduled to leave. Some states allow any registered individual to request an absentee ballot and to vote by mail, without requiring that the individual state a reason for doing so.

Variations in implementation

Some states with no-excuse absentee voting require voters to submit an application for an absentee ballot with each election, whereas other states permit any registered voter to join a permanent absentee voting list.
Once voters opt into this list, they will, in general, automatically receive an absentee ballot for all future elections. We reviewed 18 studies from 12 publications that had varied findings. Reported effects from these studies ranged from a 3.2 percentage point increase to a 4.0 percentage point decrease, with many studies (10 of 18) reporting mixed evidence or no statistically significant effects. Study 2 assessed the effect of expanding no-excuse absentee voting in Colorado to include the option for joining a permanent absentee list (Colorado adopted this policy in 2008). Using county-level data on voting methods and turnout in Colorado for all presidential and midterm elections from 2004 through 2014, the authors estimated an associated increase in turnout of 1.8 to 2.3 percentage points among registered individuals across four models (comparing turnout after the adoption of the policy to turnout prior to the adoption of the policy). Studies 4, 5, and 6 (from one publication) used CPS data to assess the effects of both permanent no-excuse absentee voting and nonpermanent no-excuse absentee voting on turnout in the 2000, 2004, and 2008 presidential elections, respectively. Each study found that both forms of no-excuse absentee voting were associated with a higher probability of voting among individuals in states that allowed these voting options than in states that did not allow these options. However, the authors did not report their findings in terms of percentage point effects on turnout. Studies 15 and 16 (from one publication) assessed the effect of no-excuse absentee voting in midterm and presidential elections, respectively. These studies utilized state-level turnout data from all 50 states for elections held from 1980 through 2006. Study 15, related to midterm elections, found no statistically significant effect on turnout associated with no-excuse absentee voting (comparing turnout in states that had no-excuse absentee voting to states that did not).
Study 16, related to presidential elections, estimated a decrease in turnout of 1.1 percentage points in states that allowed no-excuse absentee voting, compared to states that did not. Study 17 attempted to determine the extent to which no-excuse absentee voting affected turnout and the extent to which the effect persisted in subsequent elections after its introduction. Using turnout data from 500 randomly selected counties nationwide, and examining turnout in presidential elections from 1972 through 2004, the authors of this study reported that the policy had no statistically significant effects on turnout in the first two presidential elections after it was introduced. However, they estimated a 2.8 percentage point decrease in turnout associated with the third presidential election after the policy became available.

Observations on cost, voter convenience, and other considerations

The author of one publication reported cost savings associated with administering permanent no-excuse absentee ballots versus administering nonpermanent no-excuse absentee ballots. Specifically, the author reported that in the 2008 presidential election, Contra Costa County in California spent $1.37 per ballot to administer about 215,000 permanent no-excuse absentee ballots versus $10.64 per ballot to administer about 40,000 nonpermanent no-excuse absentee ballots. Some of the cost savings, the author explained, come from not having to process individual requests for each ballot. The author notes, however, that while the lower cost for administering permanent absentee ballots is encouraging, it is important to remember that mail ballots still represent additional costs in the conduct of elections as long as a jurisdiction is also staffing polling places. Some jurisdictions may choose to pay for return postage on absentee ballots, which transfers this cost from the voter to the jurisdiction.
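The per-ballot figures reported above imply total administration costs that a quick arithmetic check makes concrete (ballot counts are the approximate figures from the publication):

```python
# Approximate 2008 Contra Costa County figures as reported in the text.
permanent_ballots, permanent_cost_each = 215_000, 1.37
nonpermanent_ballots, nonpermanent_cost_each = 40_000, 10.64

permanent_total = permanent_ballots * permanent_cost_each
nonpermanent_total = nonpermanent_ballots * nonpermanent_cost_each

print(f"${permanent_total:,.2f}")     # $294,550.00 for ~215,000 ballots
print(f"${nonpermanent_total:,.2f}")  # $425,600.00 for ~40,000 ballots
```

That is, the permanent list handled more than five times as many ballots for roughly 70 percent of the total cost of the nonpermanent process.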
In 2010, the EAC reported that approximately 30 percent of survey respondents in a nationwide EAC study believed that free postage would make them more likely to vote in an election. However, only 9 percent of respondents indicated that without a free postage program there was a significant chance they would not vote in an election. Officials in one local jurisdiction we visited said that enabling more people to vote by mail through no-excuse absentee voting can reduce the number of people voting in person on Election Day. These officials said they plan to send an absentee application to all registered individuals with an active registration prior to the 2016 presidential election to encourage the use of absentee voting. They said the amount of cost savings from this effort will depend on how many people use this option. They anticipate an increase in the use of absentee voting and expect some overall cost savings from this effort, but noted that absentee mail voting also has labor costs—such as costs associated with processing the requests, mailing the ballots, and performing signature verifications, among other things. Some states and local jurisdictions allow for the use of vote centers, which are polling places strategically located throughout a political subdivision where any registered individual may vote, regardless of the precinct in which the individual resides.

Variations in implementation

Vote centers may be open during an early voting period, on Election Day, or both. They may be located at shopping malls, grocery stores, community centers, or a variety of other locations, and may vary in the number of registered individuals they serve. We reviewed six studies in four publications. Five of the six studies had the same two co-authors, although one of the five studies included additional co-authors. Most studies (five of six) reported that the use of vote centers increased turnout.
Study 1 compared turnout among voters in Larimer County, Colorado (which used vote centers in 2003 and 2004) to turnout among voters in adjacent Weld County, Colorado (which did not use vote centers in 2003 and 2004). The elections examined included a November 2003 local election, an August 2004 state and federal primary election, and the November 2004 presidential election. The authors noted that Larimer and Weld counties are geographically proximate to each other, shared elected representatives, and shared some demographic characteristics among their populations. To make this comparison, the authors matched 1,930 registered individuals in Larimer County (who used vote centers) to 1,930 similarly-situated registered individuals in Weld County (who did not use vote centers). The authors found that turnout was 2.6 percentage points higher in the treatment group than in the control group across the three elections, and concluded that this difference suggested that vote centers had a positive effect on turnout. Study 2 compared turnout in Travis County, Texas—in elections that occurred before and after the county adopted vote centers—and found that the use of vote centers increased turnout by 1.4 percentage points. Specifically, the authors compared turnout in a 2009 statewide constitutional amendment election (before Travis County adopted vote centers) to turnout in a 2011 statewide constitutional amendment election (after the county adopted vote centers). The authors noted that the elections were similar to one another and there were no significant campaigns related to any of the amendments in either election (11 amendments in 2009 and 10 amendments in 2011). To make the comparison, the authors obtained the Travis County voter file and filtered it to exclude any individual who was not eligible to vote in both elections. 
The authors explained that in Travis County, most of the precinct polling sites used in the 2009 election continued to operate in the 2011 election but were converted to vote centers, and the county also added six new vote center locations for the 2011 election (for a total of 187 vote center locations). As such, Travis County adopted the openness of vote centers—so that anyone in the county could vote at any location— without centralizing the polling places. This, the authors asserted, allowed them to isolate the effects of openness on turnout apart from effects that might result from consolidating the total number of polling places; however, the Travis County model likely differs from most other vote center models where consolidation is a key aspect. Study 6 assessed the effect of vote centers on turnout in the 2006 midterm election for 61 of Colorado’s 64 counties. Using county-level turnout data, the authors compared turnout among counties that used vote centers and counties that did not. The authors used three different models to assess the effect of vote centers on turnout and the extent to which turnout also may have been influenced by electoral competition. The authors reported decreases in turnout of 2 and 1.8 percentage points in counties using vote centers for two of the models and no statistically significant effect in the third model. The authors noted that these findings match some basic statistics regarding which counties chose to make the switch to vote centers, with counties that chose to make the switch having lower overall turnout in both the 1998 and 2002 midterm elections also. Thus, the authors concluded, the causal direction of the relationship between vote centers and turnout remains unclear. 
Observations on cost, voter convenience, and other considerations

A study sponsored by the Indiana Secretary of State and published in 2010 concluded that Election Day vote centers would likely produce significant cost savings for all counties that might implement them due to efficiencies gained from having fewer overall polling places and poll workers on Election Day, and reducing the number of voting machines necessary. Informed by a previous study relating specifically to the implementation of Election Day vote centers at three pilot sites, authors of this study developed a model that estimated possible cost savings associated with implementing Election Day vote centers in each county. Compared to a precinct-based process, the authors estimated that Election Day vote centers would result in cost-per-vote savings ranging from 20 percent to 56 percent across Indiana counties. The chief election official in one local jurisdiction we visited said that vote centers are easier to manage and result in cost savings for the jurisdiction, compared to managing traditional precinct locations. Specifically, this official said that having a smaller number of locations and not having to hire as many poll workers results in cost savings. Additionally, the official said, vote centers provide a better experience for voters because, in this jurisdiction, permanent elections office staff, rather than temporary poll workers, are always present to answer any questions or address any concerns. According to one publication we reviewed, although vote centers are typically fewer in number across jurisdictions and may be greater distances from individuals’ residences than are precinct-based polling places, they increase the number of sites available to potential voters, and may be more complementary with people’s daily routines than exclusive precinct locations.
The authors note that having vote centers available might be particularly important for those individuals who commute longer distances to work, as precinct-based polling locations might only be accessible to them in the mornings or evenings, while other polling locations might be nearer to their workplaces and more accessible throughout the day. The EAC guidelines state that vote centers require the use of electronic poll books and a secure real-time connection to a central election office voter registration database to ensure that no voters can vote twice in an election. Additionally, the National Conference of State Legislatures reported that vote centers must be able to produce the appropriate ballot for each voter. This requires either touchscreen machines that can be reset for each voter, “print-on-demand” equipment, or sufficient quantities of paper ballots for every ballot style. Thus, there may be initial investments required for some jurisdictions wishing to convert to a vote center model.
This bibliography contains citations for the studies in the 53 publications we reviewed regarding policies and practices that may affect voter turnout. The publications listed below include one or more studies for which the design, implementation, and analyses were sufficiently sound to support the results and conclusions, based on generally accepted social science principles. (See appendix II for more information about how we made these determinations.) Publications may be listed multiple times—once under each policy or practice within our scope that the publication’s authors analyzed—and following the citation we include the study numbers that correspond to content in the individual policy summaries earlier in this report.
For example, in the e-mail policy summary, figure 8 in appendix IV depicts findings from 18 studies (each numbered in the figure), and the numbers 1 through 18 in the figure correspond to the numbers listed following citations for publications that analyzed the effect of e-mail on voter turnout. Haenschen, Katherine. “@ The Vote: Four Experiments Using Facebook & Email To Increase Voter Turnout.” Prepared for the American Political Science Association Conference, September 2015. (Study 17) Malhotra, Neil; Melissa R. Michelson; and Ali Adam Valenzuela. “Emails from Official Sources Can Increase Turnout.” Quarterly Journal of Political Science, vol. 7, no. 3, (2012): 321-332. (Studies 1–3) Nickerson, David W. “Does Email Boost Turnout?” Quarterly Journal of Political Science, vol. 2, no. 4, (2007): 369-379. (Studies 4–16) Ulbig, Stacy G. and Tamara Waggener. “Getting Registered and Getting to the Polls: The Impact of Voter Registration Strategy and Information Provision on Turnout of College Students.” PS: Political Science and Politics, vol. 44, no. 3, (2011): 544-551. (Study 18) Abrajano, Marisa and Costas Panagopoulos. “Does Language Matter? The Impact of Spanish Versus English-Language GOTV Efforts on Latino Turnout.” American Politics Research, vol. 39, no. 4, (2011): 643-663. (Study 7) Bedolla, Lisa Garcia and Melissa R. Michelson. “What Do Voters Need to Know? Testing the Role of Cognitive Information in Asian American Voter Mobilization.” American Politics Research, vol. 37, no. 2, (2009): 254-274. (Studies 2, 34, and 36) Davenport, Tiffany C.; Alan S. Gerber; Donald P. Green; Christopher W. Larimer; Christopher B. Mann; and Costas Panagopoulos. “The Enduring Effects of Social Pressure: Tracking Campaign Experiments over a Series of Elections.” Political Behavior, vol. 32, no. 3, (2010): 423-430. (Studies 3, 8, 10, 13, 27, and 28) Gerber, Alan S.; Gregory A. Huber; Daniel R. Biggers; and David J. Hendry. 
“Ballot Secrecy Concerns and Voter Mobilization: New Experimental Evidence about Message Source, Context, and the Duration of Mobilization Effects.” American Politics Research, vol. 42, no. 5, (2014): 896-923. (Studies 4 and 14) Mann, Christopher B. and Lindsay Pryor. 2013 ERIC Voter Registration Outreach in Washington State. Accessed July 23, 2015, http://www.sos.wa.gov/assets/elections/2013-ERIC-Voter-Registration-in-Washington-State-FINAL-3-20-2014.pdf. (Study 16) Mann, Christopher B. and Lisa A. Bryant. “If You Ask, They Will Come (to Register and Vote): Field Experiments with State Election Agencies on Encouraging Voter Registration.” Prepared for the MIT Conference on Election Administration, June 8, 2015. (Studies 6 and 17) Matland, Richard E. and Gregg R. Murray. “An Experimental Test of Mobilization Effects in a Latino Community.” Political Research Quarterly, vol. 65, no. 1, (2012): 192-205. (Study 1) Matland, Richard E. and Gregg R. Murray. “I Only Have Eyes for You: Does Implicit Social Pressure Increase Voter Turnout?” Political Psychology (2015). (Studies 11, 22–24, and 35) Murray, Gregg R. and Richard E. Matland. “Mobilization Effects Using Mail: Social Pressure, Descriptive Norms, and Timing.” Political Research Quarterly, vol. 67, no. 2, (2014): 304-319. (Studies 20 and 21) Panagopoulos, Costas. “I’ve Got My Eyes on You: Implicit Social-Pressure Cues and Prosocial Behavior.” Political Psychology, vol. 35, no. 1, (2014): 23-33. (Study 15) Panagopoulos, Costas. “Raising Hope: Hope Inducement and Voter Turnout.” Basic and Applied Social Psychology, vol. 36, no. 6, (2014): 494-501. (Studies 25 and 26) Panagopoulos, Costas; Christopher W. Larimer; and Meghan Condon. “Social Pressure, Descriptive Norms, and Voter Mobilization.” Political Behavior, vol. 36, no. 2, (2014): 451-469. (Study 5) Ramírez, Ricardo.
“Giving Voice to Latino Voters: A Field Experiment on the Effectiveness of a National Nonpartisan Mobilization Effort.” The ANNALS of the American Academy of Political and Social Science, vol. 601, no. 1, (2005): 66-84. (Studies 18, 19, and 30–33) Trivedi, Neema. “The Effect of Identity-Based GOTV Direct Mail Appeals on the Turnout of Indian Americans.” The ANNALS of the American Academy of Political and Social Science, vol. 601, no. 1, (2005): 115-122. (Study 29) Wolfinger, Raymond E; Benjamin Highton; and Megan Mullin. “How Postregistration Laws Affect the Turnout of Citizens Registered to Vote.” State Politics & Policy Quarterly, vol. 5, no. 1, (2005): 1-23. (Study 12) Wong, Janelle S. “Mobilizing Asian American Voters: A Field Experiment.” The ANNALS of the American Academy of Political and Social Science, vol. 601, no. 1, (2005): 102-114. (Study 9) Davenport, Tiffany C.; Alan S. Gerber; Donald P. Green; Christopher W. Larimer; Christopher B. Mann; and Costas Panagopoulos. “The Enduring Effects of Social Pressure: Tracking Campaign Experiments over a Series of Elections.” Political Behavior, vol. 32, no. 3, (2010): 423-430. (Study 1) Ramírez, Ricardo. “Giving Voice to Latino Voters: A Field Experiment on the Effectiveness of a National Nonpartisan Mobilization Effort.” The ANNALS of the American Academy of Political and Social Science, vol. 601, no. 1, (2005): 66-84. (Studies 2–6) Dale, Allison and Aaron Strauss. “Don’t Forget to Vote: Text Message Reminders as a Mobilization Tool.” American Journal of Political Science, vol. 53, no. 4, (2009): 787-804. (Study 1) Malhotra, Neil; Melissa R. Michelson; Todd Rogers; and Ali Adam Valenzuela. “Text Messages as Mobilization Tools: The Conditional Effect of Habitual Voting and Election Salience.” American Politics Research, vol. 39, no. 4, (2011): 664-681. (Studies 2 and 3) Leighley, Jan E. and Jonathan Nagler. Who Votes Now? Demographics, Issues, Inequality, and Turnout in the United States. 
Princeton, New Jersey: Princeton University Press, 2013. (Study 2) McDonald, Michael P.; Enrijeta Shino; and Daniel A. Smith. “Convenience Voting and Turnout: Reassessing the Effects of Election Reforms.” Prepared for the New Research on Election Administration and Reform Conference at MIT, June 8, 2015. (Studies 5 and 6) Springer, Melanie J. “State Electoral Institutions and Voter Turnout In Presidential Elections, 1920-2000.” State Politics & Policy Quarterly, vol. 12, no. 3, (2012): 252-283. (Study 3) Vonnahme, Greg. “Registration Deadlines and Turnout in Context.” Political Behavior, vol. 34, no. 4 (2012): 765-779. (Studies 1 and 4) Alvarez, R. Michael; Stephen Ansolabehere; and Catherine H. Wilson. “Election Day Voter Registration in the United States: How One-Step Voting Can Change the Composition of the American Electorate.” Working paper, Caltech/MIT Voting Technology Project, June 2002. (Study 20) Burden, Barry C. “Registration and Voting: A View from the Top” in The Measure of American Elections, ed. Barry C. Burden and Charles Stewart III. New York, New York: Cambridge University Press, 2014. (Studies 2 and 4) Burden, Barry C.; David T. Canon; Kenneth R. Mayer; and Donald P. Moynihan. “Election Laws, Mobilization, and Turnout: The Unanticipated Consequences of Election Reform.” American Journal of Political Science, vol. 58, no. 1, (2014): 95-109. (Studies 12, 16, and 25–27) Fitzgerald, Mary. “Greater Convenience but Not Greater Turnout: The Impact of Alternative Voting Methods on Electoral Participation in the United States.” American Politics Research, vol. 33, no. 6, (2005): 842-867. (Studies 8 and 15) Fullmer, Elliott B. “Early Voting: Do More Sites Lead to Higher Turnout.” Election Law Journal, vol. 14, no. 2, (2015): 81-96. (Studies 10 and 14) Hanmer, Michael J. Discount Voting: Voter Registration Reforms and Their Effects. New York, New York: Cambridge University Press, 2012. (Studies 6, 24, 28 and 29) Keele, Luke and William Minozzi.
“How Much Is Minnesota Like Wisconsin? Assumptions and Counterfactuals in Causal Inference with Observational Data.” Political Analysis, vol. 21, no. 2, (2013): 193-216. (Studies 30 and 31) Larocca, Roger and John S. Klemanski. “U.S. State Election Reform and Turnout in Presidential Elections.” State Politics & Policy Quarterly, vol. 11, no. 1, (2011): 76-101. (Studies 21–23) Leighley, Jan E. and Jonathan Nagler. Who Votes Now? Demographics, Issues, Inequality, and Turnout in the United States. Princeton, New Jersey: Princeton University Press, 2013. (Study 3) McDonald, Michael P.; Enrijeta Shino; and Daniel A. Smith. “Convenience Voting and Turnout: Reassessing the Effects of Election Reforms.” Prepared for the New Research on Election Administration and Reform Conference at MIT, June 8, 2015. (Studies 17 and 18) Neiheisel, Jacob R. and Barry C. Burden. “The Impact of Election Day Registration on Voter Turnout and Election Outcomes.” American Politics Research, vol. 40, no. 4, (2012): 636-664. (Study 9) Pellissier, Allyson. “In Line or Online? American Voter Registration in the Digital Era.” Working paper, Caltech/MIT Voting Technology Project, February 18, 2014. (Study 33) Rocha, Rene R. and Tetsuya Matsubayashi. “The Politics of Race and Voter ID Laws in the States: The Return of Jim Crow?” Political Research Quarterly, vol. 67, no. 3, (2014): 666-679. (Study 32) Springer, Melanie J. “State Electoral Institutions and Voter Turnout In Presidential Elections, 1920-2000.” State Politics & Policy Quarterly, vol. 12, no. 3, (2012): 252-283. (Study 5) Street, Alex; Thomas A. Murray; John Blitzer; and Rajan S. Patel. “Estimating Voter Registration Deadline Effects with Web Search Data.” Political Analysis, vol. 23, no. 2 (2015): 225-241. (Study 11) Tolbert, Caroline; Todd Donovan; Bridgett King; and Shaun Bowler. “Election Day Registration, Competition, and Voter Turnout” in Democracy in the States: Experiments in Election Reform, ed. Bruce E. 
Cain, Todd Donovan, and Caroline J. Tolbert. Washington, D.C.: Brookings Institution Press, 2008. (Studies 7 and 13) Vonnahme, Greg. “Registration Deadlines and Turnout in Context.” Political Behavior, vol. 34, no. 4, (2012): 765-779. (Studies 1 and 19) Burden, Barry C. “Registration and Voting: A View from the Top” in The Measure of American Elections, ed. Barry C. Burden and Charles Stewart III. New York, New York: Cambridge University Press, 2014. (Studies 3 and 16) Gerber, Alan S.; Gregory A. Huber; and Seth J. Hill. “Identifying the Effect of All-Mail Elections on Turnout: Staggered Reform in the Evergreen State.” Political Science Research and Methods, vol. 1, no. 1, (2013): 91-116. (Studies 4, 8, 9 and 10) Gronke, Paul and Peter Miller. “Voting by Mail and Turnout in Oregon: Revisiting Southwell and Burchett.” American Politics Research, vol. 40, no. 6, (2012): 976-997. (Study 1) Gronke, Paul; Eva Galanes-Rosenbaum; and Peter A. Miller. “Early Voting and Voter Turnout” in Democracy in the States: Experiments in Election Reform, ed. Bruce E. Cain, Todd Donovan, and Caroline J. Tolbert. Washington, D.C.: Brookings Institution Press, 2008. (Studies 6 and 18) Kousser, Thad and Megan Mullin. “Does Voting by Mail Increase Participation? Using Matching to Analyze a Natural Experiment.” Political Analysis, vol. 15, no. 4, (2007): 428-445. (Studies 20 and 21) Larocca, Roger and John S. Klemanski. “U.S. State Election Reform and Turnout in Presidential Elections.” State Politics & Policy Quarterly, vol. 11, no. 1, (2011): 76-101. (Studies 12, 13, and 17) Menger, Andrew; Robert M. Stein; and Greg Vonnahme. “Turnout Effects from Vote by Mail Elections.” Prepared for the 2015 Annual Meetings of the American Political Science Association, September 3-6, 2015. (Study 7) Pellissier, Allyson. “In Line or Online? American Voter Registration in the Digital Era.” Working paper, Caltech/MIT Voting Technology Project, February 18, 2014. (Study 14) Richey, Sean.
“Voting by Mail: Turnout and Institutional Reform in Oregon.” Social Science Quarterly, vol. 89, no. 4, (2008): 902-915. (Studies 2 and 5) Rocha, Rene R. and Tetsuya Matsubayashi. “The Politics of Race and Voter ID Laws in the States: The Return of Jim Crow?” Political Research Quarterly, vol. 67, no. 3, (2014): 666-679. (Study 19) Southwell, Priscilla L. “Analysis of the Turnout Effects of Vote by Mail Elections, 1980-2007.” The Social Science Journal, vol. 46, no. 1 (2009): 211-217. (Study 15) Southwell, Priscilla L. “Voting Behavior in Vote-by-Mail Elections.” Analyses of Social Issues and Public Policy, vol. 10, no. 1, (2010): 106-115. (Study 11) Fitzgerald, Mary. “Greater Convenience but Not Greater Turnout: The Impact of Alternative Voting Methods on Electoral Participation in the United States.” American Politics Research, vol. 33, no. 6, (2005): 842-867. (Studies 3 and 4) Fullmer, Elliott B. “Early Voting: Do More Sites Lead to Higher Turnout.” Election Law Journal, vol. 14, no. 2, (2015): 81-96. (Studies 19 and 20) Giammo, Joseph D. and Brian J. Brox. “Reducing the Costs of Participation: Are States Getting a Return on Early Voting.” Political Research Quarterly, vol. 63, no. 2, (2010): 295-303. (Study 2) Gronke, Paul; Eva Galanes-Rosenbaum; and Peter A. Miller. “Early Voting and Voter Turnout” in Democracy in the States: Experiments in Election Reform, ed. Bruce E. Cain, Todd Donovan, and Caroline J. Tolbert. Washington, D.C.: Brookings Institution Press, 2008. (Study 6 and 7) Larocca, Roger and John S. Klemanski. “U.S. State Election Reform and Turnout in Presidential Elections.” State Politics & Policy Quarterly, vol. 11, no. 1, (2011): 76-101. (Studies 12–14) Leighley, Jan E. and Jonathan Nagler. Who Votes Now? Demographics, Issues, Inequality, and Turnout in the United States. Princeton, New Jersey: Princeton University Press, 2013. (Study 1) McDonald, Michael P.; Enrijeta Shino; and Daniel A. Smith.
“Convenience Voting and Turnout: Reassessing the Effects of Election Reforms.” Prepared for the New Research on Election Administration and Reform Conference at MIT, June 8, 2015. (Studies 5 and 11) Pellissier, Allyson. “In Line or Online? American Voter Registration in the Digital Era.” Working paper, Caltech/MIT Voting Technology Project, February 18, 2014 (Study 15) Richey, Sean. “Voting by Mail: Turnout and Institutional Reform in Oregon.” Social Science Quarterly, vol. 89, no. 4, (2008): 902-915. (Studies 17 and 18) Rocha, Rene R. and Tetsuya Matsubayashi. “The Politics of Race and Voter ID Laws in the States: The Return of Jim Crow?” Political Research Quarterly, vol. 67, no. 3, (2014): 666-679. (Study 10) Springer, Melanie J. “State Electoral Institutions and Voter Turnout In Presidential Elections, 1920-2000.” State Politics & Policy Quarterly, vol. 12, no. 3, (2012): 252-283. (Study 9) Tolbert, Caroline; Todd Donovan; Bridgett King; and Shaun Bowler. “Election Day Registration, Competition, and Voter Turnout” in Democracy in the States: Experiments in Election Reform, ed. Bruce E. Cain, Todd Donovan, and Caroline J. Tolbert. Washington, D.C.: Brookings Institution Press, 2008. (Studies 8 and 16) Springer, Melanie J. “State Electoral Institutions and Voter Turnout In Presidential Elections, 1920-2000.” State Politics & Policy Quarterly, vol. 12, no. 3, (2012): 252-283. (Study 2) Wolfinger, Raymond E; Benjamin Highton; and Megan Mullin. “How Postregistration Laws Affect the Turnout of Citizens Registered to Vote.” State Politics & Policy Quarterly, vol. 5, no. 1, (2005): 1-23. (Study 1) Fitzgerald, Mary. “Greater Convenience but Not Greater Turnout: The Impact of Alternative Voting Methods on Electoral Participation in the United States.” American Politics Research, vol. 33, no. 6, (2005): 842-867. (Studies 10 and 11) Giammo, Joseph D. and Brian J. Brox. 
“Reducing the Costs of Participation: Are States Getting a Return on Early Voting.” Political Research Quarterly, vol. 63, no. 2, (2010): 295-303. (Study 17) Gronke, Paul; Eva Galanes-Rosenbaum; and Peter A. Miller. “Early Voting and Voter Turnout” in Democracy in the States: Experiments in Election Reform, ed. Bruce E. Cain, Todd Donovan, and Caroline J. Tolbert. Washington, D.C.: Brookings Institution Press, 2008. (Studies 13 and 18) Larocca, Roger and John S. Klemanski. “U.S. State Election Reform and Turnout in Presidential Elections.” State Politics & Policy Quarterly, vol. 11, no. 1, (2011): 76-101. (Studies 4–6) Leighley, Jan E. and Jonathan Nagler. Who Votes Now? Demographics, Issues, Inequality, and Turnout in the United States. Princeton, New Jersey: Princeton University Press, 2013. (Study 1) Martinez, Michael D. and Daniel A. Smith. “Your Ballot’s in the Mail: The Effects of Unsolicited Absentee Ballots.” Prepared for the 2015 Annual Meetings of the American Political Science Association, September 3-6, 2015. (Study 3) McDonald, Michael P.; Enrijeta Shino; and Daniel A. Smith. “Convenience Voting and Turnout: Reassessing the Effects of Election Reforms.” Prepared for the New Research on Election Administration and Reform Conference at MIT, June 8, 2015. (Studies 8 and 12) Menger, Andrew; Robert M. Stein; and Greg Vonnahme. “Turnout Effects from Vote by Mail Elections.” Prepared for the 2015 Annual Meetings of the American Political Science Association, September 3-6, 2015. (Study 2) Pellissier, Allyson. “In Line or Online? American Voter Registration in the Digital Era.” Working paper, Caltech/MIT Voting Technology Project, February 18, 2014 (Study 7) Rocha, Rene R. and Tetsuya Matsubayashi. “The Politics of Race and Voter ID Laws in the States: The Return of Jim Crow?” Political Research Quarterly, vol. 67, no. 3, (2014): 666-679. (Study 9) Springer, Melanie J. 
“State Electoral Institutions and Voter Turnout In Presidential Elections, 1920-2000.” State Politics & Policy Quarterly, vol. 12, no. 3, (2012): 252-283. (Study 14) Tolbert, Caroline; Todd Donovan; Bridgett King; and Shaun Bowler. “Election Day Registration, Competition, and Voter Turnout” in Democracy in the States: Experiments in Election Reform, ed. Bruce E. Cain, Todd Donovan, and Caroline J. Tolbert. Washington, D.C.: Brookings Institution Press, 2008. (Studies 15 and 16) Juenke, Eric Gonzalez and Julie Marie Shepherd. “Vote Centers and Voter Turnout” in Democracy in the States: Experiments in Election Reform, ed. Bruce E. Cain, Todd Donovan, and Caroline J. Tolbert. Washington, D.C.: Brookings Institution Press, 2008. (Study 6) Stein, Robert M. and Greg Vonnahme. “Effect of Election Day Vote Centers on Voter Participation.” Election Law Journal, vol. 11, no. 3, (2012): 291-301. (Studies 3–5) Stein, Robert M. and Greg Vonnahme. “Engaging the Unengaged Voter: Vote Centers and Voter Turnout.” The Journal of Politics, vol. 70, no. 2, (2008): 487-497. (Study 1) Vonnahme, Greg; Lonna Atkeson; Lisa Bryant; Christopher Mann; and Robert Stein. “Election Day Vote Centers, Voter Participation, and the Spatial Distribution of Voting.” Prepared for the 12th Annual Meeting of the State Politics and Policy Conference, February 16-18, 2012. (Study 2) In addition to the contact named above, Tom Jessor (Assistant Director), David Alexander, Carl Barden, Chuck Bausell, Colleen Candrl, Katherine Davis, William Egar, Michele Fejfar, Alana Finley, Daniel Friess, Eric Hauswirth, Jeff Jensen, Jan Montgomery, Heidi Nielson, Anna Maria Ortiz, Amanda Parker, Kelsey Sagawa, Natalie Swabb, Janet Temko-Blinder, and Jeff Tessin made significant contributions to this report. | Since the enactment of the Help America Vote Act of 2002, there have been notable changes in how states and local election jurisdictions conduct key election activities, such as registration and voting.
States regulate some aspects of elections, but the combinations of election administration policies can vary widely across the country's approximately 10,500 local election jurisdictions. GAO was asked to examine the benefits, challenges, and other considerations of various election administration policies. This report addresses the following questions: (1) What are the reported benefits and challenges of efforts to collect and share voter registration information electronically? (2) What is known about the effect of selected policies on voter turnout? (3) What is known about the costs of elections? To address these three questions, GAO reviewed and analyzed relevant literature from 2002 through 2015. GAO identified 118 studies that examined the effect of selected policies that have been or could be implemented by state or local governments on voter turnout. GAO reviewed the studies' analyses, and determined that the studies were sufficiently sound to support their results and conclusions. In addition, GAO conducted visits and interviewed state and local election officials from five states that had implemented efforts and policies relevant to GAO's research questions to varying degrees, and provided geographic diversity. The results from these five states are not generalizable, but provide insight into state and local perspectives. According to GAO's literature review and election officials interviewed, the benefits of collecting and sharing voter registration information electronically include improved accuracy and cost savings; while challenges include upfront investments and ongoing maintenance, among other things. For example, establishing infrastructure for online registration requires time and money, but can generate savings and enhance accuracy by, for instance, reducing the need for local election officials to manually process paper registration forms. 
The upfront costs of online registration are generally modest and quickly surpassed by savings generated after implementation. GAO reviewed research to identify 11 election administration policies that had each been studied multiple times in connection with voter turnout and found varying effects. For example: The majority of studies on same day registration and all vote-by-mail found that these policies increased turnout. Vote centers (polling places where registrants can vote regardless of assigned precinct) and the sending of text messages to provide information about registration and elections have not been studied as much as some of the other policies, but almost all of the studies reviewed on these policies reported increases in turnout. Some studies of mailings to provide information and no-excuse absentee voting also found that these policies increased turnout, while other studies reported mixed evidence or no evidence of an effect. Most studies of e-mail and robocalls to provide information reported no evidence of an effect on turnout. Most studies of early in-person voting reported no evidence of an effect on turnout or found decreases in turnout, while the remaining studies reported mixed evidence. Distinguishing the unique effects of a policy from the effects of other factors that affect turnout can be challenging, and even sufficiently sound studies cannot account for all unobserved factors that potentially impact the results. Additionally, research findings on turnout are only one of many considerations for election officials as they decide whether or not to implement selected policies. States and local election jurisdictions incur a variety of costs associated with administering elections, and the types and magnitude of costs can vary by state and jurisdiction. 
Further, quantifying the total costs for all election activities is difficult for several reasons, including that multiple parties incur costs associated with elections and may track costs differently. Although some parties' costs can be easily identified in cost-tracking documents, other costs may be difficult to attribute to election activities. Additionally, voters' costs can also be difficult to quantify because each voter's costs vary based on factors such as method of voting, or time required to travel to polling places, among other things. The Election Assistance Commission did not have any comments on this report, and GAO incorporated technical comments provided by state and local election officials and DMV officials as appropriate. |
We last provided you an overview of federal information security in September 1996. At that time, serious security weaknesses had been identified at 10 of the largest 15 federal agencies, and we concluded that poor information security was a widespread federal problem. We recommended that the Office of Management and Budget (OMB) play a more active role in overseeing agency practices, in part through its role as chair of the then newly established Chief Information Officers (CIO) Council. Subsequently, in February 1997, as more audit evidence became available, we designated information security as a new governmentwide high-risk area in a series of reports to the Congress. During 1996 and 1997, federal information security also was addressed by the President’s Commission on Critical Infrastructure Protection, which had been established to investigate our nation’s vulnerability to both “cyber” and physical threats. In its October 1997 report, Critical Foundations: Protecting America’s Infrastructures, the Commission described the potentially devastating implications of poor information security from a national perspective. The report also recognized that the federal government must “lead by example,” and included recommendations for improving government systems security. This report eventually led to issuance of Presidential Decision Directive 63 in May 1998, which I will discuss in conjunction with other governmentwide security improvement efforts later in my testimony. As hearings by this Committee have emphasized, risks to the security of our government’s computer systems are significant, and they are growing. The dramatic increase in computer interconnectivity and the popularity of the Internet, while facilitating access to information, are factors that also make it easier for individuals and groups with malicious intentions to intrude into inadequately protected systems and use such access to obtain sensitive information, commit fraud, or disrupt operations. 
Further, the number of individuals with computer skills is increasing, and intrusion, or “hacking,” techniques are readily available. Attacks on and misuse of federal computer and telecommunication resources are of increasing concern because these resources are virtually indispensable for carrying out critical operations and protecting sensitive data and assets. For example, weaknesses at the Department of the Treasury place over a trillion dollars of annual federal receipts and payments at risk of fraud and large amounts of sensitive taxpayer data at risk of inappropriate disclosure; weaknesses at the Health Care Financing Administration place billions of dollars of claim payments at risk of fraud and sensitive medical information at risk of disclosure; and weaknesses at the Department of Defense affect operations such as mobilizing reservists, paying soldiers, and managing supplies. Moreover, Defense’s warfighting capability is dependent on computer-based telecommunications networks and information systems. These and other examples of risks to federal operations and assets are detailed in our report Information Security: Serious Weaknesses Place Critical Federal Operations and Assets at Risk (GAO/AIMD-98-92), which the Committee is releasing today. Although it is not possible to eliminate these risks, understanding them and implementing an appropriate level of effective controls can reduce the risks significantly. Conversely, an environment of widespread control weaknesses may invite attacks that would otherwise be discouraged. As the importance of computer security has increased, so have the rigor and frequency of federal audits in this area. During the last 2 years, we and the agency inspectors general (IG) have evaluated computer-based controls on a wide variety of financial and nonfinancial systems supporting critical federal programs and operations. Many of these audits are now done annually. 
This growing body of audit evidence is providing a more complete and detailed picture of federal information security than was previously available. The most recent set of audit results that we evaluated—those published since March 1996—describe significant information security weakness in each of the 24 federal agencies covered by our analysis. These weaknesses cover a variety of areas, which we have grouped into six categories of general control weaknesses. The most widely reported weakness was poor control over access to sensitive data and systems. This area of control was evaluated at 23 of the 24 agencies, and weaknesses were identified at each of the 23. Access control weaknesses make systems vulnerable to damage and misuse by allowing individuals and groups to inappropriately modify, destroy, or disclose sensitive data or computer programs for purposes such as personal gain or sabotage. Access controls limit or detect inappropriate access to computer resources (data, equipment, and facilities), thereby protecting them against unauthorized modification, loss, and disclosure. Access controls include physical protections, such as gates and guards, as well as logical controls, which are controls built into software that (1) require users to authenticate themselves through the use of secret passwords or other identifiers and (2) limit the files and other resources that an authenticated user can access and the actions that he or she can execute. In today’s increasingly interconnected computing environment, poor access controls can expose an agency’s information and operations to potentially devastating attacks from remote locations all over the world by individuals with minimal computer and telecommunications resources and expertise. 
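The two logical controls described above, authenticating users with secret passwords and limiting the files and actions available to each authenticated user, can be illustrated in a brief sketch. This is a minimal illustration only; the account names, resources, and permissions are hypothetical and not drawn from any agency system described in this report.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Store only a salted hash, never the password itself.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class AccessController:
    """Sketch of logical access controls: authentication plus authorization."""

    def __init__(self):
        self._credentials = {}   # user -> (salt, password hash)
        self._permissions = {}   # user -> set of (resource, action) pairs

    def register(self, user, password, permissions):
        salt = os.urandom(16)
        self._credentials[user] = (salt, hash_password(password, salt))
        self._permissions[user] = set(permissions)

    def authenticate(self, user, password) -> bool:
        if user not in self._credentials:
            return False
        salt, stored = self._credentials[user]
        # Constant-time comparison resists timing attacks.
        return hmac.compare_digest(stored, hash_password(password, salt))

    def authorize(self, user, resource, action) -> bool:
        # Grant only the files and actions explicitly assigned to the user.
        return (resource, action) in self._permissions.get(user, set())

ac = AccessController()
ac.register("clerk", "s3cret", [("payroll.db", "read")])
assert ac.authenticate("clerk", "s3cret")
assert not ac.authenticate("clerk", "wrong")
assert ac.authorize("clerk", "payroll.db", "read")
assert not ac.authorize("clerk", "payroll.db", "write")  # least privilege
```

The final assertion mirrors the least-privilege idea underlying the audit findings: a user's access is denied by default and granted only where explicitly authorized and documented.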
Common types of access control weaknesses included overly broad access privileges inappropriately provided to very large groups of users; access that was not appropriately authorized and documented; multiple users sharing the same accounts and passwords, making it impossible to trace specific transactions or modifications to an individual; inadequate monitoring of user activity to deter and identify inappropriate actions, investigate suspicious activity, and penalize perpetrators; improperly implemented access controls, resulting in unintended access or gaps in access control coverage; and access that was not promptly terminated or adjusted when users left an agency or when their responsibilities no longer required them to have access to certain files. The second most widely reported type of weakness pertained to service continuity. Service continuity controls ensure that when unexpected events occur, critical operations continue without undue interruption and critical and sensitive data are protected. In addition to protecting against natural disasters and accidental disruptions, such controls also protect against the growing threat of “cyber-terrorism,” where individuals or groups with malicious intent may attack an agency’s systems in order to severely disrupt critical operations. For this reason, an agency should have (1) procedures in place to protect information resources and minimize the risk of unplanned interruptions and (2) a plan to recover critical operations should interruptions occur. To determine whether recovery plans will work as intended, they should be tested periodically in disaster simulation exercises. Losing the capability to process, retrieve, and protect information maintained electronically can significantly affect an agency’s ability to accomplish its mission.
If controls are inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete financial or management information. Service continuity controls were evaluated for 20 of the agencies included in our analysis, and weaknesses were reported for all of these agencies. Common weaknesses included the following: Plans were incomplete because operations and supporting resources had not been fully analyzed to determine which were the most critical and would need to be resumed as soon as possible should a disruption occur. Disaster recovery plans were not fully tested to identify their weaknesses. One agency’s plan was based on an assumption that key personnel could be contacted within 10 minutes of the emergency, an assumption that had not been tested. The third most common type of weakness involved inadequate entitywide security program planning and management. Each organization needs a set of management procedures and an organizational framework for identifying and assessing risks, deciding what policies and controls are needed, periodically evaluating the effectiveness of these policies and controls, and acting to address any identified weaknesses. These are the fundamental activities that allow an organization to manage its information security risks cost effectively, rather than reacting to individual problems ad hoc only after a violation has been detected or an audit finding has been reported. Weaknesses were reported for all 17 of the agencies for which this area of control was evaluated. Many of these agencies had not developed security plans for major systems based on risk, had not formally documented security policies, and had not implemented a program for testing and evaluating the effectiveness of the controls they relied on. The fourth most commonly reported type of weakness was inadequate segregation of duties. 
Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby conduct unauthorized actions or gain unauthorized access to assets or records without detection. For example, one computer programmer should not be allowed to independently write, test, and approve program changes. Segregation of duties is an important internal control concept that applies to both computerized and manual processes. However, it is especially important in computerized environments, since an individual with overly broad access privileges can initiate and execute inappropriate actions, such as software changes or fraudulent transactions, more quickly and with greater impact than is generally possible in a nonautomated environment. Although segregation of duties alone will not ensure that only authorized activities occur, inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, that improper program changes could be implemented, and that computer resources could be damaged or destroyed. Controls to ensure appropriate segregation of duties consist mainly of documenting, communicating, and enforcing policies on group and individual responsibilities. Enforcement can be accomplished by a combination of physical and logical access controls and by effective supervisory review. Segregation of duties was evaluated at 17 of the 24 agencies. Weaknesses were identified at 16 of these agencies. Common problems involved computer programmers and operators who were authorized to perform a wide variety of duties, thus enabling them to independently modify, circumvent, and disable system security features. 
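The enforcement of segregation-of-duties policies described above can be supported by an automated check that flags any user holding a conflicting pair of duties, such as independently writing and approving program changes. The duty names and assignments below are illustrative assumptions, not actual agency role definitions.

```python
# Hedged sketch of a segregation-of-duties check: given each user's
# assigned duties, flag anyone holding a pair that policy says must
# be separated. Duty names here are hypothetical examples.
CONFLICTS = [
    ("write_code", "approve_change"),     # programmer vs. approver
    ("obligate_funds", "record_check"),   # initiate vs. complete payment
]

def sod_violations(assignments):
    """assignments: dict of user -> set of duties. Returns violations."""
    out = []
    for user, duties in assignments.items():
        for a, b in CONFLICTS:
            if a in duties and b in duties:
                out.append((user, a, b))
    return out

# Illustrative assignments: one user improperly holds both duties.
assignments = {
    "programmer1": {"write_code", "approve_change"},
    "clerk1": {"obligate_funds"},
}
```

A check like this supplements, but does not replace, the supervisory review and access controls the testimony identifies as the main enforcement mechanisms.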
For example, at one agency, all users of the financial management system could independently perform all of the steps needed to initiate and complete a payment—obligate funds, record vouchers for payment, and record checks for payment—making it relatively easy to make a fraudulent payment. The fifth most commonly reported type of weakness pertained to software development and change controls. Such controls prevent unauthorized software programs or modifications to programs from being implemented. Key aspects are ensuring that (1) software changes are properly authorized by the managers responsible for the agency program or operations that the application supports, (2) new and modified software programs are tested and approved prior to their implementation, and (3) approved software programs are maintained in carefully controlled libraries to protect them from unauthorized changes and ensure that different versions are not misidentified. Such controls can prevent both errors in software programming as well as malicious efforts to insert unauthorized computer program code. Without adequate controls, incompletely tested or unapproved software can result in erroneous data processing that depending on the application, could lead to losses or faulty outcomes. In addition, individuals could surreptitiously modify software programs to include processing steps or features that could later be exploited for personal gain or sabotage. Weaknesses in software program change controls were identified for 14 of the 18 agencies where such controls were evaluated. One of the most common types of weakness in this area was undisciplined testing procedures that did not ensure that implemented software operated as intended. In addition, procedures did not ensure that emergency changes were subsequently tested and formally approved for continued use and that implementation of locally-developed unauthorized software programs was prevented or detected. 
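One safeguard named above—maintaining approved software in carefully controlled libraries so unauthorized changes can be detected—can be illustrated by recording a checksum of each approved program version and comparing it before use. The file names and contents below are hypothetical; real change-control systems add authorization workflows and version tracking on top of this idea.

```python
# Sketch of a controlled-library check: record a fingerprint of each
# tested and approved program, and detect any unauthorized change by
# re-fingerprinting before deployment. Names/contents are illustrative.
import hashlib

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

# Checksums recorded when each program version was tested and approved.
approved_library = {"payroll_batch": fingerprint(b"v1.2 approved build")}

def is_authorized(name: str, content: bytes) -> bool:
    """True only if the program matches its approved, recorded version."""
    return approved_library.get(name) == fingerprint(content)
```

Any surreptitious modification changes the fingerprint and fails the check, addressing the risk of unapproved code reaching production undetected.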
The sixth area pertained to operating system software controls. System software controls limit and monitor access to the powerful programs and sensitive files associated with the computer system’s operation. Generally, one set of system software is used to support and control a variety of applications that may run on the same computer hardware. System software helps control and coordinate the input, processing, output, and data storage associated with all of the applications that run on the system. Some system software can change data and programs without leaving an audit trail or can be used to modify or delete audit trails. Examples of system software include the operating system, system utilities, program library systems, file maintenance software, security software, data communications systems, and database management systems. Controls over access to and modification of system software are essential in providing reasonable assurance that operating system-based security controls are not compromised and that the system will not be impaired. If controls in this area are inadequate, unauthorized individuals might use system software to circumvent security controls to read, modify, or delete critical or sensitive information and programs. Also, authorized users of the system may gain unauthorized privileges to conduct unauthorized actions or to circumvent edits and other controls built into application programs. Such weaknesses seriously diminish the reliability of information produced by all of the applications supported by the computer system and increase the risk of fraud, sabotage, and inappropriate disclosures. Further, system software programmers are often more technically proficient than other data processing personnel and, thus, have a greater ability to perform unauthorized actions if controls in this area are weak. 
A common type of system software control weakness reported was insufficiently restricted access that made it possible for knowledgeable individuals to disable or circumvent controls in a wide variety of ways. For example, at one facility, 88 individuals had the ability to implement programs not controlled by the security software, and 103 had the ability to access an unencrypted security file containing passwords for authorized users. Significant system software control weaknesses were reported at 9 of the 24 agencies. In the remaining 15 agencies, this area of control had not been fully evaluated. We are working with the IGs to ensure that it receives adequate coverage in future evaluations. I would now like to describe in greater detail weaknesses at the two agencies that you have chosen to feature today: the Department of Veterans Affairs and the Social Security Administration. The Department of Veterans Affairs (VA) relies on a vast array of computer systems and telecommunications networks to support its operations and store the sensitive information the department collects in carrying out its mission. In a report released today, we identify general computer control weaknesses that place critical VA operations, such as financial management, health care delivery, benefit payments, life insurance services, and home mortgage loan guarantees, at risk of misuse and disruption. In addition, sensitive information contained in VA’s systems, including financial transaction data and personal information on veteran medical records and benefit payments, is vulnerable to inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction—possibly occurring without detection. VA operates the largest health care delivery system in the United States and guarantees loans on about 20 percent of the homes in the country. In fiscal year 1997, VA spent over $17 billion on medical care and processed over 40 million benefit payments totaling over $20 billion. 
The department also provided insurance protection through more than 2.5 million policies that represented about $24 billion in coverage at the end of fiscal year 1997. In addition, the VA systems support the department’s centralized accounting and payroll functions. In fiscal year 1997, VA’s payroll was almost $11 billion, and the centralized accounting system generated over $7 billion in additional payments. In our report, we note significant problems related to the department’s control and oversight of access to its systems. VA did not adequately limit the access of authorized users or effectively manage user identifications (ID) and passwords. At one facility, the security software was implemented in a manner that provided all of the more than 13,000 users with the ability to access and change sensitive data files, read system audit information, and execute powerful system utilities. Such broad access authority increased the risk that users could circumvent the security software to alter payroll and other payment transactions. This weakness could also provide users the opportunity to access and disclose sensitive information on veteran medical records, such as diagnoses, procedures performed, inpatient admission and discharge data, or the purpose of outpatient visits, and home mortgage loans, including the purpose, loan balance, default status, foreclosure status, and amount delinquent. At two facilities, we found that system programmers had access to both system software and financial data. This type of access could allow the programmers to make unauthorized changes to benefit payment information without being detected. At four of the five facilities we visited, we identified user ID and password management control weaknesses that increased the risk of passwords being compromised to gain unauthorized access. 
For example, IDs for terminated or transferred employees were not being disabled, many passwords were common words that could be easily guessed, numerous staff were sharing passwords, and some user accounts did not have passwords. These types of weaknesses make the financial transaction data and personal information on veteran medical records and benefits stored on these systems vulnerable to misuse, improper disclosure, and destruction. We demonstrated these vulnerabilities by gaining unauthorized access to VA systems and obtaining information that could have been used to develop a strategy to alter or disclose sensitive patient information. We also found that the department had not adequately protected its systems from unauthorized access from remote locations or through the VA network. The risks created by these issues are serious because, in VA’s interconnected environment, the failure to control access to any system connected to the network also exposes other systems and applications on the network. While simulating an outside hacker, we gained unauthorized access to the VA network. Having obtained this access, we were able to identify other systems on the network, which makes it much easier for outsiders with no knowledge of VA’s operations or infrastructure to penetrate the department’s computer resources. We used this information to access the log-on screen of another computer that contained financial and payroll data, veteran loan information, and sensitive information on veteran medical records for both inpatient and outpatient treatment. Such access to the VA network, when coupled with VA’s ineffective user ID and password management controls and available “hacker” tools, creates a significant risk that outside hackers could gain unauthorized access to this information. 
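The password management weaknesses described above—accounts with no password, easily guessed common words, and the same password shared across accounts—are exactly the kinds of conditions a periodic automated audit can surface. The sketch below is a simplified illustration; the word list and accounts are invented, and a real audit would work against hashed credential stores rather than plaintext.

```python
# Illustrative password audit for the weaknesses auditors cited:
# missing passwords, easily guessed dictionary words, and passwords
# shared by multiple accounts. Data here is hypothetical; real audits
# operate on hashed credential stores, not plaintext.
COMMON_WORDS = {"password", "welcome", "letmein"}

def audit_passwords(accounts):
    """accounts: dict of user -> password (or None). Returns findings
    as (user, issue) tuples."""
    findings = []
    seen = {}
    for user, pw in accounts.items():
        if not pw:
            findings.append((user, "no password"))
            continue
        if pw.lower() in COMMON_WORDS:
            findings.append((user, "easily guessed"))
        seen.setdefault(pw, []).append(user)
    for pw, users in seen.items():
        if len(users) > 1:
            findings.extend((u, "shared password") for u in users)
    return findings

# Hypothetical account data illustrating each weakness.
accounts = {"a": None, "b": "password", "c": "Zq8!x", "d": "Zq8!x"}
```

Shared passwords are flagged per account because, as the testimony notes, sharing makes it impossible to trace actions to an individual.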
At two facilities, we were able to demonstrate that network controls did not prevent unauthorized users with access to VA facilities or authorized users with malicious intent from gaining improper access to VA systems. We were able to gain access to both mainframe and network systems that could have allowed us to improperly modify payments related to VA’s loan guaranty program and alter sensitive veteran compensation, pension, and life insurance benefit information. We were also in a position to read and modify sensitive data. The risks created by these access control problems were also heightened significantly because VA was not adequately monitoring its systems for unusual or suspicious access activities. In addition, the department was not providing adequate physical security for its computer facilities, assigning duties in such a way as to properly segregate functions, controlling changes to powerful operating system software, or updating and testing disaster recovery plans to ensure that the department could maintain or regain critical functions in emergencies. Many similar access and other general computer control weaknesses had been reported in previous years, indicating that VA’s past actions have not been effective on a departmentwide basis. Weaknesses associated with restricting access to sensitive data and programs and monitoring access activity have been consistently reported in IG and other internal reports. A primary reason for VA’s continuing general computer control problems is that the department does not have a comprehensive computer security planning and management program in place to ensure that effective controls are established and maintained and that computer security receives adequate attention. An effective program would include guidance and procedures for assessing risks and mitigating controls, and monitoring and evaluating the effectiveness of established controls. 
However, VA had not clearly delineated security roles and responsibilities; performed regular, periodic assessments of risk; implemented security policies and procedures that addressed all aspects of VA’s interconnected environment; established an ongoing monitoring program to identify and investigate unauthorized, unusual, or suspicious access activity; or instituted a process to measure, test, and report on the continued effectiveness of computer system, network, and process controls. In our report to VA, we recommended that the Secretary direct the CIO to (1) work with the other VA CIOs to address all identified computer control weaknesses, (2) develop and implement a comprehensive departmentwide computer security planning and management program, (3) review and assess computer control weaknesses identified throughout the department and establish a process to ensure that these weaknesses are addressed, and (4) monitor and periodically report on the status of improvements to computer security throughout the department. In commenting on our report, VA agreed with these recommendations and stated that the department would immediately correct the identified computer control weaknesses and implement oversight mechanisms to ensure that these problems do not reoccur. VA also stated that the department was developing plans to correct deficiencies previously identified by the IG and by internal evaluations and that the VA CIO will report periodically on VA’s progress in correcting computer control weaknesses throughout the department. We have discussed these actions with VA officials, and, as part of our upcoming review, we will be examining completed actions and evaluating their effectiveness. The Social Security Administration (SSA) relies on extensive information processing resources to carry out its operations, which, for 1997, included payments that totaled approximately $390 billion to 50 million beneficiaries. 
This was almost 25 percent of the $1.6 trillion in that year’s federal expenditures. SSA also issues social security numbers and maintains earnings records and other personal information on virtually all U.S. citizens. Through its programs, SSA processes approximately 225 million wage and tax statements (W-2 forms) annually for approximately 138 million workers. Few federal agencies affect so many people. The public depends on SSA to protect trust fund revenues and assets from fraud and to protect sensitive information on individuals from inappropriate disclosure. In addition, many current beneficiaries rely on the uninterrupted flow of monthly payments to meet their basic needs. In November 1997, the SSA IG reported serious weaknesses in controls over information resources, including access, continuity of service, and software program changes that unnecessarily place these assets and operations at risk. These weaknesses demonstrate the need for SSA to do more to assure that adequate controls are provided for information collected, processed, transmitted, stored, or disseminated in general support systems or major applications. Internal control testing identified information protection-related weaknesses throughout SSA’s information systems environment. Affected areas included SSA’s distributed computer systems as well as its mainframe computers. These vulnerabilities exposed SSA and its computer systems to external and internal intrusion; subjected sensitive SSA information related to social security numbers, earnings, disabilities, and benefits to potential unauthorized access, modification, and/or disclosure; and increased the risks of fraud, waste, and abuse. Access control and other weaknesses also increased the risks of introducing errors or irregularities into data processing operations. 
For example, auditors identified numerous employee user accounts on SSA networks, including dial-in modems, that were either not password protected or were protected by easily guessed passwords. These weaknesses increased the risk that unauthorized outsiders could access, modify, and delete data; create, modify, and delete users; and disrupt services on portions of SSA’s network. In addition, auditors identified network control weaknesses that could result in accidental or intentional alteration of birth and death records, as well as unauthorized disclosure of personal data and social security numbers. These weaknesses were made worse because security awareness among employees was not consistent at SSA. As a result, SSA was susceptible to security penetration techniques, such as social engineering, whereby users disclose sensitive information in response to seemingly legitimate requests from strangers either over the phone or in person. The auditors reported that during testing, they were able to secure enough information through social engineering to allow access to SSA’s network. Further, by applying intrusion techniques in penetration tests, auditors gained access to various SSA systems that would have allowed them to view user data, add and delete users, modify network configurations, and disrupt service to users. By gaining access through such tests, auditors also were able to execute software tools that resulted in their gaining access to SSA electronic mailboxes, public mailing lists, and bulletin boards. This access would have provided an intruder the ability to read, send, or change e-mail exchanged among SSA users, including messages from or to the Commissioner. In addition to access control weaknesses and inadequate user awareness, employee duties at SSA were not appropriately segregated to reduce the risk that an individual employee could introduce and execute unauthorized transactions without detection. 
As a result, certain employees had the ability to independently carry out actions such as initiating and adjudicating claims or moving and reinstating earnings data. This weakness was exacerbated because certain mitigating monitoring or detective controls could not be relied on. For example, SSA has developed a system that allows supervisors to review sensitive or potentially fraudulent activity. However, key transactions or combinations of transactions are not being reviewed or followed up promptly and certain audit trail features have not been activated. Weaknesses such as those I have just described increase the risk that a knowledgeable individual or group could fraudulently obtain payments by creating fictitious beneficiaries or increasing payment amounts. Similarly, such individuals could secretly obtain sensitive information and sell or otherwise use it for personal gain. The recent growth in “identity theft,” where personal information is stolen and used fraudulently by impersonators for purposes such as obtaining and using credit cards, has created a market for such information. According to the SSA IG’s September 30, 1997, report to the Congress (included in the SSA’s fiscal year 1997 Accountability Report), 29 criminal convictions involving SSA employees were obtained during fiscal year 1997, most of which involved creating fictitious identities, fraudulently selling SSA cards, misappropriating refunds, or abusing access to confidential information. The risk of abuse by SSA employees is of special concern because, except for a very few individuals, SSA does not restrict access to view sensitive data based on a need-to-know basis. As a result, a large number of SSA employees can browse enumeration, earnings, and claims records for many other individuals, including other SSA employees, without detection. SSA provides this broad access because it believes that doing so facilitates its employees’ ability to carry out SSA’s mission. 
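The detective control described above—supervisory review of sensitive transactions or combinations of transactions—depends on actually scanning the audit trail for those combinations. The sketch below illustrates the idea with a hypothetical trail in which one employee both initiates and adjudicates the same claim; the transaction names and records are invented for illustration.

```python
# Hedged sketch of a detective control: scan an audit trail for a
# single employee performing a sensitive combination of transactions
# (e.g., both initiating and adjudicating the same claim).
# Transaction names and records below are illustrative only.
def flag_combinations(audit_trail, pair=("initiate_claim", "adjudicate_claim")):
    """audit_trail: list of (employee, action, claim_id) tuples.
    Returns sorted (employee, claim_id) pairs where one person
    performed both actions in `pair` on the same claim."""
    done = {}
    for emp, action, claim in audit_trail:
        done.setdefault((emp, claim), set()).add(action)
    return sorted(key for key, actions in done.items()
                  if set(pair) <= actions)

# Hypothetical trail: e1 improperly performs both steps on claim C1.
trail = [
    ("e1", "initiate_claim", "C1"),
    ("e1", "adjudicate_claim", "C1"),
    ("e2", "initiate_claim", "C2"),
    ("e3", "adjudicate_claim", "C2"),
]
```

Such a scan is only useful if, unlike the situation the IG found, the relevant audit trail features are activated and the flagged items are reviewed promptly.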
An underlying factor that contributes to SSA’s information security weaknesses is inadequate entitywide security program planning and management. Although SSA has an entitywide security program in place, it does not sufficiently address all areas of security, including dial-in access, telecommunications, certain major mainframe system applications, and distributed systems outside the mainframe environment. A lack of such an entitywide program impairs each group’s ability to develop a security structure for its responsible area and makes it difficult for SSA management to monitor agency performance in this area. In two separate letters to SSA management, the IG and its contractor made recommendations to address the weaknesses reported in November 1997. SSA has agreed with the majority of the recommendations and is developing related corrective action plans. Substantively improving federal information security will require efforts at both the individual agency level and at the governmentwide level. Agency managers are primarily responsible for securing the information resources that support their critical operations. However, central oversight also is important to monitor agency performance and address crosscutting issues that affect multiple agencies. Over the last 2 years, a number of efforts have been initiated, but additional actions are still needed. First, it is important that agency managers implement comprehensive programs for identifying and managing their security risks in addition to correcting specific reported weaknesses. Over the last 2 years, our reports and IG reports have included scores of recommendations to individual agencies, and agencies have either implemented or planned actions to address most of the specific weaknesses. However, there has been a tendency to react to individual audit findings as they were reported, with little ongoing attention to the systemic causes of control weaknesses. 
In short, agencies need to move beyond addressing individual audit findings and supplement these efforts with a framework for proactively managing the information security risks associated with their operations. Such a framework includes determining which risks are significant, assigning responsibility for taking steps to reduce risks, and ensuring that these steps are implemented effectively and remain effective over time. Without a management framework for carrying out these activities, information security risks to critical operations may be poorly understood; responsibilities may be unclear and improperly implemented; and policies and controls may be inadequate, ineffective, or inconsistently applied. In late 1996, at the Committee’s request, we undertook an effort to identify potential solutions to this problem, including examples that could supplement existing guidance to agencies. To do this, we studied the security management practices of eight nonfederal organizations known for their superior security programs. These organizations included two financial services corporations, a regional electric utility, a state university, a retailer, a state agency, a computer vendor, and an equipment manufacturer. We found that these organizations managed their information security risks through a cycle of risk management activities, and we identified 16 specific practices that supported these risk management principles. These practices are outlined in an executive guide titled Information Security Management: Learning From Leading Organizations (GAO/AIMD-98-68), which was released by the Committee in May 1998 and endorsed by the CIO Council. Upon publication, the guide was distributed to all major agency heads, CIOs, and IGs. The guide describes a framework for managing information security risks through an ongoing cycle of activities coordinated by a central focal point. 
Such a framework can help ensure that existing controls are effective and that new, more advanced control techniques are prudently and effectively selected and implemented as they become available. The risk management cycle and the 16 practices supporting this cycle of activity are depicted in the following figures. In addition to effective security program planning and management at individual agencies, governmentwide leadership, coordination, and oversight are important to ensure that federal executives understand the risks to their operations, monitor agency performance in mitigating these risks, ensure implementation of needed improvements, and facilitate actions to resolve issues affecting multiple agencies. To help achieve this, the Paperwork Reduction Act of 1980 made OMB responsible for developing information security policies and overseeing related agency practices. In 1996, we reported that OMB’s oversight consisted largely of reviewing selected agency system-related projects and participating in various federal task forces and working groups. While these activities are important, we recommended that OMB play a more active role in overseeing agency performance in the area of information security. Since then, OMB’s efforts have been supplemented by those of the CIO Council. In late 1997, the Council, under OMB’s leadership, designated information security as one of six priority areas and established a Security Committee, an action that we had recommended in 1996. The Security Committee, in turn, has established relationships with other federal entities involved in security and developed a very preliminary plan. While the plan does not yet comprehensively address the various issues affecting federal information security or provide a long-range strategy for improvement, it does cover important areas by specifying three general objectives: promote awareness and training, identify best practices, and address technology and resource issues. 
During the first half of 1998, the committee has sponsored a security awareness seminar for federal agency officials and developed plans for improving agency access to incident response services. More recently, in May 1998, Presidential Decision Directive (PDD) 63 was issued in response to recommendations made by the President’s Commission on Critical Infrastructure Protection in October 1997. PDD 63 established entities within the National Security Council, the Department of Commerce, and the Federal Bureau of Investigation to address critical infrastructure protection, including federal agency information infrastructures. Specifically, the directive states that “the Federal Government shall serve as a model to the private sector on how infrastructure assurance is best achieved” and that federal department and agency CIOs shall be responsible for information assurance. The directive requires each department and agency to develop a plan within 180 days from the issuance of the directive in May 1998 for protecting its own critical infrastructure, including its cyber-based systems. These plans are then to be subject to an expert review process. Other key provisions related to the security of federal information systems include a review of existing federal, state, and local bodies charged with enhanced collection and analysis of information on the foreign information warfare threat to our critical infrastructures; establishment of a National Infrastructure Protection Center within the Federal Bureau of Investigation to facilitate and coordinate the federal government’s investigation and response to attacks on its critical infrastructures; assessments of U.S. government systems’ susceptibility to interception and exploitation; and incorporation of agency infrastructure assurance functions in agency strategic planning and performance measurement frameworks. We plan to follow up on these activities as more specific information becomes available. 
The CIO Council’s efforts and the issuance of PDD 63 indicate that senior federal officials are increasingly concerned about information security risks and are acting on these concerns. Improvements are needed both at the individual agency level and in central oversight, and coordinated actions throughout the federal community will be needed to substantively improve federal information security. What needs to emerge is a coordinated and comprehensive strategy that incorporates the worthwhile efforts already underway and takes advantage of the expanded amount of evidence that has become available in recent years. The objectives of such a strategy should be to encourage agency improvement efforts and measure their effectiveness through an appropriate level of oversight. This will require a more structured approach for (1) ensuring that risks are fully understood, (2) promoting use of the most cost-effective control techniques, (3) testing and evaluating the effectiveness of agency programs, and (4) acting to address identified deficiencies. This approach needs to be applied at individual departments and agencies and in a coordinated fashion across government. In our report on governmentwide information security that is being released today, we recommended that the Director of OMB and the Assistant to the President for National Security Affairs develop such a strategy. 
As part of our recommendation, we stated that such a strategy should ensure that executive agencies are carrying out the responsibilities outlined in laws and regulations requiring them to protect the security of their information resources; clearly delineate the roles of the various federal organizations with responsibilities related to information security; identify and rank the most significant information security issues facing federal agencies; promote information security risk awareness among senior agency officials whose critical operations rely on automated systems; identify and promote proven security tools, techniques, and management best practices; ensure the adequacy of information technology workforce skills; ensure that the security of both financial and nonfinancial systems is adequately evaluated on a regular basis; include long-term goals and objectives, including time frames, priorities, and annual performance goals; and provide for periodically evaluating agency performance from a governmentwide perspective and acting to address shortfalls. In commenting on a draft of our report, the OMB’s Acting Deputy Director for Management said that a plan is currently being developed by OMB and the CIO Council, working with the National Security Council. The comments stated that the plan is to develop and promote a process by which government agencies can (1) identify and assess their existing security posture, (2) implement security best practices, and (3) set in motion a process of continued maintenance. The comments also describe plans for a CIO Council-sponsored interagency assist team that will review agency security programs. As of September 17, a plan had not yet been finalized and, therefore, was not available for our review, according to an OMB official involved in the plan’s development. We intend to review the plan as soon as it is available. 
Although information security, like other types of safeguards and controls, is an ongoing concern, it is especially important, now and in the coming 18 months, as we approach and deal with the computer problems associated with the Year 2000 computing crisis. The Year 2000 crisis presents a number of security problems with which agencies must be prepared to contend. For example, it is essential that agencies improve the effectiveness of controls over their software development and change process as they implement the modifications needed to make their systems Year 2000 compliant. Many agencies have significant weaknesses in this area, and most are under severe time constraints to make needed software changes. As a result, there is a danger that already weak controls will be further diminished if agencies bypass or truncate them in an effort to speed the software modification process. This increases the risk that erroneous or malicious code will be implemented or that systems that do not adequately support agency needs will be rushed into use. Also, agencies should strive to improve their abilities to detect and respond to anomalies in system operations that may indicate unauthorized intrusions, sabotage, misuse, or damage that could affect critical operations and assets. As illustrated by VA and SSA, many agencies are not taking full advantage of the system and network monitoring tools that they already have and many have not developed reliable procedures for responding to problems once they are identified. Without such incident detection and response capabilities, agencies may not be able to readily distinguish between malicious attacks and system-induced problems, such as those stemming from Year 2000 noncompliance, and respond appropriately. 
The Year 2000 crisis is the most dramatic example yet of why we need to protect critical computer systems because it illustrates the government's widespread dependence on these systems and the vulnerability to their disruption. However, the threat of disruption will not end with the advent of the new millennium. There is a longer-term danger of attack from malicious individuals or groups, and it is important that our government design long-term solutions to this and other security risks. Mr. Chairman, this concludes our statement. We would be happy to respond to any questions you or other members of the Committee may have. | GAO discussed the state of information security in the federal government, focusing on the Department of Veterans Affairs' (VA) and the Social Security Administration's (SSA) efforts to develop and maintain an effective security management program. 
GAO noted that: (1) as the importance of computer security has increased, so have the rigor and frequency of federal audits in this area; (2) during the last 2 years, GAO and the agency inspectors general (IG) have evaluated computer-based controls on a wide variety of financial and nonfinancial systems supporting critical federal programs and operations; (3) the most recent set of audit results described significant information security weaknesses in each of the 24 federal agencies covered by GAO's analysis; (4) these weaknesses cover a variety of areas, which GAO has grouped into six categories of general control weaknesses; (5) in GAO's report, it noted significant problems related to VA's control and oversight of access to its systems; (6) VA did not adequately limit the access of authorized users or effectively manage user identifications and passwords; (7) GAO also found that the department had not adequately protected its systems from unauthorized access from remote locations or through the VA network; (8) a primary reason for VA's continuing general computer control problems is that the department does not have a comprehensive computer security planning and management program in place to ensure that effective controls are established and maintained and that computer security receives adequate attention; (9) the public depends on SSA to protect trust fund revenues and assets from fraud and to protect sensitive information on individuals from inappropriate disclosure; (10) in addition, many current beneficiaries rely on the uninterrupted flow of monthly payments to meet their basic needs; in November 1997, the SSA IG reported serious weaknesses in controls over information resources, including access, continuity of service, and software program changes that unnecessarily place these assets and operations at risk; (11) internal control testing identified information protection-related weaknesses throughout SSA's information systems environment; (12) an 
underlying factor that contributes to SSA's information security weaknesses is inadequate entitywide security program planning and management; (13) substantively improving federal information security will require efforts at both the individual agency level and at the governmentwide level; and (14) over the last 2 years, a number of efforts have been initiated, but additional actions are still needed. |
Long-term fiscal simulations by GAO, Congressional Budget Office (CBO), and others all show that despite a 3-year decline in the federal government’s unified budget deficit, we still face large and growing structural deficits driven primarily by rising health care costs and known demographic trends. In fact, our long-range challenge has grown in the past three years and the projected tsunami of entitlement spending is closer to hitting our shores. The long-term fiscal challenge is largely a health care challenge. Although Social Security is important because of its size, the real driver is health care spending. It is both large and projected to grow more rapidly in the future. GAO’s current long-term simulations show ever-larger deficits resulting in a federal debt burden that ultimately spirals out of control. Figure 1 shows two alternative fiscal paths. The first is “Baseline extended,” which extends the CBO’s August baseline estimates beyond the 10-year projection period, and the second is an alternative based on recent trends and policy preferences. Our “Alternative simulation” assumes action to return to and remain at historical levels of revenue and reflects somewhat higher discretionary spending than in Baseline extended and more realistic Medicare estimates for physician payments than does the Baseline extended scenario. Although the timing of deficits and the resulting debt build up varies depending on the assumptions used, both simulations show that we are on an imprudent and unsustainable fiscal path. The bottom line is that the nation’s longer-term fiscal outlook is daunting under any realistic policy scenario or set of assumptions. Continuing on this unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. 
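The debt dynamics these simulations describe can be sketched numerically. The following is a purely illustrative toy model with invented parameters (interest rate, growth rate, primary deficit share), not GAO's or CBO's actual simulation, showing how a debt-to-GDP ratio compounds when deficits persist and the interest rate on the debt exceeds economic growth:

```python
# Illustrative only: toy debt-to-GDP dynamics with invented parameters.
# Each year the ratio evolves as d' = d * (1 + r) / (1 + g) + primary_deficit,
# i.e., interest compounds on the debt stock while GDP growth shrinks the ratio.
def simulate_debt_to_gdp(d0, r, g, primary_deficit, years):
    """Return the debt-to-GDP ratio after each year.

    d0              -- starting debt as a share of GDP (e.g., 0.4 for 40 percent)
    r               -- assumed average nominal interest rate on the debt
    g               -- assumed nominal GDP growth rate
    primary_deficit -- assumed non-interest deficit as a share of GDP each year
    """
    path = []
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + primary_deficit
        path.append(d)
    return path

# With r above g and a persistent primary deficit, the ratio rises every year.
path = simulate_debt_to_gdp(d0=0.4, r=0.06, g=0.04, primary_deficit=0.03, years=40)
print(f"debt/GDP after 40 years: {path[-1]:.2f}")
```

In this toy model, whenever the interest rate exceeds the growth rate and primary deficits persist, the ratio rises without bound, which is the "spiral" the simulations describe; only a sufficiently large primary surplus stabilizes or reduces it.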
Our current path also increasingly will constrain our ability to address emerging and unexpected budgetary needs and will increase the burdens faced by future generations. Although Social Security, Medicare, and Medicaid dominate the long-term outlook, they are not the only federal programs or activities that bind the future. The federal government undertakes a wide range of responsibilities, programs, and activities that may either obligate the government to future spending or create an expectation for such spending. In fact, last year the U.S. government's major reported liabilities, social insurance commitments, and other fiscal exposures continued to grow. They now total approximately $50 trillion—about four times the nation's total output (GDP) in fiscal year 2006—up from about $20 trillion, or two times GDP, in fiscal year 2000. (See fig. 2.) Absent meaningful reforms, these amounts will continue to grow every second of every minute of every day due to continuing deficits, known demographic trends, and compounding interest costs. (See GAO, Fiscal Exposures: Improving the Budgetary Focus on Long-Term Costs and Uncertainties, GAO-03-213 (Washington, D.C.: Jan. 24, 2003).) In addition to the proposal that both of you are offering, I'm pleased to say that several other members on both sides of the political aisle and on both ends of Capitol Hill are also taking steps to answer the call for fiscal prudence by proposing bills to accomplish similar objectives. I was pleased to join you when you announced this proposal. As I said at the time, I believe it offers one potential means to achieve an objective we all should share: taking steps to make the tough choices necessary to keep America great and to help make sure that our country's, children's and grandchildren's future is better than our past. Senators Conrad and Gregg, thank you for your leadership. 
I was especially pleased to see that the task force that would be created by your legislation was informed by GAO’s work on the key elements necessary for any task force or commission to be successful. Last year we looked at several policy-oriented commissions. (See app. I for a summary table on that work.) Our analysis suggests that there are a number of factors that can increase the likelihood a commission will be successful. Examples of those factors—and elements your proposal encompasses— are a broad charter—don’t artificially limit what can be discussed and don’t set policy preconditions (like “must support individual accounts”) for membership, involvement of leaders from both the executive and legislative branches—including elected officials, a report with specific proposals and a requirement for supermajority vote to make recommendations to the President and the Congress, and a process to require consideration of the proposals. A few of these points deserve elaboration. Having a broad charter and no preconditions is very important. This means that “everything is on the table”—and that is critical in order for the effort to be credible and have any real chance of success. But let me be clear what we mean by “everything is on the table”—it means that everything is open for discussion and debate. It does not mean advance agreement to a specific level of revenues or benefit changes. The only precondition should be the end goal: to put the nation’s fiscal outlook back on a prudent and sustainable path for the future. I believe that having true bipartisanship and active involvement by both the executive and the legislative branches is important. If any proposal is seen as partisan or the product of only one branch, it is unlikely to fly with the American people. 
Candidly, based on my interactions with thousands of Americans from across the nation during the past two years, there is little confidence in the ability of elected officials to rise above partisan battles and ideological divides. As a result, I believe that any related commission or task force should also involve knowledgeable professionals from selected nonpartisan institutions who have significant expertise and experience. Finally, the task force or commission will need to move beyond diagnosis to prescription. We know the path must be changed. What we need now are credible and specific legislative proposals that will accomplish that. Furthermore, these should come from a supermajority of the task force or commission members with a mechanism to assure a vote on a majority basis by the Congress. At your request, we are looking at how other countries have reformed their entitlement programs—not the substance of their reforms but rather the process that led up to the reform. As countries have sought to reform entitlements such as pensions and disability, they have often used commissions as a means to develop reform proposals that became the basis for legislation. For example, the 2003 Rurup Commission in Germany, composed of experts, public officials, and others, made recommendations for reform of public pensions that were enacted in 2004 and 2007. In the Netherlands, the 2000 Donner Commission composed of respected public figures representing the major political parties developed recommendations that became the basis for major disability reform legislation enacted in 2005. In the early 1990s, a working group of parliamentary members in Sweden developed the concept of a major structural reform of their public pension system that was worked out in detail in succeeding years and enacted in 1998. 
In addition to these types of commissions, several countries also have permanent advisory bodies tasked with periodically informing the government on pension policy challenges and reform options. Our related work is not yet complete, but some of what we have found to date would not surprise you. These special groups—whether commissions or task forces—can and do fill multiple roles including public education, coalition building, “setting the table” for action, and providing a means for and cover to act. Leadership is key and public education is also important. You asked that we comment on some particulars—and on areas where we think further refinements would increase the chances of success. Let me now turn to three areas: timing and how to ensure involvement of the newly-elected President, congressional action: whether—and if so how—to permit amendments to or substitutes for the commission’s proposals, and the supermajority vote requirement, and the chairmanship of the commission. A great strength of your proposal is that it calls for the task force or commission to deliberate throughout 2008. As you know, members of the Fiscal Wake-Up Tour believe that fiscal responsibility and intergenerational equity must be a top priority for the new President. We all agree that finding solutions will require leadership, bipartisan cooperation, a willingness to discuss all options and courage to make tough choices. For example, those who argue that spending must come down from projected levels should explain which programs they would target and how the savings would be achieved. Those who argue for higher taxes should explain what level of taxation they are willing to support, the manner in which the new revenue would be raised and the mechanisms that will help to ensure that any additional revenues will be used in a manner that will help rather than hinder our effort to be fiscally responsible. 
Those who are unwilling to do either should explain how much debt they are willing to impose on future generations of Americans. Indeed, we have suggested a number of key questions we believe it is reasonable to ask the candidates. These include the following: What specific spending cuts, if any, do you propose and how much of the problem would they solve? What specific tax increases, if any, do you propose and how much of the problem would they solve? What is your vision for the future of Social Security and what strategies would you pursue to bring it about? What is your vision for the nation’s health care system, including the future of Medicare, and what strategies would you pursue to bring it about? These questions and others should be addressed by all the (presidential) candidates so the public can assess whether he or she appreciates the magnitude of the problem, the consequences of doing nothing (or making the problem worse), and the realistic trade-offs needed to find real and sustainable solutions. Although I believe the candidates should recognize the seriousness of this challenge, I also believe it is unrealistic to expect candidates to offer coherent, fully comprehensive proposals at this point in the campaign. In that sense the task force or a similar commission performs a great service: candidates could promise to take seriously any information or proposals and to engage in a constructive manner with the group after the election. They could agree that for the task force or commission to have a chance of succeeding “everything must be on the table” at least for discussion. That said, it is important to find a way to involve whoever is elected as our new President. After all, it will be the person elected approximately 53 weeks from now who must use the “bully pulpit” and put their energy and prestige behind the effort to help ensure success. 
Although I think having a deadline is important, I believe that a December 9, 2008, deadline for the commission’s report does not offer enough time for the kind of input and involvement that will be necessary. Some way must be found to gain the active involvement and buy-in of the incoming President. In any event, it seems likely that the December 2008 deadline would need to be replaced—perhaps with a January or February 2009 date. You also asked us to think about the current requirement for a “fast track” up-or-down vote in the House and Senate and the requirement for a supermajority in both houses. As former Congressman and former Office of Management and Budget (OMB) Director Leon Panetta has said, in any effort to change our fiscal path “nothing will be agreed to until everything is agreed to.” This statement also offers a warning about the dangers of picking apart any package. Whatever process is developed for considering the task force’s recommendations should protect the proposal from being picked apart amendment by amendment. The task force is charged with developing— and agreeing to—a coherent proposal which, taken as a whole, will put us on a prudent and sustainable long-term fiscal path. Presumably, to reach agreement, the members will have made compromises—any proposal is going to have elements that represent concessions by the various members. In all likelihood those concessions will have been made in return for concessions by others. If individual elements can be eliminated by amendment, the likelihood that the package will achieve its goal will be reduced. The very process of coming up with a coherent proposal means that the package is likely to stand or fall as a whole. In that sense the prohibition on amendments makes some sense. At the same time, I believe it would make sense to permit alternatives. 
I say alternatives not amendments because I believe it is important that any alternatives achieve the same change in fiscal path as the task force’s proposal. The SAFE bill proposed by Senator Voinovich and by Representatives Cooper and Wolf does permit alternatives—but it holds them to the same standards and criteria as the proposal from the commission. Permitting alternative packages to be offered and voted upon may increase the credibility and acceptance of the end result. The Task Force bill requires both a supermajority to report out a proposal and a supermajority in both houses to adopt the proposal. The supermajority requirement within the task force (or commission) offers assurance that any proposal has bipartisan support. It offers stronger backing for a proposal that must reflect difficult choices. If a proposal comes to the Congress with a two-thirds or three-fourths vote of the task force, the necessity for a supermajority vote to enact the proposal in the Congress is less clear. It is even possible that this requirement could offer the opportunity for a minority to derail the process. Any package that makes meaningful changes to our fiscal path is going to contain elements that generate significant opposition. Therefore, although I think requiring a supermajority within the task force makes sense, requiring a supermajority vote for enactment of the task force or commission’s proposal by the Congress is inappropriate. In my view, such a requirement puts too many hurdles in the way of making tough choices and achieving necessary reforms. Finally, Chairman Conrad, Senator Gregg, let me raise a question about the role envisioned for the outgoing Administration. I believe you are correct to include executive branch officials. In this regard, I have the utmost respect for the current Secretary of the Treasury. 
I have met with him on several occasions and am well aware that he has made several statements about the need for action on our long-term fiscal challenge. At the same time, I believe that designating a cabinet official in an outgoing administration as the task force chairman presents some serious challenges and potential drawbacks. The strength, and also the weakness, of having the Secretary of the Treasury participate is that he will be seen as representing the outgoing President. While participation by the executive branch at the highest level will be important, having an outgoing Administration official serve as chairman may serve to hinder rather than help achieve acceptance and enactment of any findings and recommendations. Given the fiscal history of the first 7 years of this century and the experience with the Commission to Strengthen Social Security, I would question whether having the Treasury Secretary or any other current Administration official serve as chairman is the right way to go. Before concluding, I would like to say a few words about what I hope is a renewed push to find a vehicle for addressing this very important challenge. Senator Voinovich has proposed the SAFE Commission. Its membership differs from that of your Task Force proposal, but it seeks the same goal—improving our fiscal path. As I noted, Congressmen Cooper and Wolf have joined to introduce companion bills in the House to both the SAFE Commission and the Conrad-Gregg Bipartisan Task Force. As a result, both the Senate and the House have before them bills that seek to create vehicles for executive-legislative bipartisan development of credible, specific, legislative proposals to put us back on a prudent and sustainable fiscal path in order to ensure that our future is better than our past. We owe it to our country, children, and grandchildren to do no less. These are encouraging signs. I hope there is movement in this Congress. 
At the same time I think we must recognize that achieving and maintaining fiscal sustainability is not a one-time event. Even if a task force or commission is created and succeeds in developing a proposal and that proposal is enacted, it will be necessary to monitor our path. In that context I note that the proposal by Senators Feinstein and Domenici for a permanent commission would require periodic review and reporting of recommendations every 5 years to maintain the adequacy and long-term solvency of Social Security and Medicare. In our work looking at other countries we note that reform is an ongoing process and that no matter how comprehensive initial reforms, some adjustments are likely to be necessary. Something like the ongoing commission suggested by Senators Feinstein and Domenici may be a good companion and follow-on to the Task Force/Commissions envisioned by either the Bipartisan Task Force or the SAFE Commission bills. We will need to be flexible in our response to early challenges and success as we move forward. Changing our fiscal path to a prudent and sustainable one is hard work and achieving reform requires a process with both integrity and credibility. In our work on other countries’ entitlement reform efforts, we see that reforms are sometimes the culmination of earlier efforts that may have seemed “unsuccessful” at the time. For example, a 1984 Swedish commission on pension reform did not reach consensus on a proposal but its work helped set the stage for a process that resulted in a major reform. Similarly, the recent reforms of public pensions in Germany and disability in the Netherlands built upon a long series of incremental reform changes. Each reform effort can move the process forward and each country must find its own way. Today we can build on previous efforts in the United States. In this country we have been discussing Social Security reforms and developing reform options since the mid-1990s. 
We have had two major commissions on entitlement reform in the last decade—a Presidential commission on Social Security in 2001 and a Congressional commission on Medicare in 1998. There have also been discussion, studies and commissions on tax reform. As we said in our report on the December 2004 Comptroller General forum on our nation’s long-term fiscal challenge, leadership and the efforts of many people will be needed to change our fiscal path. The issues raised by the long-term fiscal challenge are issues of significance that affect every American. By making its proposal, this Committee has shown the kind of leadership that is essential for us to successfully address the long-term fiscal challenge that lies before us. The United States is a great nation, possibly the greatest in history. We have faced many challenges in the past and we have met them. It is a mistake to underestimate the commitment of the American people to their country, children, and grandchildren; to underestimate their willingness and ability to hear the truth and support the decisions necessary to deal with this challenge. We owe it to our country, children and grandchildren to address our fiscal and other key sustainability challenges. The time for action is now. Mr. Chairman, Senator Gregg, members of the Committee, let me repeat my appreciation for your commitment and concern in this matter. We at GAO stand ready to assist you in this important endeavor. 
[Appendix I table, flattened in extraction: a summary of the policy-oriented commissions GAO reviewed, covering each commission's charter restrictions (e.g., one was required to maintain revenue neutrality while preserving incentives for homeownership, charitable giving, and savings, and to consider equity and simplicity), membership size and composition (including former Members of Congress, professors, and tax practitioners), chairmanship arrangements (e.g., Senator Breaux as Chair and Representative Thomas as "Administrative Chair" of one commission), report dates (Jan. 1995, Dec. 2001, July 2004, Nov. 2005), and whether each commission's recommendations were adopted.] This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | GAO has for many years warned that our nation is on an imprudent and unsustainable fiscal path. During the past 2 years, the Comptroller General has traveled to 24 states as part of the Fiscal Wake-Up Tour. Members of this diverse group of policy experts agree that finding solutions to the nation's long-term fiscal challenge will require bipartisan cooperation, a willingness to discuss all options, and the courage to make tough choices. Indeed, the members of the Fiscal Wake-Up Tour believe that fiscal responsibility and intergenerational equity must be a top priority for the new President. Several bills have been introduced that would establish a bipartisan group to develop proposals/policy options for addressing the long-term fiscal challenge. 
At the request of Chairman Conrad and Senator Gregg, the Comptroller General discussed GAO's views on their proposal to create a Bipartisan Task Force for Responsible Fiscal Action (S. 2063). Long-term fiscal simulations by GAO, Congressional Budget Office (CBO), and others all show that despite some modest improvement in near-term deficits, we face large and growing structural deficits driven primarily by rising health care costs and known demographic trends. Under any realistic policy scenario or assumptions, the nation's longer-term fiscal outlook is daunting. Continuing on this unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Our current path also increasingly will constrain our ability to address emerging and unexpected budgetary needs and increase the burdens that will be faced by future generations. As the Comptroller General stated when the bill was introduced, the Bipartisan Task Force for Responsible Fiscal Action offers one potential means of taking steps to make the tough choices necessary to keep America great, and to help make sure that our country's, children's, and grandchildren's future is better than our past. GAO noted that the bill incorporates key elements needed for any task force or commission to be successful: (1) a statutory basis, (2) a broad charter that does not artificially limit what can be discussed and does not set policy preconditions for membership, (3) bipartisan membership, (4) involvement of leaders from both the executive and legislative branches--including elected officials, (5) a report with specific proposals and a requirement for supermajority vote to make recommendations to the President and the Congress, and (6) a process to require consideration of the proposals. GAO also made some suggestions it believes could enhance the likelihood that the bill will achieve its overarching goals. 
GAO suggested the sponsors consider (1) including a way for the next President to be involved in the process of proposal development, (2) permitting alternative packages to be voted on that would achieve the same fiscal result, and (3) eliminating the requirement for a supermajority in Congress. With the same aim, GAO also expressed some reservations about the current approach to specifying the Task Force Chairman. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
To achieve directed force structure reductions, the Air Force has been reducing the number of F-15 and F-16 aircraft in its inventory. Between fiscal years 1991 and 1997, the Air Force plans to reduce its F-15 aircraft from 342 to 252. Over this same period, the Air Force plans to reduce its F-16 aircraft from 570 to 444. In 1991, F-15 and F-16 aircraft were configured in 42 squadrons. By fiscal year 1997, these aircraft will be configured in 37 squadrons. Until 1992, the Air Force predominantly organized its active fighter aircraft in wings of three squadrons, with 24 combat aircraft in each squadron. However, in 1992, the Air Force Chief of Staff directed that the squadrons be reduced to 18 aircraft. By 1997, most fighter squadrons will have been reduced to this smaller size, leaving only 54 aircraft in most wings. The Secretary of Defense has encouraged the services to consolidate forces wherever possible to reduce infrastructure and operating costs. However, the Air Force acknowledged in 1995 that while the force structure has been reduced by 30 percent, the supporting infrastructure has been reduced by only about 15 percent. The Air Force cited increased deployment flexibility and reduced span of control as the primary benefits for having smaller fighter squadrons. However, the Air Force has not demonstrated that these benefits are compelling. Moreover, the Air Force has neither documented instances of problems with deployment flexibility and span of control nor conducted studies that support its decision to use smaller squadrons. Air Force officials said that the primary benefit of using smaller-sized squadrons is increased operational deployment flexibility. With fewer fighters in the Air Force inventory, reducing squadrons to 18 aircraft increases the number of squadrons above the number there would have been had the aircraft been organized in traditional squadrons of 24 aircraft.
Air Force officials stated that these additional squadrons are needed to respond to conflicts that reflect the new security environment. This new security environment is characterized by multiple contingency operations and the possibility of two nearly simultaneous military regional conflicts. On the basis of our analysis of Air Force fighter assistance in recent contingency operations, it appears that the Air Force would have considerable deployment flexibility even if the aircraft remained in the former 24-aircraft configuration. We examined the three contingency operations that were ongoing during June 1995 that required Air Force F-15 and F-16 assistance. For two operations, the Commander in Chief (CINC) for each theater operation required less than one squadron’s aircraft for each operation. For these operations, the Air Force rotated 18 squadrons of F-15s and F-16s (7 active and 11 reserve) to provide year-long coverage to support these contingency operations. We were told that for the third operation, the CINC’s requirement, which equated to one 18-aircraft squadron each of F-15s and F-16s, was met by rotating 6 F-15 and 6 F-16 continental United States (CONUS) based 18-aircraft fighter squadrons. We were advised that this number of squadrons was used because Air Combat Command (ACC) desired, for quality-of-life reasons, to maintain an 18-month interval between rotations for each squadron’s 3- to 4-month deployment overseas. However, using ACC’s stated goal of 8 to 9 months between overseas deployments, the CINC’s requirements for this latter operation could have been met with only three to four fighter squadrons. If the Air Force deployed squadrons in accordance with ACC’s stated goal, a larger number of squadrons would not be needed, particularly since reserve squadrons are available to augment the active force. We also question whether DOD’s current military strategy requires the larger number of squadrons afforded by the 18-aircraft squadron design. 
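The rotation arithmetic above lends itself to a quick check. The sketch below is illustrative only, not an Air Force planning tool; it assumes a 3.5-month average deployment, the midpoint of the 3- to 4-month range cited:

```python
def squadrons_for_continuous_coverage(deployment_months, dwell_months):
    """Squadrons needed to keep one squadron deployed at all times,
    given each squadron's deployment length and the interval (dwell)
    between that squadron's deployments."""
    cycle = deployment_months + dwell_months
    return cycle / deployment_months

# ACC practice: an 18-month interval between 3- to 4-month deployments
print(round(squadrons_for_continuous_coverage(3.5, 18)))      # 6 squadrons, as the Air Force used

# ACC's stated goal: 8 to 9 months between deployments
print(round(squadrons_for_continuous_coverage(3.5, 8.5), 1))  # 3.4, i.e., three to four squadrons
```

Under the stated dwell goal, the same requirement could indeed have been met with three to four squadrons rather than six, which is the point the analysis makes.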
The Bottom-Up Review specified that 10 fighter wing equivalents (72 aircraft each) would be needed for each of two anticipated major regional conflicts. The term “fighter wing equivalent,” however, underscores that fighter requirements are not stated in terms of squadrons but rather in terms of the number of aircraft. The Secretary of Defense’s fiscal year 1996-2001 Defense Planning Guidance states Air Force requirements in terms of total aircraft, not squadrons. Further, Air Force officials at ACC and the 9th Air Force headquarters (the U.S. Central Command’s air staff) said that requirements for CINC missions are computed by the number of aircraft needed to successfully execute the mission, not by the number of squadrons. Moreover, officials at the 9th Air Force headquarters stated that the primary use of squadron organizations in a regional conflict operation is to manage the daily flight shifts and that squadron structures become almost invisible because all aircraft are controlled by the theater’s air component commander. Thus, from the CINC’s perspective, the number of squadrons in which aircraft are organized is largely immaterial. Air Force officials told us that another benefit of smaller squadrons was “span of control”—the ability to manage personnel and the collective tasks for which they are responsible. Until recently, flight line maintenance and associated personnel were controlled by the wing. When this function was shifted to the squadron in 1991-92, a typical 24-aircraft squadron would have increased from about 85 to over 300 people. This fourfold growth, according to Air Force officials, would have weakened the commander’s ability to effectively manage people and missions. These officials believed that the reduced number of squadron aircraft helps to offset this effect because a smaller squadron reduces the number of squadron personnel. 
However, we found that reducing the squadron to 18 aircraft only reduced personnel by about 10 percent (about 30 people). The Air Force’s standard for span of control for maintenance squadron commanders is 700 people, about twice the number of personnel being supervised by flight squadron commanders. Although span of control may have been a perceived problem early in the Air Force’s wing reorganization, ACC officials are not aware of any instance where it has been raised as an issue. Discussions with a number of wing and squadron officials also indicated that the squadron commander’s span of control had not increased enough to be a problem. The Air Force’s reduction in squadron size was neither evaluated in a systematic manner nor supported by documented studies. For example, no assessment of benefits versus drawbacks of the appropriate squadron size was conducted, and there were no studies to support scenarios where more squadrons would be needed. Some Air Force officials said that the basic rationale for moving to smaller squadrons was to minimize the reduction in wing and squadron commands as the number of aircraft in the force declined. We were told that the Air Force considered it inappropriate to identify command reductions during a period when the base realignment and closure (BRAC) process was ongoing because it would constitute an action that would prevent the BRAC process from proceeding as designed. According to Air Force officials, identifying changes that significantly reduce base facilities was against Air Force policy and the laws governing the BRAC proceedings. Although it is true that Department of Defense (DOD) entities were constrained from reducing force structure and closing bases beyond specified limits outside the BRAC process, the Air Force was not precluded from making recommendations on these matters during the BRAC process. In our view, such identifications would have facilitated the development of recommendations for base closures.
Organizing the fighter force into 24-aircraft squadrons reduces the total number of squadrons and results in more economical operations than squadrons of 18 aircraft. For example, annual operating costs for 72 F-15s are about $12 million less if they are organized into squadrons of 24 aircraft instead of squadrons of 18. We calculated the savings from staffing standards and cost estimates provided by Air Force officials, using an Air Force’s cost estimation model (a more detailed description of our methodology is in app. III). The annual savings are primarily due to reduced military personnel requirements, in such areas as command, staff, administrative, and maintenance. The salary costs associated with reduced military personnel requirements account for about 70 percent of the total savings, of which over 90 percent is enlisted pay. Also, larger squadrons allow maintenance specialty shops to be used more efficiently, requiring little or no change in staffing. Other savings occur due to reduced training, medical services, supplies, and base operating support. The Air Force could modify its current configuration of fighter aircraft in a more cost-effective manner to increase the number of squadrons with 24 aircraft. This modification would entail consolidating some existing F-15 and F-16 squadrons with other squadrons to better maximize base utilization. Our four illustrative options (which are presented in detail in app. I) would have annual savings ranging from $25 million to $115 million annually. ACC officials we contacted stated that bases that previously had 24 aircraft per squadron and 72 aircraft per wing should be able to return to that level. Our review of Air Force base closure capacity analysis data indicated that most fighter wings on CONUS bases could increase squadron size to previous levels with little or no additional cost. 
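The composition of the reported $12 million in annual savings can be reconstructed from the percentages given above; the figures below are illustrative arithmetic, not Air Force cost-model output:

```python
# Annual operating savings for 72 F-15s: three 24-aircraft squadrons
# versus four 18-aircraft squadrons (figure from the report).
total_savings = 12_000_000

# Per the report: salary costs from reduced military personnel requirements
# account for about 70 percent of total savings, and over 90 percent of the
# salary portion is enlisted pay.
salary_savings = round(0.70 * total_savings)
enlisted_savings = round(0.90 * salary_savings)  # a lower bound ("over 90 percent")

print(f"salary-related savings: ${salary_savings:,}")   # $8,400,000
print(f"enlisted-pay portion:  >${enlisted_savings:,}") # >$7,560,000
```

The remainder of the savings comes from reduced training, medical services, supplies, and base operating support, as described above.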
For example, a capacity analysis prepared by Moody Air Force Base (AFB) officials stated that Moody will retain the capacity to support 2 additional fighter squadrons and increase 2 of its 18-aircraft F-16 fighter squadrons to 24 aircraft. Similarly, wing personnel at Shaw AFB and Langley AFB indicated that their installations could absorb 6 more aircraft per squadron or 18 per wing with no additional costs. These officials stated that because their bases previously had 24 aircraft per squadron and facilities were sized for 24 aircraft, returning to 24 would pose little to no problem. Moreover, maintenance personnel stated that much of the support equipment could handle six additional aircraft with little additional investment. Deployment personnel at the 20th fighter wing at Shaw AFB stated that the supporting equipment for 24 aircraft would take the same number of transport planes to move as a squadron of 18 aircraft. Air Force officials at different levels of command cited several factors that should be considered when consolidating aircraft into fewer squadrons and wings. These factors include keeping aircraft with the same state of modernization and mission characteristics together. In addition, they stated that aircraft engines should be compatible at least in the squadron and preferably throughout the wing. Other factors officials said should be considered include the availability of training areas, impact on the CONUS/overseas mix, and the capacity of the receiving base to accept the additional aircraft and related personnel and equipment. Air Force officials noted that different modernization upgrades and specialized mission equipment can make the F-16 aircraft very different. For instance, newer F-16s have improved avionics that require different logistical support than earlier versions of the F-16. In addition, some aircraft have specialized equipment, such as the equipment needed to perform the night ground attack mission.
Air Force officials stated that specialized training is required for pilots to perform this mission and believe mixing aircraft that have this capability with aircraft that do not will reduce unit readiness. Air Force officials also stated that having either F-15 or F-16 aircraft with different engines in the same wing complicates maintenance. For instance, different engines, whether from the same or different manufacturers, can generate unique maintenance requirements. Because different support equipment and maintenance skills may be needed for various engines, maintaining different types of engines at the same wing can strain maintenance resources and ultimately reduce the availability of deployable aircraft. Additionally, Air Force officials said that any restructuring that affects aircraft outside the United States must consider agreements with foreign governments that govern the number of aircraft based in these countries. In general, the number of aircraft should not change materially. Considering the factors that Air Force officials believe are most important when consolidating forces, we developed four alternatives for reorganizing the F-15 and F-16 fighter force. Our alternatives generally did not commingle aircraft with different engine types or different modernization and mission characteristics. We also kept relatively constant the U.S./overseas basing mix and the number of aircraft in each theater, and we varied the number of aircraft in the Air Force’s composite wings. These options ranged from restructuring only fighter aircraft in the United States to restructuring all F-15s and F-16s worldwide. The “CONUS Only” alternative we developed is projected to save the Air Force about $25 million annually in operating costs. This would be achieved by increasing 6 existing fighter squadrons to 24 aircraft and eliminating 2 squadrons.
The alternative of consolidating fighter squadrons worldwide would consolidate the F-15 and F-16 aircraft into 7 fewer squadrons than the Air Force currently plans and increase 17 squadrons to 24 aircraft and 2 squadrons to 30 aircraft. This alternative could save the Air Force a projected $115 million annually. Our other two alternatives would fall between these savings. Consolidating aircraft at fewer bases would also help the Air Force identify excess base infrastructure and candidate bases for closure. For example, three of the four alternatives would eliminate all fighter aircraft from at least one base, suggesting the potential of a base closure. If a base closure could be executed with savings similar to what DOD estimated for similar bases during the 1995 BRAC process, annual savings would average about $15 million for the first 6 years and about $50 million in each ensuing year. Air Force officials at headquarters and ACC expressed concerns about the implementation of our alternatives without the support of DOD and Congress. They stated that efforts in the past to move aircraft from a base without an equal substitution for the losing base have not been achievable. In their opinion, if the Air Force leadership decided to implement options to increase squadron and wing size back to 24 and 72, respectively, the Air Force would need the support of both DOD and Congress. We recommend that the Secretary of Defense, in his efforts to reduce the DOD’s infrastructure costs, require the Secretary of the Air Force to develop an implementation plan to operate the Air Force’s fighter force in larger, more cost-effective squadrons. If the Secretary of Defense believes that the plan could reduce costs, he should seek congressional support for it. DOD concurred with our findings and recommendation. DOD’s comments are reproduced in appendix II. A detailed explanation of our scope and methodology appears in appendix III. 
We conducted this review from February 1995 to February 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and Air Force and interested congressional committees. We will also make copies available to others upon request. Please contact me at (202) 512-3504 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix IV. We developed and refined four alternatives that demonstrate that the Air Force could organize its fighter aircraft more cost-effectively. Underpinning our analysis were principles that the Air Force cited as important. These factors included keeping the continental United States (CONUS)/overseas basing mix relatively constant; avoiding mixing aircraft with different modernization upgrades (blocks), mission characteristics, and engines; balancing capability throughout theaters; and assessing receiving base capacity. While these principles are plausible, our options vary the extent to which these principles were used to gain greater economies. Moreover, the Air Force has not rigidly adhered to these principles. For example, different engines are contained in the F-15 wing at Eglin Air Force Base. The Air Force also plans to mix F-16s with different blocks. The following tables compare the Air Force’s planned fiscal year 1997 mix of 18- and 24-aircraft squadrons at each base with the mix of squadrons that would be achieved with each of our four alternatives. Preceding each table, we described the specific factors we considered in developing each alternative. This alternative consolidates squadrons that are located in CONUS only. Under this alternative, fighter aircraft would remain at the same number of bases as the Air Force currently plans. The number of aircraft of one composite wing would be changed. Bases would be restricted to having the same aircraft that were in the Air Force’s plan.
This alternative would result in annual operating cost savings of $25 million. Table I.1 compares the Air Force’s planned basing with alternative one. This alternative consolidates squadrons and uses one fewer base than currently planned by the Air Force. In order to execute this alternative, fewer than one squadron from CONUS would have to be shifted outside of CONUS. Two different aircraft blocks would be mixed, which is comparable to the Air Force’s plan. The number of aircraft at two composite wings would be changed. Also, aircraft other than F-15s and F-16s would have to be relocated to fully execute this alternative. This alternative would result in annual operating cost savings of $59 million. Table I.2 compares the Air Force’s planned basing with alternative two. This alternative consolidates fighters at one fewer base than currently planned by the Air Force. The number of aircraft in three composite wings would be changed. One squadron at base 4 would have 30 aircraft. One squadron substitution between the Air Force’s active and reserve components would be necessary. Some aircraft would be exchanged between theaters. Two different aircraft blocks were mixed at one wing, which is comparable to the Air Force’s plan. This alternative would result in annual operating cost savings of $101 million. Table I.3 compares the Air Force’s planned basing with alternative three. This alternative consolidates fighters at one fewer base than currently planned by the Air Force. The number of aircraft at two composite wings would be changed. One squadron at base 4 and one squadron at base 6 would have 30 aircraft each. One squadron substitution would be required between the Air Force’s active and reserve components. Also, aircraft would be exchanged between theaters. Two different aircraft blocks were mixed at one wing, which is comparable to the Air Force’s plan. This alternative would result in annual cost savings of $115 million.
Table I.4 compares the Air Force’s planned basing with alternative four. The objective of this review was to evaluate the cost-effectiveness of operating the fighter forces in smaller squadron sizes and the implications this might have on the Secretary of Defense’s efforts to reduce defense infrastructure. Our review focused on the Air Force’s active component fighter aircraft with a primary focus on the C and D model of F-15s and F-16s. To evaluate the benefits resulting from reduced squadron sizes, we interviewed officials in various Air Force Headquarters offices such as the Force Programming Division; the Combat Forces Division of the Directorate of Forces; the Combat Forces of the Directorate of Programs and Evaluation; and the Air Operations Group. We also interviewed Air Combat Command (ACC) officials, including officials from various staff functions, the 33rd Fighter Wing, 1st Fighter Wing, and the 20th Fighter Wing. Additionally, we interviewed officials from the U.S. Central Command Air Forces Headquarters. We examined a variety of Air Force documents, including peace-keeping and Gulf War deployment records, staffing requirements and historical levels, and various studies and analyses. We also reviewed the Secretary of Defense’s Defense Planning Guidance and Joint Strategic Capabilities Plan and the Air Force’s War Implementation and Mobilization Plan. To calculate the cost implications of operating smaller squadrons, we obtained estimated annual operating costs for F-15 and F-16 fighters from Air Force headquarters cost-modeling officials. Separate estimates were provided for squadrons of 18 and 24 aircraft in the U.S., Pacific, and European theaters. These are based on staffing estimates that we developed using planning factors provided by the Air Force. The planning factors included the number of officer and enlisted personnel in squadron overhead, flight crew, and maintenance positions for independent and dependent squadrons. 
To provide this data, the Air Force used its Systematic Approach to Better Long Range Estimating (SABLE) model, an automated model that uses various cost and planning factors to estimate the peacetime operating and support costs of flying units. Operating costs include cost elements in the operation and maintenance, military personnel, and other procurement appropriations. Within these appropriations, the major cost categories include military and civilian pay, aviation fuel, depot level repairables, and consumable supplies. These costs are estimated for each type and model of aircraft within each major command. The SABLE model only addresses variable costs but not any fixed costs. Similarly, it captures direct costs but few indirect costs such as the costs of maintaining the base and runway. The SABLE produces general cost estimates to evaluate force structure options. The estimated savings do not include any military construction, base closure, or other costs that may be associated with transferring aircraft from one specific location to another. Since 70 percent of the estimated cost savings resulted from reduced military personnel, our reliability assessment consisted of an analysis of the reasonableness of the military personnel planning factors provided by the Air Force. In conducting this assessment, we interviewed ACC manpower officials who developed the personnel factors that were used for the squadron located at U.S. bases. Since maintenance positions accounted for over 80 percent of the military personnel savings, we also reviewed the Logistics Composite Model (LCOM) that ACC officials used in developing their maintenance personnel factors. We also interviewed fighter wing and squadron command and maintenance officials at Langley, Eglin, and Shaw Air Force Bases and toured wing and squadron maintenance and flight line areas. 
We also reviewed historical staffing data that covered the period when the wings at these two bases previously had squadrons of 24 aircraft. To develop and evaluate alternatives for consolidating active F-15 and F-16 squadrons, we analyzed force structure organization at all bases that had combat F-15 and F-16 squadrons from 1991 to present, as well as the Air Force’s plans through 2001. We also reviewed and analyzed the base capacity assessment completed by each fighter base as part of the 1995 base realignment and closure (BRAC) process. Additionally, we met with various officials from Air Force Headquarters and ACC to identify and understand factors that would constrain the consolidation of these fighter aircraft. We also discussed squadron consolidation and constraining factors with fighter wing officials such as the wing commander, squadron commanders, maintenance officers, and facility and air space managers. The baseline for our alternatives was the Air Force’s planned fighter force structure for fiscal year 1997. Our alternatives ranged from restructuring only fighter aircraft in the United States to including all F-15s and F-16s worldwide. These options were discussed in open critiques with Air Force officials from both Air Force Headquarters and ACC. Our alternatives did not attempt to address political or international policies impacting basing decisions. Fred Harrison, Evaluator-in-Charge; Dan Omahen, Senior Evaluator. | GAO reviewed the cost-effectiveness of the Air Force's reconfiguration of F-15 and F-16 fighters into smaller squadrons, focusing on the consequences this might have on the Secretary of Defense's efforts to reduce defense infrastructure costs. GAO found that: (1) while smaller 18-aircraft squadrons provide more deployment flexibility than 24-aircraft squadrons, the larger configuration provides enough deployment flexibility to meet the Air Force's needs; (2) the ability of squadron commanders to manage the personnel and tasks of 24-aircraft squadrons has not proved to be a problem; (3) the Air Force's decision to reduce squadron size from 24 to 18 aircraft was not based on organized analysis or documented studies; (4) using 24-aircraft squadrons instead of 18-aircraft squadrons could reduce costs; (5) by consolidating some existing F-15 and F-16 squadrons with other squadrons to better maximize base utilization, the Air Force could cost-effectively increase the number of 24-aircraft squadrons; (6) all 18-aircraft squadrons could return to their original size of 24 aircraft with little or no effort and expense; (7) if the Air Force consolidates its squadrons, it should keep aircraft with similar modernization, mission characteristics, and engine types together; and (8) at least four alternatives exist to consolidate the Air Force's squadrons that could save between $25 million and $115 million annually.
You are an expert at summarizing long articles. Proceed to summarize the following text:
HIPAA established the HCFAC program to consolidate and strengthen ongoing efforts to combat fraud and abuse in health care programs and expand resources for fighting health care fraud. The Attorney General and the Secretary of HHS, through the HHS Office of Inspector General (HHS/OIG), administer HCFAC. The HCFAC program goals are to coordinate federal, state, and local law enforcement efforts to control fraud and abuse associated with health plans; conduct investigations, audits, and other studies of delivery and payment for health care for the United States; facilitate the enforcement of the civil, criminal, and administrative statutes applicable to health care; provide guidance to the health care industry, including the issuance of advisory opinions, safe harbor notices, and special fraud alerts; and establish a national database of adverse actions against health care providers. HIPAA requires the following types of collections to be deposited in the trust fund: criminal fines recovered in cases involving a federal health care offense; civil monetary penalties and assessments imposed in health care fraud cases; amounts resulting from the forfeiture of property by reason of a federal health care offense; penalties and damages obtained and otherwise creditable to miscellaneous receipts of the general fund of the Treasury obtained under sections 3729 through 3733 of Title 31, United States Code (commonly known as the False Claims Act), in cases involving claims related to the provision of health care items and services (other than funds awarded to a relator, for restitution, or otherwise authorized by law); and unconditional gifts and bequests. Funds for the HCFAC program are appropriated from the trust fund to an expenditure account, referred to as the Health Care Fraud and Abuse Control Account (control account), maintained within the trust fund.
The Attorney General and the Secretary of HHS jointly certify that the funds transferred to the control account are necessary to finance health care fraud and abuse control activities. Only a portion of the funds collected and deposited to the trust fund are appropriated to the control account annually for the HCFAC program. The maximum amounts that may be appropriated for HCFAC each year are specified by HIPAA. The maximum amount for fiscal year 1997, the first year of HCFAC, was $104 million and HIPAA limited the amounts for each of the fiscal years 1998 through 2003 to an amount equal to the limit for the preceding fiscal year increased by 15 percent. For each fiscal year after 2003, the amount made available was capped at the 2003 limit (See table 1). In addition to the annual limits on the total amount made available for HCFAC, HIPAA includes annual minimum and maximum amounts that are earmarked specifically for HHS/OIG activities for the Medicare and Medicaid programs. For example, of the $240.6 million available in fiscal year 2003, a minimum of $150 million and a maximum of $160 million were earmarked for the HHS/OIG to ensure continued efforts by the HHS/OIG to detect and prevent fraud and abuse in the Medicare and Medicaid programs. HHS’s Centers for Medicare and Medicaid Services (CMS) performs the accounting for the HCFAC control account. Prior to fiscal year 2003, CMS set up allotments in its accounting system for each of the HHS and DOJ entities receiving HCFAC funds. The HHS and DOJ entities accounted for their HCFAC obligations and expenditures in their respective accounting systems and reported them to CMS. CMS then recorded the obligations and expenditures against the appropriate allotments in its accounting system. However, for fiscal year 2003, HHS changed the method of providing funds to DOJ from a direct allotment to a reimbursable agreement. 
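The HIPAA funding caps cited above follow from the 15 percent compounding rule; a quick check starting from the fiscal year 1997 base of $104 million reproduces the $240.6 million figure reported for fiscal year 2003:

```python
# HIPAA cap for fiscal year 1997, in millions of dollars.
base_1997 = 104.0

# Caps for fiscal years 1998 through 2003 each equal the prior year's
# cap increased by 15 percent; after 2003 the cap is frozen at the
# 2003 level.
caps = {1997: base_1997}
for year in range(1998, 2004):
    caps[year] = caps[year - 1] * 1.15

print(round(caps[2003], 1))  # 240.6 -- matches the fiscal year 2003 amount in the report
```

This confirms that the $240.6 million available in fiscal year 2003 is simply $104 million compounded at 15 percent for six years.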
This change requires DOJ components to prepare and submit billing packages to CMS to obtain reimbursement from DOJ’s allotment of the HCFAC funds. HHS and DOJ reported total deposits to the trust fund of about $766 million in fiscal year 2002 and $243 million in fiscal year 2003. On the basis of our review of a statistical sample of deposit transactions for fiscal years 2002 and 2003, we determined that the amounts HHS and DOJ reported as deposits to the trust fund were appropriate. As shown in figure 1, these deposits primarily consisted of penalties and multiple damages and criminal fines collected as a result of health care fraud cases. The considerable difference in the amount of criminal fines reported for fiscal year 2002 and 2003 is primarily due to large criminal fine collections from two major cases settled in prior years. In addition, the difference in the amount of penalties and multiple damages reported for fiscal years 2002 and 2003 was primarily due to collections of amounts from a major case settled in a prior year. Related to our review of criminal fine deposits, DOJ provided us with supporting documents related to a $13.0 million adjustment that was calculated and reported to the Department of the Treasury in September 2002 to correct the amount of criminal fine deposits previously reported in error. When we reviewed the supporting documents for the adjustment, we identified a mathematical error of approximately $130,000 in DOJ’s determination of the adjustment. While the amount of error has a minimal impact on the trust fund balance, we found that DOJ lacked supervisory review procedures for deposits, which may have contributed to the error going unnoticed. Lack of supervisory review could result in undetected material errors to the trust fund in the future. 
The Comptroller General’s Standards for Internal Control in the Federal Government state that management review is an important control activity in helping to ensure that all transactions are completely and accurately recorded. DOJ officials acknowledged the importance of this control activity and, in response, developed new procedures to ensure proper review of all adjustments and deposit amounts before reports are sent to Treasury. In addition, in September 2004, DOJ made the necessary correction to the amount of criminal fine deposits reported to the trust fund. However, because the correction of the error was not made until after the end of fiscal year 2003, the HCFAC joint report for fiscal year 2003 did not include the correction.

In fiscal years 2002 and 2003, $209 million and $240 million, respectively, were appropriated from the trust fund for performing HCFAC program activities. On the basis of our review of supporting documents, we determined that these amounts were consistent with HIPAA and that the amount of HCFAC funds specified in the joint reports was made available to HHS and DOJ. We also determined that HHS’s and DOJ’s expenditure of amounts appropriated from the trust fund was reasonable. However, we did note that some data on expenditures were not included in HHS and DOJ information systems. For example, some staff hours needed to monitor payroll expenses were not tracked in HHS/OIG workload tracking systems. Also, DOJ did not record some expenditure data in its accounting system as HCFAC expenses and therefore could not provide an electronic file of all nonpayroll expenses from which we could select a statistical sample of these fiscal year 2003 expenses. We tested nonpayroll expenses, selected on a nonstatistical basis, from hard copy documents and determined that they were adequately supported and related to HCFAC.
However, having all data on HCFAC expenses in its accounting system could help managers in monitoring how HCFAC funds are spent.

HIPAA specifies the maximum amounts that may be appropriated from the trust fund each year for HCFAC, as well as a minimum and maximum amount of the appropriations that must go to the HHS/OIG for Medicare and Medicaid antifraud activities. For fiscal years 2002 and 2003, HHS and DOJ each received the maximum amount from the trust fund allowed under HIPAA. In addition, HHS and DOJ entered into memorandums of agreement on how much of the HCFAC appropriation each HHS and DOJ unit would receive. The amount allocated to each unit was included in the HHS and DOJ joint reports and is depicted in figure 2. In accordance with HIPAA, HHS/OIG was allocated amounts within the minimum and maximum funding allowed by statute—$145 million and $160 million for fiscal years 2002 and 2003, respectively.

In the HHS and DOJ joint report, the HHS/OIG, other HHS units, and DOJ provided information on how the HCFAC funds were used. The HHS/OIG reported that its fiscal years 2002 and 2003 HCFAC funds were used in carrying out efforts to both detect and prevent health care fraud and abuse. These efforts included several fraud prevention activities that reduced program losses, as well as participation in prosecutions and settlements of cases involving Medicare and Medicaid fraud, and investigations, audits, and evaluations that helped reveal vulnerabilities or incentives for fraudulent practices. Other HHS components, including CMS, also reported on how they had expended their HCFAC funds. CMS received $2.7 million in fiscal year 2002 and $23.4 million in fiscal year 2003. The increase in funding for fiscal year 2003 was in support of several projects, including the Medicaid Payment Accuracy Measurement Project, the Medicare/Medicaid Data Analysis Project, and Medicaid Financial Management initiatives, including Medicaid Audits.
DOJ reported that its funding was used to support its role in civil and criminal prosecution of health care professionals, providers, and others, as well as its role in recovering funds that federal health care programs have paid as a result of fraud, waste, and abuse.

We determined that expenditures charged by HHS and DOJ for HCFAC activities were reasonable. In evaluating HHS HCFAC expenditures, we focused on expenditures of the HHS/OIG. The HHS/OIG’s payroll and nonpayroll expenses represented about 96 percent of all HHS expenditures charged against HCFAC funds for fiscal years 2002 and 2003. We reviewed the methodology that the HHS/OIG used to charge expenditures against its HCFAC funding and determined that it was reasonable. The OIG charges a percentage of its total payroll and nonpayroll expenses to the HCFAC program. The percentage that is charged each year is based on the relative proportion of its annual HCFAC funding to its total funding. These amounts are then monitored throughout the year. As table 2 shows, HCFAC funding for fiscal years 2002 and 2003 was 80 and 81 percent, respectively, of the OIG’s total funding.

HHS/OIG management takes several steps to help assure that HCFAC funds are expended on HCFAC-related activities. For one, management meets with its component offices at the beginning of the year to determine how much of each component’s resources will be devoted to HCFAC-related activities. Some component offices make plans to devote resources to HCFAC in excess of the 80-81 percent funding level, while other components plan to devote less. OIG management evaluates each component’s plans in relation to each component’s full-time equivalents (FTE) to ensure that OIG resources overall are spent on HCFAC activities in accordance with the funding level.
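The proportional charging methodology described above can be sketched as follows. This is an illustrative calculation, not the OIG's actual system; the total-funding figure is a hypothetical placeholder chosen only to reproduce the reported 81 percent share, since table 2 is not reproduced here.

```python
# Sketch of the HHS/OIG charging methodology described above: the share of
# payroll and nonpayroll expenses charged to HCFAC equals the ratio of annual
# HCFAC funding to total OIG funding. The total-funding figure below is a
# hypothetical placeholder, not a number from the report.
def hcfac_share(hcfac_funding, total_funding):
    """Fraction of OIG expenses charged against HCFAC funds."""
    return hcfac_funding / total_funding

def hcfac_charge(expense, share):
    """Portion of an expense charged to the HCFAC program."""
    return expense * share

# $160 million in HCFAC funding (fiscal year 2003, from the report) out of a
# hypothetical $197.5 million total yields roughly the reported 81 percent.
share_fy2003 = hcfac_share(160.0, 197.5)
charged = hcfac_charge(1_000_000, share_fy2003)  # HCFAC share of a $1M expense
```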
In addition, throughout the year, three of the components, the Office of Audit Services (OAS), the Office of Evaluations and Inspections (OEI), and the Office of Investigations (OI), track the staff time spent on various projects in their workload tracking systems. The information in each component’s system is summarized and monitored quarterly to ensure that staff time is being spent on HCFAC in accordance with the funding. The OIG’s Office of Management and Policy (OMP) requests summary reports from the component offices that include the staff time spent on HCFAC activities and uses the information in determining whether the OIG overall is performing HCFAC work as planned. The lead OMP staff person said that when material variances are identified in the amount of staff time devoted to HCFAC, the components are instructed to adjust the type of work performed. We reviewed the monitoring reports that the OMP staff prepares, and these reports showed that the amount of time devoted to HCFAC activities for the OIG as a whole was in line with the planned amount.

We also performed several tests on the information maintained in components’ workload systems, as these systems are relied on in monitoring HCFAC payroll expenditures. For example, we analyzed the data in the components’ workload tracking systems to determine whether the projects identified as HCFAC were appropriately classified as HCFAC-related in each component. We analyzed titles and supporting documents for all of the projects in the workload tracking systems of OAS and OEI—two components that account for about 55 percent of the OIG staff. We determined that the projects were appropriately classified as HCFAC or non-HCFAC. We also compared hours in the OAS, OEI, and OI workload tracking systems to hours in the payroll system to determine whether the components’ systems included hours for all staff. We found that the hours recorded in OAS’s system agreed with hours in the payroll system.
However, hours in OI’s and OEI’s systems did not agree with the payroll system. The OI system included approximately 52 percent fewer hours for fiscal year 2002 and 44 percent fewer hours for 2003 than were in the payroll system. OI managers were aware of the variance and explained that until they recently implemented a new system, their workload system did not include staff hours for administrative and supervisory staff. In determining the amount of staff hours spent on HCFAC-related assignments, OI concluded that administrative and supervisory time was spent in the same relative percentages as the staff who recorded their time in the workload system. In June 2003, OI upgraded its workload tracking system to record hours for all staff. In addition, OI implemented new procedures to help ensure that all hours were recorded in its workload system. The procedures include weekly automatic, system-generated electronic mail messages that are sent to supervisors informing them of employees that did not record their time and a reconciliation of hours in the HHS payroll system to hours in the workload system that is performed during periodic inspections at regional offices. The OEI workload tracking system included about 12 percent fewer hours than the payroll system for fiscal years 2002 and 2003. OEI officials said that they did not have procedures in place to identify missing hours. However, they believed that most of the people not entering data into the workload system were probably managers and administrative personnel whose hours would probably reflect the same allocation of hours between HCFAC and non-HCFAC work as those staff recording hours. In addition, the OMP staff person who monitored staff hours applied the HCFAC and non-HCFAC hours recorded in the workload system against the total FTEs for OEI to determine that the OIG as a whole performed HCFAC work as planned. 
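The payroll-to-workload reconciliation described above is, at bottom, a shortfall calculation. A minimal sketch, using hypothetical hour totals chosen only to reproduce the reported percentages:

```python
# Sketch of the reconciliation described above: compare hours recorded in a
# component's workload tracking system to hours in the payroll system and
# flag material shortfalls. The hour totals below are hypothetical; only the
# resulting percentages correspond to figures cited in the report.
def workload_shortfall(workload_hours, payroll_hours):
    """Fraction of payroll hours missing from the workload tracking system."""
    return 1 - workload_hours / payroll_hours

def is_material(shortfall, threshold=0.05):
    """Flag a shortfall for follow-up if it exceeds a chosen threshold."""
    return shortfall > threshold

# OI's system held about 52 percent fewer hours than payroll in fiscal year
# 2002; OEI's held about 12 percent fewer in both years.
oi_gap = workload_shortfall(48_000, 100_000)    # about 0.52
oei_gap = workload_shortfall(88_000, 100_000)   # about 0.12
```

The 5 percent materiality threshold here is an assumption for illustration; the report does not state what variance the OMP staff treated as material.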
While this issue did not appear to affect the propriety of HCFAC payroll expenditures during our review period, incomplete staff hours in the component workload tracking systems could hinder OIG managers in monitoring the amount of HCFAC work performed in the future. It is therefore critical that all OIG components have procedures in place to ensure that workload data are complete.

In assessing the reliability of DOJ fiscal year 2002 expenditures, we tested a statistical sample of the largest category of fiscal year 2002 nonpayroll expenses, which accounted for 69 percent of nonpayroll expenses and 34 percent of DOJ’s total fiscal year 2002 HCFAC expenditures. We determined that these nonpayroll expenses were adequately supported and related to HCFAC on the basis of our review of supporting documentation. In addition, we reviewed the payroll expenses charged by DOJ’s United States Attorneys Office (USAO) against HCFAC funds, which represented 76 percent of DOJ’s fiscal year 2002 HCFAC payroll expenditures and 38 percent of DOJ’s total fiscal year 2002 HCFAC expenditures. We determined that the USAO methodology for charging salaries to HCFAC was reasonable. USAO charged the full annual salaries of 160 individuals against HCFAC program funds in fiscal year 2002 as a surrogate for the 160 FTEs that were funded by the program. USAO managers said that this was administratively easier than trying to charge a portion of the salary of all the staff that perform health care fraud and abuse work. To assess the reasonableness of this approach, we reviewed the hours recorded in the USAO workload system for fiscal year 2002. According to data in the system, USAO staff devoted about 587,168 staff hours (282.3 FTEs) to health care fraud-related activities during fiscal year 2002, which was about 76 percent more than the 160 FTEs (332,800 hours) funded by the program.
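The FTE arithmetic above implies a standard 2,080-hour work year (332,800 hours for 160 FTEs). A minimal check of the cited figures:

```python
# Check of the USAO staffing arithmetic cited above. The 2,080-hour work year
# is implied by the report's own figures (332,800 hours for 160 FTEs).
HOURS_PER_FTE = 332_800 / 160       # 2,080 hours per full-time equivalent

funded_ftes = 160                   # FTEs funded by the HCFAC program, FY 2002
recorded_hours = 587_168            # hours in USAO's workload system, FY 2002

recorded_ftes = recorded_hours / HOURS_PER_FTE   # about 282.3 FTEs
excess = recorded_ftes / funded_ftes - 1         # about 76 percent more
```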
In addition, to ensure that the salaries charged to HCFAC were reasonable, we compared the average annual salaries for the 160 staff (i.e., attorneys, paralegals, and administrative staff) charged to the HCFAC account to the average annual salary for the same positions USAO-wide. We found that the salaries were generally comparable. Our review of DOJ’s fiscal year 2003 HCFAC expenditures also included a review of USAO salaries charged against HCFAC funds as these amounts represented 75 percent of DOJ’s fiscal year 2003 HCFAC payroll expenditures and 49 percent of all fiscal year 2003 HCFAC expenses. USAO charged the salaries of 162 individuals against HCFAC funding in fiscal year 2003. Our review procedures were similar to our work on 2002 payroll expenditures, and we again found the payroll expenditure amounts to be reasonable. We also tested a nonstatistical selection of nonpayroll expenses for fiscal year 2003 from hard copy documentation included in DOJ billing packages. We determined that these expenses were adequately supported and related to HCFAC. We did not select a statistical sample of fiscal year 2003 nonpayroll HCFAC expenses because DOJ could not provide an electronic file of detailed transactions from its accounting system for all nonpayroll HCFAC expenditures. Only one of the four DOJ components properly recorded expenditures under the specific HCFAC account class in the accounting system. DOJ accounting policy, issued March 2003, required that each DOJ component record expenses charged against HCFAC funds under a specific HCFAC account class so that they can be readily identified as related to HCFAC. 
Managers for the components that did not follow this accounting policy told us that they recorded their fiscal year 2003 HCFAC expenses at a summary level under an account class for general expenses, rather than under the HCFAC account class as required, because they instead prepared the hard copy billing packages for reimbursement, which were supposed to provide details on HCFAC expenditures. However, we found that the billing packages contained varying levels of detail. Without the full detail recorded in the accounting system, it is difficult to monitor HCFAC expenditures. In addition, the lack of such expenditure detail could impede DOJ officials’ ability to prepare meaningful budgets to support future HCFAC funding requests.

For the first time, some of the reported cost savings can be considered savings to the trust fund, resulting from trust fund expenditures for the HCFAC program. The joint reports cited cost savings of nearly $19.9 billion for fiscal year 2002 and $20.8 billion for fiscal year 2003 as a result of HHS/OIG recommendations and other initiatives. Of these amounts, about $1.5 billion in cost savings for fiscal year 2002 and $3.9 billion for fiscal year 2003 resulted from actions taken since the HCFAC program was created. However, the remaining cost savings ($18.4 billion for fiscal year 2002 and $16.9 billion for fiscal year 2003) continued to be related to actions that predate the HCFAC program and cannot be associated with expenditures from the trust fund for HCFAC activities. Further, since audit, evaluation, investigation, and litigation activities typically span several years, savings from such activities in fiscal years 2002 and 2003 may not be realized until future years. As has been the case in the past, most of the audits and evaluations related to the reported cost savings (i.e., 47 of the 50 audits cited by the OIG) were done by the OIG before the HCFAC program was created.
However, the HHS/OIG cited cost savings related to three reports that were issued in fiscal year 2000. One of the three reports, which consolidated the results of seven HHS/OIG audits on Medicaid enhanced payments, found that payments to some providers were not based on the cost of providing services. The report included recommendations that resulted in changes in program regulations and administrative actions. For example, in January 2001, CMS issued a final rule to change Medicaid payment policies, placing limitations on enhanced payments under Medicaid upper-payment limit requirements for hospital services, nursing facility services, intermediate care facility services for the mentally retarded, and clinic services. In addition, CMS issued a final rule in January 2002 placing additional limitations on enhanced payments for hospitals. CMS projected that the regulatory changes would result in cost savings of $79.3 billion over 10 years beginning with about $1.4 billion in fiscal year 2002 and about $3.8 billion in fiscal year 2003. The two other reports issued in fiscal year 2000 that resulted in costs savings of about $100 million for both fiscal years 2002 and 2003 were related to recovering overpayments to nursing homes and Medicaid drug rebates. Because the three reports were issued since the HCFAC program was created and the savings occurred in fiscal years 2002 and 2003, the savings can be linked to expenditures from the trust fund. We reviewed the support for the total cost savings amounts reported by the HHS/OIG for fiscal years 2002 and 2003. We initially found an overstatement of $840 million in the amounts included in the draft report for fiscal year 2003. The overstatement occurred because the HHS/OIG did not record an adjustment for the revised cost savings amounts issued by the Congressional Budget Office (CBO). 
The annual cost savings amounts reported by the HHS/OIG are based on estimates issued by CBO of savings that are expected from implementation of health-care-related legislation. CBO revised its estimate of fiscal year 2003 cost savings that would be realized from implementation of the Medicare, Medicaid, and SCHIP Balanced Budget Refinement Act of 1999, but HHS officials did not recognize and factor in the effect of the adjustment in the fiscal year 2003 draft HCFAC report. HHS officials explained that the responsibility for preparing the cost savings amounts had recently been reassigned to another staff person who had not looked for CBO adjustments. The cost savings amounts were corrected and restated in the final report.

HIPAA requires that HHS and DOJ issue a joint report on the HCFAC program for each fiscal year by January 1 of the following calendar year. For fiscal years 1997 through 2000, the joint HCFAC report was issued on or close to January 1 of the subsequent year. However, beginning with the report for fiscal year 2001, the reports have been issued late, and the length of the delay has increased each year. See table 3 for the timing of reports for fiscal years 2001, 2002, and 2003. The fiscal year 2003 report was more than a year late when it was released. HHS and DOJ officials told us that the joint reports have been issued late because of lengthy review processes within each agency. They have attempted to expedite the process but with little apparent success. Delays in issuing the HCFAC reports significantly erode the usefulness of these reports to Congress and others in making decisions about HCFAC program funding and oversight.

While the information on HCFAC trust fund activity provided in the HHS and DOJ fiscal years 2002 and 2003 joint reports was reasonable, better tracking of time charges by HHS/OIG and nonpayroll expenditures by DOJ would improve their ability to monitor the use of HCFAC funds.
In addition, the usefulness of the fiscal year 2002 and fiscal year 2003 joint annual reports was severely impaired due to their untimely issuance. Until HHS and DOJ streamline their internal review processes, the annual joint reports will continue to be delinquent and therefore of limited use to congressional decision makers and other interested parties. To improve HHS’s accountability over HCFAC program expenditures, we recommend that the HHS Inspector General require all HHS/OIG components to develop procedures for ensuring that all key staff hours spent on HCFAC activities are recorded in OIG workload tracking systems. To improve DOJ’s accountability for HCFAC program expenditures, we recommend that the Attorney General develop monitoring procedures to ensure that DOJ components record key HCFAC program expenditure data under the appropriate HCFAC account class in DOJ’s accounting system. To help ensure that the joint HHS and DOJ HCFAC reports are issued in a more timely manner, we recommend that the Secretary of HHS and the Attorney General develop a more expedited review process and notify congressional committees with oversight responsibility for the HCFAC program of delays in issuing the joint report within 1 month after missing the January 1 deadline and provide updates until the report is issued. A draft of this report was provided to HHS and DOJ for their review and comment. Written comments from HHS and DOJ are reprinted in appendixes III and IV. HHS and DOJ also provided technical comments that we incorporated as appropriate. In written comments, HHS concurred with our recommendation that the HHS Inspector General require all of its components to develop procedures for ensuring that all key staff hours spent on HCFAC activities are recorded in its workload tracking systems and noted that it is already moving to implement such procedures. 
Similarly, in its written comments, DOJ concurred with our recommendation for ensuring that DOJ components record key HCFAC program expenditure data under the appropriate HCFAC account class in DOJ’s accounting system and noted that its Justice Management Division will meet with the components to assist them in using the accounting classes designated for HCFAC funds. Regarding our recommendation for ensuring that HCFAC reports are issued in a more timely manner, HHS and DOJ agreed to develop a more expedited review process for the HCFAC reports. DOJ commented that it has already instituted several new procedures that it believes will shorten the time needed for future reports and HHS stated that it will work with DOJ in developing changes to the review process. HHS and DOJ, however, did not agree that they should report to Congress delays in issuing the HCFAC report by the mandated deadline. In their comments, HHS and DOJ noted that additional reporting, which requires clearance through both departments, would be counterproductive to clearing the annual HCFAC report. HHS added that such notification would not provide Congress with any substantial information and DOJ added that reporting on delays would be of little value to congressional oversight committees. Instead, DOJ officials stated that they propose to expedite the review and approval process, to the extent that source data are available and circumstances are within the department’s control. We disagree with HHS’s and DOJ’s position that Congress would not gain any value in knowing that the HCFAC report is going to be issued after the date that Congress mandated in law. Congress should be informed if reports that it may use in making future program funding and oversight decisions are not expected to be issued by the mandated report deadline. 
In addition, it appears that HHS and DOJ interpreted our recommendation for reporting delays in issuing the HCFAC report to mean sending Congress a report that would require a formal clearance process through both agencies. This was not our intent. HHS and DOJ officials can and should develop a mechanism for notifying Congress of delays that would not place an undue burden on their staff or interfere with issuing and clearing the HCFAC report itself. Such a mechanism could be as simple as sending an electronic mail message to all the committees of jurisdiction. DOJ also commented on the status of two remaining open recommendations from our prior report. We will continue to work with DOJ to obtain documentation to support the actions that DOJ said it is implementing. We are sending copies of this report to the Secretary of HHS, the Attorney General, and other interested parties. Copies will be made available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-8341 or by e-mail at [email protected]. Additional GAO contacts and acknowledgments are provided in appendix IV. To assess the reliability and reasonableness of information reported by HHS and DOJ in the joint HCFAC reports for fiscal years 2002 and 2003 as deposits to the trust fund and the sources of such amounts, we did the following. We reviewed the joint HHS and DOJ HCFAC reports for fiscal years 2002 and 2003 to identify amounts deposited to the trust fund. We interviewed HHS and DOJ officials to update our understanding of procedures related to deposits. We obtained data from HHS and DOJ reports and electronic databases for the various deposits as of September 30, 2002, and September 30, 2003, and selected deposit transactions on a statistical basis to determine whether the proper amounts were deposited to the trust fund. 
We assessed the reliability of the data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. The transactions that we selected on a statistical basis included the following: We selected a dollar unit sample of penalties and multiple damages totaling $276.8 million from a population totaling $322.6 million for fiscal year 2002, and a dollar unit sample totaling $181.2 million from a population totaling $229.8 million for fiscal year 2003. We selected a dollar unit sample of criminal fines totaling $435.5 million from a population totaling $443.5 million for fiscal year 2002 and a dollar unit sample totaling $1.9 million from a population totaling $2.5 million for fiscal year 2003. We selected a dollar unit sample of civil monetary penalties totaling $1.7 million from a population totaling $6.9 million for fiscal year 2002 and a dollar unit sample totaling $1.7 million from a population totaling $7.1 million for fiscal year 2003. We obtained supporting documentation for each sample transaction from various sources depending on the type of deposit. We traced amounts reported on the supporting documentation to reports and other records to confirm that proper amounts were reported as deposits. 
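Dollar unit sampling of the kind described above selects transactions with probability proportional to their dollar amounts. The following is an illustrative systematic selection sketch, not GAO's actual sampling procedure; the transaction amounts are hypothetical.

```python
import random

# Illustrative dollar unit (monetary unit) sampling: a systematic selection in
# which each transaction's chance of inclusion is proportional to its dollar
# amount. This is a generic sketch, not GAO's actual sampling procedure.
def dollar_unit_sample(amounts, n, seed=0):
    """Return indices of up to n transactions, selected proportionally to size."""
    total = sum(amounts)
    interval = total / n                          # dollars per selection point
    start = random.Random(seed).uniform(0, interval)
    targets = [start + i * interval for i in range(n)]
    selected, cumulative, t = [], 0.0, 0
    for idx, amount in enumerate(amounts):
        cumulative += amount
        # A transaction is picked once for each selection point it covers;
        # items larger than the interval are therefore always selected.
        while t < len(targets) and targets[t] <= cumulative:
            if not selected or selected[-1] != idx:
                selected.append(idx)
            t += 1
    return selected

# Hypothetical population: one large deposit dominates the dollars, so it is
# certain to be sampled. This property is why the sampled dollars cover most
# of each population described above.
sample = dollar_unit_sample([1000.0, 1.0, 1.0], n=2)
```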
To assess the reliability of information reported by HHS and DOJ in the joint HCFAC reports for fiscal years 2002 and 2003 as appropriations from the trust fund for HCFAC, we obtained and reviewed the HIPAA legislation, which includes the maximum and minimum amounts that can be appropriated from the trust fund for HCFAC; obtained and reviewed the HCFAC funding requests for the HHS and DOJ components to determine whether activities included in the requests were consistent with the stated purposes of the HIPAA legislation; obtained the funding decision memorandum detailing how the funds would be distributed between HHS and DOJ, and obtained related documentation for fiscal years 2002 and 2003 to verify the HCFAC funds certified by HHS and DOJ officials; and compared amounts reported in the joint HCFAC reports to the approved funding decision memorandum and compared amounts from the decision memorandum to the OMB documentation (Apportionment Schedule SF-132) to verify that the amounts were made available. To assess the reliability of information reported by HHS and DOJ in the joint HCFAC reports for fiscal years 2002 and 2003 as the justification for the expenditure of HCFAC funds, we did the following. We reviewed the justifications provided in the reports and discussed them with HHS and DOJ officials. We obtained and analyzed data from the HHS/OIG components’ workload tracking systems on the number of hours recorded as worked on HCFAC projects. We reviewed these data for obvious errors and completeness and compared these data for the four selected components with hardcopy documents we obtained from these components, and to the HHS payroll system data. When we found discrepancies we brought them to the attention of the specific component and worked with them to obtain explanations for the discrepancies before conducting our analyses. On the basis of this, we determined that the data were sufficiently reliable for the purposes of this report. 
We evaluated the methodology used by the HHS/OIG to charge payroll expenses against HCFAC funds for fiscal years 2002 and 2003—these expenses represented 76 percent and 78.6 percent, respectively, of total HCFAC expenses. For our evaluation, we (1) obtained the total number of staff hours recorded in the workload tracking systems for each of the OIG components and compared the hours in these systems to hours in the HHS payroll system; (2) obtained a list of HHS/OIG projects and related staff hours included in the workload tracking systems for two OIG components, OAS and OEI (staff in OAS and OEI accounted for 55 percent of all OIG employees), and reviewed the project subjects to assess whether projects identified as HCFAC were appropriately classified; and (3) for the project subjects that were unclear, obtained and reviewed documentation describing the work performed to assess whether the job was appropriately classified as HCFAC or non-HCFAC.

We analyzed HHS/OIG nonpayroll expenditures charged against HCFAC funds for fiscal years 2002 and 2003—these represented 20 percent and 17.4 percent, respectively, of total HCFAC expenses. We obtained reports from HHS on the amount of HCFAC and non-HCFAC expenditures by expenditure category (travel, rent, supplies, etc.) for each fiscal year; we then calculated the percentage charged to HCFAC and non-HCFAC funds for each category and compared them to the percentages used by the OIG to allocate expenses against HCFAC funding—80 percent for HCFAC in fiscal year 2002 and 81 percent in fiscal year 2003.

We obtained DOJ expenditure and allotment reports for all five components that charge activity to the HCFAC program and calculated the total amount of payroll and nonpayroll expenditures. We evaluated the methodology used by the United States Attorneys Office (USAO) to charge payroll expenses against the HCFAC fund.
These expenses accounted for 38 percent and 49 percent respectively of total DOJ expenses charged against fiscal years 2002 and 2003 HCFAC funds and 76 percent and 75 percent respectively of DOJ’s total fiscal years 2002 and 2003 HCFAC payroll expenses. USAO payroll expenses were equal to the annual salaries for 160 FTEs for fiscal year 2002 and 162 FTEs for fiscal year 2003. We reviewed the hours recorded in USAO’s workload system to ensure that the office devoted staff time to HCFAC-related activities equal to or greater than the annual hours of the 160 FTEs for both fiscal years. We compared the average annual salary for USAO staff positions (attorney, paralegal, administrative) charged to the HCFAC account to the average annual salary for the same staff positions USAO-wide to ensure that the salary amounts charged against HCFAC were reasonable. We interviewed an agency official knowledgeable about the data obtained from USAO’s workload system to identify any data problems and determined that the data were sufficiently reliable for the purposes of this report. We tested a statistical sample of the largest category of nonpayroll expenses, the Civil component advisory services, which accounted for 34 percent of total DOJ expenses charged against fiscal year 2002 HCFAC funds and 69 percent of the total nonpayroll expenses. We selected a dollar unit sample of 19 transactions totaling $13.1 million from a population totaling $16.5 million and compared the transaction data to supporting documentation such as invoices and advisory services contracts to make sure they agreed. We tested nonpayroll expenses charged against fiscal year 2003 HCFAC funds selected on a nonstatistical basis. We did not select a statistical sample of nonpayroll expenses because DOJ’s accounting system did not identify the complete population of expenditure transactions charged against HCFAC funds. 
We modified our methodology and (1) obtained copies of all billing packages submitted by DOJ to HHS for reimbursement, (2) selected a nonstatistical sample equal to 50 percent ($6.7 million of a total $13.4 million) of the total summary amounts listed on each billing package, and (3) traced and compared the data to supporting documentation, such as invoices and advisory services contracts. To assess the reliability of information reported by HHS and DOJ in the joint HCFAC reports for fiscal years 2002 and 2003 as cost savings, we obtained the schedule of HHS/OIG Cost Savings 1998-2011 and compared the data for fiscal years 2002 and 2003 to the HCFAC joint reports; obtained the fiscal years 2002 and 2003 HHS/OIG semiannual reports and traced and compared the amounts identified as cost savings to the amounts reported in the fiscal years 2002 and 2003 HCFAC joint reports; selected cost saving transactions on a nonstatistical basis, traced and compared the data to supporting documentation; and reviewed the dates of reports that the OIG cited as having findings and recommendations that resulted in the reported cost savings. In assessing the status of recommendations made in our prior report, we reviewed the recommendations included in our prior report and the comments provided by DOJ on our prior report to identify corrective actions that had been implemented or were to be implemented in the future; provided a list of the prior-year recommendations and their status per DOJ comments to DOJ management and requested supporting documentation for the corrective actions taken; and reviewed the supporting documentation to verify that the corrective actions were implemented and that the corrective actions completely addressed the recommendations. We conducted our work from August 2003 through January 2005 in accordance with U.S. generally accepted government auditing standards. We provided a draft of this report to HHS and DOJ for their comments. 
Written comments from the Acting Inspector General of HHS and the Assistant Attorney General for Administration at DOJ are included in appendixes III and IV, respectively. We also received technical comments from HHS and DOJ that were incorporated as appropriate. DOJ determined the amount of the overstatement and submitted an adjustment to the Bureau of Public Debt (BPD) in September 2002. However, in our review of the supporting documentation we identified a mathematical error in DOJ’s calculations. DOJ agreed with the revised amount and submitted the adjustment to BPD in September 2004. To help ensure that reports submitted to the Department of the Treasury are accurate, DOJ developed and implemented new procedures for reviewing collections reports for accuracy and approving them prior to submission to BPD. According to DOJ officials, the misposted non-HCFAC charge, along with the HCFAC charge that was posted to another account, has been corrected in the Financial Management Information System. GAO requested, but had not received at the end of fieldwork, documentation that supports the correction of the charges. According to DOJ officials, the department is continuing its ongoing financial management training efforts to reinforce the importance of accurate financial management processing and the minimization of data entry errors. The issue is also emphasized in monthly Financial Managers Council meetings and “clean audit” training. GAO requested, but had not received at the end of fieldwork, documentation to verify whether DOJ staff responsible for HCFAC accounting functions have completed the designated training. To facilitate providing Congress and other decision makers with relevant information on program performance and results, we recommend that the Attorney General and the Secretary of HHS assess the feasibility of tracking cost savings and expenditures attributable to HCFAC activities by the various federal programs affected.
This recommendation is closed because of recent changes to HIPAA legislation. HIPAA had required GAO to report on cost savings and expenditures attributable to HCFAC activities by the various federal programs affected but did not require HHS and DOJ to track cost savings and expenditures in this manner. In December 2003, Congress passed Public Law 108-173, which amended the HIPAA legislation. The amendment removed the language requiring GAO to identify any expenditures from the Trust Fund with respect to activities not involving the Medicare program. W. Ed Brown, H. Donald Campbell, Lisa Crye, Kelly Lehr, Kathryn Peterson, and Matthew Wood made key contributions to this report. Sharon Byrd provided statistical sampling technical assistance. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 1, 2005. Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004. Medicare Hospice Care: Modifications to Payment Methodology May Be Warranted. GAO-05-42. Washington, D.C.: October 15, 2004. Medicare Physician Payments: Concerns about Spending Target System Prompt Interest in Considering Reforms. GAO-05-85. Washington, D.C.: October 8, 2004. Comprehensive Outpatient Rehabilitation Facilities: High Medicare Payments in Florida Raise Program Integrity Concerns. GAO-04-709. Washington, D.C.: August 12, 2004. Medicaid Program Integrity: State and Federal Efforts to Prevent and Detect Improper Payments. GAO-04-707. Washington, D.C.: July 16, 2004. Comptroller General’s Forum on Health Care: Unsustainable Trends Necessitate Comprehensive and Fundamental Reforms to Control Spending and Improve Value. GAO-04-793SP. Washington, D.C.: May 2004. Criminal Debt: Actions Still Needed to Address Deficiencies in Justice’s Collection Processes. GAO-04-338. Washington, D.C.: March 5, 2004. Medicare Home Health: Payments to Most Freestanding Home Health Agencies More Than Covered Their Costs.
GAO-04-359. Washington, D.C.: February 27, 2004. Medicaid: Improved Federal Oversight of State Financing Schemes Is Needed. GAO-04-228. Washington, D.C.: February 13, 2004. Financial Management: Status of the Governmentwide Efforts to Address Improper Payment Problems. GAO-04-99. Washington, D.C.: October 17, 2003. Medicare Provider Enrollment: Opportunities to Enhance Program Integrity Efforts. GAO-03-185. Washington, D.C.: March 17, 2003. Medicare: Payment for Blood Clotting Factor Exceeds Providers’ Acquisition Cost. GAO-03-184. Washington, D.C.: January 10, 2003. High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 1, 2003. Medicaid Financial Management: Better Oversight of State Claims for Federal Reimbursement Needed. GAO-02-300. Washington, D.C.: February 28, 2002. Medicare: Health Care Fraud and Abuse Control Program for Fiscal Years 2000 and 2001. GAO-02-731. Washington, D.C.: June 3, 2002. Civil Fines and Penalties Debt: Review of CMS’ Management and Collection Processes. GAO-02-116. Washington, D.C.: December 31, 2001. Medicare: Reporting on the Health Care Fraud and Abuse Control Program for Fiscal Years 1998 and 1999. GAO/AIMD-00-51R. Washington, D.C.: December 13, 1999. | Because of the susceptibility of health care programs to fraud and abuse, Congress enacted the Health Care Fraud and Abuse Control (HCFAC) program as part of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Pub. L. No. 104-191. HIPAA requires that the Departments of Health and Human Services (HHS) and Justice (DOJ) issue a joint annual report to Congress on amounts deposited to the Federal Hospital Insurance Trust Fund and amounts appropriated from the trust fund for the HCFAC program. It also requires GAO to submit reports biennially. 
This, our final report required by law, provides the results of our review of amounts reported as (1) deposits to the trust fund, (2) appropriations from the trust fund and justification for expenditure of such amounts by HHS and DOJ, and (3) savings resulting from expenditures from the trust fund. We also report on the repeated late issuance of the annual HCFAC report as well as the status of our prior recommendations. Our review of the HCFAC program for fiscal years 2002 and 2003 determined that amounts reported as trust fund deposits--$766 million (fiscal year 2002) and $243 million (fiscal year 2003)--were appropriate. The sources of these deposits were primarily penalties and multiple damages and criminal fines collected from health care fraud cases. Amounts reported as appropriations from the trust fund for HCFAC activities--$209 million (fiscal year 2002) and $240 million (fiscal year 2003)--were consistent with HIPAA. The HHS/OIG received funds within the minimum and maximum amounts allowed by HIPAA to carry out Medicare and Medicaid antifraud activities. The expenditures charged against HCFAC funds by HHS and DOJ for fiscal years 2002 and 2003 were reasonable, but the HHS/OIG did not record time charges in its workload systems for all staff that worked on HCFAC activities. Also, DOJ did not record all fiscal year 2003 expenditures in its accounting system so they could be readily identified as HCFAC related. Failure to properly record staff hours and expenditure data could hinder DOJ and HHS in monitoring the uses of HCFAC funds. Some reported cost savings--$19.9 billion (fiscal year 2002) and $20.8 billion (fiscal year 2003)--can be considered savings to the trust fund, resulting from trust fund expenditures for the HCFAC program, but most cannot. For example, $1.5 billion of the cost savings for fiscal year 2002 and $3.9 billion for fiscal year 2003 are the result of HHS/OIG recommendations and other initiatives since the HCFAC program was created.
However, the remaining cost savings continued to be largely the result of actions that predate the HCFAC program and cannot be associated with expenditures from the trust fund for HCFAC. HIPAA requires that HHS and DOJ issue to Congress a joint HCFAC report on January 1 of each year. However, DOJ and HHS have issued the last three reports late and the length of the delay has increased each year. HHS and DOJ cited onerous internal review processes as the reason for late issuance. |
The Department of Labor oversees a number of employment and training programs administered by state and local workforce boards and one-stop career centers established under the Workforce Investment Act of 1998 (WIA). The green jobs training programs Labor has overseen were created under the Green Jobs Act of 2007, which amended WIA. The Green Jobs Act of 2007 was passed as part of the Energy Independence and Security Act of 2007 (EISA), which was intended to move the United States toward greater energy independence and security and to increase the production of clean renewable fuels, among other objectives. This act directed the Secretary of Labor to work in consultation with the Secretary of Energy to create a new worker training program to prepare workers for careers in the energy efficiency and renewable energy industries. However, funds for these programs were not appropriated until the passage of the Recovery Act in 2009, according to Labor officials. The Recovery Act appropriated $500 million in funding for competitive green jobs grant programs at Labor. The current administration presented the green jobs training grant program as part of a broad national strategy both to create new jobs and to reform how Americans create and consume energy. Specifically, the administration articulated a vision for federal investments in renewable energy to involve coordination across a number of federal agencies to create new, well-paying jobs for Americans and to make such jobs available to all workers. The Employment and Training Administration (ETA) was responsible for overseeing the implementation of the green jobs training programs that were authorized in the Green Jobs Act of 2007 and funded through the Recovery Act. In June 2009, ETA announced a series of five Recovery Act grant competitions related to green jobs, three of which were primarily focused on training. All of these programs are scheduled to end before the end of July 2013.
Table 1 describes these five programs and identifies the types of organizations eligible to receive each grant. Between September 2010 and October 2012, Labor’s OIG issued a series of three reports related to the department’s Recovery Act green jobs programs, including training programs. The most recent report raised questions about the low job placement and retention of trained program participants, the short amount of time for which many participants received training, and limitations of available employment and retention data, among other things. Labor has used a broad framework to define green jobs, incorporating various elements that have emerged over time as the understanding of what constitutes a green job has evolved. As part of the Green Jobs Act of 2007, WIA was amended to identify seven energy efficiency and renewable energy industries targeted for green jobs training funds. In addition, beginning in 2009, Labor issued information on 12 emerging green sectors as part of a broader effort to describe how the green economy was redefining traditional jobs and the skills required to carry out those jobs. Most recently, in 2010, the Bureau of Labor Statistics (BLS) released a two-part definition of green jobs that was used to count the number of jobs that could be considered green either because what the work produced or how the work was performed benefitted the environment. According to funding information provided by Labor and our survey of Labor’s directly-funded green jobs efforts, most funding for green jobs efforts at Labor has been directed toward programs designed to train individuals for green jobs, with less funding supporting efforts with other objectives, such as data collection or information materials.
Indeed, approximately $501 million (84 percent) of the $595 million identified by offices at Labor as having been appropriated or allocated specifically for green jobs activities since 2009 went toward efforts with training and support services as their primary objective. In total, approximately $73 million, or 12 percent of the total amount of funding for green jobs activities, was reported appropriated or allocated for data collection and reporting efforts. Most of the funding for green jobs efforts was provided through the Recovery Act, which funded both training and non-training-focused projects at Labor in part to increase energy efficiency and the use of renewable energy sources nationwide. In addition to Recovery Act funding for green jobs efforts, funding information provided by Labor and through our survey of directly-funded green jobs efforts indicates that Labor has allocated at least an additional $89 million since 2009 to support seven other green jobs efforts that have been implemented by five of Labor’s offices (see fig. 2). For a brief description of each of Labor’s green jobs efforts for which funds were appropriated or allocated, see appendix II. The Recovery Act directed federal agencies to spend the funds it made available quickly and prudently, and Labor implemented a number of relatively brief but high-investment green jobs efforts simultaneously. As a result, in some cases, Recovery Act training programs were initiated prior to a full assessment of the demand for green jobs. Specifically, Recovery Act-funded green jobs training grantees designed and began to implement their green jobs training programs at the same time states were developing green job definitions and beginning to collect workforce and labor market information on the prevalence and likely growth of green jobs through the State Labor Market Information Improvement grants, which were also funded with Recovery Act funds.
Furthermore, BLS launched its Green Jobs initiative—which included various surveys designed to help define and measure the prevalence of green jobs—after many green jobs training programs had begun. ETA officials noted that BLS’s development of the definition of green jobs was a deliberative and extensive process that required consulting stakeholders and the public. They also said that BLS’s timeline for defining green jobs differed from ETA’s timeline for awarding and executing grants, which was driven by Recovery Act mandates. Labor has made subsequent investments that build upon lessons learned through the Recovery Act grant programs. For example, ETA initiated the $38 million GJIF program in 2011 to support job training opportunities for workers in green industry sectors and occupations. In developing the GJIF grant program, ETA considered lessons learned through the Recovery Act grant programs. For example, various stakeholders including employers, the public workforce system, federal agencies, and foundations identified Registered Apprenticeship—training that combines job-related technical instruction with structured on-the-job learning experiences for skilled trades and allows participants to earn wages—as a valuable workforce strategy. ETA acknowledged that upgrading basic skills, including literacy and math, is critical to ensure job placement and suggested that training participants exclusively in green skills is not always sufficient. Consequently, ETA required GJIF grantees to implement green jobs training programs that would either forge linkages between Registered Apprenticeship and pre-apprenticeship programs or deliver integrated basic skills and occupational training through community-based organizations. Figure 3 shows a timeline illustrating the rollout of selected green jobs grants at Labor and the time periods during which these grants were active, as well as the timing of BLS’s efforts to collect data on green jobs.
With the exception of the GJIF grants, all of these efforts will have been completed by July 2013. Grantees from all but six states received at least one of the 103 green jobs training grants that were awarded by ETA, but grantees were somewhat concentrated within certain regions of the country. Specifically, most states with three or more grantees were located in the Northeast, West, or Midwest regions of the country. Four states and the District of Columbia received five or more green jobs training grants: California, Michigan, New York, and Pennsylvania. Figure 4 shows the number of green jobs training grants awarded by state. In terms of organizational type, most green jobs training grants were awarded to nonprofit organizations and state workforce agencies (see fig. 5). Specifically, 44 percent of green jobs training grants were awarded to nonprofit organizations and 34 percent were awarded to state governmental agencies or departments. In addition, 10 percent of grantees were organized labor or labor management organizations. ETA officials from all six of its regional offices said that in terms of organizational type, green jobs training grantees did not differ substantially from the types of grantees ETA typically oversees. ETA officials said building partnerships had been an important focus of the green jobs grants, and indeed ETA’s grant solicitations required, or in some cases encouraged, grant recipients, regardless of organizational type, to develop partnerships with various stakeholders, such as representatives of the workforce system, industry groups, employers, unions, the education and training community, nonprofits, or community-based organizations. Staff from ETA’s regional offices said that some grantees developed new and successful partnerships as a result of the grants, including partnerships with labor unions.
More than half of ETA’s green jobs training grantees implemented their grants through sub-grantees, or a network of local affiliates, rather than providing training services directly to participants. Grantees that contract with sub-grantees or local affiliates to provide services are responsible for monitoring and overseeing how all grant funds are used, effectively delegating day-to-day oversight responsibility from Labor to the primary grantee. In addition to Labor’s direct investments in green jobs, several offices at Labor have infused green elements into their ongoing activities even though funds were not specifically appropriated or allocated for these green jobs efforts. In total, of the 14 Labor offices we surveyed, 6 identified and implemented 48 such efforts (for a list of the efforts, see appendix III). Some of these offices added a “layer of green” to existing training programs or other activities. For example, according to material provided by Labor, most YouthBuild programs have incorporated green building into their construction training. Other efforts focused on providing information materials, forming partnerships, or conducting publicity and outreach, among other things. For example, the Women’s Bureau created a guide on sustainable careers for women and Labor’s Occupational Safety and Health Administration contributed to an Environmental Protection Agency publication on best practices for improving indoor air quality during home energy upgrades. Further, in 2010 the Center for Faith-Based and Neighborhood Partnerships hosted a roundtable discussion about green jobs between the Secretary of Labor and leaders from national foundations and discussed how to create employment opportunities for low-income populations in the green jobs industry. 
Although funding for green jobs efforts at Labor has shifted and green jobs efforts funded through the Recovery Act are winding down, a few of Labor’s ongoing programs or efforts continue to emphasize green jobs or skills, and Labor continues to incorporate green elements into existing programs by coordinating internally on an as-needed basis. After the passage of the Recovery Act, a number of Labor’s offices worked together to implement the requirements of the act, and Labor officials said that they collaborated on green jobs efforts on a fairly regular basis and that more formal green jobs meetings across the department were common. For those green jobs efforts where green elements have been infused into ongoing activities even though funds were not specifically appropriated or allocated for green jobs efforts, offices at Labor indicated through our survey that they continue to coordinate on such efforts within Labor and across other federal agencies, albeit in a less formal manner. For example, according to our survey of these indirectly-funded green jobs efforts, offices said that they coordinated with others at Labor for 37 of the 46 efforts listed in appendix III, and they reported coordinating with other federal agencies for 30 of the 46 efforts. In addition, it is likely that coordination on green jobs efforts will continue to occur on an ad-hoc basis, especially as funding and priorities within the department shift. For example, Labor recently reported that due to federal budget cuts, BLS has discontinued its reporting on employment in green jobs. According to a Labor official, after the Recovery Act was passed, Labor collaborated with other departments, such as the Department of Energy (Energy) and the Department of Housing and Urban Development (HUD) to foster job growth for a new green economy.
For example, Labor’s Occupational Safety and Health Administration worked with Energy on retrofitting and safety activities, and Labor also partnered with HUD to provide green jobs training and possible employment opportunities to public housing residents. In addition, Labor entered into various Memorandums of Understanding (MOU) after the Recovery Act was passed to collaborate on green jobs-related issues with other federal agencies. For example, the Secretaries of Energy, Labor, and the Department of Education announced a collaboration to connect jobs to training programs and career pathways and to make cross-agency communication a priority. While these examples highlight coordination on green jobs efforts after the passage of the Recovery Act, little is known about the effectiveness of these efforts. To identify the potential demand for green jobs in their communities, all (11 of 11) grantees we interviewed had broadly interpreted Labor’s green jobs definitional framework to include as green any job that could be linked, directly or indirectly, to a beneficial environmental outcome. While Labor created its framework to provide local flexibility, the wide variation in the types of green jobs obtained by program participants illustrates just how broadly Labor’s definition can be interpreted and raises questions about what constitutes a green job—especially in cases where the job essentially takes the form of a more traditional job (see table 2). In general, grantees we interviewed considered jobs green if they could link the job to (1) a green industry, (2) the production or installation of goods that benefit the environment, (3) the performance of services that potentially lead to environmental benefits, or (4) environmentally beneficial work processes. For example, in some cases, grantees we interviewed considered jobs green because they were linked to the renewable energy industry, such as solar panel installation or sales. 
Grantees considered other jobs green because the goods being produced benefited the environment, such as the pouring of concrete for a wind turbine or the installation of energy efficient appliances. In some cases the green job was service-based, such as an energy auditor or energy surveyor. Finally, other grantees considered jobs green because of the environmentally beneficial processes being used, such as applying paint in an efficient manner or using advanced manufacturing techniques that reduce waste. Even for jobs where parts of the work have a link to environmentally beneficial outcomes, workers may only use green skills or practices for a portion of the time they work. For instance, technicians trained to install and repair high-efficiency heating, ventilation, and air conditioning (HVAC) systems may in the course of their work also install less energy efficient equipment. All grantees we interviewed said they had worked closely with local employers to align their training program with the green skills needs of local employers. All agreed developing effective relationships with employers was crucial to aligning any training program with available jobs. Labor’s three Recovery Act green jobs training programs, as well as the GJIF program, all required applicants to demonstrate how they would partner with local employers to develop and implement their training programs. Most (9) grantees told us they had assembled advisory boards consisting of representatives from local businesses and industry associations to help inform them about available green jobs and the skills that would most likely be in demand by local employers. Further, all grantees said they engaged in ongoing communication with employers to stay abreast of changes in the local economy and employer needs, and most (10) made changes to their program curricula or tailored their training in response to employer input. 
Labor’s data show that green jobs training grantees primarily offered training in the construction and manufacturing industries. Specifically, nearly half of all participants of the Recovery Act-funded green jobs training programs received training focused on construction, and approximately 15 percent received training in manufacturing. Over 5 percent of participants in those programs received training in other industries that included utilities, transportation, and warehousing. Grantees in Labor’s newer GJIF program focused even more heavily on construction—approximately 94 percent of participants were trained in construction and around 3 percent in manufacturing. Most grantees (9) we spoke to had infused green elements into existing training curricula for more traditional skills. However, the extent to which the training focused on green versus traditional skills varied across programs and often depended upon the skill level of targeted participants. Most (7) of the programs we visited generally targeted relatively low-skilled individuals with limited work experience and were designed to teach participants the foundational skills they would need to pursue a career in a skilled trade in which green skills and materials can be used. For example, those programs typically used their green job grant funds to incorporate green skills into existing construction, carpentry, heating/air-conditioning, plumbing, or electricity programs. The programs generally involved a mixture of classroom and hands-on training and taught traditional skills, such as how to read blueprints, use tools, install and service appliances, and frame buildings. In teaching these skills, however, instructors also showed students the way the processes or products used in performing these tasks could lead to environmentally beneficial outcomes.
For example, participants were taught various ways to weatherize a building to conserve energy, to efficiently operate heavy machines to save fuel, or to install solar panels as part of a green construction project. In contrast, two programs we visited focused more exclusively on short-term green skills training to supplement the existing traditional skills of relatively higher-skilled unemployed workers. For example, one green awareness program taught participants to identify ways to perform their work, such as manufacturing, in a more environmentally beneficial manner, often by identifying and reducing waste. Another program added a component to their comprehensive electrical training program to train unemployed registered electricians how to install and maintain advanced energy efficient lighting systems. The grantees associated with both of these programs, as well as other grantees, noted that employer demand for workers with green skills may sometimes be most effectively met through short-term training of higher-skilled unemployed workers or incumbent workers. The overall impact of Labor’s green jobs training programs remains largely uncertain partly because some individuals are still participating in training and are not expected to have outcomes yet, and because final outcome data are submitted to Labor approximately 3 months after the grant period ends. The most recent performance outcome data for the three Recovery Act-funded and GJIF green jobs grants are as of December 31, 2012, at which time approximately 60 percent of the Recovery Act-funded programs had ended and grantees had submitted final performance outcome data. According to Labor officials, complete outcome data for the remaining Recovery Act-funded green jobs grantees will likely not be available until October 2013 because many grants were extended to June 2013.
They also said that final performance outcome data for the GJIF grant—which is scheduled to end in June 2014—will likely not be available until October 2014. Our analysis of data reported by Recovery Act-funded green jobs grantees with final outcome data shows that these grantees collectively reported enrolling and training more participants than they had proposed when setting their outcome targets. However, their placement of program participants into employment lagged in comparison—these grantees reported placing 55 percent of the projected number of participants into jobs. When final data become available for the remaining 40 percent of grantees, the final figure comparing reported employment outcomes to proposed targets may change. Moreover, it remains to be seen how GJIF grantees’ employment outcomes will compare to their projected targets, and whether the employment outcomes of this program will benefit either from economic changes or lessons learned since the Recovery Act programs began. Developing a complete and accurate assessment of Labor’s green jobs training programs is further challenged by the potential unreliability of certain outcome data—particularly for placement into training-related employment. In its October 2012 report, Labor’s OIG questioned the reliability of the Recovery Act green jobs training programs’ employment and retention outcome data because a significant proportion of sampled data for employment and retention outcomes were not adequately supported by grantee documentation. We reviewed the OIG’s data review process and found it appropriate for assessing reliability and therefore also consider the data unreliable for evaluating program performance. While outcome data for the ongoing GJIF program are still being reported and the OIG did not assess the reliability of this program’s data, Labor’s method for collecting these data remains largely unchanged from that used for the Recovery Act-funded green jobs training programs.
Consequently, these outcome data—particularly for placement into training-related employment—could also be questionable. Labor officials noted that they have been collecting additional information on employment outcomes and wages using state unemployment insurance (UI) wage record data on program participants, and will continue to do so into early 2015 for the GJIF program. Results of their most recent analyses of UI data showed that, of the participants who had exited at least one of the three Recovery Act-funded green jobs training programs between April 1, 2011, and March 31, 2012, 52 percent had obtained employment. Similar analyses provided by Labor showed that, of participants who had exited between October 1, 2010, and September 30, 2011, 83 percent of those who had become employed had retained their employment for at least 6 months and had average earnings of around $25,000 for the 6-month period. Results of Labor’s analysis of UI wage data for participants of the GJIF program shows that 40 percent of participants who had exited between April 1, 2011, and March 31, 2012, had entered employment. However, the UI data do not capture whether jobs obtained were training-related for either the Recovery Act-funded or GJIF programs, so, absent additional relevant information, the extent to which grantees placed participants into training-related employment may never be reliably known. According to Labor officials, once complete, these additional UI wage data may provide more definitive information on the extent to which program participants entered employment and will be used by the department to develop a broader picture of the grant programs’ level of success in achieving employment outcomes. Specifically, Labor officials said that while there is not a formal process to study the UI data, program staff routinely examine these data to identify lessons learned and best practices that could be applied to future grant programs.
Labor officials said the data could be used to compare the green jobs training programs against other training programs across the agency, such as those under WIA, if resources permit. While Labor officials consider the UI data to be more definitive than the grantee-reported job placement data to measure overall program outcomes once the grant period ends, they stressed the importance of having real-time data to monitor grantee performance during implementation. While the UI wage record data provide an alternative source of information on job placement outcomes, due to a 9-month lag time, these data are of limited usefulness regarding program management. Specifically, because of the time lag, grantees could not use these data to monitor their progress toward meeting program goals in real-time. Further, Labor could not use the data to hold grantees accountable for meeting grant goals, as all grant periods will have ended before the data are complete. Consequently, ensuring the reliability of grantee reported outcome data remains vitally important, particularly for grant programs whose primary objective is to prepare workers for attaining employment in a targeted emerging industry. The grantees we interviewed were generally positive about Labor’s green jobs training programs, with most speaking optimistically about the potential value of the green skills obtained by the program participants. Most grantees we met with said that they believe there to be a continued national movement towards lowering energy usage—whether due to economic, policy, or cultural changes—and all projected that the demand for workers with green skills credentials will continue to rise. All (11 of 11) were of the opinion that possessing green skills in addition to more traditional skills provides workers with an advantage as they seek a new job or move along a career pathway, and most (10) cited the need for training programs that provide nationally or industry-recognized green credentials. 
Two noted that having multiple credentials was particularly valuable. Lastly, some (5) grantees mentioned that the benefits of the green jobs training, like most job training, may not become apparent immediately, but may often be realized later during the worker’s career, especially as demand for green skills grows. However, all grantees noted there have been challenges associated with developing and implementing Labor’s green jobs training programs. For example, most (8) of the grantees we interviewed said that the lack of credible green jobs labor market information had limited their ability to identify or predict the level of available green jobs or the demand for green skills in their local area. Although state workforce agencies received funding to conduct green jobs labor market information studies under the State Labor Market Information Improvement grants, most resulting data were issued after many Recovery Act training programs had already begun. In addition, the BLS surveys were released from March 2012 through March 2013, after GJIF grantees had submitted their applications outlining their training programs to Labor. Having access to the final results of the state labor market information studies could have provided Recovery Act grantees with additional insights into their state’s economic activity in the energy efficiency and renewable energy industries, as well as jobs within those industries when they were developing their training programs. Similarly, BLS survey results could have provided GJIF grantees with a national snapshot of establishments that produce green goods and services and the jobs of workers involved in green activities, among other information, and may have provided grantees with additional context for the development and implementation of their green jobs training programs. 
Labor officials said the rapidly evolving nature of the green industries has resulted in multiple changes to employer green job demand information over the course of the grant periods, further complicating their attempts to provide labor market information for this sector. In addition, most (9) grantees we met with said Labor’s green jobs training grants did not afford them enough time to both develop local partnerships and recruit, train, and place program participants. All grantees said developing partnerships can be especially time-consuming if such partnerships had not existed prior to the grant award. Most (9) noted that given how important local partnerships are to developing successful training programs, training programs that require such partnerships should have longer grant periods than those afforded by the Recovery Act and GJIF programs. Furthermore, most (8) grantees mentioned how developing and implementing a relatively new type of training, like green skills, can require additional time in order to fill knowledge gaps among employers. This may be especially true in light of changing state and local energy policies. For example, according to the Department of Energy, as of March 2013, 29 states have established standards aimed at generating a certain percentage of the state’s energy using renewable sources by a specified year. Furthermore, many municipalities throughout the country are requiring that local construction projects adhere to environmentally friendly requirements. However, most (9) grantees we spoke with said some employers may not recognize how changing policies will affect their businesses. In fact, they believe this lack of understanding may be limiting demand for workers trained in green skills. 
To address this problem, one of the grantees we interviewed had developed a 1-day training program for local business managers to educate them about how they could benefit from the green skills that participants were obtaining through the organization’s training program. Most (6) grantees said at times during the implementation of their green jobs training program, they were, in effect, attempting to simultaneously drive both supply and demand for workers with green skills, which took considerable time and effort. In addition, although all grantees we interviewed had engaged with employers who had committed their support for the training curriculum, they also said this did not always translate into green jobs for program participants. Above all, most (9) pointed to the slow economic recovery as the reason their predictions—and those made by employers—regarding green job growth were not fully realized. For example, one grantee explained how the local housing market had not recovered as quickly as anticipated, and as a result, demand for workers with green skills—such as green construction techniques, weatherization practices, and the installation of energy efficient appliances—has been sluggish. In addition, most (10) grantees explained that because green skills are often intertwined with traditional skills training and the skilled labor industries, their programs’ participants were negatively affected by the overall poor economy. For example, most grantees (9) noted how their program participants, despite their additional layer of green skills training, found themselves competing with a high number of unemployed workers who were also seeking to regain employment in more traditional jobs such as carpentry or electrical work. Most (9) grantees also noted that renewable energy sectors, such as solar power, have not grown in their regions as was predicted several years ago. 
Lastly, most (7) grantees we interviewed said it is difficult to accurately measure the value of green skills training in terms of green job placement. In general, they said this is partly because, unlike jobs in other growing industries, like health care, there are few distinctly green jobs. One grantee we met with said she believes the term “green job” is misleading, and complicates program implementation. This grant official said that funding should be directed toward supplementing traditional skills training with green skills that can be used on any job rather than on preparing workers for specific jobs identified as green. Based on our interviews with grantees of Labor’s green jobs training programs, and the descriptions of their experiences implementing those programs, we identified several lessons learned that may warrant consideration when implementing similar targeted grant programs for other emerging industries (see table 3). Labor has provided all green jobs grantees with technical assistance to help them implement their grant programs and comply with relevant federal laws and regulations. For example, Labor officials have hosted technical assistance webinars on topics such as financial management and how to engage employers. Labor also maintains a website for each green jobs training grant program and a green jobs community of practice on its online platform, Workforce3One. In addition, Labor has published bimonthly digests for Recovery Act grantees since January 2011 that highlight new technical assistance materials and other grant-related information. Finally, Labor has compiled and periodically updated a technical assistance guide that briefly describes and provides hyperlinks for its technical assistance resources, including webinar recordings and promising practices. Several grantees we interviewed (4 of 11) reported participating in webinars and referring to technical assistance materials posted to Workforce3One. 
In addition, ETA has funded three separate studies to assess the implementation of selected green jobs programs funded by the Recovery Act. Specifically, Labor funded a 2-year implementation evaluation that examined the implementation of the three Recovery Act-funded green jobs training programs and issued both interim and final reports. Labor also funded an evaluation of the State Labor Market Information Improvement grants and issued a final report and additional related products in 2013. Finally, Labor has funded an ongoing impact evaluation scheduled to be completed in 2016. This study was designed to test the extent to which selected grantees of one of the four green jobs training programs overseen by ETA—Pathways Out of Poverty—improved worker outcomes by imparting skills and training valued in the labor market. To support its technical assistance efforts to grantees, Labor entered into a grant agreement with the National Governors Association, which together with two partner organizations formed a Technical Assistance Partnership (TA Partnership). In conjunction with Labor officials, the TA Partnership has facilitated monthly conference calls for each grant program so grantees can learn from their peers and receive program-specific technical assistance. The TA Partnership has also compiled and updated reports that highlight promising practices grantees have implemented. Finally, the TA Partnership and Labor officials have held annual grantee conferences, which have covered various topics including strategies to retain and place program participants and the importance of nationally recognized credentials. Several (4 of 11) grantees we interviewed mentioned participating in the monthly conference calls and annual conferences and said that generally they had been helpful. 
While Labor provided guidance and technical assistance on how to document eligibility for the green jobs training programs, it provided little guidance on what documentation grantees were expected to maintain regarding program outcomes, particularly with respect to job placement. Specifically, while Labor provided guidance on how to report required performance data into its Recovery Act Database, this guidance does not specify what documentation, if any, grantees were to maintain for reported job placements, including those considered training-related. Our Standards for Internal Control in the Federal Government provides that internal control and all transactions and other significant events should be clearly documented and readily available for examination. However, in its last green jobs report, the OIG found that nearly a quarter of reported outcomes were not supported by adequate documentation. One regional official noted that sub-grantees may not have known what documentation was required and staff in another office said that in some cases primary grantees may not have done enough to ensure that the sub-grantees they were responsible for overseeing understood documentation requirements. While Labor officials have not issued additional guidance to GJIF grantees regarding how to document job placement and retention outcomes, they said they have taken other steps that address the OIG’s recommendation to improve the quality of grantee reported performance data and utilize lessons learned from Recovery Act-funded green jobs training programs for other discretionary grant programs. First, ETA officials noted that they have formed an internal workgroup focused on improving the technical assistance provided to ETA’s discretionary grantees about how to report program outcomes. 
This group hopes to issue recommendations in September 2013, and ETA officials believe these recommendations will help improve grant application instructions, and help ETA refine its reporting systems, among other things. Second, ETA officials told us that they had initiated a grant re-engineering project in August 2012 to identify common grant management challenges and develop strategies for addressing such challenges. For instance, the group has discussed ways to improve ETA’s grant solicitation process, such as by including clearer expectations and benchmarks for performance in its solicitations for grant applications and by taking steps to ensure greater comparability of goals across grantees. Labor hopes to begin implementing the group’s recommendations for new discretionary grant programs in August 2013. ETA monitors most grants, including its green jobs training grants, through a risk-based strategy that prioritizes monitoring activities based upon grantees’ assessed risk levels and availability of resources, among other factors, and is described in its Core Monitoring Guide. Specifically, according to officials from all six of ETA’s regional offices, ETA’s federal project officers monitor grantees as part of their ongoing duties, which include calling grantees to offer technical assistance. In addition, ETA’s federal project officers perform quarterly desk reviews, during which they review financial reports and quarterly performance reports that grantees are required to submit. For the green jobs training grants, these reports include information such as the total amount of grant funds spent, the number of participants who began or completed training, a timeline for grant activities and deliverables, grantee accomplishments, and technical assistance needs. During these quarterly reviews, federal project officers compare grantees’ reported performance outcomes and spending rates to those goals set by grantees in their grant proposals. 
Based upon their review of each grantee’s reported information, federal project officers enter information about each grantee into Labor’s Grant Electronic Management System (GEMS), which assesses risk and generates a risk level for each grantee. The GEMS assessment of each grantee’s risk level is then used by Labor to develop its risk-based monitoring strategy, which involves prioritizing site visits based on grantees’ assessed risk-levels and availability of resources, among other factors. According to regional officials from all six offices, nearly all green jobs training grantees received at least one on-site monitoring visit, typically about halfway through the period of performance. During these site visits, federal project officers assessed grantees’ management and performance and documented any noncompliance findings and requirements for corrective action, as necessary. For example, Labor’s site visit guide includes questions for federal project officers to consider about financial and performance data reporting systems and performance outcomes. As a result of its on-site monitoring activities, Labor officials identified and required certain grantees to correct a variety of issues concerning the management of their grants. Many monitoring reports for the Recovery Act-funded green jobs training grants indicated that grantees were not on track to meet their performance outcomes. In such cases Labor required grantees to submit written corrective action plans that described what strategies they would undertake to increase project outcomes and how they would ensure that remaining funds would be used in a timely way to accomplish project objectives. Labor officials said that grantees have made significant progress toward attaining their goals for beginning and completing training as a result of both the grantees’ own efforts and ETA’s technical assistance and monitoring efforts. 
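The risk-based strategy described above—desk-review results feed a per-grantee risk level, which then drives which grantees receive site visits given limited resources—can be sketched as follows. The scoring factors, weights, and thresholds here are invented for illustration and are not the actual logic of Labor's GEMS system.

```python
# Illustrative sketch of risk-based monitoring prioritization: quarterly
# desk-review metrics produce a risk score, and site visits go to the
# highest-risk grantees first. All weights/thresholds are hypothetical.
from typing import Dict, List


def risk_score(grantee: Dict) -> int:
    """Higher score = higher assessed risk. Inputs come from desk reviews."""
    score = 0
    # Far behind on outcomes relative to the grantee's own proposed targets.
    if grantee["outcomes_attained"] / grantee["outcomes_target"] < 0.5:
        score += 2
    # Spending rate badly out of line with time elapsed in the grant period.
    if abs(grantee["pct_spent"] - grantee["pct_time_elapsed"]) > 0.25:
        score += 1
    # Each open noncompliance finding from prior reviews adds risk.
    score += grantee["open_findings"]
    return score


def prioritize_site_visits(grantees: List[Dict], capacity: int) -> List[str]:
    """Select the highest-risk grantees, up to available monitoring capacity."""
    ranked = sorted(grantees, key=risk_score, reverse=True)
    return [g["name"] for g in ranked[:capacity]]


grantees = [
    {"name": "A", "outcomes_attained": 10, "outcomes_target": 100,
     "pct_spent": 0.2, "pct_time_elapsed": 0.6, "open_findings": 1},
    {"name": "B", "outcomes_attained": 80, "outcomes_target": 100,
     "pct_spent": 0.7, "pct_time_elapsed": 0.6, "open_findings": 0},
]
print(prioritize_site_visits(grantees, capacity=1))  # ['A']
```

The design point the sketch captures is that scoring is only a triage mechanism: it determines who gets scarce on-site attention, not whether a grantee is compliant, which is established during the visit itself.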
These officials also stressed that while ETA holds grantees accountable to adhering to their grant statements of work, grantees are not contractually obligated to meet performance outcomes. Unlike contracts or WIA-funded programs, which can impose sanctions for failing to meet projected targets, the accountability mechanisms for these green jobs grant programs were more limited. For example, ETA officials said that if a grantee does not achieve its placement outcomes, this can affect whether the grantee receives a period of performance extension for the current grant or, potentially, a future grant from ETA. Officials said that they had not withdrawn funding from any grantees for failing to meet performance targets for any of the four green jobs training programs. However, in some cases ETA officials decided not to grant extension requests for grantees reporting poor performance. As a result, some grant funds remained unexpended and will be returned to the Treasury, as required. In addition to insufficient progress toward targeted outcomes, the monitoring reports of the Recovery Act-funded green jobs training grantees identified other noncompliance findings, including insufficient monitoring of sub-grantees. For example, a number of monitoring reports indicated that primary grantees had not sufficiently monitored their sub-grantees. These findings are notable given that such a large percentage of grantees implemented their programs through a network of sub-grantees. Both GAO and the Department of Justice’s OIG have stressed the importance of sufficient sub-recipient monitoring to the grant oversight process. Other noncompliance findings included grantees lacking adequate documentation to show program participants were eligible for services or grantees having failed to follow acceptable procurement processes. 
According to officials from all six regional offices, federal project officers did not identify any instances of fraud, waste, or abuse during their on-site monitoring visits. The Recovery Act funded multiple, substantial investments in training programs targeted to a specific emerging industry—energy efficiency and renewable energy. Most of these programs have already ended or are currently winding down, although a few of Labor’s continuing programs, such as YouthBuild, have incorporated many green elements since 2009, and the Green Jobs Innovation Fund program is scheduled to remain active through June 2014. Despite the sizeable investment in green jobs, the green jobs training programs have faced a number of implementation challenges and final outcomes remain uncertain, particularly regarding placement into green jobs. A number of these challenges have stemmed from the need to implement the grants quickly and simultaneously before green jobs had been defined and more had been learned about the demand for green skills. Others, such as problems with the reliability of outcome data, can be traced to management issues that have compromised Labor’s ability to measure the program’s success, particularly regarding placing participants into training-related employment. Specifically, because Labor did not establish clear and timely guidelines for how to document green job placement outcomes, Labor is not able to assess the extent to which the targeted green jobs training programs placed participants in employment related to the training they received. The challenges for an emerging industry such as energy efficiency and renewable energy are substantial. Uncertainty and debate still surround the question of what constitutes a green job. Under Labor’s current framework, almost any job can be considered green if a link between the employee’s tasks and environmental benefits can be made. 
Indeed, most grantee officials we interviewed said that most green jobs they have trained participants for are primarily traditional skilled-trades jobs, such as carpentry or electrical work. Many have been termed “green” because the worker has been trained to be mindful of energy use and reduce waste, or has been placed where the worker’s tasks resulted in a product or service that benefited the environment, such as a light-rail construction site. Such an approach provides certain benefits within the context of an emerging industry, in that many of the skills workers obtain can be transferred to traditional jobs in cases where local demand for green jobs falls below expectations. It also may serve to raise general worker awareness about energy efficiency and waste reduction, to the benefit of the employer or nation. Nonetheless, this emphasis on training that often takes the form of traditional skills training with an added layer of green may not fully align with the intent of the targeted training funds. By funding several evaluations of green jobs training and labor market information programs, Labor has positioned itself to build upon lessons learned through implementing these individual programs. A fundamental consideration is whether it is prudent to implement job training programs for an emerging industry before more is known about the demand for skills and workers. Another consideration is whether it would be more or less effective for federally-funded training programs to focus on providing valuable green skills and credentials applicable on a wide variety of jobs, rather than to devote considerable attention to what is defined as a green job. Even though Labor is scaling back its own green jobs efforts, energy efficiency and renewable energy will likely remain a national priority. 
Labor has established a green jobs community of practice on its online platform, Workforce3One, which, if maintained and used, can continue to facilitate information-sharing among grantees and workforce professionals regarding what green skills and credentials employers in their communities value most. In addition, the substantial investment in energy efficiency and renewable energy made through these grant programs also provides Labor an opportunity to identify broader lessons learned about the challenges and benefits associated with offering targeted training in an emerging industry, which could help inform the development of training for other emerging industries in the future. Without the benefit of such lessons learned and a continued focus on what is needed to address emerging industries, state and local workforce entities may grapple with similar challenges in the future. To enhance Labor’s ability to implement training programs in emerging industries, GAO recommends that the Secretary of Labor identify lessons learned from implementing the green jobs training programs. This could include: Identifying challenges and promising strategies associated with training workers for emerging industries—through both targeted grant programs and existing programs—and considering ways to improve such efforts in the future. For example, taking a more measured or multi-phased approach could allow the time necessary to better determine demand for an emerging industry and establish the partnerships needed to properly align training with available jobs. Taking steps to ensure training programs adequately document outcome variables, particularly for targeted programs where tracking training relatedness is of particular interest. We provided a draft of this report to the Department of Labor. Labor provided a written response (see app. IV). Labor agreed with our recommendation. 
Specifically, Labor’s response noted that the department has already begun assessing lessons learned from the implementation of its green jobs grants. Labor also cited efforts to compile lessons learned to inform the design and implementation of future grant initiatives, including new approaches to capture program outcomes. Labor agreed that documenting outcomes is important and said it will work to provide technical assistance to ensure grantees adequately document outcomes. Finally, Labor noted the department will continue to collect information on employment outcomes and wages and will analyze these data once they are complete to provide a more definitive and final picture of the extent to which former green jobs training participants entered and retained employment. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, the Committee on Homeland Security and Governmental Affairs, the Committee on Oversight and Government Reform, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
Our objectives were to determine: (1) what is known about the objectives and coordination of the Department of Labor’s (Labor) green jobs efforts, (2) what type of green jobs training grantees provided and how selected grantees aligned their training to meet employers’ green jobs needs, (3) what is known about program outcomes and what challenges, if any, grantees faced in implementing their programs, and (4) what Labor has done to assist and monitor its green jobs grantees. To address these objectives, we reviewed relevant federal laws, regulations, and departmental guidance and procedures. We also created a data collection instrument and two questionnaires to obtain information from Labor officials. In addition, we analyzed data from Labor and interviewed selected grantees by phone or in person in five states—California, Illinois, Louisiana, Minnesota, and Pennsylvania—as well as Labor officials. We conducted this performance audit from May 2012 through June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Our data collection strategy for obtaining information on green jobs efforts across Labor consisted of two phases. First, we created a data collection instrument to obtain information on green jobs efforts across Labor. 
In the data collection instrument, we asked offices at Labor to list two separate sets of efforts: (1) efforts where federal funds were appropriated or allocated specifically for green jobs activities and (2) efforts where federal funds were not specifically appropriated or allocated for green jobs activities, but where the office sought to incorporate green elements into either an existing program or ongoing activity. We distributed the data collection instrument to 14 of Labor’s 28 offices: Occupational Safety and Health Administration (OSHA), Mine Safety and Health Administration (MSHA), Women’s Bureau (WB), Employment and Training Administration (ETA), Veterans’ Employment and Training Services (VETS), Office of the Assistant Secretary for Policy (OASP), Bureau of International Labor Affairs (ILAB), Bureau of Labor Statistics (BLS), Center for Faith-Based and Neighborhood Partnerships (CFBNP), Office of Federal Contract Compliance Programs (OFCCP), Wage and Hour Division (WHD), Office of Workers’ Compensation Programs (OWCP), Office of Disability Employment Policy (ODEP), and the Office of Public Affairs (OPA). These 14 offices were selected based on the likelihood of their administering a green jobs effort or program. For example, we did not distribute the data collection instrument to Labor’s Office of Inspector General, Office of the Solicitor, or Office of the Chief Financial Officer. Second, we used the information we collected on the two separate sets of green jobs efforts in the data collection instruments to inform two follow- up questionnaires. For the first set of green jobs efforts, offices at Labor initially identified 16 efforts where funds were specifically appropriated or allocated for green-job related activities. For each of the 16 efforts, we sent a questionnaire by e-mail. 
The questionnaire focused on (1) the goals and objectives of the green jobs efforts, (2) how green jobs were defined for each of the efforts, (3) whether offices coordinated with others on these efforts, and (4) funding levels for each of the efforts. We pre-tested the questionnaire with two respondents from OSHA in December and made revisions. We then sent the questionnaires out on a rolling basis between January 16 and February 22, 2013. We determined 2 of the 16 efforts to be out of scope. Of the remaining 14 directly-funded green jobs efforts across five offices (OSHA, ETA, VETS, ILAB, and BLS), we received completed questionnaires for 13 and one partially completed questionnaire by April 3, 2013. We also identified 3 additional directly-funded efforts, for a total of 17 efforts. For the second set of green jobs efforts, offices at Labor initially identified 54 efforts where funds were not specifically appropriated or allocated for green jobs efforts, but green elements were incorporated into existing programs or ongoing activities. We identified two additional green efforts that fall under this category. We sent a brief questionnaire consisting of two questions by e-mail in an attached Microsoft Word form. The two questions included in the questionnaire were pre-tested as part of the more detailed survey mentioned above. All questionnaires were sent on January 29, 2013, or on February 22, 2013. We determined 10 of the 56 efforts to be out of scope. Of the remaining 46 efforts across six offices (OSHA, WB, ETA, VETS, ILAB, and CFBNP), we received completed questionnaires for all 46 efforts by March 21, 2013. Labor later identified 2 additional efforts, for a total of 48 efforts. 
Because the majority of Recovery Act funding for green jobs efforts were directed toward training programs, we focused much of our review on four grant programs—the three training-focused green jobs training programs funded by the Recovery Act (Energy Training Partnership grants, Pathways Out of Poverty grants, and State Energy Sector Partnership and Training grants) as well as the newer Green Jobs Innovation Fund. To report on the characteristics of Labor’s 103 green jobs training grantees, we obtained data from Labor on each training-focused green jobs grant administered by ETA. Specifically, we obtained information on the grantee’s location, organizational type, and whether or not the grantee had sub-grantees. To better understand the type of green jobs training grantees provided, how grantees aligned their training to meet green jobs needs, and what challenges, if any, they faced in implementing their programs, we analyzed data from Labor and interviewed 11 out of the 103 green jobs training grantees between August 2012 and April 2013. We conducted site visits in four states and interviewed grantees in two additional states by phone. We visited grantees in California, Illinois, Minnesota, and Pennsylvania, and interviewed grantees in Connecticut and Louisiana by phone. We selected grantees in these states because these states had a relatively high number of Labor green jobs grant recipients, grantees in these states received GJIF grants, and the states varied in their geographic locations. We selected both Recovery Act- and GJIF-funded green jobs training grantees, but emphasized GJIF-funded grantees since unlike many of the Recovery Act programs, the GJIF program is still active. During each site visit we interviewed Labor’s green jobs training grant officials, training providers, local employers, and, to the extent possible, program participants. Similarly, during our phone calls we interviewed grant officials and in one case employers. 
During the interviews, we collected information about the types of green jobs training that were funded by Labor’s green jobs training grants and the outcomes of grantees’ programs, including the impact of the training with respect to green job placement, or otherwise. We specifically asked grantees about any challenges they may have encountered as they developed and implemented their program, including whether they experienced challenges with respect to placing participants into green jobs. In addition, we collected information on how local employers were involved in the development of the training programs and the green job opportunities they were able to offer program participants. We cannot generalize our findings beyond the interviews we conducted. To assess the reliability of Labor’s training type and outcome data, we (1) reviewed existing documentation related to the data sources, including Labor’s Office of Inspector General (OIG) reports, (2) electronically tested the data to identify obvious problems with completeness or accuracy, and (3) interviewed knowledgeable agency officials about the data. We determined that the data were sufficiently reliable for limited purposes. For example, we determined that training type data were sufficiently reliable for purposes of reporting out on the industries for which grantees most frequently trained participants. We included information about the extent to which Recovery Act-funded green jobs training grantees collectively reported meeting their enrollment, training completion, and entered employment targets for those grantees for which final data were available as of December 31, 2012. However, based upon the OIG’s findings, we determined that the outcome data were not sufficiently reliable to determine the success of the programs. 
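The "electronic testing" in step (2) of the reliability assessment can be pictured as simple completeness and consistency checks over grantee records. The sketch below is illustrative only, not GAO's actual procedure; the field names and sample records are hypothetical.

```python
# Illustrative sketch (not GAO's actual tests) of electronic checks for
# obvious completeness or accuracy problems in grantee training records.
# Field names and sample data are hypothetical.

def check_record(record):
    """Return a list of problems found in one grantee training record."""
    problems = []
    # Completeness: required fields must be present and non-empty.
    for field in ("grantee_id", "training_type", "enrolled", "completed"):
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    # Accuracy: counts must be non-negative and internally consistent
    # (participants cannot complete training without enrolling).
    enrolled = record.get("enrolled")
    completed = record.get("completed")
    if isinstance(enrolled, int) and isinstance(completed, int):
        if enrolled < 0 or completed < 0:
            problems.append("negative count")
        elif completed > enrolled:
            problems.append("completions exceed enrollments")
    return problems

records = [
    {"grantee_id": "G-01", "training_type": "construction",
     "enrolled": 120, "completed": 95},
    {"grantee_id": "G-02", "training_type": "",
     "enrolled": 40, "completed": 55},
]
flagged = {r["grantee_id"]: check_record(r) for r in records}
```

A record that passes all checks yields an empty problem list; records with blank fields or impossible counts are flagged for follow-up with agency officials, mirroring step (3) of the assessment.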
Finally, based upon the OIG’s findings, we determined that the data on the extent to which grantees entered training-related employment were not reliable enough to report, even compared to targeted levels. To describe Labor’s technical assistance efforts, we reviewed technical assistance guides and material posted to Workforce3One, interviewed Labor officials, and discussed Labor’s technical assistance with selected grantees. To describe and assess Labor’s monitoring efforts, we reviewed its Core Monitoring guide, interviewed Labor officials in Washington, D.C. and in each of ETA’s six regional offices—Atlanta, Boston, Chicago, Dallas, Philadelphia, and San Francisco—and obtained and reviewed copies of Labor’s monitoring reports for green jobs training grantees, including recipients of Energy Training Partnership, Pathways Out of Poverty, and State Energy Sector Partnership and Training grants. Energy Training Partnership (ETP) grants Through the Energy Training Partnership Grants, ETA awarded nearly $100 million to 25 projects. Grantees were to provide training and placement services in the energy efficiency and renewable energy industries to workers impacted by national energy and environmental policy, individuals in need of updated training related to the energy efficiency and renewable energy industries, and unemployed workers. Grantees were required to partner with labor organizations, employers and workforce investment boards. Grant awards ranged from approximately $1.4 to $5 million. In total, Pathways Out of Poverty grantees received approximately $150 million in Recovery Act funds. The grant aimed to help targeted populations find pathways out of poverty through employment in energy efficiency and renewable energy industries. Grants ranged from approximately $2 million to $8 million and were awarded to eight national nonprofit organizations with local affiliates and to 30 local public organizations or private nonprofit organizations. 
Through SESP, ETA awarded nearly $190 million to state workforce investment boards in partnership with state workforce agencies. The grants were designed to provide training, job placement, and related activities that reflect a comprehensive statewide energy sector strategy including the governor’s overall workforce vision, state energy policies, and training activities that lead to employment in targeted industry sectors. ETA made 34 awards that ranged from approximately $2 to $6 million each. Green Jobs Innovation Fund (GJIF) The Green Jobs Innovation Fund was authorized under the Workforce Investment Act to help workers receive job training in green industry sectors and occupations and access green career pathways. In total, $38 million in grant funds were awarded to six organizations with networks of local affiliates to develop green jobs training programs. These programs were required to incorporate green career pathways either by forging linkages between Registered Apprenticeship and pre-apprenticeship programs or by integrating the delivery of technical and basic skills training through community-based partnerships. Job Corps is a residential job training program for at-risk youth. The Job Corps program aims to teach participants the skills they need to secure a meaningful job, continue their education, and be independent. Job Corps has instituted a number of measures in recent years to “green” its job training programs and facilities. Recovery Act funding was used to incorporate “green” training elements into the automotive, advanced manufacturing, and construction trades at Job Corps centers nationwide and to pilot three new “green” training programs at selected Job Corps centers: Solar Panel Installation, Weatherization, and SmartGrid technology. Targeted topic training grant in which applicants propose training based on the occupational safety and health topics chosen by OSHA. 
Alternative Energy Industry Hazards and Green Jobs Industry Hazards were included as topics in FY 2009 and FY 2010, respectively. Veterans’ Workforce Investment Program (VWIP) Reported description VWIP supports veterans’ employment and training services to help eligible veterans reintegrate into meaningful employment and to stimulate the development of effective and targeted service delivery systems. In FYs 2009 and 2010, project proposals received priority consideration if they supported “Green Energy Jobs” and proposed clear strategies for training and employment in the renewable energy economy. Reported description ETA awarded approximately $48.8 million in State Labor Market Information Improvement Grants to support the research and analysis of labor market data to assess economic activity in energy efficiency and renewable energy industries and identify occupations within those industries. Grant activities included collecting and disseminating labor market information, enhancing strategies to connect job seekers to green job banks, and helping ensure that workers find employment after completing training. ETA awarded 30 grants of between $763,000 and $4 million. This is a survey-based program, covering 120,000 business establishments, which provides a measure of national and state employment in industries that produce goods or provide services that benefit the environment. This program provides occupational employment and wage information for businesses that produce green goods and services. This is a special survey of business establishments designed to collect data on establishments’ use of green technologies and practices and the occupations of workers who spend more than half of their time involved in green technologies and practices. 
Green Career Information staff within the Employment Projections program produces career information on green jobs including wages, expected job prospects, what workers do on the job, working conditions, and necessary education, training, and credentials. Recovery Act green jobs grantees who were doing green jobs data collection and training in the states. These Recovery Act funds to O*NET were for the specific purpose of focusing occupational research and data collection on green jobs at an accelerated pace. The Technical Assistance Partnership led by the National Governors Association supported Recovery Act-funded green jobs grantees. Green Capacity Building Grants (GCBG) Reported description In total, ETA awarded $5 million in Recovery Act funds to training programs already funded by the Department of Labor to build their capacity to provide training in the energy efficiency and renewable energy industries. ETA awarded 62 of these grants, with awards ranging from $50,000 to $100,000. ETA used $5 million of the $500 million authorized for the Recovery Act green jobs grants for administrative expenses (salaries and expenses). This does not include any funds that were retained for technical assistance for these grants. Administrative expenses were in part used to fund three separate evaluations of Recovery Act green jobs programs: (1) a Labor Market Information evaluation, (2) a green jobs and health care implementation report, and (3) a 5-year impact evaluation. This guidance was funded by the Recovery Act and is a guidance document for R&D workers and employers in the nanotechnology field. Trilateral Roundtable: The Employment Dimension of the Transition to a Green Economy (February 3-4, 2011) The U.S. 
Department of Labor, Human Resources and Skills Development Canada and the European Commission brought together U.S., Canadian, and European experts representing governments, trade unions, industry, and nongovernmental organizations to discuss the transition to the green economy. Discussions focused on defining and measuring green jobs, establishing effective green jobs partnerships, designing green skills development and training, ensuring green jobs serve as a pathway out of poverty, and examining the quality of green jobs, as well as the sustainability of green jobs investments by governments. Reported description In June 2009, Labor/ETA/OA published a report entitled, “The Greening of Registered Apprenticeship: An Environmental Scan of the Impact of Green Jobs on Registered Apprenticeship and Implications for Workforce Development.” More recently, as part of the 75th Anniversary of the National Apprenticeship Act in 2012, OA put out a call to sponsors across the country to identify Registered Apprenticeship Innovators or Trailblazers. This process identified a number of innovative programs across the country, including several specific examples of apprenticeship programs with a focus on green efforts. Labor officials participated in a technical review of economic research presented in “What Green Growth Means for Workers and Labour Market Policies: An Initial Assessment.” Subsequently the paper appeared as Chapter 4 in the 2012 OECD Employment Outlook. An OSHA website providing green job safety information on specific green jobs, such as green roofing, waste management, wind energy, recycling, weatherization, and geothermal industries. 
Provided safety information for weatherization jobs in collaboration with the Department of Energy, Environmental Protection Agency, and National Institute for Occupational Safety and Health (NIOSH). OSHA worked with EPA in the publication of this guidance, which identifies critical indoor environmental quality risks and worker assessment protocols, and provides guidance to address these issues. Through the OSHA and The Joint Commission and Joint Commission Resources (JCR) Alliance, JCR developed an article that discusses the importance of adopting sustainable products and practices for cleaning, sanitizing, and disinfecting healthcare facilities. The article also provides requirements for selecting green cleaning products (January 2013). Publishing of a manual: Why Green Is Your Color: A Woman’s Guide to a Sustainable Career. Designed to assist women with job training and career development. A series of teleconferences for workforce practitioners about how to connect women with green jobs. A fact sheet accompanied each teleconference. Reported description In May 2010, the Deputy Director of CFBNP facilitated a partnership between OSHA’s Cincinnati office and East End Community Services in Dayton, OH, a Pathways Out of Poverty sub-grantee seeking a training module on safe handling of asbestos and lead removal as part of a green jobs training program. Interagency working groups (with Energy, Education, and HUD) Labor officials participated in the October 13-14, 2011 ELSAC meeting in Paris, France. One topic discussed at the meeting was the OECD’s green jobs project. Labor staff articulated labor and employment priorities to the U.S. interagency for inclusion in U.S. government positions for Rio+20, including for the U.S. position paper and during negotiations of the Rio outcome document. The two-day Symposium convened experts from 16 Asia-Pacific Economic Cooperation member economies and international organizations to discuss sustainable economic development policies. 
The event was hosted by the Department of Education, in partnership with Labor. US-Brazil Memorandum of Understanding on Labor Cooperation (March 20-21, 2012) The U.S. Secretary of Labor and her counterpart from Brazil signed a Memorandum of Understanding on Labor Cooperation in May 2012. The memorandum highlights cooperation in the area of green jobs. The Women’s Bureau Director led a Labor delegation meeting with officials from Brazil’s Ministry of Environmental Affairs at U.S. EPA about the definition of green jobs, and initiatives in both countries. The October 2012 conference working group meetings considered green jobs in follow-up to the XVII IACML Declaration and Plan of Action adopted by the ministers of labor of the Americas in November 2011. The Plan of Action called for specific follow-up actions related to green jobs including, inter alia, in-depth exchange of best practices in the region. DOL officials met with 47 women leaders from Sub-Saharan Africa under the African Growth and Opportunity Act’s African Women Entrepreneur Program, sharing best practices, perspectives and strategies to train and employ women in green jobs. Reported description Presentations by the Deputy Assistant Secretary of Labor for Occupational Safety and Health to general session, sharing of knowledge, development of informational products and participation in quarterly meetings. OSHA and Labor participated in an interagency Recovery through Retrofit Working Group comprised of over 80 technical staff members from the Departments of Energy, Housing and Urban Development, and Labor; the Environmental Protection Agency; and USD, which drafted standards for workers who will be involved in retrofitting homes to make them more energy efficient. The group met in Denver for 3 days and a follow-up meeting was held in Washington, D.C. This is the Vice President’s initiative. 
As a part of the working group, OSHA provided technical advice and input on the worker protection aspects of the standards that were drafted. Reported description On December 1, 2010, the Secretary of Labor and Assistant Secretary for Employment and Training met with leaders from several national foundations to discuss significant investments in green jobs programs, as well as effective strategies that create employment and advancement opportunities for low-income populations in the green job industry. In 2012, the Director of CFBNP wrote a blog for Fatherhood.gov about an Employment and Training Administration grantee, RecycleForce, that provides green jobs to ex-offenders. Job Train’s “Earth Day Every Day Campaign” Held the week of April 19th-23rd, 2010, the campaign was designed “to raise environmental awareness among students and staff and serve as friendly reminders to be more energy efficient.” Labor staff briefed an official from the Chinese Embassy on Labor green jobs initiatives. Labor staff briefed the liaison on Labor green jobs efforts. Small Business Forum: “Green Jobs: Safety & Health Outlook for Workers and Small Employers” A forum on OSHA’s green jobs efforts and workplace hazards associated with green jobs. Presentation: “What You Need to Know About the Safe Use of Spray Polyurethane Foam (SPF).” Briefing on Spray Polyurethane Foam: OSHA Team attended as participating partner and Assistant Secretary spoke. OSHA co-chaired the topic, “OSH in Green Economy” for the conference on behalf of the United States. OSHA led the discussions and wrote the accompanying white paper. OSHA senior staff made presentations at conference on hazards of green jobs. Reported description OSHA personnel made presentations in Atlanta, GA; Los Angeles, CA; Philadelphia, PA; and Detroit, MI. OSHA participated in “The Employment Dimension of the Transition to a Green Economy.” 
The event brought together experts from government, trade unions, industry, and other stakeholders to exchange information, best practices, and ideas on preparing workers and employers to meet the increasingly complex skill demands of this transition. OSHA made a presentation on Green Jobs hazards. Roundtable has received presentations from CPWR, NIOSH and Department of Commerce on green jobs within the construction industry. First Annual Research Exchange on Advancing Patient, Worker and Environmental Safety and Sustainability in the Health Care Sector. OSHA presentation on focus on green jobs in relation to the healthcare industry. The audience was mainly healthcare workers, employers and researchers. Provides information to employers on practices to help keep workers safe when working with cleaning chemicals, including green cleaning products. The posters are available in English, Chinese, Tagalog and Spanish. The poster includes a section devoted to Green Cleaners. Topic: Making Green Jobs Good Jobs – We All Want To, So What is OSHA Doing to Make it Happen? Discussions at over 30 U.S. locations involving business and community leaders regarding emerging employment opportunities in green job fields. Posters, mobile marketing displays, postcards, flash drives. Reported description ETA designed the Green jobs CoP to serve as a platform for workforce professionals and green job thought leaders to discuss and share promising practices, to create partnerships for green job workforce solutions, and to leverage Recovery Act investments. Specifically, the Green Jobs CoP was designed to provide an interactive platform for providing technical assistance through webinars, discussion boards, blogs and other online resources to workforce professionals, particularly those at the state and workforce investment board levels as well as green jobs grantees (including recipients of upcoming Solicitation for Grant Applications). 
Reported description The YouthBuild program targets out-of-school youth ages 16 to 24 and provides them with an alternative education pathway to a high school diploma or GED. Most YouthBuild programs have incorporated green building into their construction training. As part of this training, participants learn about environmental issues that affect their communities and how they can provide leadership in this area. Homeless Veterans’ Reintegration Program The purpose of this program is to expedite the reintegration of homeless veterans into the labor force. These grants are intended to address two objectives: to provide services to assist in reintegrating homeless veterans into meaningful employment within the labor force, and to stimulate the development of effective service delivery systems that will address the complex problems facing homeless veterans. The programs’ technical assistance guide refers to collecting data on green jobs participants. Web-based training to help women find and succeed in green jobs. Pilot training projects designed to prepare women to enter high-growth, high-demand green jobs. In addition to the contact named above, Laura Heald, Assistant Director; Amy Buck, Meredith Moore, and David Perkins made significant contributions to all phases of the work. Also contributing to this report were James Bennett, David Chrisinger, Stanley Czerwinski, Beryl Davis, Andrea Dawson, Peter Del Toro, Alexander Galuten, Kathy Leslie, Sheila McCoy, Kim McGatlin, Jean McSween, Rhiannon Patterson, Karla Springer, Vanessa Taylor, and Mark Ward. Grants to State and Local Governments: An Overview of Funding Levels and Selected Challenges. GAO-12-1016. Washington, D.C.: September 25, 2012. Renewable Energy: Federal Agencies Implement Hundreds of Initiatives. GAO-12-260. Washington, D.C.: February 27, 2012. Workforce Investment Act: Innovative Collaborations between Workforce Boards and Employers Helped Meet Local Needs. GAO-12-97. 
Washington, D.C.: January 19, 2012. Climate Change: Improvements Needed to Clarify National Priorities and Better Align Them with Federal Funding Decisions. GAO-11-317. Washington, D.C.: May 20, 2011. Recovery Act: Energy Efficiency and Conservation Block Grant Recipients Face Challenges Meeting Legislative and Program Goals and Requirements. GAO-11-379. Washington, D.C.: April 7, 2011. Multiple Employment and Training Programs: Providing Information on Colocating Services and Consolidating Administrative Structures Could Promote Efficiencies. GAO-11-92. Washington, D.C.: January 13, 2011. Recovery Act: States’ and Localities’ Uses of Funds and Actions Needed to Address Implementation Challenges and Bolster Accountability. GAO-10-604. Washington, D.C.: May 26, 2010. Recovery Act: Funds Continue to Provide Fiscal Relief to States and Localities, While Accountability and Reporting Challenges Need to Be Fully Addressed. GAO-09-1016. Washington, D.C.: September 23, 2009. Employment and Training Program Grants: Evaluating Impact and Enhancing Monitoring Would Improve Accountability. GAO-08-486. Washington, D.C.: May 7, 2008. Workforce Investment Act: Additional Actions Would Improve the Workforce System. GAO-07-1061T. Washington, D.C.: June 28, 2007. Workforce Investment Act: Employers Found One-Stop Centers Useful in Hiring Low-Skilled Workers; Performance Information Could Help Gauge Employer Involvement. GAO-07-167. Washington, D.C.: December 22, 2006. Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005. Workforce Investment Act: Employers Are Aware of, Using, and Satisfied with One-Stop Services, but More Data Could Help Labor Better Address Employers’ Needs. GAO-05-259. Washington, D.C.: February 18, 2005. Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. 
Washington, D.C.: June 1, 2004. Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing Is Needed. GAO-03-725. Washington, D.C.: June 18, 2003. Internal Control: Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999. | Labor received $500 million from the Recovery Act to help create, better understand, and provide training for jobs within the energy efficiency and renewable energy industries, commonly referred to as "green jobs." Since 2009, Labor has also "greened" existing programs and funded additional green jobs training grants and other efforts. In light of the amount of funding targeted to green programs within Labor, GAO examined: (1) what is known about the objectives and coordination of Labor's green jobs efforts, (2) what type of green jobs training grantees provided and how selected grantees aligned their training to meet employers' green jobs needs, (3) what is known about program outcomes and what challenges, if any, grantees faced in implementing their programs, and (4) what Labor has done to assist and monitor its green jobs grantees. To conduct this work, GAO reviewed relevant federal laws and regulations; surveyed selected offices within Labor using two questionnaires--one for directly- funded green jobs efforts and one for other efforts; interviewed Labor officials and 11 out of 103 green jobs training grantees; and analyzed relevant Labor documents and data. Of the $595 million identified by Labor as having been appropriated or allocated specifically for green jobs activities since 2009, approximately $501 million went toward efforts with training and support services as their primary objective, with much of that funding provided by the American Recovery and Reinvestment Act of 2009 (Recovery Act). 
Because the Recovery Act directed federal agencies to spend funds quickly and prudently, Labor implemented a number of high-investment green jobs efforts simultaneously. As a result, in some cases, Recovery Act training programs were initiated prior to a full assessment of the demand for green jobs, which presented challenges for grantees. While Labor's internal agencies initially communicated with each other and with other federal agencies after the Recovery Act was passed, most Recovery Act grants have ended or are winding down. Labor created its green jobs definitional framework to provide local flexibility, and grantees we interviewed broadly interpreted Labor's framework to include any job that could be linked, directly or indirectly, to a beneficial environmental outcome. Labor's training data show most participants were trained in construction or manufacturing. While the findings of our site visits are not generalizable, all grantees we interviewed said they had worked closely with local employers to align their training program with the green skills needs of local employers. Most grantees we interviewed also told us they had incorporated green elements into existing training programs aimed at traditional skills, such as teaching weatherization as part of a carpentry training program. The outcomes of Labor's green jobs training programs remain uncertain, in part because data on final outcomes were not yet available for about 40 percent of grantees, as of the end of 2012. Analysis of grantees with final outcome data shows they collectively reported training slightly more individuals than they had projected, but job placements were at 55 percent of the target. Training-related job placement rates remain unknown because Labor's Office of Inspector General (OIG) found these data unreliable. 
Grantees we interviewed were generally positive about Labor's green job training programs, but most said they had faced challenges during implementation, including: (1) a lack of reliable green jobs labor market information, (2) insufficient time to meet grant requirements, (3) knowledge gaps surrounding green skills and changing energy policies, and (4) difficulty placing participants into green jobs, primarily due to the overall poor economy. Labor has provided technical assistance and taken steps to monitor green jobs training grantees through on-site monitoring visits and quarterly reviews. During these visits and reviews, Labor officials assessed grantee performance, such as by comparing reported program outcomes, including job placements, to targeted performance levels. However, Labor provided only limited guidance on how to document reported job placements. Labor officials required grantees with lower than projected performance levels to implement corrective action plans. In addition, Labor officials told us they have taken steps to improve the quality of grantee reported data, such as by forming an internal workgroup to identify ways to improve the technical assistance they provide to grantees on reporting performance outcomes. GAO recommends that Labor identify lessons learned from the green jobs training programs to enhance its ability to implement such programs in emerging industries. Labor agreed with our recommendation. |
According to 2007 NHIS data, fewer than 40 percent of adults in the United States reported ever having been tested for HIV. In a recent survey by the Henry J. Kaiser Family Foundation, the primary reason people gave for not being tested is that they do not think they are at risk. The second most common reason was that their doctor never recommended HIV testing. While 38 percent of adults said that they had talked to their doctor about HIV, only 17 percent said that their doctor had suggested an HIV test. According to this survey, African Americans and Latinos were more likely than adults overall to have had such a conversation with their doctor and for the doctor to have suggested testing. Sixty-seven percent of African Americans and 45 percent of Latinos said that they had talked to their doctor about HIV and 29 percent of African Americans and 28 percent of Latinos said that their doctor had suggested an HIV test. Technological advances have increased the benefits associated with HIV testing as well as with regular care and treatment for HIV. First, advances in testing methods, such as rapid HIV tests, have made testing more feasible in a variety of different settings and increased the likelihood that individuals will receive their results. Rapid tests differ from conventional HIV tests in that results are ready sometime from immediately after the test is performed to 20 minutes after the test is performed, which means that individuals can get tested and receive their results in the same visit. Second, the advent of highly active antiretroviral therapy (HAART) has transformed HIV from a fatal disease to a treatable condition. For example, a 25-year-old individual who is in care for HIV can expect to live only 12 years less than a 25-year-old individual who does not have HIV. In addition, studies have found that people generally reduce risky behaviors once they learn of their HIV-positive status. 
According to one study, people who are unaware that they are HIV positive are 3.5 times more likely to transmit the disease to their partners than people who know their status. At the same time, research has shown that individuals are often unaware of their status until late in the course of the disease despite visits to health care settings. For example, one study looked at HIV case reporting in a state over a 4-year period. The study found that of people who were diagnosed with HIV late in the course of the disease, 73 percent made at least one visit to a health care setting prior to their first reported positive HIV test, and the median number of prior visits was four. Funding for HIV testing can come from insurance reimbursement by private insurers as well as Medicaid and Medicare, although these payers do not cover HIV testing under all circumstances. Funding for HIV testing can also come from other government sources, such as CDC, CARE Act programs, or state and local funding. A study by CDC and the Henry J. Kaiser Family Foundation that looked at the insurance coverage of individuals at the time of their HIV diagnosis from 1994-2000 found that 22 percent were covered by Medicaid, 19 percent were covered by other public-sector programs, and 27 percent were uninsured. The cost of an HIV test varies based on a number of factors, including the type of test performed, the test result, and the amount of counseling that is associated with the test. For example, from a payer’s perspective, the costs of a rapid HIV test are higher for someone who is HIV positive than for someone who is not, primarily because rapid testing requires an initial rapid test and a confirmatory test when the result is positive with counseling conducted after both tests. Additionally, eliminating pretest counseling can lower the cost of HIV testing by about $10, regardless of the type of test. 
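The payer-cost structure described in this paragraph can be sketched as a small function. The component dollar amounts below are hypothetical placeholders, not CDC figures; only the structure comes from the text: a positive rapid result adds a confirmatory test and a second round of counseling, and dropping pretest counseling saves about $10.

```python
# Illustrative sketch of the payer-cost logic for a rapid HIV test.
# Component costs (rapid, confirmatory, counseling, pretest) are
# hypothetical placeholders; the structure follows the text above.

def rapid_test_cost(positive, pretest_counseling=True,
                    rapid=25.0, confirmatory=30.0,
                    counseling=12.0, pretest=10.0):
    cost = rapid + counseling              # initial rapid test plus post-test counseling
    if positive:
        cost += confirmatory + counseling  # positive results require a confirmatory
                                           # test, with counseling after both tests
    if pretest_counseling:
        cost += pretest                    # eliminating this saves about $10,
                                           # regardless of test type
    return cost
```

Under these placeholder amounts, a positive result costs $89.00 versus $47.00 for a negative one, and omitting pretest counseling lowers either figure by $10.00, matching the relationships the paragraph describes.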
According to the most recent data available from CDC, in 2006, the cost of an HIV test could range from $10.16 to $86.84 depending on these and other factors. CDC issued its first recommendations for HIV testing in health care settings in 1987. These recommendations focused on individuals engaged in high-risk behaviors and specifically recommended that people who were seeking treatment for STDs be tested for HIV on a routine basis. Throughout the 1990s and 2000s CDC updated these recommendations periodically to reflect new information about HIV. For example, in 2001, CDC modified its recommendations for pregnant women to emphasize that HIV testing should be a routine part of prenatal care and that the testing process should be simplified to eliminate barriers to testing, such as requiring pretest counseling. CDC’s 2001 guidance also recommended that HIV testing be conducted routinely in all health care settings with a high prevalence of HIV; in low-prevalence settings it was recommended that HIV testing be conducted based on an assessment of risk. In 2003, CDC introduced a new initiative called “Advancing HIV Prevention: New Strategies for a Changing Epidemic.” The initiative had a number of strategies, including two that specifically applied to health care settings: (1) making HIV testing a routine part of medical care; and (2) further reducing perinatal transmission of HIV by universally testing all pregnant women and by using HIV rapid tests during labor and delivery or postpartum if the mother had not been tested previously. Elements of the Advancing HIV Prevention initiative were incorporated into CDC’s revised HIV testing recommendations for health care settings in 2006. The 2006 recommendations represent a major shift from prior recommendations for health care settings in that they no longer base HIV testing guidelines on risk factors. 
Rather, they recommend that routine HIV testing be conducted for all patients ages 13 through 64 in all health care settings on an opt-out basis. CDC also recommends that persons at high risk of HIV be tested annually; that general consent for medical care encompass consent for HIV testing (i.e., separate written consent is not necessary); and that pretest information, but not pretest counseling, be required. According to CDC, tracking the prevalence of HIV is necessary to help prevent the spread of the disease. CDC’s surveillance system consists of case counts submitted by states on the number of HIV and AIDS diagnoses, the number of deaths among persons with HIV, the number of persons living with HIV or AIDS, and the estimated number of new HIV infections. HIV laboratory tests, specifically CD4 or viral load tests, can be used to determine the stage of the disease, measure unmet health care needs among HIV-infected persons, and evaluate HIV testing and screening activities. Current CDC estimates related to HIV are not based on data from all states because not all states have been reporting such data by name long enough to be included in CDC’s estimates. While all states collect AIDS case counts through name-based systems, prior to 2008 states collected HIV data in one of two formats, either by name or by code. CDC does not accept code-based case counts for counting HIV cases because it does not consider them accurate and reliable, primarily because they include duplicate case counts. In order for CDC to use HIV case counts from a state for CDC’s estimated diagnoses of HIV infection, the name-based system must be mature, meaning that the state has been reporting HIV name-based data to CDC for 4 full calendar years. CDC requires this time period to allow for the stabilization of data collection and for adjustment of the data in order to monitor trends.
In its most recent surveillance report, CDC used the name-based HIV case counts from 34 states and 5 territories and associated jurisdictions in its national estimates. Name-based HIV reporting had been in place in these jurisdictions since the end of 2003 or earlier. Under the CARE Act, approximately $2.2 billion in grants were made to states, localities, and others in fiscal year 2009. Part A of the CARE Act provides for grants to selected metropolitan areas that have been disproportionately affected by the HIV epidemic to provide care for HIV-positive individuals. Part B provides for grants to states and territories and associated jurisdictions to improve the quality, availability, and organization of HIV services. Part A and Part B base grants are determined by formula based on the number of individuals living with HIV and AIDS in the grantee’s jurisdiction. For the living HIV/AIDS case counts HRSA used to determine fiscal year 2009 Part A and Part B base grants, see appendices II and III. Part C provides for grants to public and private nonprofit entities to provide early intervention services, such as HIV testing and ambulatory care. Part F provides for grants for the demonstration and evaluation of innovative models of HIV care delivery for hard-to-reach populations, for training of health care providers, and for Minority AIDS Initiative grants. Since the 2006 reauthorization of CARE Act programs, HRSA has placed an emphasis on states’ unmet need, which is the number of individuals in a state’s jurisdiction who know they are HIV positive but who are not receiving care for HIV. According to the framework used by HRSA, addressing unmet need is a three-step process. First, states are required to produce an unmet need estimate, which is submitted to HRSA on the state’s annual Part B grant application.
To calculate the unmet need, the state must determine the total number of individuals who are aware of their HIV-positive status in their jurisdiction, and then subtract the number of individuals who are receiving care for HIV. Second, the state must assess the service needs and barriers to care for individuals who are not receiving care for HIV, including finding out who they are and where they live. Third, the state must address unmet need by connecting these individuals to care. CDC and HRSA have coordinated on activities to assist health care professionals who provide HIV-related services. HRSA has encouraged routine HIV testing by providing for training for health care providers as part of CDC-funded initiatives. CDC has taken other steps to encourage routine HIV testing by funding special initiatives that focus on certain populations. Since 2006, CDC and HRSA have coordinated activities to assist health care professionals who provide HIV-related services. In 2007, CDC and HRSA initiated a clinic-based research study, “Increasing Retention in Care among Patients Being Treated for HIV Infection,” to develop, implement, and test the efficiency and effectiveness of an intervention designed to increase client appointment attendance among patients at risk of missing scheduled appointments in HIV clinics. An interagency agreement outlined the responsibilities of CDC and HRSA with respect to the study. For example, under the agreement, CDC is responsible for maintaining data gathered from the study and HRSA is responsible for presenting the findings at national and international conferences. Each agency provided $1.3 million for the study in fiscal year 2009 and will continue to provide funds for the study until its final year of operation in 2011. In coordination with a federal interagency work group, CDC and HRSA have also participated in the development and publication of a document for case managers who work with individuals with HIV.
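The first step of HRSA’s unmet-need calculation is a simple subtraction and can be expressed directly. This is a minimal sketch; the counts below describe a hypothetical jurisdiction, not any state’s actual figures.

```python
def unmet_need(aware_of_status, receiving_care):
    # Unmet need = individuals who know they are HIV positive
    # minus individuals receiving care for HIV in the jurisdiction.
    if receiving_care > aware_of_status:
        raise ValueError("in-care count cannot exceed aware count")
    return aware_of_status - receiving_care

# hypothetical jurisdiction: 10,000 aware of their status, 7,200 in care
print(unmet_need(10_000, 7_200))  # → 2800
```

The report’s caveats apply to both inputs: incomplete CD4/viral load reporting inflates the result, and individuals cared for through private insurance may be miscounted as unmet need.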
The document, “Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs,” outlines best practices for, and six recommended components of, HIV case management for federally funded HIV case management agencies. The document also describes how case management is practiced in different settings and methods for strengthening linkages among case management programs. CDC and HRSA were the lead authors of the document and shared staff time and production expenses. The agencies published the document in February 2009. CDC also provided HRSA with funding to expand HIV consultation services offered to health care professionals at the National HIV/AIDS Clinicians’ Consultation Center. The National HIV/AIDS Clinicians’ Consultation Center is a component of the HRSA-administered AIDS Education and Training Centers (AETC) program. The Consultation Center operates hotline systems to provide consultation to health care professionals, including the PEPline and Perinatal Hotline. Health care professionals access the PEPline to receive information on post-exposure management for health care professionals exposed to blood-borne pathogens and the Perinatal Hotline for information on treatment and care for HIV-diagnosed pregnant women and their infants. CDC provided HRSA with $169,000 to support the PEPline and Perinatal Hotline in fiscal year 2007 and $90,000 to support the PEPline in fiscal year 2008. In addition, CDC provided HRSA with $180,000 during fiscal years 2007 and 2008 for the enhancement of existing consultation services at the Consultation Center for health care professionals who expand HIV testing and need assistance in managing a resulting increase in patients who are HIV positive. In addition, CDC and HRSA have coordinated to prevent duplication of HIV training provided to health care professionals. 
The CDC-funded National Network of STD/HIV Prevention Training Centers, HRSA-funded AETCs, and other federal training centers participate in the Federal Training Centers Collaboration to ensure that HIV training opportunities are not duplicated among the centers. The agencies hold biennial national meetings to increase coordination of training on STD/HIV prevention and treatment, family planning/reproductive health, and substance abuse prevention to maximize the use of training resources. In addition to coordinating on HIV activities that assist health care professionals, CDC and HRSA have participated in the CDC/HRSA Advisory Committee on HIV and STD Prevention and Treatment. The Advisory Committee was established by the Secretary of HHS in November 2002 to assess HRSA and CDC objectives, strategies, policies, and priorities for HIV and STD prevention and care and serves as a forum to discuss coordination of HIV activities. The committee meets twice a year and is composed of 18 individuals who are nominated by the HHS Secretary to serve 2- to 4-year terms and are knowledgeable in such public health fields as epidemiology, infectious diseases, drug abuse, behavioral science, health care delivery and financing, state health programs, clinical care, preventive health, and clinical research. The members assess the activities administered by HRSA and CDC, including HIV testing initiatives and training programs, and make recommendations for improving coordination between the two agencies to senior department officials, including the HHS Secretary. Officials from CDC and HRSA regularly attend the meetings to present current HIV initiatives administered by their agencies. Officials from 6 of the 14 state and local health departments we interviewed said that CDC and HRSA coordination on HIV activities could be improved. For example, officials from 3 of these health departments attributed the lack of coordination to differing guidelines CDC and HRSA use for their grantees.
Officials from 1 health department stated that although they have the same desired outcome, CDC and HRSA do not always coordinate on activities that they fund. They noted that the two agencies have inconsistent policies for HIV-related activities, such as confidentiality guidelines and policies for data sharing. Officials from another health department stated that the two agencies could improve coordination on HIV testing and guidelines for funding HIV testing initiatives. Since the release of CDC’s 2006 routine HIV testing recommendations, HRSA has encouraged routine HIV testing by providing for training for health care providers, as part of CDC-funded initiatives. CDC and HRSA developed interagency agreements through which CDC provided $1.75 million in 2007 and $1.72 million in 2008 to HRSA-funded AETCs to develop curricula, training, and technical assistance for health care providers interested in implementing CDC’s 2006 routine HIV testing recommendations. As of June 2008, AETCs had conducted over 2,500 training sessions to more than 40,000 health care providers on the recommendations. HRSA provided for training during CDC-funded strategic planning workshops on routine HIV testing for hospital staff. CDC officials said that in 2007, the agency allocated over $900,000 for workshops in eight regions across the country on implementing routine HIV testing in emergency departments. CDC reported that 748 attendees from 165 hospitals participated in these workshops. HRSA-funded AETCs from each of the eight regions provided information on services they offer hospitals as they prepare to implement routine HIV testing, and also served as facilitators during the development of hospital-specific strategic plans. In addition, HRSA provided for training as part of a CDC-funded pilot project to integrate routine HIV testing into primary care at community health centers. 
HRSA officials said that their primary role in this project, called “Routine HIV Screening within Primary Care in Six Southeastern Community Health Centers,” was to provide for training on routine HIV testing and to ensure that HIV-positive individuals were connected to care, and that CDC provided all funding for the project. CDC officials told us that the first phase of the project funded routine HIV testing in two sites in Mississippi, two sites in South Carolina, and two sites in North Carolina. The CDC officials said that in 2008 four sites in Ohio were added and that these sites are receiving funding through CDC’s Expanded HIV Testing initiative. CDC officials said that they plan to start a second phase of the project with additional testing sites. CDC has taken other steps to encourage routine HIV testing by funding special initiatives that focus on certain populations. In 2007, CDC initiated a 3-year project for state and local health departments called the “Expanded and Integrated Human Immunodeficiency Virus (HIV) Testing for Populations Disproportionately Affected by HIV, Primarily African Americans” initiative or the Expanded HIV Testing initiative. In the first year of the initiative, CDC awarded just under $35 million to 23 state and local health departments that had an estimated 140 or more AIDS cases diagnosed among African Americans in 2005. Individual awards were proportionately based on the number of cases, with amounts to each jurisdiction ranging from about $700,000 to over $5 million. Funding after the first year of the initiative was to be awarded to these same health departments on a noncompetitive basis assuming availability of funds and satisfactory performance. Funding for the second year of the initiative was just over $36 million and included funding for 2 additional health departments, bringing the total number of funded departments to 25. 
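The award structure of the Expanded HIV Testing initiative, in which each jurisdiction’s share of the funding pool is proportional to its AIDS case count, can be sketched as follows. The department names and case counts below are hypothetical; only the proportional-to-cases rule and the $35 million first-year total come from the text.

```python
def allocate_awards(total_funding, cases_by_dept):
    # Each health department's award is its share of total reported
    # cases applied to the total funding pool.
    total_cases = sum(cases_by_dept.values())
    return {dept: total_funding * cases / total_cases
            for dept, cases in cases_by_dept.items()}

# hypothetical jurisdictions and their 2005 AIDS case counts
awards = allocate_awards(35_000_000, {"A": 1400, "B": 700, "C": 4900})
print(awards["C"] / awards["B"])  # → 7.0: seven times the cases, seven times the award
```

A purely proportional rule like this explains the wide range the report notes, with individual awards running from about $700,000 to over $5 million.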
CDC asked health departments participating in the Expanded HIV Testing initiative to develop innovative pilot programs to expand testing opportunities for populations disproportionately affected by HIV—primarily African Americans—who are unaware of their status. CDC required health departments to spend all funding on HIV testing and related activities, including the purchase of HIV rapid tests and connecting HIV-positive individuals to care. CDC strongly encouraged applicants to focus at least 80 percent of their pilot program activities on health care settings, including settings to which CDC had not previously awarded funding for HIV testing, such as emergency rooms, inpatient medical units, and urgent care clinics. Additionally, CDC required that programs in health care settings follow the agency’s 2006 routine HIV testing recommendations to the extent permitted by law. Programs in non-health care settings were to have a demonstrated history of at least a 2 percent rate of HIV-positive test results. The 2006 reauthorization of CARE Act programs included a provision for the Early Diagnosis Grant program, under which CDC would make HIV prevention funding for each of fiscal years 2007 through 2009 available to states that had implemented policies related to routine HIV testing for certain populations. These policies were (1) voluntary opt-out testing of all pregnant women and universal testing of newborns or (2) voluntary opt-out testing of patients at STD clinics and substance abuse treatment centers. CDC’s fiscal year 2007 appropriation prohibited it from using funding for Early Diagnosis grants. In fiscal year 2008, CDC’s appropriation provided up to $30 million for the grants. CDC officials told us that in 2008, the agency awarded $4.5 million to the six states that had implemented at least one of the two specified policies as of December 31, 2007.
In fiscal year 2009, CDC’s appropriation provided up to $15 million for grants to states newly eligible for the program. CDC officials said that in 2009, one state received funding for implementing voluntary opt-out testing at STD clinics and substance abuse treatment centers. CDC officials also told us that they provided HRSA with information on how the Early Diagnosis Grant program would be implemented, but have not coordinated with the agency on administration of the program. Officials from just over half of the state and local health departments we interviewed said that their departments had implemented routine HIV testing in their jurisdictions, but that they generally did so in a limited number of sites. Officials from most of the health departments we interviewed and other sources knowledgeable about HIV have identified barriers to routine HIV testing, including lack of funding. Officials from 9 of the 14 state and local health departments we interviewed said that their departments had implemented routine HIV testing, but 7 said that they did so in a limited number of sites. Specifically, officials from 5 of the state health departments we interviewed said that their departments had implemented routine HIV testing in anywhere from one to nine sites and officials from 2 of the local health departments said that their departments had implemented it in two and four sites, respectively. Officials from all but 1 of these 7 departments said that their departments used funding from CDC’s Expanded HIV Testing initiative to implement routine HIV testing. CDC’s goal for its Expanded HIV Testing initiative is to test 1.5 million individuals for HIV in areas disproportionately affected by the disease and identify 20,000 HIV-infected persons who are unaware of their status per year. 
During the first year of the initiative, health departments that received funding under the CDC initiative reported conducting just under 450,000 HIV tests and identifying approximately 4,000 new HIV-positive results. The two other health departments that had implemented routine HIV testing—one state health department and one local health department located in a large city—had been able to implement routine HIV testing more broadly. These departments had implemented routine HIV testing prior to receiving funding through the Expanded HIV Testing initiative, and used the additional funding to expand the number of sites where it was implemented. For example, the local health department had started an initiative to achieve universal knowledge of HIV status among residents in an area of the city highly affected by HIV. The department used funding from the Expanded HIV Testing initiative and other funding sources to implement routine HIV testing in this area and other sites throughout the city, including 20 emergency rooms. An official from the state health department said that while the department had already funded routine HIV testing in some settings, for example STD clinics and community health centers, funding from the Expanded HIV Testing initiative allowed them to fund routine HIV testing in other types of settings, for example emergency rooms. Officials from five health departments we interviewed said that their departments had not implemented routine HIV testing in their jurisdictions, including three state health departments and two local health departments. None of these health departments received funding through CDC’s Expanded HIV Testing initiative, and officials from two of the state health departments specifically cited this as a reason why they had not implemented routine HIV testing.
Officials from all of the departments that had not implemented routine HIV testing said that their departments do routinely test certain populations for HIV, including pregnant women, injection drug users, and partners of individuals diagnosed with HIV. Officials from 11 of the 14 state and local health departments we interviewed and other sources knowledgeable about HIV have identified barriers that exist to implementing routine HIV testing. Officials from 5 of the 11 health departments cited lack of funding as a barrier to routine HIV testing. For example, an official from 1 state health department told us that health care providers have said that they would do routine HIV testing if they could identify who would pay for the cost of the tests. The need for funding was corroborated by officials from an organization that contracts with state and local health departments to coordinate HIV-related care and services. These officials told us that they had often seen routine HIV testing end when funding streams dried up and noted that there has been little implementation of CDC’s 2006 routine HIV testing recommendations in their area outside of STD clinics and programs funded through the Expanded HIV Testing initiative. Officials from state and local health departments we interviewed and other sources also cited lack of insurance reimbursement as a barrier to routine HIV testing. When identifying lack of funding as a barrier to routine HIV testing, officials from two state health departments we interviewed explained that there is a general lack of insurance reimbursement for this purpose. Other organizations we interviewed and CDC also raised the lack of insurance reimbursement for routine HIV testing as a barrier. For example, one provider group that we spoke with said that many providers are hesitant to offer HIV tests without knowing whether they will be reimbursed for them.
In a recent presentation, CDC reported that out of 11 insurance companies, as of May 2009, all covered targeted HIV testing, but only 6 reimbursed for routine HIV testing. CDC also reported that as of this same date only one state required that insurers reimburse for HIV tests regardless of whether testing is related to the primary diagnosis. CDC noted that legislation similar to this state’s has been introduced, but not passed, in two other states as well as at the federal level. Medicare does not currently reimburse for routine HIV testing, though the Centers for Medicare & Medicaid Services initiated a national coverage analysis as the first step in determining whether Medicare should reimburse for this service. While federal law allows routine HIV testing as a covered service under Medicaid, individual states decide whether or not they will reimburse for routine HIV testing. According to one study, reimbursement for routine HIV testing has not been widely adopted by state Medicaid programs. Many insurers, including Medicare and Medicaid, base their reimbursement policies on the recommendations of the U.S. Preventive Services Task Force, which is the leading independent panel of private-sector experts in prevention and primary care. While the Task Force has recommended that clinicians conduct routine HIV testing when individuals are at increased risk of HIV infection and for all pregnant women, it has not made a recommendation for routine HIV testing when individuals are not at increased risk, saying that the benefit in this case is too small relative to the potential harms. In addition, officials from three state health departments we interviewed discussed legal barriers to implementing routine testing. For example, officials from one department said that implementation of routine HIV testing would require a change in state law to eliminate the requirement for pretest counseling and written informed consent. 
Similarly, officials from another department said that while their department had been able to conduct routine testing through the Expanded HIV Testing initiative, expanding it further might require changing state law to no longer require written informed consent for HIV testing. The officials explained that while the initiative did have a written informed consent form, the department had been able to greatly reduce the information included on the form in this instance. The department is currently in the process of looking for ways to further expand HIV testing without having to obtain changes to state law. According to a study published in the Annals of Internal Medicine, as of September 2008, 35 states’ laws did not present a barrier to implementing routine HIV testing, though the 3 states discussed above were identified as having legal barriers. Officials from 3 of the state and local health departments we interviewed discussed operational barriers to integrating routine HIV testing with the policies and practices already in place in health care settings. For example, an official from a state health department said that the department tries to work past operational barriers to routine HIV testing, but if after 6 months the barriers prove too great in one site the department moves implementation of routine HIV testing to another site. An official from another state health department noted that in hospital settings it can take a long time to obtain approval for new protocols associated with routine HIV testing. NASTAD conducted a survey of the 25 state and local health departments that received funding through the Expanded HIV Testing initiative and found that health departments reported some barriers in implementing routine HIV testing, including obtaining buy-in from staff in health care settings and providing adequate training, education, and technical assistance to this staff. 
Other barriers mentioned by officials from health departments we interviewed included health care providers not being comfortable testing everyone for HIV and concerns about providers’ capacity to care for the increased number of people who might be diagnosed through expanded HIV testing. CDC officials estimated that approximately 30 percent of the agency’s annual HIV prevention funding is spent on HIV testing. For example, according to CDC officials, in fiscal year 2008 this would make the total amount spent on HIV testing about $200 million out of the $652.8 million CDC allocated for domestic HIV prevention to its Division of HIV/AIDS Prevention. Of the $200 million CDC officials estimated was spent on testing, CDC did report that, in fiscal year 2008, $51.1 million was spent on special HIV testing initiatives, such as the Expanded HIV Testing initiative and the Early Diagnosis Grant program. CDC officials said that, outside of special testing initiatives, they could not provide the exact amount CDC spent on HIV testing. CDC’s Division of HIV/AIDS Prevention spends the majority of its domestic HIV prevention budget in connection with cooperative agreements, grants, and contracts to state and local health departments and other funded entities. CDC officials explained that grantees submit reports to CDC on the activities they fund at the middle and end of the year. The officials said that while project officers check to see that these reports are consistent with how grantees planned to spend their funding, CDC does not routinely aggregate how much all grantees spent on a given activity, including HIV testing. In addition, outside of the Expanded HIV Testing initiative, CDC does not maintain data on how funds for HIV testing are distributed to different settings within jurisdictions.
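The 30 percent estimate is consistent with the dollar figures cited. A quick arithmetic check, using only the report’s numbers:

```python
# FY2008 Division of HIV/AIDS Prevention domestic allocation (millions)
# and CDC officials' estimated share spent on HIV testing
prevention_budget_millions = 652.8
testing_share = 0.30

testing_estimate = prevention_budget_millions * testing_share
print(round(testing_estimate, 1))  # → 195.8, i.e., "about $200 million"

# of which $51.1 million went to special testing initiatives,
# leaving the remainder untracked by activity
remainder = testing_estimate - 51.1
print(round(remainder, 1))
```

The remainder is the portion CDC could not break out by activity or setting, since grantee spending is not routinely aggregated.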
For example, this would mean that CDC does not have data on how much money a state health department spends on testing in emergency rooms, versus how much money it spends on testing in community-based organizations. According to data from NHIS, nearly 70 percent of all HIV tests in the United States were conducted in a private doctor’s office, HMO, or hospital setting in 2007. Specifically, 50 percent of all HIV tests were conducted in a private doctor’s office or HMO and nearly 20 percent of all HIV tests were conducted in a hospital setting, including emergency departments. The remaining tests were conducted in a variety of settings, including public clinics and HIV counseling and testing sites. Less than 1 percent of all HIV tests were conducted in a correctional facility, STD clinic, or a drug treatment facility. These data are similar to earlier data from NHIS. In 2002, NHIS found that 44 percent of all HIV tests were conducted in a private doctor’s office or HMO and 22 percent of all HIV tests were conducted in a hospital setting. Analysis of CDC surveillance data on the settings in which HIV-positive individuals are diagnosed suggests that approximately 40 percent of all HIV-positive results in the United States occurred in a private doctor’s office, HMO, or hospital setting in 2007, the most recent year for which data were available. These data also suggest that hospital inpatient settings account for a disproportionate number of HIV-positive results discovered late in the course of the disease. In 2007, hospital inpatient settings accounted for 16 percent of all HIV-positive results. Among HIV cases diagnosed in 2006, these same settings accounted for 31 percent of HIV-positive results that occurred within 1 year of an AIDS diagnosis. 
While CDC surveillance data can provide some indication of the types of settings where the greatest percentage of HIV-positive results occur, data limitations did not permit a more detailed analysis of HIV-positive results by setting type. Specifically, information on facility of diagnosis was missing or unknown for nearly one out of every four HIV cases reported through the surveillance system in 2007. CDC officials told us that in the past the agency used data from the Supplement to HIV/AIDS Surveillance project to examine the types of settings where individuals test positive for HIV, but this project ended in 2004. CDC reported that in place of the Supplement to HIV/AIDS Surveillance project, the agency has implemented the Medical Monitoring Project. However, data from the Medical Monitoring Project were not available at the time of our analysis. CDC has calculated a national estimate of more than 200,000 undiagnosed HIV-positive individuals—that is, individuals who were unaware that they were HIV positive and were therefore not receiving care for HIV. CDC estimated that 232,700 individuals, or 21 percent of the 1.1 million people living with HIV at the end of 2006, were unaware that they were HIV positive. CDC does not have a national estimate of the total number of diagnosed individuals not receiving care, but CDC has calculated a national estimate of more than 12,000 diagnosed HIV-positive individuals who did not receive care within a year after they were diagnosed with HIV in 2003. CDC reported that the estimated proportion of individuals with HIV who did not receive care within a year of diagnosis—which CDC measures by the number of HIV-positive individuals who did not have a reported CD4 or viral load test within this time—was 32.4 percent, or 12,285 of the 37,880 individuals who were diagnosed with HIV in 2003.
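The two proportions above can be reproduced from the underlying counts. This is a quick arithmetic check using the report’s own figures:

```python
def pct(part, whole):
    # share of a population, as a percentage rounded to one decimal place
    return round(100.0 * part / whole, 1)

# 232,700 of the 1.1 million people living with HIV at the end of 2006
# were unaware of their status
print(pct(232_700, 1_100_000))  # → 21.2 (reported as 21 percent)

# 12,285 of the 37,880 people diagnosed in 2003 had no reported CD4 or
# viral load test within a year of diagnosis
print(pct(12_285, 37_880))      # → 32.4
```

Both results match the figures CDC reported, with the first rounded down to 21 percent in the report’s text.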
Since this estimate is based on the number of HIV-positive individuals who did not receive care within a year of diagnosis, it does not include all individuals diagnosed with HIV who are not receiving care. For example, an individual may receive care within a year of diagnosis, but subsequently drop out of care 2 years later. Or an individual may receive care 2 years after diagnosis. In these examples, the individuals’ change in status as receiving care or not receiving care is not included in CDC’s estimate of the proportion of diagnosed individuals not receiving care. Although CDC has published these estimates, the agency has noted limitations to the data used to calculate the number of diagnosed HIV-positive individuals not receiving care for HIV. First, not all states require laboratories to report all CD4 and viral load test results; without this information being reported, CDC’s estimates may overstate the number of individuals who did not enter into care within 1 year of HIV diagnosis. Additionally, in the past, CDC only required jurisdictions to report an individual’s first CD4 or viral load test, which did not allow CDC to provide an estimate of all HIV-positive individuals who are not receiving care for HIV after the first year. CDC is currently disseminating updated data collection software which will permit the collection and reporting of all results collected by states. However, CDC officials told us that this software is still going through quality control checks. While CDC calculates national estimates of the number of undiagnosed HIV-positive individuals not receiving care for HIV and the number of diagnosed HIV-positive individuals who did not receive care within a year of diagnosis, the agency does not calculate these estimates at the state level. CDC officials said that these estimates are not available at the state level because not all states have mature name-based HIV reporting systems.
CDC officials said that the agency is determining what it will need to estimate the number of undiagnosed individuals at the state level once all states have mature HIV reporting systems. CDC officials also said that once the new data collection software to collect CD4 and viral load test results from states is ready, data on all diagnosed HIV-positive individuals not receiving care may be available at the state level for those states with mature name-based HIV reporting systems with laboratory reporting requirements. HRSA also collects states’ estimates of the number of diagnosed HIV-positive individuals not receiving care for HIV, but data are not consistently collected or reported by states, and therefore estimates are not available for comparison across all states. States report their estimates of the number of diagnosed HIV-positive individuals who are not receiving care as unmet need estimates to HRSA as a part of the states’ CARE Act Part B grant applications. However, these estimates have limitations and are not comparable across states. One limitation is that not all states require laboratory reporting of CD4 and viral load results for all individuals who receive the tests. States use reported CD4 and viral load test results to calculate their unmet need, and, according to HRSA, without data for all individuals who receive CD4 or viral load tests, a state may overestimate its unmet need. Another limitation is that the estimates submitted in the states’ fiscal year 2009 grant applications were calculated using differing time periods. For example, New Hampshire calculated its unmet need estimate using HIV cases collected as of December 31, 2004, while Colorado calculated its estimate using data collected as of June 30, 2008. Additionally, not all states have access to information on the number of individuals receiving care through private insurance; therefore, these individuals are counted as part of the state’s unmet need. 
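In broad terms, a state's unmet need estimate subtracts the number of people with evidence of care (a reported CD4 or viral load test) from its count of diagnosed people living with HIV. A simplified sketch, with a structure inferred from the description above and entirely hypothetical figures, of how incomplete laboratory reporting inflates the result:

```python
# Simplified sketch of a state's unmet need estimate (structure assumed
# from HRSA's description; all figures are hypothetical).
def unmet_need(living_diagnosed_cases: int, cases_with_reported_test: int) -> int:
    """Diagnosed people living with HIV minus those with a reported
    CD4 or viral load test in the measurement period."""
    return living_diagnosed_cases - cases_with_reported_test

living_cases = 10_000
in_care_reported = 7_200    # tests actually reported to the state
in_care_unreported = 800    # e.g., care paid through private insurance, never reported

# Without full laboratory reporting, people who are in care but whose
# tests go unreported are counted as unmet need, overstating it:
print(unmet_need(living_cases, in_care_reported))                       # 2800
print(unmet_need(living_cases, in_care_reported + in_care_unreported))  # 2000
```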
According to officials we interviewed, several barriers exist that could prevent HIV-positive individuals from receiving care. HRSA officials told us that structural barriers within the health care system, such as no or limited availability of services, inconvenient service locations and clinic hours, and long wait times for appointments can influence whether an individual is receiving care for HIV. Other barriers identified by HRSA officials are the quality of communication between the patient and provider, lack of or inadequate insurance, financial barriers, mental illness, and substance abuse. HRSA officials also noted that personal beliefs, attitudes, and cultural barriers such as racism, sexism, homophobia, and stigma can also have an impact on an individual’s decision to seek care. Officials from two states and one local health department we spoke with stated that transportation was a barrier, while officials from two state health departments stated that lack of housing was a barrier for access to care. Unstable housing can prevent individuals with HIV from accessing health care and adhering to complex HIV treatments because they must attend to the more immediate need of obtaining shelter. Agencies have implemented initiatives to connect diagnosed individuals to care for HIV. For example, part of CDC’s Expanded HIV Testing initiative focused on connecting individuals diagnosed with HIV to care. In the first year of the initiative, 84 percent of newly diagnosed patients received their HIV test results and 80 percent of those newly diagnosed were connected to care. CDC has also funded two studies that evaluated a case management intervention to connect HIV-positive individuals to care for HIV. In these studies, case management was conducted in state and local health departments and community-based organizations and included up to five visits with a case manager over a 3-month period. 
In one of these studies, 78 percent of individuals who participated in case management were still in care 6 months later. HRSA has developed two initiatives as Special Projects of National Significance. The first initiative, “Enhancing Access to and Retention in Quality HIV Care for Women of Color,” was developed to implement and evaluate the effectiveness of focused interventions designed to improve timely entry and retention into quality HIV care for women of color. The second initiative, the “Targeted HIV Outreach and Intervention Model Development” initiative, was a 5-year, 10-site project implemented to bring underserved HIV-positive individuals into care for HIV. According to HRSA, results of the initiative indicated that individuals are less likely to have a gap of 4 months or more of care when they have had nine or more contacts with an outreach program within the first 3 months of these programs. In collaboration with AIDS Action, an advocacy organization formed to develop policies for individuals with HIV, HRSA has also funded the “Connecting to Care” initiative. AIDS Action and HRSA developed the initiative to highlight successful methodologies to help connect or reconnect individuals living with HIV to appropriate and ongoing medical care. The methodologies were identified from cities across the country and are being utilized in different settings. The initiative includes two publications with 42 interventions that have been reported to be successful in connecting HIV-positive individuals to care. The publications provide a description, logistics, strengths and difficulties, and outcomes of each intervention and focus specifically on homeless individuals, Native Americans, immigrant women, low-income individuals in urban and rural areas, and currently or formerly incarcerated individuals. 
AIDS Action has held training workshops that provided technical assistance to explain the interventions, including how to apply the best practices from successful programs. HRSA provides grants under Part C of the CARE Act to public and private nonprofit entities to provide early intervention services to HIV-positive individuals on an outpatient basis that can help connect people to care. Part C grantees are required to provide HIV medical care services that can include outpatient care, HIV counseling, testing, and referral, medical evaluation and clinical care, and referrals to other health services. These programs also provide services to improve the likelihood that undiagnosed individuals will be identified and connected to care, such as outreach services to individuals who are at risk of contracting HIV, patient education materials, translation services, patient transportation to medical services, and outreach to educate individuals on the benefits of early intervention. HRSA and CDC are currently collaborating on a clinic-based research study, “Increasing Retention in Care among Patients Being Treated for HIV Infection.” The study is designed to develop, implement, and test the efficacy of an intervention intended to increase appointment attendance among individuals at risk of missing scheduled appointments in HIV clinics. In addition to CDC and HRSA initiatives, officials we interviewed told us that state and local health departments have implemented their own initiatives to connect HIV-positive individuals to care. Officials from six states and five local health departments we spoke with stated that their departments use case management to assist HIV-positive individuals through the process of making appointments and to help address other needs of the individuals. 
For example, officials from one of these health departments explained that some case managers sign up qualified individuals for an AIDS Drug Assistance Program and others assist with locating housing or with substance abuse issues, which can also be barriers to staying in care. Case managers make sure individuals are staying in care by finding patients who have missed appointments or whom providers have been unable to contact. In addition, officials from one state and four local health departments we spoke with told us that their departments use mental health professionals and officials from one state and three local health departments told us that their departments use substance abuse professionals to connect individuals to care, since individuals who need these services are at a high risk of dropping out of care. Officials from two health departments said that their departments use counseling and officials from one health department said that partner counseling is conducted when an individual is diagnosed with HIV. HHS provided technical comments on a draft of the report, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services. The report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff who made major contributions to this report are listed in appendix IV. U.S. federal prisons have become a principal screening and treatment venue for thousands of individuals who are at high risk for human immunodeficiency virus (HIV) or who have HIV. 
According to a 2008 report by the Bureau of Justice Statistics, the overall rate of estimated confirmed acquired immune deficiency syndrome (AIDS) cases among the prison population (.46 percent) was more than 2.5 times the rate of the general U.S. population at the end of calendar year 2006. The Bureau of Justice Statistics also reported that 1.6 percent of male inmates and 2.4 percent of female inmates in state and federal prisons were known to be HIV positive. To ensure that infected individuals are aware of their HIV-positive status and to ensure that they receive care while in prison, 21 states tested all inmates for HIV at admission or at some point during their incarceration. Forty-seven states and all federal prisons tested inmates if they had HIV-related symptoms or if they requested an HIV test. The Ryan White Comprehensive AIDS Resources Emergency Act of 1990 (CARE Act) was enacted to address the needs of jurisdictions, health care providers, and people with HIV and their family members. CARE Act programs have been reauthorized three times (1996, 2000, and 2006) and are scheduled to be reauthorized again in 2009. The CARE Act Amendments of 2000 required the Health Resources and Services Administration (HRSA) to consult with the Department of Justice and others to develop a plan for the medical case management and provision of support services to individuals with HIV when they are released from the custody of federal and state prisons. The plan was to be submitted to Congress no later than 2 years after the date of enactment of the CARE Act Amendments of 2000. You asked us to review the implementation status of the plan and to determine the extent of any continued coordination between HRSA and the Department of Justice to transition prisoners with HIV to CARE Act programs. However, HRSA officials told us that they did not create this plan or coordinate with the Department of Justice to create this plan. 
Additionally, the requirement for this plan was eliminated by the 2006 Ryan White Treatment Modernization Act. We are therefore providing information related to other steps that HRSA has taken to address the provision of HIV prevention and care for incarcerated persons with HIV transitioning back to the community and into CARE Act-funded programs. Additionally, we provide information on steps taken by the Centers for Disease Control and Prevention (CDC) and states to address this issue. To provide information related to the steps that CDC and HRSA have taken to address the provision of HIV prevention and care for incarcerated persons, we interviewed CDC and HRSA officials. We also interviewed officials from nine state health departments about their programs for incarcerated persons with HIV transitioning back to the community and into CARE Act-funded programs, and the limitations of these programs. Of these nine state health departments, officials from eight provided responses about their programs. The remaining state did not have a transition program in place. Our sample is not generalizable to all state and local health departments. The U.S. prison system has been the focus of many studies on HIV testing for prisoners and care for those with HIV while in prison and upon their release. Studies have been conducted to determine the number of individuals who are accessing HIV testing and treatment for the first time upon their incarceration. Studies have also been conducted to evaluate how infected prisoners fare in their HIV treatment upon release from prison, as inmates often encounter social and economic changes including the need to secure employment and housing, establish connections with family, and manage mental health and substance abuse disorders. 
For example, one recent study of the Texas state prison system published in the Journal of the American Medical Association discussed an evaluation of the proportion of infected individuals who filled a highly active antiretroviral therapy (HAART) prescription within 10, 30, and 60 days after their release from prison. The study found that 90 percent of recently released inmates did not fill a prescription for HAART therapy soon enough to avoid a treatment interruption (10 days) and more than 80 percent did not fill a prescription within 30 days of release. Only 30 percent of those released filled a prescription within 60 days. Individuals on parole and those who received assistance in completing a Texas AIDS Drug Assistance Program application were more likely to fill a prescription within 30 and 60 days. Because those who discontinue HAART are at increased risk of developing a higher viral burden (resulting in greater infectiousness and higher levels of drug resistance), it is important for public health that HIV-positive prisoners continue their HAART treatment upon release from prison. CDC, HRSA, and several states we interviewed have implemented programs to aid in the transition of HIV-positive persons from prison to the community with emphasis on their continued care and treatment. CDC and HRSA have funded demonstration projects to address HIV prevention and care for prisoners with HIV upon their release from incarceration. Selected state health departments and their respective state departments of corrections have coordinated to help HIV-positive prisoners in their transition back to the community. CDC and HRSA have funded various projects to address the provision of HIV prevention and care for prisoners with HIV upon their release from incarceration. CDC and HRSA have also provided guidance to states regarding HIV-related programs. The list below describes the projects and guidance. 
CDC and HRSA jointly funded a national corrections demonstration project in seven states (California, Florida, Georgia, Illinois, Massachusetts, New Jersey, and New York). This demonstration project was funded from 1999 to 2004. The goal of the demonstration project was to increase access to health care and improve the health status of incarcerated and at-risk populations disproportionately affected by the HIV epidemic. The “HIV/AIDS Intervention, Prevention, and Community of Care Demonstration Project for Incarcerated Individuals within Correctional Settings and the Community” involved jail, prison, and juvenile detention settings. The project targeted inmates with HIV, but also those with hepatitis B and hepatitis C, tuberculosis, substance abuse, and sexually transmitted diseases (STD). According to a HRSA report, the project was able to enhance existing programs in facilities, and develop new programs both within facilities and outside of them. Many states integrated lessons learned through the project at varying levels throughout their state. CDC funded Project START to develop an HIV, STD, and hepatitis prevention program for young men aged 18-29 who were leaving prison in 2001. The goal of this project was to test the effectiveness of the Project START interventions in reducing sexually risky behaviors for prisoners transitioning back to the community. State prisons in California, Mississippi, Rhode Island, and Wisconsin were selected. A study describing the Project START interventions indicated that a multi-session community re-entry intervention can lead to a reduction in sexually risky behavior in recently released prisoners. CDC funded a demonstration project at multiple sites in four states (Florida, Louisiana, New York, and Wisconsin) where prisoners in short-term jail facilities were offered routine rapid initial testing and appropriate referral to care, treatment, and prevention services within the facility or outside of it. 
From December 2003 through June 2004, more than 5,000 persons had been tested for HIV, and according to a CDC report, 108 (2.1 percent) had received confirmed positive results. CDC officials told us that CDC is currently completing three pilot studies which began in September 2006. These studies were conducted to develop interventions for HIV-positive persons being released from several prisons or halfway houses in three states: California (prisons), Connecticut (prisons), and Pennsylvania (halfway houses). CDC officials explained that CDC has established a Corrections Workgroup within the National Center for HIV/AIDS, Viral Hepatitis, STD, and Tuberculosis Prevention. In March of 2009, the workgroup hosted a Corrections and Public Health Consultation: “Expanding the Reach of Prevention.” This forum provided an opportunity for subject matter experts in the fields of corrections and academia as well as representatives from health departments and community-based organizations to develop effective prevention strategies for their correctional systems. According to a Special Projects of National Significance program update, HRSA’s “Enhancing Linkages to HIV Primary Care and Services in Jail Settings” initiative seeks to develop innovative methods for providing care and treatment to HIV-positive inmates who are reentering the community. This 4-year project, which began in September 2007, is different from the “HIV/AIDS Intervention, Prevention, and Community of Care Demonstration Project for Incarcerated Individuals within Correctional Settings and in the Community” in that it focuses entirely on jails. HRSA defines jails as locally operated facilities whose inmates are typically sentenced for 1 year or less or are awaiting trial or sentencing following trial. 
Under the initiative, HRSA has awarded grants to 10 demonstration projects in the following areas: Atlanta, Georgia; Chester, Pennsylvania; Chicago, Illinois; Cleveland, Ohio; Columbia, South Carolina; New Haven, Connecticut; New York, New York; Philadelphia, Pennsylvania; Providence, Rhode Island; and Springfield, Massachusetts. Besides funding demonstration projects and creating workgroups, HRSA and CDC have issued guidance to states. HRSA issued guidance in September 2007 explaining allowable expenditures under CARE Act programs for incarcerated persons. The guidance states that expenditures under the CARE Act are only allowable to help prisoners achieve immediate connections to community-based care and treatment services upon release from custody, where no other services exist for these prisoners, or where these services are not the responsibility of the correctional system. The guidance provides for the use of funds for transitional social services including medical case management and social support services. CARE Act grantees can provide these transitional primary services by delivering the services directly or through the use of contracts. Grantees must also develop a mechanism to report to HRSA on the use of funds to provide transitional social services in correctional settings. In 2009, CDC issued HIV Testing Implementation Guidance for Correctional Settings. This guidance recommended routine opt-out HIV testing for correctional settings and made suggestions for how HIV services should be provided and how prisoners should be linked to services. The guidance also addressed challenges that may arise for prison administrators and health care providers who wish to implement the guidelines in their correctional facilities. 
Of the eight state health departments in our review that had HIV transition programs in place, several have implemented programs that coordinate with the state’s department of corrections to provide prisoners with support services to help them in their transition back to the community. We provide examples of three of these programs below. Officials from one state health department said that their department uses CARE Act and state funding to provide a prerelease program that uses the state’s department of corrections prerelease planners to make sure that prisoners with HIV are linked to care. Prisoners meet with their prerelease planner 60-90 days prior to release, and the planner links them to care services, has them sign up for the AIDS Drug Assistance Program and Medicaid, and follows up with them after their release to ensure that they remain in care. Additionally, the department of corrections provides 30 days of medications to prisoners upon release. The state department of health has been working with the department of corrections to help them transition HIV-positive prisoners for the past 10 years. According to officials from another state health department, their department uses state funds to provide transitional case management for HIV prisoners who are transitioning back into the community. Specialized medical case managers meet and counsel prisoners with HIV who are within 6 months of being released. Within 90 days of release, the prisoner and the medical case manager may meet several times to arrange housing, complete a Medicaid application, obtain referrals to HIV specialists and to the AIDS Drug Assistance Program, and provide the prisoner with assistance in obtaining a state identification card. Case managers will also work with the prisoner for 3 months after release so that the prisoner is stable in the community. After 90 days, the person can be transferred into another case management program or they can drop out. 
The client is kept on the AIDS Drug Assistance Program if they are not disabled. According to officials from a third state health department, their department uses “Project Bridge,” a nationally recognized program to transition prisoners back into the community and into CARE Act programs. The Project Bridge program provides transition services to prisoners. Ninety-seven percent of the Project Bridge participants receive medical care during the first month of their release from prison. The state attributes the success of this program to the productive relationship between the state health department and the department of corrections. Project Bridge participants are involved in discharge planning with case managers starting 6 months before their discharge. Participants then receive intense case management for approximately 18-24 months after their release. During this period they are connected with medical and social services. According to state officials, the program has also been effective in decreasing recidivism rates. Officials we interviewed from state health departments described several limitations to their departments’ programs. One state health department official explained that their department does not have the staff to coordinate services for all of the state’s 110 jails. Officials from two other state health departments explained that state budget cuts are threatening the continuation of their departments’ prisoner transition programs. One state health department official explained that finding housing in the community for transitioning HIV-positive prisoners is often very difficult. The lack of available housing has impacted these individuals’ HIV care because they are so focused on finding housing that they are unable to focus on taking their medication or going to medical appointments. One state health department official explained that their department’s prisoners with HIV are sometimes not interested in being connected to care in the community. 
Another state health department official explained that the lack of funding for prisoner transition programs is a limitation of their program. Appendix II: Part A Grantees’ Living HIV/AIDS Cases Used by HRSA to Determine Fiscal Year 2009 CARE Act Base Grants Atlanta, Ga. Austin, Tex. Baltimore, Md. Baton Rouge, La. Bergen-Passaic, N.J. Boston, Mass. Caguas, P.R. Charlotte-Gastonia, N.C.-S.C. Chicago, Ill. Dallas, Tex. Denver, Colo. Detroit, Mich. Dutchess County, N.Y. Fort Lauderdale, Fla. Fort Worth, Tex. Hartford, Conn. Houston, Tex. Indianapolis, Ind. Jacksonville, Fla. Jersey City, N.J. Kansas City, Mo. Las Vegas, Nev. Los Angeles, Calif. Memphis, Tenn. Miami, Fla. Middlesex-Somerset-Hunterdon, N.J. Minneapolis-St. Paul, Minn. Nashville, Tenn. Nassau-Suffolk, N.Y. New Haven, Conn. New Orleans, La. New York, N.Y. Newark, N.J. Norfolk, Va. Oakland, Calif. Orange County, Calif. Orlando, Fla. Philadelphia, Pa. Phoenix, Ariz. Ponce, P.R. Portland, Ore. Riverside-San Bernardino, Calif. Sacramento, Calif. San Antonio, Tex. San Diego, Calif. San Francisco, Calif. San Jose, Calif. San Juan, P.R. Santa Rosa, Calif. Seattle, Wash. St. Louis, Mo. Tampa-St. Petersburg, Fla. Vineland-Millville-Bridgeton, N.J. Washington, D.C. West Palm Beach, Fla. In addition to the contact above, Thomas Conahan, Assistant Director; Robert Copeland, Assistant Director; Leonard Brown; Romonda McKinney Bumpus; Cathleen Hamann; Sarah Resavy; Rachel Svoboda; and Jennifer Whitworth made key contributions to this report. | Of the estimated 1.1 million Americans living with HIV, not all are aware of their HIV-positive status. Timely testing of HIV-positive individuals is important to improve health outcomes and to slow the disease's transmission. It is also important that individuals have access to HIV care after being diagnosed, but not all diagnosed individuals are receiving such care. 
The Centers for Disease Control and Prevention (CDC) provides grants to state and local health departments for HIV prevention and collects data on HIV. In 2006, CDC recommended routine HIV testing for all individuals ages 13-64. The Health Resources and Services Administration (HRSA) provides grants to states and localities for HIV care and services. GAO was asked to examine issues related to identifying individuals with HIV and connecting them to care. This report examines: 1) CDC and HRSA's coordination on HIV activities and steps they have taken to encourage routine HIV testing; 2) implementation of routine HIV testing by select state and local health departments; 3) available information on CDC funding for HIV testing; and 4) available data on the number of HIV-positive individuals not receiving care for HIV. GAO reviewed reports and agency documents and analyzed CDC, HRSA, and national survey data. GAO interviewed federal officials, officials from nine state and five local health departments chosen by geographic location and number of HIV cases, and others knowledgeable about HIV. The Secretary of Health and Human Services (HHS) is required to ensure that HHS agencies, including CDC and HRSA, coordinate HIV programs to enhance the continuity of prevention and care services. CDC and HRSA have coordinated to assist health care professionals who provide HIV-related services. For example, in 2007 and 2008, CDC provided funding to HRSA to expand consultation services at the National HIV/AIDS Clinicians' Consultation Center. Both CDC and HRSA have taken steps to encourage routine HIV testing--that is, testing all individuals in a health care setting without regard to risk. For example, CDC has funded initiatives on routine HIV testing and HRSA has provided for training as part of these initiatives. Officials from over half of the 14 selected state and local health departments in GAO's review reported implementing routine HIV testing in their jurisdictions. 
However, according to officials we interviewed, those that implemented it generally did so at a limited number of sites. Officials from most of the selected health departments and other sources knowledgeable about HIV have identified barriers that exist to implementing routine HIV testing, including lack of funding and legal barriers. CDC officials estimated that approximately 30 percent of the agency's annual HIV prevention funding is spent on HIV testing. For example, according to CDC officials, in fiscal 2008, this would make the total amount spent on HIV testing about $200 million out of the $652.8 million CDC allocated for domestic HIV prevention to its Division of HIV/AIDS Prevention. However, CDC officials said that they could not provide the exact amount the Division spends on HIV testing, because they do not routinely aggregate how much all grantees spend on a given activity, including HIV testing. CDC estimated that 232,700 individuals with HIV were undiagnosed--that is, unaware that they were HIV positive--in 2006, and were therefore not receiving care for HIV. CDC has not estimated the total number of diagnosed HIV-positive individuals not receiving care, but has estimated that 32.4 percent, or approximately 12,000, of HIV-positive individuals diagnosed in 2003 did not receive care for HIV within a year of diagnosis. State-level estimates of the number of undiagnosed and diagnosed HIV-positive individuals not receiving care for HIV are not available from CDC. HRSA collects states' estimates of the number of diagnosed individuals not receiving care, but data are not consistently collected or reported by states, and therefore estimates are not available for comparison across all states. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate. |
The government has been providing housing assistance in rural areas since the 1930s. At that time, most rural residents worked on farms, and rural areas were generally poorer than urban areas. For example, in the 1930s very few rural homes had electricity or indoor plumbing. Accordingly, the Congress authorized housing assistance specifically for rural areas and made USDA responsible for administering it. However, rural demographic and economic characteristics have greatly changed over time. By the 1970s virtually all rural homes had electricity and indoor plumbing. Today, less than 2 percent of the nation’s population lives on farms, and advances in transportation, technology, and communications have – or have the potential to – put rural residents in touch with the rest of the nation. The federal role has also evolved, with HUD, the Department of Veterans Affairs (VA), and state housing finance agencies becoming significant players in administering housing programs. Homeownership in the United States is at an all-time high with 68 percent of the nation’s households owning their own home. In rural areas, the homeownership rate is even higher — 76 percent. However, according to the Housing Assistance Council, affordability is the biggest problem facing low-income rural households. Rural housing costs have increased and income has not kept pace, especially for rural renters who generally have lower incomes than owners. As a result, rural renters are more likely to have affordability problems and are twice as likely as rural owners to live in substandard housing. Although the physical condition of rural housing has greatly improved over time, it still lags somewhat behind that of urban housing. The most severe rural housing quality problems are found farthest from the nation’s major cities, and are concentrated in four areas in particular: the Mississippi Delta, Appalachia, the Colonias on the Mexican border, and on Indian trust land. 
Minorities in these areas are among the poorest and worst housed groups in the nation, with disproportionately high levels of inadequate housing conditions. Migrant farm workers in particular have difficulty finding affordable, livable housing. The higher incidence of housing quality problems, particularly in these four areas, offsets many of the advantages of homeownership, including the ability to use homes as investments or as collateral for credit. USDA’s Farmers Home Administration managed rural housing programs and farm credit programs until reorganization legislation split these functions in 1994. Farm credit programs were then shifted to the new Farm Service Agency. Housing programs were moved to the newly created RHS in the new Rural Development mission area which was tasked with helping improve the economies of rural communities. RHS currently employs about 5,500 staff to administer its single family, multifamily, and community facilities programs. RHS’s homeownership programs provide heavily subsidized direct loans to households with very low and low incomes, guaranteed loans to households with low and moderate incomes, and grants and direct loans to low-income rural residents for housing repairs. Multifamily programs provide direct and guaranteed loans to developers and nonprofit organizations for new rental housing that is affordable to low and moderate income tenants; grants and loans to public and nonprofit agencies and to individual farmers to build affordable rental housing for farm workers; housing preservation grants to local governments, nonprofit organizations, and Native American tribes; and rental assistance subsidies that are attached to about half the rental units that RHS has financed. In addition, RHS administers community facilities programs that provide direct and guaranteed loans and grants to help finance rural community centers, health care centers, child care facilities, and other public structures and services. 
For fiscal year 2003, RHS received an appropriation of $1.6 billion. Of this amount, the largest share, $721 million, is for its rental assistance program. Congress also authorized about $4.2 billion for making or guaranteeing loans, primarily for guaranteeing single-family loans. RHS oversees an outstanding single-family and multifamily direct loan portfolio of about $28 billion. Table 1 lists RHS’s programs, briefly describes them, and compares the spending for them in fiscal year 1999 with the spending for them in fiscal years 1979 and 1994. The table also shows that, although RHS’s single and multifamily guaranteed programs are relatively new, by 1999 RHS had guaranteed more single- and multifamily loans than it made directly. While RHS administers its programs in rural areas, HUD, VA, and state housing finance agencies provide similar programs nationwide, including assistance to households that may be eligible for RHS programs in rural areas. For example, RHS’s single-family loan guarantee program serves moderate-income homebuyers as does the Federal Housing Administration’s (FHA) much larger single-family insurance program. VA and most state housing finance agencies also offer single-family loan programs. In the multifamily area, HUD’s multifamily portfolio is similar to RHS’s multifamily portfolio and HUD’s project-based section 8 program operations parallel RHS’s rental assistance program. Further, in contrast to RHS, HUD has more established systems for assessing the quality of its multifamily portfolio through its Real Estate Assessment Center (REAC) and for restructuring financing and rental assistance for individual properties through its Office of Multifamily Housing Assistance Restructuring (OMHAR). 
Given the diminished distinctions between rural and urban areas today, improvements in rural housing quality and access to credit, and RHS’s increasing reliance on guaranteed lending and public/private partnerships, our September 2000 report found that the federal role in rural housing is at a crossroads. We listed arguments for and against fundamentally changing the programs’ targeting, subsidy levels, and delivery systems, as well as merging RHS’s programs with HUD’s or other agencies’ comparable programs. A number of arguments have been presented to support continuing RHS’s housing programs separately from HUD and other agencies or for maintaining a separate system for delivering these programs, including the following: Some rural residents need the close supervision offered by RHS local offices because they do not have access to modern telecommunications or other means of obtaining information on affordable housing opportunities; Rural borrowers often need a local service office familiar with their situation in the first year of a loan; Rural areas could lose their federal voice in housing matters; Rural areas could lose the benefits of the lower rates and terms RHS’s direct and guaranteed loan programs currently offer; and HUD and other potential partners have not focused on rural areas. 
Proponents of arguments for merging RHS’s housing programs with other housing programs or not maintaining a separate system for delivering housing programs in rural areas present a different set of arguments: RHS’s field role has changed from primarily originating and servicing direct loans to leveraging deals with partner organizations; In some states, local banks, nonprofit organizations, social workers, and other local organizations are doing much of the front-line work with rural households that was previously done by RHS staff; The thousands of RHS staff with local contacts could provide a field presence for HUD and other public partners, applying their leveraging and partnering skills to all communities; and RHS and HUD could combine management functions for their multifamily portfolios that are now provided under separate systems. We also noted that without some prodding, the agencies are not likely to examine the benefits and costs of merging as an option. As a first step toward achieving greater efficiency, we suggested that the Congress consider requiring RHS and HUD to explore the potential benefits of merging similar programs, such as the single-family insured lending programs and the multifamily portfolio management programs, taking advantage of the best practices of each and ensuring that targeted populations are not adversely affected. Since we issued our report in September 2000, it appears that RHS and FHA have shared some mutually beneficial practices. First, RHS’s single-family guaranteed program plans to introduce its automated underwriting capabilities through technology that FHA has already developed and has agreed to share with RHS. Second, RHS, FHA, and VA have collaborated in developing common reporting standards for tracking minority and first-time homeownership statistics. 
Third, we understand that there have been discussions between RHS and HUD staff on developing a model to restructure RHS section 515 mortgages using techniques that HUD has learned through restructuring similar HUD section 236 mortgages. Our September 2000 report also identified a number of actions, suggested by RHS officials and others, that could increase the efficiency of existing rural housing programs, whether or not they are merged. I will limit my discussion today to two issues that deal with RHS’s field structure. The first issue involves state delivery systems. When state Rural Development offices were given the authority to develop their own program delivery systems as part of the 1994 reorganization, some states did not change, believing that they needed to maintain a county-based structure with a fixed local presence to deliver one-on-one services to potential homeowners. Other states tried innovative, less costly approaches to delivering services, such as consolidating local offices to form district offices and using traveling loan originators for single-family programs. However, RHS has undergone a major shift in mission during the past few years. RHS is still a lending agency like its predecessor, the Farmers Home Administration, but it now emphasizes community development, and uses its federal funding for rural communities to leverage more resources to develop housing, community centers, schools, fire stations, health care centers, child care facilities, and other community service buildings. Some state Rural Development officials we spoke with questioned the efficiency and cost-effectiveness of maintaining a county-based field structure in a streamlined environment where leveraging, rather than one-on-one lending, has become the focus of the work. 
For example, the shift in emphasis from direct to guaranteed single-family lending moved RHS from relying on a labor-intensive loan generation process to one that relies on private lenders to underwrite loans. When we performed our audit work in 2000, we found that Mississippi, which maintains a county-based Rural Development field structure, had more staff and field offices than any other state but had the next-to-lowest productivity as measured by dollar program activity per staff member. Ohio, however, which ranked fifth in overall productivity, operated at less than one-fifth of Mississippi’s cost per staff member. We recognize that it is more difficult to underwrite single-family loans in the Mississippi Delta and other economically depressed areas than in rural areas generally, and Mississippi does have a substantial multifamily portfolio. Nevertheless, the number of field staff in Mississippi far exceeded that in most other states. Ohio, whose loan originators were based in four offices and traveled across the state with laptop computers, ranked seventh in the dollar value of single-family guaranteed loans made and fifth in the dollar amount per staff member of direct loans made. Ohio had also done a good job of serving all of its counties, while Mississippi had experienced a drop in business in the counties where it had closed local offices. Ohio’s travel and equipment costs had increased with the use of traveling loan originators. The Maine Rural Development office had also fundamentally changed its operational structure, moving from 28 offices before the reorganization to 15 afterwards, and in 2000 it operated out of 3 district offices. The state director at the time, who had also headed the Farmers Home Administration state office in the 1970s, said that he had headed the agency under both models and believed the centralized system to be much more effective. 
He added that under the new structure, staff could no longer sit in the office waiting for clients to come to them but had to go to the clients. He also maintained that a centralized structure was better suited to building the partnerships with real estate agents, banks, and other financial institutions that had become the core element of RHS’s work. The second issue involves the location of field offices. Consistent with its 1994 reorganization legislation, USDA closed or consolidated hundreds of county offices and established “USDA service centers” with staff representing farm services, conservation, and rural development programs. However, the primary goal of the task team that designed the service centers was to place all the county-based agencies together, particularly those that dealt directly with farmers and ranchers, to reduce personnel and overhead expenses by sharing resources. But while the farm finance functions from the old Farmers Home Administration fit well into the new county-based Farm Service Agency, the housing finance functions that moved to the new state Rural Development offices were never a natural fit in the centers. The decision to collocate Rural Development and Farm Service offices was based on the fact that Rural Development had a similar county-based field structure and the Department needed to fill space in the new service centers. Collocating Rural Development and Farm Service offices designed to serve farmers and ranchers makes less sense today, especially in states where Rural Development operations have been centralized. How to deal with the long-term needs of an aging portfolio is the overriding issue for section 515 properties. In the program’s early years, it was expected that the original loans would be refinanced before major rehabilitation was needed. 
However, with prepayment and funding restricted, this original expectation has not been realized, and RHS does not know the full cost of the long-term rehabilitation needs of the properties it has financed. RHS field staffs perform annual and triennial property inspections that identify only current deficiencies rather than the long-term rehabilitation needs of the individual properties. As a result, RHS does not know whether reserve accounts will cover long-term rehabilitation needs. Without a mechanism to prioritize the portfolio’s rehabilitation needs, including a process for ensuring the adequacy of individual property reserve accounts, RHS cannot be sure it is spending its limited rehabilitation funds as effectively as possible and cannot tell Congress how much funding it will need to cover the portfolio’s long-term rehabilitation costs. RHS’s state personnel annually inspect the exterior condition of each property financed under the section 515 program and conduct more detailed inspections every 3 years. However, according to RHS guidelines, the inspections are intended to identify current deficiencies, such as cracks in exterior walls or plumbing problems. Our review of selected inspection documents in state offices we visited confirmed that the inspections are limited to current deficiencies. RHS headquarters and state officials confirmed that the inspection process is not designed to determine and quantify the long-term rehabilitation needs of the individual properties. RHS has not determined to what extent properties’ reserve accounts will be adequate to meet long-term needs. According to RHS representatives, privately owned multifamily rental properties often turn over after just 7 to 12 years, and such a change in ownership usually results in rehabilitation by the new owner. However, given the limited turnover and funding constraints, RHS properties primarily rely on reserve accounts for their capital and rehabilitation needs. 
RHS officials are concerned that the section 515 reserve accounts often are not adequate to fund needed rehabilitation. RHS and industry representatives agree that the overriding issue for section 515 properties is how to deal with the long-term needs of an aging portfolio. About 70 percent of the portfolio is more than 15 years old and in need of repair. Since 1999, RHS has allocated about $55 million in rehabilitation funds annually, but owners’ requests for funds to meet safety and sanitary standards alone have totaled $130 million or more for each of the past few years. RHS headquarters has encouraged its state offices to support individual property owners interested in undertaking capital needs assessments and has amended loan agreements to increase their rental assistance payments as necessary to cover the future capital and rehabilitation needs identified in the assessments. However, with varying emphasis by RHS state offices and limited rental assistance funding targeted for rehabilitation, the assessments have proceeded on an ad hoc basis. As a result, RHS cannot be sure that it is spending these funds as cost-effectively as possible. To better ensure that limited funds are being spent as cost-effectively as possible, we recommended that USDA undertake a comprehensive assessment of the section 515 portfolio’s long-term capital and rehabilitation needs, use the results of the assessment to set priorities for the portfolio’s immediate rehabilitation needs, and develop an estimate for Congress on the amount and types of funding required to deal with the portfolio’s long-term rehabilitation needs. USDA agreed with the recommendation and requested $2 million in the President’s 2003 budget to conduct a comprehensive study. RHS staff drafted a request for proposal that would have contracted out the study, but the Undersecretary for Rural Development chose to lead the study himself. 
Plans are to develop an inspection and rehabilitation protocol by February 2004 on the basis of an evaluation of a sample of properties. Finally, I would like to mention some work we have begun on the Section 521 rental assistance program. With an annual budget of over $700 million, the rental assistance program is the largest line item appropriation to the Rural Housing Service. This is a property-based subsidy that provides additional support to units created through the Section 515 multifamily and farm labor housing programs. RHS provides this subsidy to property owners through 5-year contracts. The objectives for our current work are to review (1) how RHS estimates the current and future funding needs of its Section 521 rental assistance program; (2) how RHS allocates the rental assistance; and (3) what internal controls RHS has established to monitor the administration of the rental assistance program. We anticipate releasing a report on our findings in February of 2004. Mr. Chairman, this concludes my prepared remarks. I would be pleased to answer any questions you or any other members of the Committee may have. For questions regarding this testimony, please contact William B. Shear on (202) 512-4325 or at [email protected], or Andy Finkel on (202) 512-6765 or at [email protected]. Individuals making key contributions to this testimony included Emily Chalmers, Rafe Ellison, and Katherine Trimble. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Federal housing assistance in rural America dates back to the 1930s, when most rural residents worked on farms. 
Without electricity, telephone service, or good roads connecting them to population centers, rural residents were comparatively isolated and their access to credit was generally poor. These conditions led Congress to authorize separate housing assistance for rural residents, to be administered by USDA. Over time, the quality of the housing stock has improved and credit has become more readily available in rural areas. Also, advances in transportation, computer technology, and telecommunications have diminished many of the distinctions between rural and urban areas. These changes call into question whether rural housing programs still need to be maintained separately from urban housing programs, and whether RHS is adapting to change and managing its resources as efficiently as possible. Our testimony is based on two reports--the September 2000 report on rural housing options and the May 2002 report on multifamily project prepayment and rehabilitation issues. GAO found that while RHS has helped many rural Americans achieve homeownership and has improved the rural rental housing stock, it has been slow to adapt to changes in the rural housing environment. Also, RHS has failed to adopt the tools that could help it manage its housing portfolio more efficiently. Specifically, dramatic changes in the rural housing environment since rural housing programs were first created raise questions as to whether separately operated rural housing programs are still the best way to ensure the availability of decent, affordable rural housing. Overlap in products and services offered by RHS, HUD, and other agencies has created opportunities for merging the best features of each. Even without merging RHS's programs with HUD's or those of other agencies, RHS could increase its productivity and lower its overall costs by centralizing its rural delivery structure. RHS does not have a mechanism to prioritize the long-term rehabilitation needs of its multifamily housing portfolio. 
As a result, RHS cannot be sure it is spending limited rehabilitation funds as effectively as possible and cannot tell Congress how much funding it will need in the future. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
In December 1994, the heads of state and government of the 34 democratic countries in the Western Hemisphere agreed at the first Summit of the Americas in Miami, Florida, to conclude negotiations to create a Free Trade Area of the Americas (FTAA) no later than 2005. These negotiations are an extension of the economic reform and integration that has occurred in much of the hemisphere over the past decade, fueling increased trade and investment within and outside of the region. Since then, the FTAA trade ministers have established a framework for the FTAA negotiations and negotiators have begun drafting the text of the agreement. The FTAA negotiations were initiated within the context of ongoing unilateral liberalization in many countries. Following a serious debt crisis, sluggish economic growth, and spiraling inflation in the 1980s, most Latin American economies shifted their economic strategies from protected, state-assisted industrialization to externally oriented, export-driven development. These strategies included lowering trade barriers and taking steps to attract foreign investment. As a result, economic growth doubled, rising from 1.7 percent on average in the 1980s to 3.4 percent in the 1990s; inflation decreased significantly; and trade expanded rapidly. All but 1 of the 34 nations participating in FTAA negotiations are members of the World Trade Organization (WTO), which sets trade rules on a global basis through a process of multilateral negotiations among its members. As part of their economic liberalization programs of the past decade, countries in the Western Hemisphere have also pursued economic integration through numerous free trade and customs union agreements. The largest trading bloc outside of the North American Free Trade Agreement (NAFTA) is Mercosur, which comprises Brazil, Argentina, Paraguay, and Uruguay. 
Other regional blocs include the Caribbean Community and Common Market (CARICOM), the Andean Community, and the Central American Common Market (CACM). Countries in the region, particularly Mexico and Chile, have concluded numerous bilateral free trade and investment agreements with others in the region. These subregional agreements provide greater access for industrial goods and have sometimes covered agriculture, services, and investment. Countries in the Western Hemisphere also are making agreements with those outside of the hemisphere. Mexico recently concluded a free trade agreement with the European Union (EU), and Chile and Mercosur are negotiating their own bilateral free trade agreements with the EU. Trade among Latin American countries and between Latin America and the rest of the world expanded rapidly during the 1990s. Overall trade by the region grew by 10.8 percent annually on average, outpacing world trade growth (6.6 percent) over the same period. However, intra-regional trade between members of the same trade blocs grew faster than extra-regional trade. This was particularly true for Mercosur and the Andean Community, where intra-regional trade grew twice as fast as extra-regional trade. Trade within Latin America as a whole also grew faster than trade between Latin America and the rest of the world (see fig. 1). Although the 1990s was a decade of continued reform and expanded trade, new challenges arose. For example, Mexico and Brazil both faced serious financial crises, in 1995 and 1998, respectively; Hurricane Mitch devastated parts of Central America in 1998; and the Andean region has struggled with political instability and the effects of the drug trade. Also, Argentina has been mired in recession and has recently faced its own financial crisis. Despite reforms, many countries still face high unemployment rates and wide disparities between the wealthy and the poor. 
These economic and social obstacles create challenges for continued reform, economic development, and liberalization. The prospects for the FTAA agreement, which evolved out of the reform process, will be affected by how well countries resolve these challenges. At the same time, a successfully concluded FTAA agreement may help secure the liberalization that has already taken place and extend it to new areas. Beyond these economic benefits, the FTAA is widely regarded as a centerpiece of efforts to forge closer and more productive ties among Western Hemisphere nations, increase political stability, and strengthen democracy. While the FTAA should provide benefits, it may also adversely affect certain sectors. In addition, some labor and environmental groups are concerned that potential FTAA provisions may reduce the ability of countries to set and enforce high standards for health, safety, and the environment. As in the case with other international trade agreements, the FTAA has also drawn the attention of organizations and individuals apprehensive about increased globalization of international economic activity. Some progress has been made in the FTAA process, including building a technical foundation for FTAA negotiations. At the March 1998 San José Ministerial, ministers agreed on guiding principles for the FTAA. An organizational structure and objectives for negotiations were established, and overall and interim deadlines were set. Since then, draft chapters reflecting proposals on the topics under negotiation have been prepared. Milestones for progress in the current negotiating phase have been set, but challenges remain, including bridging differences on key topics. Since beginning the process in 1994, the 34 participating countries have succeeded in building a technical foundation for the negotiations. As shown in figure 2, from December 1994 to March 1998, participants developed guiding principles for FTAA negotiations. 
For example, they agreed that all decisions in the FTAA negotiating process would be made by consensus and that the FTAA would be a single undertaking, meaning that the agreement would be completed and implemented as a whole rather than in parts. They also agreed that the FTAA agreement would (1) be consistent with the rules and disciplines—or practices—of the WTO; (2) improve WTO rules and disciplines whenever possible and appropriate; and (3) coexist with other subregional agreements, such as Mercosur and NAFTA, to the extent that the rights and obligations of those agreements go beyond or are not covered by the FTAA. They also reached consensus on the overall structure, scope, and objectives of the negotiations. The participating countries then formally initiated the negotiations in 1998 at the San José Ministerial and the Santiago Summit of the Americas. The FTAA negotiations are organized into nine negotiating groups and four special committees and overseen by the vice-ministerial level Trade Negotiations Committee (TNC) (see fig. 3). The ministers set out the workplans for the negotiating process and select new chairs for the negotiating groups and committees in 18-month cycles. The chairmanship of the negotiations changes at the start of each 18-month negotiating cycle, with Ecuador serving as chair for the current cycle of negotiations. Brazil and the United States are set to co-chair the final cycle from November 2002 to December 2004. In preparation for the Buenos Aires Ministerial in April 2001, the negotiating groups produced a first draft text on their specific issues. The draft text is heavily bracketed, indicating that agreement on specific language has not been reached. Nevertheless, the draft text will form the basis for future negotiations, which are expected to narrow differences on the range of proposals currently under consideration. 
At the April 2001 Buenos Aires Ministerial and Quebec City Summit, FTAA countries set out objectives and interim deadlines to promote the progress of the negotiations during the current 18-month negotiating cycle (May 2001 to Oct. 2002), which will culminate at the next trade ministerial to be held in Ecuador (see fig. 4). Ministers also set specific goals and timetables for the current cycle: To move toward consensus on draft rules, ministers directed negotiating groups to consolidate text and eliminate—to the maximum extent possible—material that is in dispute. To prepare to begin negotiations on market access schedules, ministers instructed specific groups to develop recommendations by April 1, 2002, on the methods and modalities (basic ground rules) for these negotiations. The ministers also asked the groups to develop, where appropriate, inventories by April 2002 of tariffs, nontariff barriers, subsidies, and other practices that distort trade. The ministers directed negotiating groups to initiate negotiations on market access schedules no later than May 15, 2002. In addition, heads of state and government agreed at the Quebec City Summit to conclude the negotiations no later than January 2005 and to seek the entry into force of the agreement no later than December 2005. Despite this progress, numerous challenges remain. Among them are technical and substantive differences on the nine topics being negotiated. Chapter 2 of this report addresses five of the nine topics being addressed in an FTAA that relate to market opening: market access, agriculture, services, investment, and government procurement. Rules are also being developed on four other trade topics, which are addressed in chapter 3, including intellectual property rights (IPR); dispute settlement; subsidies, antidumping, and countervailing duties; and competition policy. 
Three special committees provide input to the TNC on crosscutting themes—namely, the treatment of smaller economies, civil society, and electronic commerce (e-commerce), which are addressed in chapter 4. Chapters 2, 3, and 4 provide an overview of the topic or theme and its importance, describe the mandate of the group and its progress to date, identify controversial or otherwise important issues, and discuss next steps. Chapter 5 discusses the potential effect of a completed FTAA on U.S. trade and investment with other Western Hemisphere countries. Appendix I presents information on U.S. trade and investment with the 34 countries negotiating the FTAA. Our objectives for this report were to describe (1) the progress made to date and the issues that remain in negotiating greater market opening among FTAA countries, (2) the progress made to date and the issues that remain in developing other rules and institutional provisions for an eventual FTAA agreement, (3) the significant crosscutting themes affecting the FTAA negotiations and how they have been addressed to date, and (4) the potential effects of a completed FTAA on U.S. trade and investment with other Western Hemisphere countries. To address the first three objectives, we reviewed executive branch documents, related publications, and economic literature, and we held discussions with lead U.S. government negotiators for each FTAA negotiating group. We also reviewed FTAA documents, including the draft FTAA agreement. We had discussions with foreign government officials representing each of the major negotiating blocs and with officials from the Inter-American Development Bank (IDB), the Organization of American States (OAS), and the United Nations Economic Commission for Latin America and the Caribbean (ECLAC), collectively known as the Tripartite Committee, which provides technical support to the negotiations. 
We reviewed formal comments about the FTAA that were made in response to Federal Register notices and submitted to the Office of the U.S. Trade Representative (USTR). We also met with experts on the FTAA and international trade negotiations and representatives from business and civil society groups that have expressed interest in the FTAA process. In addition, we traveled to Buenos Aires, Argentina, to take part in the Americas Business Forum and Academic Colloquium associated with the FTAA Trade Ministerial and attended public briefings by USTR and the Department of State for civil society representatives. To address the fourth objective, we analyzed U.S. and regional trade and investment data from 1990 to 2000, current U.S. and regional trade barriers, and market-distorting government policies. We also examined the extent to which FTAA countries were members of multilateral and bilateral trade and investment agreements with the United States. U.S. merchandise trade data came from Department of Commerce official trade statistics. Exports were measured in terms of domestic exports at “free alongside ship” value. Imports were measured in terms of imports for consumption at customs value. U.S. services trade and investment data came from the Bureau of Economic Analysis’ Survey of Current Business. World trade data came from the United Nations international trade database. U.S. tariff data came from the U.S. International Trade Commission. Some tariff rates are given as specific rates of duty (e.g., $5 per bushel) rather than ad valorem (percentage of value) rates. Ad valorem equivalent rates are conversions of specific rates to ad valorem rates, which allow average tariff rates to be calculated. To the extent that they were available from the International Trade Commission, these rates were used in the calculation of overall average tariff rates. U.S. trade and tariff data were analyzed at the eight-digit level of detail based on the harmonized system. 
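The ad valorem equivalent conversion described in the methodology above is straightforward arithmetic. The sketch below is a hypothetical illustration of the idea only, not the International Trade Commission's or GAO's actual methodology; all duty rates, unit values, and import values in it are made up.

```python
# Hypothetical illustration of converting a specific rate of duty to an
# ad valorem equivalent (AVE) and folding it into an import-weighted
# average tariff. Not an actual agency methodology; all figures invented.

def ad_valorem_equivalent(specific_duty, unit_value):
    """Convert a specific duty (e.g., $5 per bushel) to an ad valorem
    (percentage-of-value) rate, given the average unit value of imports."""
    return 100.0 * specific_duty / unit_value

def weighted_average_tariff(tariff_lines):
    """Import-weighted average of ad valorem tariff rates.
    Each tariff line is a (rate_in_percent, import_value) pair."""
    total_value = sum(value for _, value in tariff_lines)
    return sum(rate * value for rate, value in tariff_lines) / total_value

# A $5-per-bushel duty on a good with an average import value of
# $100 per bushel is equivalent to a 5 percent ad valorem rate.
ave = ad_valorem_equivalent(5.0, 100.0)

# Average the converted line with two ordinary ad valorem lines,
# weighting each line by its import value.
average = weighted_average_tariff([
    (ave, 2_000_000),   # converted specific-duty line
    (10.0, 1_000_000),  # 10 percent ad valorem line
    (0.0, 1_000_000),   # duty-free line
])
print(f"AVE: {ave} percent; weighted average tariff: {average} percent")
```

Weighting by import value is one common convention; a simple unweighted mean across tariff lines is another, and the two can diverge when high tariffs suppress the very imports that would give those lines weight.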
For determining U.S. imports subject to duties, the tariff schedule was combined with disaggregated trade data that identified imports by preferential trade program. Other FTAA countries’ average tariff rates came from the World Bank and the IDB. In some instances, these organizations calculated different average tariff rates for the same country in the same year. For example, the World Bank lists Uruguay’s average tariff rate at about 4.5 percent in 1999, while the IDB reports an average tariff rate at above 12 percent. For consistency, we reported World Bank calculated tariffs, unless they were not available for a particular country. In that case, we reported IDB tariff rates. We relied on reports by the International Trade Commission on U.S. services trade; the International Trade Commission and the United Nations on U.S. and world investment; and the WTO, OAS, IDB, and ECLAC on each of the negotiating areas. We did not estimate an economywide model of the overall effects of the FTAA. We did review economic studies that analyze some aspects of FTAA liberalization in an economywide framework. We also did not estimate the impact of an FTAA on production, labor, and prices overall or for individual sectors of the U.S. economy. For all four objectives, we relied on our past and ongoing work on trade liberalization in the Western Hemisphere. Because FTAA negotiation ground rules only allow countries to divulge their own positions, this report generally does not name the countries holding particular positions unless officials from those countries told us that it was acceptable to do so. We acknowledge that we analyzed the FTAA from the U.S. perspective, not that of other countries participating in the process. Finally, given the relatively early stage of FTAA negotiations, and the recent emergence of key information, such as a public version of the draft agreement, U.S. 
civil society groups and the public that either favor or oppose the FTAA are likely to be forthcoming with more concrete positions on FTAA negotiating topics. For example, USTR issued a Federal Register notice on July 12, 2001, soliciting specific views from the public on the draft FTAA agreement, but these comments were not received in time to be reflected in this report. We conducted our work from September 2000 through August 2001 in accordance with generally accepted government auditing standards. The five FTAA groups charged with negotiating market-opening opportunities—market access, agriculture, services, investment, and government procurement—have drafted rules and are now developing the databases and methods that they will use to schedule the reduction and elimination of trade barriers among FTAA participants. Each group faces a number of issues. The market access group has the broadest set of responsibilities, including tariff and nontariff barriers for industrial goods; rules of origin; customs procedures; and technical barriers to trade, such as product standards. Also, before this group can begin negotiations on tariff elimination—one of the principal goals of a free trade area—it must agree on which tariff rates to use as a starting point. The agriculture group faces many controversial issues in its discussions, including whether to include domestic support payments to farmers (subsidies) in the FTAA agreement and how to treat sensitive products. The services negotiating group faces tough choices on the scope, structure, and timing of liberalization. Discussions on investment reveal broad agreement on many basic principles, but they also reveal differences on coverage, investor-state dispute settlement, and labor and environmental provisions. Finally, government procurement is a relatively new area for many FTAA participants and presents both opportunities in terms of market opening and challenges in terms of common experience. 
The group also must resolve differences over how prescriptive FTAA rules should be. Table 1 provides an overview of the five FTAA groups charged with negotiating market-opening opportunities. The remainder of this chapter describes each of these topics, its importance, and the group’s negotiating mandate; progress to date; significant issues; and next steps. Information on the potential economic impact of trade liberalization for these topics can be found in chapter 5. The market access negotiating group is crafting the rules and tariff elimination schedules for intraregional trade in industrial products, which was approximately $650 billion in 1999. Through these market access negotiations, the United States is seeking to eliminate trade barriers and related impediments that restrict U.S. exports of goods to the hemisphere. Tariff barriers for FTAA countries on this trade, although falling, are still generally high, with applied tariffs of many FTAA countries set at rates double the U.S. average of 4.8 percent, as shown in figure 5. Other impediments, such as inefficient customs procedures, can also hinder trade. The market access group covers a greater number of issues than any of the other eight negotiating areas. Its broad scope includes the elimination of industrial tariff and nontariff measures, rules of origin, safeguards, customs procedures, and standards and technical barriers to trade. These issues affect whether a product can be imported, the ease with which the import occurs, and whether the product receives a preferential tariff rate. Trade ministers charged this negotiating group with a mandate to produce an agreement that progressively eliminates tariffs and nontariff barriers and other measures that restrict trade between participating countries. During the last 18-month negotiating period, the group compiled draft proposals for the rules governing market access issues. 
Participants have noted that all regional groups have been active in this process, which has been challenging, given the broad scope of the market access group. The resulting 113-page draft text includes a range of proposals, some of which are similar to WTO multilateral disciplines, while others recommend wholly new measures. The texts on rules of origin and safeguards, in particular, will be specifically tailored to a regional agreement. The market access group will ultimately produce three major products: a chapter on the overall rules covering market access, detailed country schedules for tariff liberalization, and detailed rules of origin. To meet these objectives, negotiators will need to cover a wide range of topics over the next 18 months. They will decide on the parameters for eliminating tariff and nontariff barriers and then begin negotiations on market access schedules. These decisions will affect the speed at which countries remove their barriers to FTAA imports. In addition, the negotiators will draft commitments on the other areas under their mandate. Rules of origin, which will determine how products qualify for FTAA preferential rates, will likely be complex to negotiate and potentially controversial for certain products. To craft a regional safeguard mechanism to protect industries harmed by surges in imports, negotiators will need to address countries’ desire to provide temporary relief for seriously trade-affected industries without making it too easy to create new barriers to trade. Finally, as tariff barriers are reduced, burdensome customs procedures and potentially restrictive technical standards could become important impediments to importing into a particular market. Tariff and nontariff barriers are the principal policy tools countries use to protect domestic markets. The FTAA negotiating group on market access is responsible for conducting the negotiations to eliminate tariffs on trade among the 34 countries. 
Before substantive negotiations can begin on the elimination of tariffs on specific goods, however, participating countries must agree on the methods and modalities, or ground rules, they will follow during later negotiations. FTAA countries must agree on the following: The base rate or starting point from which tariffs will be reduced. This issue involves reaching consensus on whether to use current (applied) rates, bound rates, or some other measure as the base from which to start negotiating. Current or applied rates are the tariff rates a country currently levies on particular goods. Bound tariffs are the maximum duties that a country has committed in the WTO to apply on those goods. Under WTO rules, a country may increase its applied rates up to but no higher than its bound rates. In practice, applied rates are often significantly lower. FTAA countries must determine the starting point, or base rate, from which tariffs will be eliminated. The higher this initial starting point, the longer it may take for actual tariff reductions to be realized. For example, if countries use the bound rate as the starting point, they may not be required to cut their applied rates until later in the reduction period. However, if the applied rate is used as the starting point, then importers will see liberalization within the first years of the agreement. To ensure that phased duty reductions produce genuine market openings, the United States is proposing that the base rate from which tariffs are phased out be the lower of either a product’s most favored nation applied rate in effect during the FTAA negotiations or the WTO bound rate at the end of the FTAA negotiating process. Pace of tariff elimination. This issue aims to define the process and timing used by countries to reduce a product’s tariff to zero. A common approach is to divide goods into baskets. 
For example, the tariffs on one basket of goods could be reduced to zero in 5 years, another basket in 7 years, and a third in 10 years. According to WTO rules tariffs must be eliminated on substantially all products within 10 years. The United States has proposed that products be grouped into three baskets with tariffs eliminated either immediately, in 5 years, or in 10 years for the most sensitive. Reference period for trade data. Negotiators must decide what years of trade data will be used to calculate each country’s concessions on a trade-weighted basis and to identify which countries have been the primary suppliers of particular products over the reference period. If a country is a major supplier of a product, it may be entitled to special status in the negotiations on that product. This issue will be negotiated with the assistance of a database that is being compiled on tariff rates and information on trade flows from 1997 to 2001. The database is expected to be ready by November 1, 2001. The database also will be useful for countries in determining their negotiating priorities by providing information on current trade and tariff levels. FTAA negotiations on rules of origin requirements may be complex and sensitive. These requirements will determine whether a product qualifies for tariff preferences under the FTAA. For example, they may require that for a certain product to be considered from the FTAA region, at least 60 percent of its value must come from FTAA countries’ labor, parts, and production. Negotiations on rules of origin may be complex if they are specified differently for specific products and entail unique requirements. Also, origin rules may be more restrictive for some sensitive products. For example, under two unilateral trade programs, the United States recently offered tariff preferences for import-sensitive apparel imports, but only if producers use certain U.S.-made fabrics and materials to produce them. 
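The base-rate rule and the basket approach described above can be sketched together. The U.S. base-rate proposal (the lower of the applied MFN rate or the WTO bound rate) comes from the source; the equal-annual-cut staging below is an illustrative assumption, since the actual FTAA modalities were still to be negotiated, and the rates used are hypothetical.

```python
def us_proposed_base_rate(applied_mfn_rate, wto_bound_rate):
    # U.S. proposal: phase-outs start from the LOWER of the MFN applied
    # rate in effect during the negotiations or the WTO bound rate.
    return min(applied_mfn_rate, wto_bound_rate)

def phase_out_schedule(base_rate, years):
    """Equal annual cuts bringing the tariff to zero after `years`
    (an assumed staging, not the negotiated modality). Returns the
    rate in effect at the start of each year of the phase-out."""
    step = base_rate / years
    return [round(base_rate - step * y, 6) for y in range(years + 1)]

# Hypothetical product: 12% applied, 35% bound. Using the bound rate
# would delay real cuts; the U.S. proposal starts from 12%.
base = us_proposed_base_rate(12.0, 35.0)
print(phase_out_schedule(base, 5))  # [12.0, 9.6, 7.2, 4.8, 2.4, 0.0]
```

Note how the choice of base rate matters: starting from the 35 percent bound rate, the first several annual cuts would leave the scheduled rate above the 12 percent actually applied, so importers would see no liberalization in the early years.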
Rules of origin are intended to ensure that the benefits of a free trade agreement primarily accrue to the countries covered by the agreement. However, the more restrictive the requirements are for particular products, the more difficult it is for exporters to qualify for the preferential duty. Restrictive rules of origin requirements have been identified as a reason why some exporters do not fully use special tariff preferences offered to them. Negotiators must agree upon the types of origin requirements they will use in the FTAA agreement. There are generally two types of origin requirements. The first are value tests, which confer origin on the basis of the percentage of value added in a country. For example, a value test may confer origin if at least 60 percent of the worth of a product comes either from FTAA inputs or the production process in an FTAA country. The second type of origin requirement is the tariff shift approach. This approach confers origin if the production process transforms a product and its inputs enough to classify it as a different product in the tariff schedule. For example, a tariff shift approach might confer origin if a final product, such as a washing machine, is categorized differently on the tariff schedule than its individual parts. With the tariff shift approach, countries also must decide at what level of detail on the tariff schedule the shift takes place and whether to make those decisions on a product-by-product basis. For example, two products may be in the same aggregate grouping, such as automobiles and auto parts, but they may be in different groups at a more detailed level. The U.S. unilateral preference programs, such as the Generalized System of Preferences, generally follow a value-added approach, while NAFTA generally uses a tariff shift approach. Several Latin American trade agreements have used a combination of a value-added approach with a tariff shift approach at one uniform level of detail. 
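The two types of origin requirement described above are simple decision rules, and a minimal sketch may make the contrast concrete. The 60 percent threshold comes from the example in the text; the function names, the single-level tariff-shift check, and the sample headings are illustrative assumptions.

```python
def passes_value_test(ftaa_content_value, total_value, threshold=0.60):
    # Value test: origin is conferred when at least `threshold` of the
    # product's value comes from FTAA inputs and processing.
    return ftaa_content_value / total_value >= threshold

def passes_tariff_shift(input_headings, final_heading):
    # Tariff shift test: origin is conferred when production moves the
    # product into a tariff heading different from all of its inputs
    # (checked here at one uniform level of detail, for simplicity).
    return all(h != final_heading for h in input_headings)

# Illustrative washing machine (heading "8450") assembled from motors
# ("8501") and steel housings ("7326"): the headings shift, so it
# would qualify under the tariff shift approach.
print(passes_value_test(65.0, 100.0))                 # True
print(passes_tariff_shift(["8501", "7326"], "8450"))  # True
```

In practice the two approaches can be combined, as several Latin American agreements have done, and under the U.S. position the level of detail at which a shift is tested would itself vary product by product.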
The United States supports a tariff shift approach for the FTAA, but without a uniform rule for the level of detail for the shift. U.S. negotiators argue that the tariff shift approach is less complicated and burdensome to administer than the value- added approach, and that the level of detail at which the shift takes place should depend on the type of product. Negotiators have the challenge of crafting a safeguard mechanism that meets FTAA countries’ desire to provide temporary assistance to industries seriously injured by increased regional competition without making it too easy to erect new trade barriers. Safeguards are temporary measures that either freeze or roll back trade liberalization when it is shown that the liberalization has caused injury to a domestic industry. These measures are intended to provide the industry with time to adjust to increased competition. However, if these measures are too easy to apply, they can potentially extend protections that the agreement intended to remove. The draft FTAA text includes a variety of proposals that cover (1) the procedures a country must follow to use a safeguard measure; (2) the degree of injury or impact on the domestic industry that must be shown; (3) the types of measures that could be applied (e.g., tariff or quotas); and (4) the length of time the measures can stay in place. Many proposals draw on the WTO safeguard measure, which allows for tariffs or quotas to be used for up to 4 years if increased quantities of imports can be shown to cause or threaten to cause serious injury to a domestic industry. The United States has proposed that the FTAA only allow tariffs, not quotas, to be used (as in NAFTA), for up to 3 years, if imports are shown to be a substantial cause or threat of serious injury to a domestic industry. Also, the United States has proposed that FTAA safeguard measures would only be available for countries during a 10-year transitional period. 
Negotiators also must decide whether FTAA countries may be exempted in certain circumstances if other FTAA countries use the WTO safeguard mechanism. In addition, countries may decide to negotiate separate sector-specific safeguards that would provide separate rules for a particular product or sector, such as textiles. Negotiating customs procedures within a trade agreement in the Western Hemisphere is a new and challenging undertaking. Other trade agreements, including the WTO, have had few concrete applications in this area. Thus, many of the countries involved in the FTAA process lack experience in this subject. Other challenges include the countries’ human capital and institutional capacity for implementing customs procedures and the overhaul of laws and procedures necessary to enforce these proposals. The San José, Costa Rica, Ministerial provided FTAA negotiators with a mandate to simplify customs procedures to facilitate trade and reduce administrative costs and also to promote customs mechanisms and measures to ensure that these operations be conducted with transparency, efficiency, integrity, and responsibility. As of July 3, 2001, the bracketed draft text included proposals on transparency and information dissemination, automation, and combating fraud and other illicit customs-related activities. However, some countries have wanted to include only general customs principles, not specific policies. The major objectives of the United States include transparency of customs procedures and their administration, establishment of an advance customs rulings regime, institution of a review and appeals process for customs decisions, and improvement of customs processing. 
To achieve these objectives, the United States has proposed that (1) the customs procedures chapter require all FTAA countries to make publicly available information regarding customs laws, regulations, guidelines, procedures, and rulings; (2) countries be required to provide a system for issuing advance rulings before importing a good, including determinations of tariff classification, customs valuation, or country of origin; and (3) a two-step entry process separate the release of merchandise from final payment of duty, thereby reducing time and costs associated with processing. The FTAA negotiators have much work to do to reach consensus on customs procedures. Some countries reportedly view many of the proposals in the draft text as too politically sensitive, (e.g., proposals on anticorruption measures); others as technologically inappropriate for some nations to adopt (e.g., automation); and others as too burdensome. Standards and technical barriers to trade can be very significant to exporters because, despite tariff elimination, products still may be denied access if they fail to meet certain technical requirements. The WTO Agreement on Technical Barriers to Trade preserves the rights of countries to apply restrictions on imports for human health, safety, or environmental reasons while establishing procedures for avoiding measures that discriminate against imports unnecessarily. FTAA countries must decide if they want to apply additional rules in this area through the FTAA agreement. New rules may impact the balance between domestic regulatory interests and the elimination of trade barriers. The United States has not yet submitted a proposal in this area because it continues to develop its position on whether certain new disciplines would be appropriate. Proposals in the draft text are numerous and diverse and reflect FTAA countries’ domestic perspectives on regulation. 
Countries may decide that additional rules are useful for expanding existing WTO commitments or that additional notifications and consultations are necessary when such measures involve FTAA partners. During the current 18-month negotiation phase, the market access group will face a challenging workload, including negotiating the schedules of tariff elimination, drafting detailed rules of origin, and reducing differences in the draft text on the market access rules. Ministers specifically tasked the group at the April 2001 Ministerial to complete the hemispheric database on current applied and bound tariff rates by November 1, 2001; compile a preliminary inventory of nontariff measures along with a methodology for removing them by April 1, 2002; intensify negotiations on a safeguard regime and submit a report on their progress to trade ministers by April 1, 2002; decide the methods and modalities for negotiating the tariff schedules and rules of origin by April 1, 2002; and begin negotiations on tariff schedules and rules of origin by May 15, 2002. The hemispheric database and preliminary inventory will provide negotiators with information on each country’s current tariff and nontariff barriers that will be used in negotiating their elimination. The trade ministers also instructed the market access group to coordinate with the negotiating group on agriculture since both groups will be negotiating the modalities and country-specific schedules to eliminate tariffs on their respective products. Agriculture is one of the most hotly debated issues in the FTAA negotiations. According to the U.S. Secretary of Agriculture, the FTAA could expand U.S. agricultural exports to the hemisphere by more than $1.5 billion annually. The U.S. Department of Agriculture’s Economic Research Service estimates that an FTAA could increase agricultural exports and imports and increase agricultural income for almost every FTAA country. 
FTAA countries view agriculture as a top trade priority, with each maintaining offensive and defensive interests. The United States, for example, would like to see increased access to South American grain markets but maintains high tariffs on sugar and orange juice and provides U.S. farmers with domestic support payments on a number of products. Chile, on the other hand, does not provide its farmers with domestic supports but maintains a price band system for wheat, wheat flour, vegetable oil, and sugar that is designed to insulate domestic markets from international price fluctuations. FTAA agriculture negotiators seek to move beyond WTO obligations in the hemisphere by further reducing and eliminating tariffs and nontariff barriers, eliminating export subsidies, addressing other trade-distorting practices, and facilitating the implementation of the WTO sanitary and phytosanitary (SPS) agreement. The Negotiating Group on Agriculture, established by the San José declaration, was given several mandates to meet these goals. Because the agriculture and market access groups are closely related, the ministers decided that the objectives of the market access group should also apply to the negotiating group on agriculture. This means that the agriculture group will work to progressively eliminate tariffs and nontariff barriers, all agricultural tariffs will be subject to negotiation, and different trade liberalization timetables may exist. However, ministers agreed that rules of origin, customs procedures, and technical barriers to trade involving agriculture would be addressed solely in the market access group. The agriculture group was also mandated to (1) eliminate agricultural export subsidies affecting trade in the hemisphere, (2) identify and address other trade-distorting practices for agricultural products, and (3) ensure that SPS measures are applied consistently with the WTO SPS agreement. (See ch. 
5 for more information on the economic impact of tariff reductions for agricultural products.) To date, the agriculture group has prepared a 45-page draft text that presents a range of proposals on market access for agricultural goods, export subsidies, other practices that distort trade in agriculture, and SPS measures. The group is working toward agreement on this draft text and must also prepare schedules for the reduction of agricultural tariffs, nontariff barriers, export subsidies, and other trade-distorting practices. Before they can begin negotiations on the schedules, they must decide how to conduct these negotiations. The agriculture group faces four significant issues. FTAA countries have not agreed on whether the agreement will address domestic supports. They also have not determined whether sensitive agricultural products will receive exceptions in the tariff negotiations. While FTAA countries have agreed to eliminate export subsidies within the hemisphere, they have not determined how to address third-party export subsidies. Finally, while all FTAA countries seek the full implementation of the WTO SPS agreement, they have not agreed on how to treat it within the text of the FTAA. One controversial issue within the agriculture group is the issue of whether to include domestic support programs in the negotiations on other trade- distorting measures. Some countries have proposed that the FTAA go beyond the current WTO agreement on agricultural domestic supports by reducing and eliminating some supports that are currently permitted. These countries feel that much of their trade protection comes in the form of tariffs, and, if they eliminate tariffs, their products would be disadvantaged in the face of subsidized products. Brazilian officials have been particularly vocal on the issue of domestic supports, declaring that the negotiations could not proceed if the United States refuses to address domestic support programs. 
The United States, however, has publicly stated that commitments to domestic support reduction can only be achieved in multilateral negotiations, such as those in the WTO. U.S. negotiators argue that because U.S. competitors, such as the EU, employ such supports, reducing them in the FTAA instead of the WTO would amount to unilateral disarmament. At least one other country has a similar position on this issue. This impasse has led several FTAA experts to conclude that an FTAA agreement on agriculture will depend on progress made in addressing domestic support in the WTO. Once the agriculture group begins tariff and nontariff negotiations, negotiators must determine how to handle each country’s sensitive sectors. There has been no discussion on specific agricultural products beyond the San José declaration, which states that all products will be subject to negotiation. Two FTAA experts reported that they expect certain products will receive special treatment in the negotiations, such as longer phase-out periods or outright exceptions to tariff elimination. Others have stated that they oppose exceptions to tariff elimination for agricultural products. The issue of product exceptions will be controversial because many of the products that are sensitive to one country are strong exports for another. For example, Brazil has called for increased access to the U.S. orange juice market and is a major producer of sugar, two products for which the United States maintains relatively high tariffs. However, both industries have asked U.S. negotiators to exclude their products from the negotiations. In addition, portions of the U.S. fruit, vegetable, and beef industries have requested some degree of product exception. Although ministers have agreed to eliminate export subsidies in the hemisphere, they have not reached agreement on how to handle third-party export subsidies, nor have they agreed on what constitutes an export subsidy. 
If they eliminated their own subsidies within the hemisphere, they would face a disadvantage in the face of third-party countries that use export subsidies on products coming into the hemisphere. Similarly, FTAA countries disagree on whether they need to create rules on the use of export subsidies outside of the hemisphere. Solutions proposed so far have included negotiating with third parties not to apply their subsidies, suspending tariff preferences, and allowing for the option of fines if export subsidies are used in either of these situations. Some FTAA countries want to go beyond the definition of an export subsidy currently used by the WTO agreement on agriculture to include other programs, such as export credits, credit guarantees, insurance programs, and food aid. The United States, however, has proposed using the WTO definition of export subsidies. The United States does not want export credits; export credit guarantees or insurance programs, when provided in a manner consistent with WTO rights and obligations; and international food aid to be considered export subsidies for purposes of the FTAA, but it does call for the staged elimination of exclusive export rights granted to state trading enterprises (such as the Canadian Wheat Board). FTAA countries have agreed to fully implement the WTO SPS agreement but have not agreed on how best to accomplish that goal. Some countries have put forward proposals that would include a detailed rewrite of the WTO SPS agreement in the FTAA text. Instead, the United States has proposed that FTAA countries agree to strengthen collaboration on matters within the purview of the WTO SPS committee and relevant international bodies. The United States also seeks agreement from FTAA countries to exchange information on new research data and risk assessment procedures and to coordinate technical assistance. In addition, several U.S. 
agriculture groups have identified SPS issues that they would like addressed within the context of an agreement. For example, the National Cattlemen’s Beef Association has called for the full eradication of foot-and- mouth disease in the hemisphere. Ministers directed the agriculture group to undertake several actions in the next negotiating phase, including establishing modalities for market- opening negotiations, beginning the market access negotiations, and intensifying efforts to resolve differences in the draft text. Among other things, ministers instructed the group to develop recommendations on the modalities for tariff negotiations by April 1, 2002, in order to begin these negotiations by May 15, 2002; accelerate the process of identifying nontariff measures so as to have, by April 2002, a preliminary inventory of such measures; submit recommendations on the scope and methodology for eliminating export subsidies affecting trade in agricultural products in the hemisphere by April 1, 2002; make recommendations on the types of measures and a methodology to develop disciplines on the treatment of all other practices that distort trade in agricultural products by April 1, 2002; establish a notification and counter-notification process for SPS measures by April 2002 and develop mechanisms to facilitate the full implementation of the WTO SPS agreement; and submit a new version of the draft text by August 2002. According to FTAA experts, many similar proposals in the text could be consolidated during this negotiating phase. This could result in a text that has more clearly stated positions by next August. Still, these experts believe that while the group may be able to negotiate away many of the brackets by consolidating and eliminating redundancies, it is doubtful that they will be able to resolve the major issues. 
Finding common ground on the methods and inventories for negotiating export subsidies and other trade-distorting practices, including domestic supports, may be challenging. Latin American countries are looking for some progress on export subsidies in April 2002 before they proceed with the tariff negotiations. Specifically, they would like to see a commitment from the United States to negotiate domestic support. As the world’s leading exporter of services ($253 billion in 1999) and with its market for services relatively open, the United States has a broad interest in liberalizing services trade across most sectors. The FTAA negotiations include a range of service sectors, including telecommunications, financial, professional, distribution, and travel and tourism services. Although the services negotiating group has made progress, substantive negotiations lie ahead on key topics, including the scope, structure, and timing of market-opening commitments. Many FTAA countries have just begun to liberalize their service sectors, and most have made limited multilateral commitments to open their markets. For example, an OAS study found that, except for Argentina, Canada, and the United States, all other countries in the FTAA made moderately low to very low service commitments in the WTO. However, many service sectors, such as telecommunications and distribution, are important to a domestic economy’s overall productivity and development. Liberalizing these service sectors can foster greater competition and efficiency. Some countries, such as Argentina, Brazil, and Venezuela, have privatized previously state-owned service monopolies as part of their economic reform plans, and some subregional trade agreements, such as those among Mercosur and the Andean Community countries, call for negotiations to liberalize services trade. 
FTAA trade ministers agreed that the mandate of the negotiating group on services is to establish disciplines that will progressively liberalize trade in services and create a free trade area under conditions of certainty and transparency. (See ch. 5 for more information on the economic impact of hemispheric services liberalization.) Over the 18 months leading up to the ministerial of April 2001, the services negotiators compiled a 38-page draft text of proposals covering the scope and provisions of the services chapter of the agreement. The draft text contains several broad topics that will be included in the agreement (such as provisions on most-favored nation and national treatment), but the specific disciplines and final language still must be negotiated. The draft text also includes proposals on numerous other topics that at least one country had recommended including in the final chapter. These additional topics include safeguards, subsidy provisions, general or security exceptions to the rules, and special rules for domestic regulations. In addition to the rules for services trade, the services group also will need to complete individual country schedules of market access commitments. In these schedules, each country will describe what they pledge to do to liberalize specific sectors and what reservations to the general rules they propose to take for individual sectors or measures. To produce a chapter with the rules for services trade and the individual countries’ schedules of commitments, negotiators face several challenges. One such challenge negotiators face involves the scope of the services chapter. In the WTO services agreement, the coverage includes both a cross-border supply of services and the supply of a service by a company with a commercial presence in another country’s market. Companies can establish a commercial presence by investing, but unlike the WTO, the FTAA has a separate negotiating group on investment (discussed below). 
The United States wants to deal with services-related investment primarily in the investment chapter. However, the current draft text contains other proposals that would include the commercial presence of a service provider under the scope of the services chapter. Negotiators will need to reconcile other scope-related issues, including (1) the ways in which services provisions in the agreement apply to subnational levels of government and (2) the timing for developing additional disciplines for sectors, such as telecommunications or specialized provisions for financial services. The WTO already has additional agreements on basic telecommunications and financial services, but not all FTAA countries are signatories or have fully adopted these agreements. The United States has recommended that there be specialized provisions for financial services partly because of the regulatory issues related to the sectors’ importance to the overall economy. Negotiators also must address the structure of the market access schedules of commitments that each country will negotiate. There are two approaches to scheduling services commitments: a top-down “negative list” approach and a bottom-up “positive list” approach. In a negative list, all service sectors are subject to the core rules, and countries must then indicate which sectors or measures they would seek to exclude from coverage. For example, a services agreement may have a “national treatment” provision that foreign service providers will be treated at least as well as domestic service providers. If a country intends to subject foreign service providers in the insurance industry to additional regulations, then it would need to take an exception to the national treatment rule. The positive list approach works the opposite way. A country specifies in its schedule only the commitments it plans to make. If a sector is not included in the schedule, then it is not covered by the agreement. 
The WTO General Agreement on Trade in Services generally follows a positive list approach, and NAFTA follows a negative list approach. The United States advocates using the negative list approach in the FTAA services chapter, arguing that it is ambitious but allows countries the flexibility to deal with domestic sensitivities (by scheduling reservations). Other FTAA countries, however, have proposed using a positive list approach or some variant. Although most major subregional agreements in the Western Hemisphere have used a negative list, Mercosur used a positive list approach for its services liberalization. Business representatives throughout the hemisphere that met at the April ministerial were split over whether the FTAA should use a positive or negative list approach. Some U.S. civil society and labor groups oppose using a negative list approach because they believe it later may limit government social policies if exceptions for particular sectors are not built into the agreement. They are concerned that countries could use a comprehensive services agreement to challenge government provision of social services, such as health and education, if those services compete with private sector firms. USTR has stated that it does not intend to use the FTAA to promote the privatization of social services. Negotiators will also have to agree on the timing of liberalization to be achieved through the market access commitments. Countries will begin in May 2002 to negotiate the schedules of commitments to allow access into their markets. Since this phase has not yet begun, countries generally have not revealed their goals nor is it clear how difficult it will be to resolve differences. However, U.S. service companies are considered some of the most competitive in the world, and some FTAA countries may be concerned about the final commitments they will make and the speed at which liberalization will take place. 
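The contrast between the two scheduling approaches can be sketched as simple set operations. This is a hypothetical illustration only; the sector names and schedules below are invented, not drawn from any negotiating text:

```python
# Hypothetical illustration of negative-list vs. positive-list scheduling.
# Sector names and schedules are invented for the example.

ALL_SECTORS = {"telecom", "finance", "insurance", "distribution", "tourism"}

def covered_negative_list(reservations):
    """Top-down: every sector is covered unless a country reserves it."""
    return ALL_SECTORS - set(reservations)

def covered_positive_list(commitments):
    """Bottom-up: only the sectors a country explicitly schedules are covered."""
    return ALL_SECTORS & set(commitments)

# One reservation under a negative list...
negative = covered_negative_list({"insurance"})
# ...versus scheduling the equivalent commitments under a positive list.
positive = covered_positive_list({"telecom", "finance", "distribution", "tourism"})
print(sorted(negative) == sorted(positive))  # True: identical initial coverage
```

As the surrounding text notes, the two approaches can produce the same initial coverage; the practical difference is the default rule, since under a negative list a sector a country never mentions is automatically covered.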
Related to this, the draft text includes potential language on a safeguard mechanism for services. A safeguard measure may provide negotiators some incentive to commit to greater liberalization because they will have a mechanism to ease potentially adverse effects. The negotiators' mandate calls for the progressive liberalization of trade in services, but achieving this may be difficult for two reasons. First, services involve domestic regulatory and qualitative provisions that may in practice restrict foreigners' access to markets. Given these domestic regulations, free market access may be hard to define. Second, countries may differ on whether "progressively liberalizing" services means achieving full liberalization through one round of negotiations or through a series of rounds in future years, which would be scheduled in the agreement. Some subregional agreements, including Mercosur, have used successive rounds of negotiations, while NAFTA countries liberalized services through a single agreement. In addition, some members of subregional agreements are attempting to preserve preferences under those agreements from the scope of the FTAA liberalization. During the current 18-month phase of negotiations, the services group will try to bridge differences in the draft text of proposed rules. These will include refining the text in agreed-upon areas of negotiation, such as most-favored nation and national treatment provisions, and deciding which additional subjects the agreement should cover. Simultaneously, negotiators will seek agreement on the modalities for negotiating specific country schedules of commitments by April 1, 2002, for negotiations set to begin May 15, 2002. These decisions include whether to use a positive or negative list approach, the structure of the schedules (i.e., the format), and the process to use in negotiating country commitment offers.
Although many outstanding details still need to be resolved, FTAA negotiations on investment have yielded broad agreement on the thrust of the chapter and the types of investment protection that the investment chapter will address. However, the breadth of the forms of investment that will be covered and whether the establishment or entry of investment will be covered remain controversial. Consistent with the agreed mandate for the negotiation, the United States proposes a comprehensive agreement that covers both entry and operation of investment and direct and portfolio investment. Portfolio investment, both stocks and bonds, is commercially important for the United States, accounting for 60 percent of the $661 billion U.S. investment in FTAA countries in 1999. Figure 6 shows the relative shares of U.S. FTAA investments in foreign direct investment (FDI), stocks, and bonds. However, some other countries reportedly believe this comprehensive approach is too broad. The investment chapter is also where the outcome of internal U.S. debates could make it more or less difficult to reach an overall FTAA agreement. The debates center on two issues: the extent of the ability of investors to challenge government actions as contravening FTAA investment disciplines and the inclusion of labor and environmental provisions in the text of an FTAA. As one of the largest foreign investors in Latin America, with investment growing sharply in recent years, the United States has a keen interest in FTAA negotiations on this issue. Some U.S. investment is subject to conditions that hinder efficiency, and much of it is not protected by international agreements. For example, the United States has bilateral investment treaties in force with only 8 of the 33 other FTAA participants. NAFTA protects U.S. investment in another two FTAA participants (Canada and Mexico). But the United States does not have agreements with countries such as Brazil, the largest Latin American nation.
Although unilateral liberalization of investment regimes has occurred, it could be reversed in the absence of international agreements. In addition to better protection of U.S. investors, an FTAA could further liberalize investment regimes and improve U.S. options for serving growing local markets throughout the hemisphere. Other FTAA participants see FTAA investment rules as a way to send a positive signal to foreign investors, which they seek to attract to foster economic growth and stimulate competition and technology transfer. An investment agreement could set basic ground rules for entry and treatment of investment, increasing certainty and lowering risk for potential investors. Companies from Chile, Mexico, and elsewhere in Latin America are also beginning to invest abroad. Indeed, the smaller nations in the region are reportedly the key drivers for an ambitious investment accord within the FTAA. FTAA investment negotiations aim to go beyond the WTO's coverage of the issue, which is limited, and to build upon subregional agreements such as NAFTA, which contains extensive investment disciplines with respect to the United States, Canada, and Mexico. Such an investment agreement could commit the parties to open their markets to investment from elsewhere in the hemisphere, set minimum standards of treatment for investors, and establish mechanisms for the resolution of disputes. The mandate of the negotiating group on investment, as established by the San José Ministerial Declaration, is to "establish a fair and transparent legal framework to promote investment through the creation of a stable and predictable environment that protects the investor, his investment and related flows, without creating obstacles to investments from outside the hemisphere." The group is to develop (1) a framework incorporating comprehensive rights and obligations on investment and (2) a methodology to consider potential reservations and exceptions to the obligations.
The negotiating group has produced a 43-page draft FTAA chapter on investment that incorporates the proposals received to date from FTAA participants. The draft chapter addresses a number of issues, including:

- scope of application;
- standards of treatment (national treatment, most-favored nation treatment, and a minimum or general standard of treatment);
- performance requirements;
- key personnel;
- transfers;
- expropriation and compensation;
- compensation for losses;
- general exceptions and reservations;
- dispute settlement, which accounts for 16 of the draft's 43 pages;
- basic definitions, including of investment and investor;
- transparency of laws and regulations; and
- commitments not to relax labor and environmental laws to attract investment.

The Tripartite Committee has also produced a compendium of investment agreements, a comparison of investment regimes, and annual reports on investment flows. Discussions to date reportedly reveal broad agreement among FTAA governments about many basic investment disciplines. In part, this is due to the foundation laid in more than 60 bilateral investment treaties by countries within the region and various subregional agreements. These agreements have established common approaches to defining investment and investor, setting standards of treatment for investors, and settling disputes. As a result, key participants report that the broad outlines of an FTAA agreement on investment are visible. However, several topics appear likely to be controversial or otherwise important in the negotiations and many other details must be resolved. The investment chapter has also fueled debate on two issues that are controversial domestically: the ability of investors to challenge government actions as contravening FTAA investment disciplines, and the inclusion of labor and environmental provisions in the FTAA. The outcome of these debates ultimately could affect the willingness of FTAA countries to conclude an overall agreement.
Investment is a lightning rod for opposition to the FTAA by U.S. environmental, labor, and consumer nongovernmental organizations, which are concerned that investment rules could undermine a government's ability to act in the public interest. The FTAA's draft investment rules have already drawn fire from such organizations, largely on the grounds that multinational corporations may be given too much power relative to governments and citizens. Their biggest concern is over the prospect that private investors would be given direct access to investor-state dispute settlement to challenge government noncompliance with the FTAA. Governments can be required to pay the investor monetary damages if the investor's complaint is upheld by a final award. Such investor-state provisions have been widely embraced under NAFTA and bilateral investment treaties in effect throughout the world and are favored by U.S. business as an efficient and impartial means for enforcing their rights, in lieu of local court systems, which might be very slow or otherwise deficient. Although tribunals have no authority to recommend or require changes to domestic legislation that violates the provisions, proceedings brought under NAFTA have provoked concerns that such challenges could undermine a government's ability to protect health, safety, and the environment; affect the balance between federal and state control; and sideline U.S. courts in favor of international arbitration. FTAA investment negotiations are also the epicenter for another topic that has been controversial domestically, the treatment of labor and the environment in an FTAA. A U.S. proposal to include provisions on labor and the environment in the FTAA investment chapter revealed deeply held and divergent opinions among FTAA participants on the overarching question of whether an FTAA should include labor and environmental provisions at all, and, if so, how they would be enforced.
The United States remains divided domestically on this issue. Late in 2000, the United States tabled language similar to NAFTA stating that countries agree not to relax environmental or labor standards to attract investment. However, strong opposition from most other FTAA nations on the grounds that labor and environmental provisions were “off the table” in FTAA negotiations resulted in the initial exclusion of this proposal from the draft chapter. The controversy prompted a call for guidance. At their April 2001 meeting in Buenos Aires, FTAA ministers decided that “any delegation has a right to make proposals it deems relevant for the effective progress of the process, which may eventually be placed in brackets” (signifying that the language contained therein is not agreed on). Within the FTAA negotiating group on investment, coverage is an important and difficult issue. One question is whether the agreement will only cover treatment of investment once admitted or include a general “right of establishment” obliging governments to permit investment to enter. Consistent with the goal of obtaining a comprehensive agreement, the United States proposed that nondiscriminatory treatment apply to the “preestablishment” phase of investment, which would, except where parties negotiate reservations for sensitive sectors, effectively accord the signatories’ investors the right to establish, acquire, or expand an investment on an equal footing with domestic and other foreign investors. Other FTAA participants also support covering the preestablishment phase. However, even though many FTAA nations have unilaterally liberalized foreign investors’ entry, some are reluctant to guarantee a general right of establishment to foreign investors in an FTAA. Another difficult issue is whether and how to cover portfolio investment. 
A majority of countries, including the United States, have proposed a broad, asset-based definition of investment that includes portfolio investment, some contracts and concessions, and intellectual property. However, they differ on the specific details of this definition. For example, some propose to narrow the definition to exclude speculative and certain other transactions and to allow governments to limit transfers if problems arise. Given the Asian financial crisis and concern that short-term fluctuations in capital flows contribute to currency fluctuations and balance of payments crises, certain nations oppose covering portfolio investment in the definition at all. Other countries propose addressing this concern by providing an exception to the transfers protections for these situations, rather than foreclosing portfolio investment from all protections of the agreement. Approaches to performance requirements also differ. Performance requirements—such as local content, trade balancing, local hiring or management, and technology transfer requirements—are sometimes conditions for obtaining incentives or benefits from the host government and can also be conditions for establishing an investment. The United States proposes to go beyond the WTO Trade-Related Investment Measures or "TRIMS" agreement and current bilateral investment treaties in prohibiting (subject to certain exceptions) several such performance requirements and is proposing disciplines similar to NAFTA. NAFTA and the U.S. FTAA proposal discipline certain performance requirements whether they are tied to an advantage or imposed as a condition for establishment. Other performance requirements, such as technology transfer, are only disciplined when they are a condition for establishment. Other FTAA participants also want to go beyond the WTO, but still others want to be able to employ such tools, which they see as important to promoting development. 
The negotiating group on investment has been charged to come up with a second draft of its chapter by August 2002. In addition, it is also to present recommendations on negotiating modalities and procedures to the TNC by April 1, 2002. The TNC is to evaluate the negotiating group's recommendations on modalities at its first meeting after April 1, 2002, to initiate investment coverage negotiations no later than May 15, 2002. There are two basic modality issues. One is the approach that will be taken to negotiating coverage. The United States is expected to propose a "top-down" or negative list approach to coverage, which starts from the premise that all sectors are covered unless specifically reserved or excepted. The alternative is a "bottom-up" or positive list approach, which starts from the assumption that nothing is covered and builds up from there by identifying covered sectors. Both methods could result in similar levels of market access commitments initially because a member would be expected to include a reservation in a negative list approach for any sector in which it declines to take commitments under a positive list approach. However, the choice of approach might have implications for future investment access. Under a negative list approach, new investment measures would have to conform (unless they fell within one of the general exceptions enumerated in the FTAA). Under a positive list approach, new discriminatory measures would be allowed in sectors or areas not included in members' schedules. The second issue is the form that reservations will take. NAFTA bases reservations and exceptions primarily on existing law, permitting exceptions for sectors on a limited basis. In contrast, U.S. bilateral investment treaties except broad sectors. Again, the degree of specificity could have implications for future access. The negotiating group on investment will need to coordinate with the FTAA negotiating group on services as it performs these tasks.
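The implication for future access described above can be sketched as a pair of rules. This is a hypothetical illustration; the sector names are invented:

```python
# Hypothetical sketch: how each approach treats a NEW discriminatory measure
# in a sector that was neither reserved (negative list) nor scheduled
# (positive list) when the agreement was concluded.

def new_measure_allowed_negative(sector, reservations):
    """Negative list: a new measure must conform unless the sector was
    reserved at the outset (general exceptions aside)."""
    return sector in reservations

def new_measure_allowed_positive(sector, scheduled):
    """Positive list: new discriminatory measures remain permitted in any
    sector the country never scheduled."""
    return sector not in scheduled

# "mining" was neither reserved nor scheduled when the agreement was signed.
print(new_measure_allowed_negative("mining", reservations=set()))    # False
print(new_measure_allowed_positive("mining", scheduled={"telecom"}))  # True
```

The sketch captures the report's point: even if initial access is similar, the negative list locks in openness for unmentioned sectors, while the positive list leaves them unconstrained.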
The United States has proposed that the FTAA investment chapter apply to all investment, whether it relates to a good or a service. Because some services are provided through investment and others are provided through cross-border trade, how the issue of taking reservations is handled in both the investment and services chapter will be important to determining the ground rules for service providers in the hemisphere. Government procurement in the FTAA negotiations offers potentially great market-opening opportunities for the participants. The OAS estimates that the market for government procurement in the Western Hemisphere is valued at approximately $250 billion. U.S. observers are encouraged by potential market-opening possibilities in this area because most FTAA countries are not bound by international rules on government procurement. In addition, outside of North America, many FTAA countries have limited experience with international government procurement regimes. This is because, unlike other negotiating groups, the FTAA government procurement negotiations do not proceed from a commonly applied WTO agreement. At the FTAA San José Ministerial, the trade ministers formed the negotiating group on government procurement with the mandate to expand access to the government procurement markets of FTAA countries. More specifically, ministers directed the group to (1) achieve a framework to ensure transparency of government procurement processes, without necessarily implying an identical system for each country; (2) ensure nondiscrimination in government procurement; and (3) ensure impartial and fair review for resolving procurement complaints and appeals by suppliers. The FTAA government procurement regime may be similar to other multilateral agreements, such as the WTO Government Procurement Agreement or NAFTA, which cover the terms of contracts for a wide range of goods and services. 
Under these agreements, the entities or enterprises to be covered are specified, as are minimum purchase values, called thresholds. Generally, the higher the number of entities and enterprises covered by an agreement and the lower the threshold, the more liberalizing the agreement. Although FTAA government procurement negotiations will begin to address market access concessions in the upcoming phase of the negotiations, the bulk of the 34-page draft text submitted by the negotiating group to the ministers in Buenos Aires focuses on ways to conduct the procurement proceedings. The text includes proposed language on a wide range of rules and technical matters, including the application of principles such as national treatment and most-favored nation treatment; special and differential treatment for smaller economies; the thresholds and valuation of contracts; procurement exceptions; publication of laws and rules governing procurement processes; specific procurement procedures, including the qualification of suppliers and the process for selecting and awarding contracts; and review and appeal procedures, including dispute settlement. One aspect of the negotiation, transparency, is significant for government procurement and is addressed in the draft text. According to IDB, government procurement has been considered a nontariff barrier due to the tendency to award contracts to national firms rather than to make decisions that are based only on price and quality. This tendency has resulted in an inefficient and sometimes corrupt process. Government procurement experts believe that an agreement that is transparent in its explication of procedures and its means to verify the application of rules provides a variety of benefits. For example, a transparent agreement renders fraud and corruption more difficult. It would also enhance the opportunity for competition in bidding, resulting in higher quality procurements and budgetary savings to governments.
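The entity-and-threshold logic described at the start of this section can be sketched as a simple coverage rule. This is a hypothetical illustration; the entity name and contract values are invented:

```python
# Hypothetical sketch of procurement coverage under an entity/threshold regime.

def contract_covered(entity, value, covered_entities, threshold):
    """A procurement falls under the agreement only if the buying entity is
    listed in the coverage annex and the contract value meets the minimum
    threshold."""
    return entity in covered_entities and value >= threshold

# Lowering the threshold (or listing more entities) brings more contracts
# under the agreement's rules, i.e., a more liberalizing agreement.
print(contract_covered("Ministry of Transport", 400_000,
                       {"Ministry of Transport"}, threshold=500_000))  # False
print(contract_covered("Ministry of Transport", 400_000,
                       {"Ministry of Transport"}, threshold=250_000))  # True
```

The same contract that escapes a high-threshold agreement is captured once the threshold drops, which is the sense in which lower thresholds and broader entity lists liberalize more.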
The United States is seeking an FTAA that would require publication and wide dissemination of all laws, regulations, judicial decisions, and other measures governing government procurement. Negotiators will have to resolve differences in two basic approaches to the government procurement chapter. One approach, backed by the United States, is rules-based, which would rely on detailed procedural provisions while avoiding unnecessarily burdensome requirements. The United States and other parties to the Government Procurement Agreement and NAFTA have considered that this approach is necessary because of the nature of government procurement, which can be influenced by government policy and politics, in addition to commercial considerations. As a result, the United States believes that, to enjoy the concessions that will be negotiated, the agreement must include specific procedural provisions on topics such as the publication of timetables and tendering procedures as found in both NAFTA and the Government Procurement Agreement. However, other countries prefer a principles-based approach to the FTAA government procurement chapter, which would rely on more general guidelines. Proponents of this approach argue that it is better to make a basic commitment to nondiscrimination but not prescribe specific procedures that all of the parties are to follow. It would thus be up to the local authorities to develop their own procedures. An advocate of this approach noted that no degree of specificity would prevent a country determined to avoid compliance from doing so, and that ultimately good faith in applying the principles has to be relied on. If discrimination was found, a challenge could still be brought under the dispute settlement provisions. 
To move forward with government procurement negotiations, ministers at Buenos Aires instructed the government procurement group to submit recommendations to the TNC by April 1, 2002, on the guidelines, procedures, and deadlines for negotiations so that the negotiation of concessions can begin no later than May 15, 2002, and submit a new version of the draft text of the government procurement chapter to the TNC by August 2002. The ministers also provided a directive to the government procurement negotiators to identify, by April 1, 2002, the scope and details of the statistical information that the countries should exchange with each other and use to support their negotiations. This directive was issued because some delegations felt it would be necessary in order to prepare for an exchange of statistical data on their government procurement markets before commencing the market access negotiations. On the basis of the ministerial directive, the negotiators must decide on the statistical systems and entity lists they need to undertake the negotiations. For example, there is no point in requiring statistical information on procurement by every government agency in the hemisphere before the FTAA governments have a clearer understanding of the likely scope of the market access negotiations, according to USTR. FTAA negotiators are also developing trade rules and institutional provisions for four other topics: IPR; subsidies, antidumping, and countervailing duties; dispute settlement; and competition policy. Negotiating groups have prepared draft chapters on their respective topics. Some issues under consideration in these groups are controversial. For example, FTAA participants have fundamental disagreements regarding IPR. The United States would like the FTAA to represent a state-of-the-art agreement that goes beyond the obligations of other relevant agreements. Some developing countries, on the other hand, are reluctant to go beyond their current obligations. 
FTAA countries' interests also diverge in the area of antidumping measures. A U.S. proposal to reserve its right to apply its trade remedies angered many other FTAA participants who want to curb the use of these measures. Other issues under consideration in these groups will require intense effort to finalize outstanding details. For example, FTAA participants must resolve dispute settlement issues such as compliance, appeals, and the participation of outside parties. Similarly, FTAA negotiators must determine the level of detail that the competition policy agreement needs to have to effectively proscribe anticompetitive business conduct. Table 2 provides an overview of the topics covered in this chapter. The remainder of this chapter describes each of these topics, its importance, and the group's negotiating mandate; progress to date; significant issues; and next steps. Information on the potential economic effect of trade liberalization for these topics can be found in chapter 5. According to the WTO secretariat, IPR is defined as the rights given to persons over the creations of their minds, such as a book or software program, usually providing the creator with an exclusive right over the use of his or her creation for a period of time. The goal is to reward creativity and to establish an environment conducive to the broad sharing of ideas. IPR is one of the most important issues to the United States because it enjoys a decisive competitive advantage in high-tech, knowledge-based industries; advancing the interests of these industries in the FTAA by strengthening IPR could result in significant gains for the U.S. economy. U.S. software firms, for example, would benefit if FTAA nations agreed that their governments would use only legitimate software in their agency operations. On the other hand, developing countries want IPR disciplines to include topics such as folklore and traditional knowledge.
Therefore, IPR negotiations clearly mark the vast differences in economic and technological interests of developed and developing countries. IPR negotiations in the FTAA promise to be challenging because FTAA nations have fundamentally divergent interests and have not, for a variety of reasons, made much progress on IPR negotiations. As a result, considerable work remains. Some of the topics under consideration are controversial or completely new to trade negotiations. The FTAA could go beyond the WTO and NAFTA by addressing technologies, treaties, and issues that have emerged since these landmark trade agreements were concluded in 1994 and 1993, respectively. For example, since then, industries such as biotechnology and e-commerce have emerged as commercially significant industries, a number of new IPR treaties have been concluded, and others are under negotiation. The specific mandate for the FTAA IPR negotiations as stated in the San José Ministerial is to reduce distortions in trade in the hemisphere and promote and ensure adequate and effective protection of IPR. The mandate notes that, in doing so, changes in technology must be considered. The IPR negotiating group has developed a 106-page draft chapter that compiles proposals from FTAA nations on 15 topics: (1) trademarks, (2) geographical indications, (3) copyrights and related rights, (4) folklore, (5) layout designs of integrated circuits, (6) patents, (7) the relationship between the protection of traditional knowledge and access to genetic resources and intellectual property, (8) utility models, (9) industrial designs, (10) plant varieties, (11) undisclosed information, (12) unfair competition, (13) anticompetitive practices in contractual licenses, (14) enforcement of IPR, and (15) technical cooperation. A variety of factors have hindered progress in FTAA negotiations on IPR. First, some FTAA nations slowed progress in the previous phase of negotiations.
In part, this was due to a fundamental reticence by some FTAA nations to go beyond their current obligations under the WTO Agreement on Trade-Related Aspects of Intellectual Property (TRIPS). Also, because of its importance to the overall FTAA package, which will be a “single undertaking,” some nations reportedly tried to ensure that negotiations on IPR did not get ahead of negotiations on topics of more interest to them, such as agriculture or subsidies, antidumping, and countervailing duties, according to U.S. officials. Second, the subject of IPR is complex. Each area involves a different agency, specialty, or industry, which makes the negotiating environment challenging. Third, the interests of FTAA participants in IPR differ widely. The United States, the leading proponent in FTAA IPR negotiations, is pushing for a state-of-the- art IPR agreement that reflects changes in technology, improved international rules, and better enforcement. The U.S. proposal for the FTAA chapter on intellectual property goes beyond the obligations that the United States and most FTAA countries have undertaken through the TRIPS agreement. It would extend NAFTA disciplines to countries elsewhere in the region. The United States is particularly interested in strengthened enforcement of IPR because many FTAA countries have been lax in enforcing TRIPS provisions. (See ch. 5 for estimates of the economic impact of lax IPR enforcement.) Copyright piracy, for example, is still commonplace. The United States also wants to ensure that the FTAA does not undermine IPR protections secured under the WTO and NAFTA. Developing countries in the region have not traditionally been strong supporters of IPR. However, in the past decade, along with other economic reforms and the advent of TRIPS, they have experienced a progressive evolution in views and policies in favor of greater IPR protection. 
Laws and institutions now exist in key nations such as Mexico, Brazil, and Argentina, according to trade experts. However, some FTAA countries are reluctant to take on obligations beyond TRIPS, particularly since many improvements in IPR resulting from the FTAA will have to apply unconditionally to all other WTO nations under the "most favored nation" principle contained in Article 4 of the TRIPS agreement. In addition, some FTAA countries have identified areas such as traditional knowledge and folklore that can benefit them within an IPR regime. However, these countries face resource and technical challenges to effectively enforcing IPR and believe that there may be trade-offs between stronger IPR enforcement and other domestic objectives. Because IPR negotiations are at a relatively early stage, it is difficult to tell which issues will prove controversial, a U.S. official said. Potentially difficult subjects for negotiation include copyrights in a digital era, compulsory licensing, limitations to patentability, enforcement, and the relationship of trademarks to geographical indications. Proposals by other FTAA nations on folklore, genetic resources, and traditional knowledge also may pose difficulties for the negotiators. In the area of copyrights, the United States is proposing to ensure protection of copyrighted works in a digital environment by having the FTAA incorporate the substantive provisions of two treaties concluded in 1996 under the auspices of the World Intellectual Property Organization (WIPO). These treaties deal with music, programs, and literary works provided over the Internet. Although a number of countries in the hemisphere already have acceded to the WIPO treaties, others have yet to do so. The WIPO provisions provide important new rights, such as an exclusive right for authors to make their works available on-line. However, the United States faces resistance to its proposal.
Some nations do not want to go beyond their current TRIPS obligations. Others object to the U.S. proposal because it contains language intended to clarify certain principles contained in WIPO treaties. For example, the WIPO treaty prohibits tampering with technology designed to prevent unauthorized access to protected works, performances, and phonograms. The U.S. FTAA proposal clarifies that this prohibition must cover both the building of devices capable of tampering with the protected subject matter (e.g., decoding devices) and the actions of actually doing so (e.g., hacking), with appropriate exceptions permitted. There are also many contentious issues in the area of patents. For example, compulsory licensing, or government permission to produce a patented product or process without authorization of the patent holder, is a contentious issue in the IPR debate over the proper balance between providing incentives for research and the need for public access. This is especially acute with regard to making medicines affordable and accessible. In FTAA negotiations, the United States proposes to clearly specify the circumstances under which FTAA members can grant compulsory licenses. Another FTAA participant has proposed that FTAA members should be given greater scope to grant compulsory licenses, including if a patent holder does not “work” the patented product within a specified period of time. The positions of other FTAA participants range from keeping the status quo under TRIPS—which does not specify the circumstances when governments can grant compulsory licenses, but sets procedural requirements when doing so—to easing TRIPS’ procedural requirements. The issue of whether certain items can or should be excluded from patentability is another on which FTAA members diverge significantly, with three major options proposed. First, reflecting U.S. 
leadership in medical and agricultural biotechnology, the United States is trying to narrow (from TRIPS) the categories of products or processes for which patents may be refused. The United States is proposing that the FTAA result in a requirement for members to grant patents for all subject matter except medical and diagnostic procedures, provided that the basic criteria for granting a patent are met—namely, that they are new, involve an inventive step, and are capable of industrial application. FTAA countries would retain the right under TRIPS to refuse patents for products or processes whose commercial use in FTAA countries would jeopardize public order or morality, or seriously jeopardize human, animal, or plant health or the environment. The second option, proposed by other FTAA countries, is that FTAA members retain the right to exclude plants and animals (other than microorganisms) and biological processes from patentability, even if they otherwise met the criteria for patentability. The third option proposed is that FTAA members be prohibited from granting patents for plants, animals, and biological processes. Enforcement is also an important issue in the FTAA negotiations. The United States has serious concerns about the enforcement of IPR in the FTAA region and has proposed various steps to bolster enforcement through the FTAA. For example, the United States has proposed that violators may be required to pay damages for IPR violations commensurate with the harm suffered, including compensation that is based on the full retail value. Other FTAA participants, particularly developing countries, have resisted the United States on the issue of enforcement. Many of these FTAA participants are already facing difficulty implementing their TRIPS obligations and say that they lack the resources and capacity to enforce IPR and face other problems that are more pressing, such as violent crime and drug trafficking. 
In the area of trademarks, the main concern involves the relationship between trademarks and geographical indications. Both trademarks and geographical indications provide consumers with information about the source of products; both marks are considered distinct IPR rights that entitle the owner to exclusive use of the mark once it is registered. Geographical indications, such as Idaho potatoes, Florida oranges, and Washington State apples, are marks that identify a good as originating from a geographic area where the quality, reputation, or other characteristic of the good is essentially attributable to that area. FTAA negotiators must decide whether the registration of one type of mark should preclude the later registration of the other type of mark. The United States wants the FTAA to establish the principle that the owner of the mark that was registered first—regardless of whether the original (first) mark is a trademark or a geographical indication—has the right to preclude registration of another mark sought at a later date. This would prevent problems such as the one that occurred in Europe when a geographical indication was granted for Budweiser beer (brewed in the Czech town of České Budějovice, known in German as Budweis), despite the fact that a trademark on the same name had already been registered to the U.S. company Anheuser-Busch. Other FTAA participants from developing countries strongly resist the U.S. proposal because they perceive it as going beyond their current TRIPS obligations. To capture greater returns from IPR, some developing FTAA countries are proposing to go beyond TRIPS and include topics such as traditional knowledge, folklore, and genetic resources in the FTAA's IPR chapter. Traditional knowledge involves knowledge and practices such as traditional healing methods. Some claim that this knowledge, which exists in local communities and is often passed from generation to generation, can be valuable in the pursuit of innovative medicines.
Other countries prefer that a technical forum, such as the WIPO, continue to vet those issues before they are addressed in the FTAA. They are also skeptical that these concepts should be considered new forms of intellectual property. The IPR negotiating group is now working to remove or consolidate duplicative language from the bracketed text. This process is expected to be completed over the summer of 2001, and substantive negotiations are expected to begin October 2001. The group will work to eliminate differences in the updated draft text by August 2002. A politically sensitive issue for FTAA negotiators involves the trade remedies used to counter “unfairly traded” imports. These measures are (1) antidumping duties, which are imposed on “dumped” imports (i.e., imports sold at a price lower than normal value) and (2) countervailing duties, which are imposed on subsidized imports. An importing country imposes antidumping or countervailing duties to remedy the injury to the domestic industry caused by the dumped or subsidized imports. Of these trade remedy measures, antidumping has been very controversial. Proponents believe an antidumping regime is necessary to offset unfair trade practices, while opponents view it as a protectionist system that shelters noncompetitive firms or industries while penalizing domestic consumers. The United States is one of the most frequent users of antidumping measures, which are allowed under rules established in the WTO, and is strongly in favor of having the use of these measures be governed by WTO rules rather than FTAA-specific rules. Many other FTAA countries also employ antidumping measures as a way to address unfairly traded imports. The OAS reports that most regional trade agreements involving countries in the Western Hemisphere allow their members to use antidumping measures as long as they comply with WTO rules. Five FTAA countries predominantly account for the use of antidumping measures in the region. 
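The price comparison that underlies an antidumping duty can be reduced to simple arithmetic: the dumping margin is the amount by which the home-market ("normal") value exceeds the export price, expressed as a percentage of the export price. The sketch below is a hypothetical illustration only; actual investigations involve extensive adjustments (averaging rules, below-cost sales tests, currency conversion) that are omitted here, and the figures are invented.

```python
def dumping_margin(normal_value: float, export_price: float) -> float:
    """Dumping margin, expressed as a percentage of the export price.

    normal_value: home-market price of the product in the exporting country
    export_price: price charged on sales exported to the importing country
    Returns 0.0 when the export price is at or above normal value
    (i.e., no dumping).
    """
    if export_price <= 0:
        raise ValueError("export price must be positive")
    margin = (normal_value - export_price) * 100 / export_price
    return max(margin, 0.0)

# Hypothetical figures: a good sold at home for $12 but exported at $10
# is dumped at a 20% margin, which caps the antidumping duty that may
# be imposed to remedy the injury to the domestic industry.
print(dumping_margin(12.0, 10.0))  # 20.0
print(dumping_margin(10.0, 12.0))  # 0.0 (no dumping)
```

The same comparison drives the disputes described in this chapter: proposals that change how normal value is constructed (for example, by altering the treatment of sales below cost) change the resulting margin and thus the duty.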
However, the OAS also reports that 19 FTAA countries had never used antidumping measures as of 2000. In addition, the Canada-Chile free trade agreement would eliminate the use of antidumping between the two countries. A Chilean trade negotiator explained that Chile believes the use of safeguard measures is preferable to antidumping because safeguard measures are a more specific instrument and are temporary. (See ch. 5 for more information on the hemispheric use of antidumping.) Ministers at the San José Ministerial created the negotiating group on subsidies, antidumping, and countervailing duties with a mandate to (1) examine ways to deepen disciplines, if appropriate, and enhance compliance with the terms of the WTO Agreement on Subsidies and Countervailing Measures and (2) achieve a common understanding with a view to improving, where possible, the rules and procedures regarding trade remedy laws to avoid creating unjustified barriers to trade in the hemisphere. In advance of the ministerial meeting in Buenos Aires, the subsidies, antidumping, and countervailing duties group prepared a fully bracketed, 17-page draft text covering a range of issues. The text includes detailed technical provisions on such topics as the determination of dumping and injury, investigations and evidence, the application of provisional measures, assessing and collecting duties, special provisions for developing countries, and dispute settlement proceedings. The draft proposals submitted by the various FTAA negotiators vary widely in their implications for a hemispheric antidumping regime—from maintaining the status quo to eliminating antidumping measures altogether within the region. The United States strongly advocates an approach that would maintain the current WTO rules on antidumping and countervailing duties in the FTAA. Many other proposals in the draft text are modifications to the WTO rules on antidumping measures that, according to the U.S. 
negotiator, would make it more difficult to use such measures within the FTAA. For example, a WTO threshold for sales below cost (a measure used to determine the cost of a dumped product) would be doubled, and another proposal would have the effect of raising the standard for the determination of injury. The United States strongly opposes these modifications, arguing that they would weaken the U.S. antidumping law. Further, modifications would present serious legal and practical problems by effectively creating dual trade remedy regimes that would greatly complicate dumping investigations, which often include suppliers from multiple countries. Another proposal introduces a procedure not now included in the WTO agreement, which would provide for a public interest inquiry that could result in the imposition of reduced dumping or countervailing duties. Yet another very different proposal contained in the draft text calls for the outright renunciation of antidumping measures on imports from within the region once the free trade area is established. FTAA subsidies, antidumping, and countervailing duties negotiators have addressed three topics of note so far: the proposed antidumping draft text, the possibility of deepening disciplines on nonagricultural subsidies, and the relationship between trade and competition policy. The most difficult of the three issues involved the draft text. The United States’ publicly stated position is that its ability to maintain effective remedies against dumped or subsidized imports is essential to achieving support for the overall goal of trade liberalization. To this end, the United States proposed draft text stating that each party reserves the right to apply its antidumping and countervailing duty laws, and that no provision of the agreement shall be construed as imposing obligations with respect to these laws. According to U.S. 
negotiators, the United States proposed that this draft text be included as a stand-alone proposal, separate from the other draft text, because it represents an entirely different approach to the draft chapter. They further explained that this language was intended to maintain the status quo under WTO trade remedy rules, which the other proposals would have the effect of modifying. The U.S. proposal created a controversy among FTAA participants. According to one of the foreign lead negotiators, many other participants were angered by the U.S. proposal because they believed the United States wanted to take antidumping off the negotiating table. U.S. negotiators stated that their proposal represented a different approach under which, although substantive WTO rules would remain unchanged, some improvements in the areas of transparency and due process could be explored. The controversy was defused by several ministerial directives, as discussed below. Because the issue of antidumping is so politically sensitive, other such flare-ups may recur throughout the course of the FTAA negotiations. Deepening the WTO disciplines on domestic subsidies on nonagricultural goods is a much less controversial aspect of the process to date. The United States has advocated exploring options for deepening WTO-level subsidy disciplines and improving transparency, consistent with the mandate of the negotiating group. This issue should receive more attention during the next phase of the negotiations than it has so far. A final issue that has been addressed by this negotiating group is the relationship between trade and competition policy. At the outset of the FTAA discussions, for example, some countries wanted to examine the possibility of injecting antitrust concepts into antidumping rules to more narrowly circumscribe the antidumping remedies. 
Both the antidumping and competition policy negotiating groups undertook studies to examine the relationship, which were then reviewed by the groups. The United States believes that competition rules and trade remedies address distinctly different problems. At their April 2001 meeting, the FTAA ministers directed the two negotiating groups to use the studies for further discussion, rather than to solicit additional studies. To move the negotiations forward, ministers directed the subsidies, antidumping, and countervailing duties group to undertake several actions during the next negotiating phase. Specifically, the ministers instructed the group to intensify its efforts to reach a common understanding with a view to improving, where possible, the operation and enforcement of hemispheric trade remedy laws, and submit recommendations on the methodology to be used to achieve this objective to the TNC by April 1, 2002; intensify its work of identifying options for deepening, where appropriate, existing disciplines on subsidies in the WTO Agreement on Subsidies and Countervailing Measures, and submit recommendations on the methodology to be used to achieve this objective to the TNC by April 1, 2002; identify, using a previously prepared study on trade and competition, any interaction between trade remedies and competition policy that may merit further consideration by the TNC, and provide the results to the TNC by April 1, 2002; and submit a new version of the draft text by August 2002. As part of the compromise reached to move the process forward, the ministers at Buenos Aires required the subsidies, antidumping, and countervailing duties group to prepare recommendations on its methodology to meet the ministerial mandates concerning trade remedy laws and subsidy disciplines. 
Some FTAA experts believe that this requirement was put in place to ensure that all negotiating groups move forward in tandem and that progress on subsidies and antidumping is commensurate with the rest of the negotiations. The U.S. negotiator believes that the methodology directive is not as significant for this group as reaching agreement on the draft text, which will be more challenging. Although the issues involved are arcane, the topic of dispute settlement in the FTAA process is recognized as a linchpin for the effective operation of the FTAA agreement as a whole. FTAA participants, including the United States, appear to agree on the nature of the dispute settlement mechanism to be created, but three specific issues are likely to be controversial: how to handle compliance, whether to allow appeals, and the extent of public access to the process. Finalizing an FTAA dispute settlement chapter also will require resolving other issues, such as the FTAA’s jurisdiction versus other international agreements, third-party rights, and institutional issues. The FTAA’s dispute settlement mechanism will serve a critical role in a final FTAA agreement for three reasons. First, it will ensure that the rights secured and commitments made in an FTAA are upheld. Because the FTAA is expected to go beyond the WTO and other international agreements, FTAA dispute settlement is viewed as the only meaningful way to enforce those commitments. Second, a well-functioning FTAA dispute settlement system will deter countries from adopting measures that do not comply with the FTAA. Third, it will bolster members’ confidence by preserving the balance of benefits attained in negotiations and ensuring they have recourse to effective and impartial redress. The FTAA dispute settlement chapter is expected to create a way to resolve government-to-government disputes over the application and implementation of the FTAA agreement. 
Specifically, the negotiating group's mandate is to establish a fair, transparent, and effective mechanism to settle disputes and to design ways to promote the use of arbitration and alternative dispute settlement mechanisms to solve private trade controversies in the framework of the FTAA. The primary achievement of the negotiating group has been to develop a draft 30-page chapter on dispute settlement that consolidates proposed legal text from all FTAA participants. Negotiations on the chapter are at an early stage, and positions continue to evolve as domestic consultations continue. Participants report considerable work will be required to bridge substantive differences and resolve technical and practical issues. The draft chapter on dispute settlement covers definitions, scope of application, principles, general provisions, and procedures for dispute settlement, including consultations and resort to a neutral body or panel; the nature of a final FTAA dispute settlement decision and consequences of failure to implement a decision; the obligation to use FTAA dispute settlement to redress violation or impairment of benefits of the FTAA agreement; the extent to which the dispute settlement procedure will be confidential or transparent in nature; differences in levels of development and effective access; and alternative dispute resolution, such as private commercial arbitration. Discussions on dispute settlement in FTAA are at an early stage, but participants report that there is agreement on many of the fundamentals. For example, there appears to be wide agreement about the nature of the FTAA dispute settlement process—namely, that it have both diplomatic and quasi-judicial features to secure a positive and mutually acceptable resolution to the dispute at hand.
A dispute settlement process would likely have three stages: (1) mandatory consultations between the complaining country (or countries) and the country whose measure is at issue; (2) if such consultations fail, establishment of a neutral panel to rule on whether the complaint of noncompliance is warranted; (3) the expectation that a country found to be in violation of its FTAA obligations would respond by complying or by offering compensation. Failing either response, it could face retaliation in the form of new restrictions on its trade. That said, important differences in the negotiating process remain and many complex issues must be worked out. A major substantive difference in the FTAA negotiations concerns compliance. The two models of dispute settlement under active consideration are the WTO and NAFTA. A key difference between the two models is the steps taken at the compliance stage. Under the WTO, before a complaining party that has won a favorable ruling can retaliate, it must wait for the outcome of a possible appeal; the passage of a "reasonable period of time" for the party found to be in breach to comply; and, if it still fails to comply, the possibility of up to three additional authorization or arbitration procedures. Under NAFTA, the aggrieved country can automatically retaliate 30 days after the panel ruling. The United States has aggressively, and often successfully, employed dispute settlement in complaints against foreign nations' measures in the WTO. However, the WTO process has been accused of taking too long and failing to reliably produce compliance. Resolving the different approaches to compliance will require FTAA participants to balance a desire for certain enforceability against practical and defensive considerations. Recent events highlight that the United States may be forced to defend its own measures and could have difficulty meeting short deadlines or complying with adverse rulings.
For example, the United States is now struggling to comply with rulings in two recent cases involving a multibillion dollar U.S. tax program known as the Foreign Sales Corporation and U.S. restrictions on Mexican trucking services. Another major difference in the FTAA negotiations is evident on the subject of appeals. Some FTAA nations have proposed a standing appeals body that would, upon request of either party, examine the legal bases for panel rulings and accept, reject, or modify them. Other nations did not propose an appeals stage. Although the U.S. proposal did not include an appeals stage, the U.S. government is currently evaluating its position. Legal experts that we contacted noted that appeals can add time and expense. However, they stated that the WTO’s track record of dispute settlement is more extensive and better regarded than NAFTA’s, in part because the WTO Appellate Body helps promote consistency and legal rigor in panel decisions. Moreover, the WTO Appellate Body has served as an important check in the system, substantially revising panel rulings against the United States that raised sovereignty problems. NAFTA’s general dispute settlement process does not contain an appeals mechanism, and this dispute settlement process has rarely been used. FTAA governments differ widely on the subject of the openness (or transparency) of the dispute settlement procedure. The United States has proposed that FTAA dispute settlement include several transparency guarantees, such as open hearings; immediate public access to documents, such as legal briefs; and opportunities for interested private persons, organizations, or companies to be notified of the initiation of FTAA dispute settlement and to provide input into the process. Other FTAA countries not only oppose the U.S. 
stance on openness but have proposed language for the draft FTAA dispute settlement chapter that requires a confidential process that generally precludes direct or indirect input and participation by nongovernmental organizations. While not inherently controversial, several complex issues also face FTAA negotiators on dispute settlement. The first is known as "choice of forum" and has to do with who decides whether the FTAA will be used as the forum to settle a dispute among FTAA participants, and on what basis. This issue arises because most FTAA participants are members of the WTO and of subregional integration agreements such as NAFTA. These agreements may contain substantive obligations on the same topic and provide separate dispute settlement forums. A problem could arise if (1) jurisprudence built up in one forum were at odds with that of another or (2) a country sought to pursue a complaint about the same measure in two different forums on the same or different grounds. The United States has proposed that, as a rule, the complaining country choose the forum in which to pursue a given complaint and, by that choice, foreclose recourse to any other forum. However, the U.S. proposal recognizes that in situations where the FTAA goes beyond the WTO, the FTAA may express a preference that FTAA dispute settlement be used. Second, the relationship of FTAA dispute settlement to other agreements can also affect third-party rights. Third-party rights are the rights of parties other than the complaining country and the country complained against to participate in a dispute as a co-complainant or as a third party after proceedings have been initiated by another country.
Problems could arise if (1) one FTAA country wanted to pursue a complaint about a given measure in the FTAA and another FTAA country wanted to pursue a complaint about the same measure in the WTO or (2) an FTAA country proposed to pursue a complaint under a subregional agreement, such as NAFTA, to which other FTAA participants did not have access. The challenge is to minimize multiple litigation while ensuring that all parties’ rights are not diminished. To address this challenge, the United States proposed that FTAA countries be notified of the intent to file a formal WTO complaint against an FTAA member’s measure. A third party’s stated desire to complain about the same measure would give rise to consultations with an aim to reach agreement on a single forum. The United States also proposed that if a country failed to join an FTAA dispute as a complaining party, it would normally forego litigation about the same matter at the WTO or the FTAA. Third, making an FTAA dispute settlement system operational will require resolution of institutional issues and the outcome of negotiations on other substantive chapters of the FTAA. For example, the WTO has an institutionalized secretariat that provides considerable support to WTO dispute settlement. Smaller economies are anxious to ensure that they have the monetary and technical resources required to participate meaningfully in FTAA dispute settlement, including secretariat support. The question of whether the FTAA dispute settlement system will handle all disputes regardless of which agreement (or chapter) is involved depends on the outcome of negotiations on other substantive chapters. The alternative is that the general dispute settlement procedure would be supplemented by special dispute settlement rules for specific topics or discrete dispute settlement procedures. A related issue is whether any general or specific standards of review would apply to guide panels. 
The WTO, for example, contains deferential standards of review with respect to antidumping. The negotiating group on dispute settlement has three mandates for the current phase of FTAA negotiations: (1) prepare a revised draft chapter for presentation to the TNC by August 2002, (2) submit to the Technical Committee on Institutional Issues the negotiating group's preliminary views on the institutions needed to implement FTAA dispute settlement mechanisms, and (3) consider whether proposals for special dispute settlement mechanisms made by other FTAA negotiating groups are compatible with the general dispute settlement procedures developed for the FTAA as a whole and report their conclusions to the TNC or to the Technical Committee on Institutional Issues, as appropriate. FTAA countries, including the United States, agree that each FTAA signatory should implement measures to proscribe anticompetitive business conduct but disagree over the level of detail needed in the agreement. Competition policy, a new legal area for most FTAA participants, consists of the rules and regulations that foster a competitive environment in a national economy, partly through more efficient allocation of resources. Competition policy laws, also referred to as antitrust laws, typically address price-fixing, the misuse of market power by monopolies, and the control of mergers and acquisitions. Only 12 of the 34 FTAA countries currently have such laws, and current agreements in the Western Hemisphere treat competition policy differently. For example, the Andean Community and CARICOM have adopted supranational institutions to deal with regional competition disputes. Mercosur seeks to build a common competition policy framework among its members. NAFTA promotes a strengthening of national competition policy laws and increased cooperation among national competition agencies.
The competition policy group was established to develop rules to guarantee that the benefits of FTAA liberalization are not undermined by anticompetitive business practices. Specifically, the group has been mandated to (1) establish competition policy juridical and institutional coverage at the national, subregional, or regional level and (2) develop mechanisms that promote competition policy and guarantee the enforcement of regulations on free competition among and within countries of the hemisphere. As of July 3, 2001, the 15-page draft text contained sections on what competition law should look like, how official monopolies and state enterprises should be treated, what national and subregional institutions on competition policy should cover, what mechanisms for cooperation and exchange of information should exist, what type of dispute settlement might be appropriate for the provisions in the chapter, and what technical assistance is necessary. FTAA countries disagree over the level of detail the FTAA agreement should provide on the implementation of competition policy law and the formation of competition policy agencies. All FTAA countries, including the United States, believe that each FTAA country should have a competition agency at the national or subregional level responsible for the enforcement of antitrust laws. However, they disagree over how much detail is needed to define competition policy and develop competition policy agencies. The United States, seeking minimal detail, does not believe it is appropriate to specify detailed provisions on the substantive coverage of antitrust laws. Other countries have submitted detailed proposals that seek to identify specific actions that qualify as anticompetitive. 
One FTAA expert explained that other countries prefer greater detail either because it is helpful for their civil code legal systems, as opposed to common law in the United States, or because they fear they would not get adequate resources for the implementation of competition policy from their home country governments without strong language in the FTAA agreement. FTAA countries also have not reached agreement on what type of dispute settlement procedures should be developed to oversee the implementation and operation of competition policy laws within the hemisphere. The draft text currently contains two alternative proposals on dispute settlement— one that calls for disputes to be settled through the general FTAA dispute settlement mechanism and another that calls for the development of a Competition Policy Review Mechanism. The United States supports the creation of a forum within the FTAA to provide a peer review of each FTAA country’s implementation of the competition policy chapter and to serve as a venue for the discussion of competition policy issues. According to a U.S. government negotiator, other countries also prefer an FTAA peer review of competition policy laws and implementation in lieu of a binding dispute settlement process because they fear that dispute settlement would subject their national laws to supranational judgments. If formed, the peer review mechanism also could serve as the oversight body for the implementation of the competition policy chapter and a mechanism for providing technical assistance. Ministers mandated the competition policy group to reach agreement on as much of the text as possible by August 2002. Some FTAA experts stated that it would be relatively easy to negotiate the competition policy chapter because countries differ only on the level of detail, not the text’s major thrust and purpose. They believe the group may be able to eliminate many, but not all, of the brackets during this negotiating phase. 
In addition, language from new trade agreements such as the recently concluded Canada-Costa Rica trade agreement may also be submitted for consideration. FTAA ministers have taken steps to address three themes that cut across the FTAA negotiations—smaller economies, e-commerce, and civil society—creating “non-negotiating groups” to address them. These groups serve as a conduit for information but do not produce text on trade rules as do the negotiating groups. The theme of smaller economies is significant because many FTAA countries consider themselves small or developing. While FTAA countries have agreed to take differences of size and levels of development into account in negotiating the FTAA, they have not agreed on what form this treatment should take. E-commerce is an emerging theme that intersects the negotiations on market access, services, and IPR. The third crosscutting theme, civil society, has been controversial. To foster public support for the FTAA, ministers have solicited input on the FTAA from business and other nongovernmental groups within the Western Hemisphere, collectively known as civil society. However, many observers questioned the negotiators’ commitment to transparency and willingness to use the public input. Table 3 provides an overview of the three crosscutting themes. The remainder of this chapter describes each of these themes, its importance, and the group’s mandate; progress to date; significant issues; and next steps. By various estimates, as many as 25 of the 34 FTAA countries could be considered to be smaller or developing economies. These economies are generally characterized by a high degree of trade openness, a lack of economic diversity (high dependence on only a few industries for exports), a dependence on trade taxes for government revenues, and relatively small firms.
The treatment of these economies is a crucial crosscutting theme in the FTAA negotiations because smaller economies are concerned about their ability to effectively implement and benefit from a new agreement. According to some experts, these participants are concerned that they may not have sufficient resources to implement new trade obligations. Countries that base much of their government revenues on tariffs, for example, may have difficulty finding alternative sources for that revenue if tariffs are phased out. These participants are also concerned that the very factors that make trade beneficial to small countries may make it difficult for some of them to achieve these benefits under the agreement. For example, the dislocation of a key sector has a proportionately larger impact on a small economy than on a larger, more diversified economy. Because so many FTAA participants consider themselves to be small or developing, the theme of smaller economies has repeatedly been discussed throughout the negotiations. At their first ministerial in Denver, Colorado, in 1995, FTAA countries, including the United States, acknowledged the wide differences in levels of development and size of economies, and pledged to actively look for ways to provide opportunities to facilitate the integration of smaller economies and increase their level of development. FTAA countries have repeatedly reaffirmed this principle in subsequent meetings. One step that ministers took to address the concerns of smaller economies was to form a consultative group on the issue. As set out in the San José Ministerial Declaration, the Consultative Group on Smaller Economies was established with a mandate to (1) follow the FTAA process and (2) provide the TNC with information on issues of concern to smaller economies and to make recommendations on these issues. Since its inception, the consultative group has served as a forum for the discussion of issues relevant to smaller economies. 
For example, certain FTAA countries have begun sharing their negotiating group proposals on smaller economies in the group’s meetings. In addition, the group has served as a mechanism for the discussion and coordination of technical assistance. The Tripartite Committee prepared a technical cooperation needs assessment for the group, which outlines the technical assistance needs of 17 of the FTAA participants. The group also has invited prospective donors to share any information they may have on their technical assistance programs. In addition to the consultative group, ministers have directed the negotiating groups to take the concerns of smaller economies into account in their negotiations. All negotiating group texts contain proposals on technical assistance or treatment for smaller economies. For example, the draft text on competition policy contains a section on technical assistance provisions to help countries develop and implement competition policy laws and institutions. Similarly, the market access draft text contains proposals for safeguard provisions that, under certain circumstances, would exempt products from smaller economies from the safeguard measures applied by other FTAA countries. The term “smaller economy” within the context of the FTAA has not been defined. Although various methods exist to identify the size of economies, including population, land area, and gross domestic product, each method produces a different set of countries. While these different sets do overlap, not all countries designated small by one method are considered small by others. For example, a 1998 study states that per capita gross domestic product in the Bahamas, which has only 0.16 percent of Brazil’s land area, is three times larger than Brazil’s per capita gross domestic product. According to one FTAA expert, the smaller economies group tried to define a “smaller economy” in the FTAA context but failed to agree on a single definition. 
To solve this dilemma, the United States has proposed that the treatment of smaller economies be decided on a case-by-case basis in the negotiations instead of grouping countries by a single definition. Other countries oppose this plan because they feel it may exclude them from receiving special consideration that they might otherwise receive under a categorical definition. The type of treatment that countries with smaller and less developed economies will receive under the FTAA has not been determined. The WTO allows for the special and differential treatment of developing countries by giving them longer time periods to implement tariff reductions, more favorable thresholds for applying certain commitments such as countervailing duties, and greater flexibility with regard to certain obligations. Under the FTAA negotiations, decisions about the treatment of smaller or developing economies will be important for the tariff and nontariff barrier modality discussions because they are supposed to define which countries may be eligible for what type of treatment. All FTAA countries agree that the rights and obligations of the FTAA need to be assumed by all countries participating in the process. However, the United States and others recognize that some countries may need longer phase-in periods to effectuate such rights and obligations. Other countries would like to see more aggressive and categorical treatment of smaller economies, similar to what has occurred under the WTO. In addition to special treatment, smaller economies are seeking technical assistance to strengthen their participation in the negotiations and increase their ability to carry out FTAA objectives. Developing countries have already faced resource constraints in their attempts to carry out existing international trade obligations under the WTO. According to U.S. 
officials, smaller FTAA economies, particularly the Caribbean nations, have been vocal about their need for technical assistance and have influenced some negotiating dates due to their concerns over resource constraints. The Tripartite Committee has already provided several countries with assistance in preparing for the negotiations and in implementing FTAA business facilitation measures. Several negotiating groups have incorporated into their texts specific language on technical assistance. The United States would like to see the smaller economies group spend more time on the issue of technical assistance, with countries identifying their technical assistance needs through their country-specific proposals. An important step concerning smaller economies during the next negotiating phase will be the development of guidelines for the treatment of differences in size or level of development. The Buenos Aires declaration states that the TNC, with the assistance of the consultative group, must develop no later than November 2001 some guidelines or directives for negotiating groups to apply treatment that takes into account differences in levels of development and size of economies. The smaller economies group is currently in the process of developing recommendations on these guidelines, which it will forward to the TNC in September 2001. Another important next step involves the provision of technical assistance. At the Buenos Aires Ministerial, the United States and the IDB indicated that they would further explore ways to meet these technical assistance needs. Their success in identifying funding for smaller economies may affect the negotiations, if smaller economies feel their technical assistance needs are not being met. As e-commerce and the use of the Internet have expanded over the past several years, trade negotiators have begun to grapple with how existing trade agreements cover these activities and whether new commitments are needed. 
Since the development of e-commerce is relatively new, few government regulations or border measures currently exist to control the flow of electronic transmissions. FTAA governments generally share the goals of fostering a supportive environment and maintaining an open trading regime for e-commerce. The United States, as a leading user and developer of e-commerce, has a commercial interest in expanding its use and maintaining an open trading environment for digital products and services. Other FTAA partners also perceive economic and social benefits from expanded use of e-commerce and the Internet for their own countries and want to remain technologically integrated into the global economy. To address their mutual interests in developing a digitally connected hemisphere, trade ministers established the Joint Government-Private Sector Committee of Experts on Electronic Commerce in 1998. The committee’s mandate is to make recommendations to the ministers on how to increase and broaden the benefits to be derived from the electronic marketplace. However, the committee is a non-negotiating group and will not develop rules for the FTAA agreement. Made up of government and private sector representatives, the joint committee has provided ministers with recommendations on issues related to its mandate. The committee has also provided a forum for countries to share their experiences and develop approaches to encouraging the development of e-commerce activities. The committee has issued two public reports that made recommendations on topics such as strengthening information infrastructure; increasing participation of governments, smaller economies, and small businesses; clarifying the rules of the market; developing on-line payment services; and addressing certification and authentication issues. Participants say that the committee has provided a useful role in facilitating information sharing among FTAA countries on best practices and e-commerce concerns. 
E-commerce issues are closely connected to several areas of the FTAA negotiations, including market access, services, IPR, and government procurement. Since the FTAA e-commerce group is a non-negotiating group, any commitments countries want related to e-commerce must be agreed on in one of the negotiating groups. For example, competition among Internet service providers and access to telecommunications networks are issues likely to be addressed in the services negotiating group. Protection of copyrighted materials and original works distributed over the Internet would be addressed in the IPR negotiating group. Market access negotiations on goods also may entail e-commerce-related issues because certain goods, such as books and videos, can be transmitted digitally or shipped physically. Because many countries, including the United States, use e-commerce to conduct government procurement, issues may also arise in the negotiating group on government procurement. Negotiators must be aware of the interrelationship between e-commerce issues and their specific topic because the use and efficiency of e-commerce transactions rely on an open environment across all steps in the production, marketing, sale, and distribution of a product. For example, if a country maintains an open telecommunications environment with high levels of Internet use, e-commerce still can be stymied if the country’s customs procedures are onerous and deter shipments of small packages. In addition, negotiators also need to be aware of any e-commerce-related decisions made at the WTO or other multilateral fora since they may have an impact on the FTAA negotiations. At the April FTAA ministerial, trade ministers instructed the joint committee to continue to identify and review specific issues. The committee also recommended that it continue to share national experiences and broadly analyze the factors that led to their success or failure.
The joint committee’s work for the third phase of discussions will address the digital divide, consumer protection, and e-government and other issues. The views of civil society groups (nongovernmental groups representing business, labor, environment, and other interests) will likely affect the level of U.S. public support for the FTAA. Although multilateral trade agreements, such as the FTAA, are conducted at a government-to- government level, public support for the outcome is an important factor in generating the political will to conclude an agreement. Civil society parties thus need information about the progress of the negotiations and a vehicle for expressing their viewpoints. At the outset of the negotiations, the ministers committed to a transparent process and welcomed the contributions of the private sector. In 1998, at the San José Ministerial, the trade ministers reaffirmed their commitment to transparency to facilitate the constructive participation of different sectors of society. The ministers formed the Committee of Government Representatives on the Participation of Civil Society with a mandate to receive civil society views on trade matters and present them to the ministers. The committee pursued its mandate by soliciting the views of civil society on two occasions through a formal submissions process. Acceptable submissions had to meet a specified format and present the views constructively. The committee issued an open invitation; countries also solicited input through their own national mechanisms. For example, the United States solicited input via the Federal Register process, through the Internet, and by direct solicitation. The first round of submissions occurred before the Toronto Ministerial in 1999 and garnered about 60 acceptable responses from civil society groups in the hemisphere. A second round of submissions in 2000 before the Buenos Aires Ministerial resulted in 77 acceptable responses. 
In both cases, the committee submitted a report on the results at the ministerial meetings. The United States has championed this theme, in part because the committee on civil society provides a vehicle for discussing labor and environmental issues in the FTAA. The United States had sought to create an FTAA study group to address the relationship between the FTAA’s goals and labor issues, but many other FTAA countries objected, arguing that labor issues were more appropriately addressed in another international forum such as the International Labour Organization. In addition, according to FTAA experts, some countries believe that participants bear the responsibility of taking their citizens’ views into account and are skeptical of the value of including civil society input in the negotiations. The committee’s formation provided a compromise solution. Open to submissions on an array of FTAA-related topics, the committee gives organizations and individuals interested in the FTAA a way to voice their concerns within the FTAA process. As the negotiations enter the next phase, three aspects of the discussion on the participation of civil society in the FTAA process are worth noting. These are the transparency of the process, the difficulty the committee has had in reporting submissions by civil society, and the extent to which submissions are considered by negotiating groups. First, the level of transparency in the negotiation process has been in question. While the FTAA ministers continue to declare that they are committed to a transparent process that facilitates the constructive participation of nongovernmental sectors, the specific means to do so had not been spelled out before the Buenos Aires Ministerial. As a result, nongovernmental organizations and business and government representatives in the United States and elsewhere in the hemisphere criticized the FTAA process as lacking transparency. For example, although USTR released public summaries of U.S. 
positions, 50 Members of Congress, along with business representatives and nongovernmental organizations, all called for the release of the actual negotiating text. U.S. negotiators hope that the implementation of new outreach measures will go some way toward dampening the criticism that the process lacks transparency. In response to broad demands for a more transparent process, the FTAA ministers agreed on April 7, 2001, in Buenos Aires to publicly release the draft text of the nine negotiating groups. They determined that publication of the text would help increase the transparency of the negotiating process and help build broad public support for the FTAA. The text, which had been negotiated in English and Spanish, was translated into French and Portuguese and released to the public on the FTAA internet site on July 3, 2001. This text gives a snapshot of the status of the FTAA negotiations as they stood as of the Buenos Aires Ministerial, including the range of topics and proposals before the negotiators. The publicly released text is the same text from which negotiations are now proceeding, according to USTR. Because the text is heavily bracketed, it may be difficult for outside observers to understand or to assess potential areas of agreement or consensus. In addition, the FTAA governments agreed not to include country identifiers in the text in order to keep the negotiations more fluid. Further, there is no guarantee that future revisions to the text will be made available to the public. This is important for two reasons. Entirely new proposals may be made, and the text is likely to change significantly as the negotiating groups work to eliminate brackets and duplication. Second, the committee has had difficulty in reaching consensus on how to report the results of public submissions through the TNC to the trade ministers. 
This indicates the sensitivity of discussions about civil society in the FTAA as well as the challenges associated with a process run by consensus. During the first round of submissions, one FTAA country blocked the committee from preparing recommendations on the basis of the public input received, which was an objective sought by the United States. The committee’s report was thus limited to statistical information about the submissions with minimal description of the contents, according to U.S. officials. During the second round of submissions before the Buenos Aires Ministerial, the committee again had difficulty reaching consensus on the reporting issue, but eventually reached a compromise. The committee’s report to the TNC on the second round of submissions provided a more comprehensive and descriptive summary of the input. Third, some are concerned about the extent to which the public submissions are considered by the negotiating groups. Civil society representatives we interviewed told us that they were disappointed because there was little evidence that their input was being given serious consideration in the negotiations. Since the ministers had not initially directed that the civil society submissions be provided to the negotiators, the submissions were channeled through the committee. Negotiators theoretically could request the submissions through the committee, but U.S. officials noted that due to translation and logistical problems, the U.S. negotiators who were interested in considering the submissions were forced to rely on executive summaries rather than the complete submissions. The negotiators during the next phase of the negotiations should have access to civil society submissions because, at Buenos Aires, the FTAA ministers directed the committee to transmit the submissions to the appropriate negotiating groups. The United States has been actively pressing for each negotiating group to consider civil society input, according to U.S. negotiators. 
The participation of civil society in the FTAA process is expected to increase following the Buenos Aires Ministerial, according to U.S. officials. The FTAA ministers declared in Buenos Aires that the committee was “to foster a process of increasing and sustained communication with civil society, to ensure that civil society has a clear perception of the development of the FTAA negotiating process.” To do so, the committee was instructed to take the following steps: develop a list of options to increase and sustain communication with civil society for consideration by the TNC at its next meeting in September 2001; forward to the nine negotiating groups the submissions made pertaining to their respective issues; forward to the nine negotiating groups the submissions related to the FTAA process in general; and invite civil society groups to present their conclusions about the FTAA negotiations from other fora and seminars within the hemisphere. The ministers did not explicitly request another round of formal civil society submissions after the ministerial. U.S. officials stated that the committee is going to consider a variety of approaches as it develops its list of options for the TNC to consider, including, among others, the possibility of a third open invitation to civil society. Options may also include seminars, outreach briefings in the hemisphere, and other methods for providing information to the public on the progress of the negotiations. In addition, according to FTAA experts, other FTAA participants are being much more supportive of the civil society committee than they were earlier in the process. U.S. negotiators believe that by providing a means to communicate civil society views to ministers, the committee also offers an opportunity to begin to build broad-based support within the hemisphere for an eventual agreement.
A comprehensive FTAA would unite a diverse set of economies into the world’s largest trading bloc, involving nearly 40 percent of the world’s production and significant shares of U.S. trade and investment. Such an agreement would benefit U.S. exporters by reducing some relatively high trade barriers on U.S. exports to the region. By comparison, most FTAA exports to the United States entered duty-free in 2000. However, some U.S. import-competing industries, such as textiles, apparel, and certain agricultural goods, have traditionally received higher levels of protection. These industries would face increased competition and potentially lower production and employment if current U.S. barriers were lowered. The overall impact on the U.S. economy of removing U.S. and other FTAA countries’ tariff barriers may be relatively small since total U.S. trade with non-NAFTA FTAA countries is only about 1 percent of the $11 trillion U.S. economy. An FTAA agreement, however, would cover much more than merchandise trade. Services, investment, IPR, and government procurement are commercially important areas in which the United States may gain improved market access and privileges. The FTAA would provide new coverage in investment and government procurement because the United States currently has only a few bilateral agreements with other FTAA countries in those areas. The United States also hopes to expand coverage in services and IPR beyond existing WTO agreements. U.S. trade and investment in the Western Hemisphere have increased rapidly over the past decade. Over 80 percent of U.S. merchandise trade and about half of services trade and investment in the region are with NAFTA partners Canada and Mexico. However, merchandise trade with non-NAFTA FTAA countries has more than doubled over the past decade, and services trade and FDI have increased in both value and share relative to the rest of the world. Figures 7, 8, and 9 show the shares of U.S.
merchandise trade, services trade, and FDI with key trade partners. Appendix I provides more information on current U.S. trade and investment with FTAA countries. Over the past decade, FTAA countries have pursued the liberalization and integration of their economies through a wide variety of interregional free trade and customs union agreements. These changes have lowered barriers to U.S. exports, but tariffs and other barriers still remain relatively high on many U.S. exports. FTAA countries’ overall average tariff rates are about twice that of the United States, with about one-third above 10 percent. Barriers on agricultural products are generally higher than industrial goods. Some U.S. products also face higher tariff rates than other competitors that have preferential access to some FTAA markets through subregional trade agreements. For FTAA countries, the U.S. market is relatively open with 87 percent of FTAA imports entering duty-free and an average trade-weighted U.S. tariff on FTAA imports of less than 1 percent. However, the United States maintains high barriers on certain agricultural products, such as sugar, peanuts, and citrus, and on textiles and apparel products, which are important exports of various FTAA countries. For some of these products, imports are limited by quota or by prohibitively high tariffs after an initial quantity has been imported. Reductions in these barriers may increase imports, lower prices, and reduce U.S. production. FTAA negotiations also include antidumping measures, which place additional duties on products if a country finds that the products have been sold at less than their normal value. Changes in antidumping rules may have mixed results for the United States because it is the country that has initiated the most cases and had the most cases initiated against it within the FTAA region.
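A trade-weighted average tariff, as cited above, weights each product line's rate by its share of import value rather than counting every rate equally, which is why the figure can fall below 1 percent even when a few product lines carry high duties. A minimal sketch of that arithmetic, using invented product lines and values (not actual U.S. tariff data):

```python
# Trade-weighted vs. simple average tariff: a hypothetical illustration.
# The import values and rates below are invented for demonstration only.

imports = [
    # (import value in $ millions, ad valorem tariff rate)
    (9_000, 0.000),  # duty-free goods dominate the import mix
    (800,   0.020),  # low-duty industrial goods
    (200,   0.150),  # a small volume of high-tariff sensitive goods
]

total_value = sum(value for value, _ in imports)

# Simple average: every tariff line counts equally.
simple_avg = sum(rate for _, rate in imports) / len(imports)

# Trade-weighted average: each rate is weighted by its import value.
weighted_avg = sum(value * rate for value, rate in imports) / total_value

print(f"simple average tariff:        {simple_avg:.1%}")
print(f"trade-weighted average tariff: {weighted_avg:.1%}")
```

With these hypothetical figures the simple average is about 5.7 percent while the trade-weighted average is about 0.5 percent, illustrating how a largely duty-free import mix pulls the weighted figure well below the unweighted one.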
Overall, some economic studies suggest that the elimination of tariff and nontariff barriers in the region would likely have a small impact on the U.S. economy because of the relatively small size of U.S. trade with the region compared with U.S. production. The FTAA could expand opportunities for U.S. exporters by removing tariff and nontariff barriers on U.S. products. Average tariff levels in the region fell from over 40 percent in the mid-1980s to 12 percent in the mid-1990s, prompting sizable increases in both intra- and extra-regional trade flows, according to the IDB. In 1985, Brazil’s simple average tariff rate was 51 percent, while Argentina’s was 35 percent. In 1999, these rates fell to 14 and 11 percent, respectively. Current tariff averages for FTAA countries are generally significantly lower than the averages during the 1980s and early 1990s. However, compared with the U.S. and Canadian simple average tariff rates of less than 5 percent, other FTAA countries’ rates are still relatively high. Some countries, such as Chile and Bolivia, have relatively uniform tariff schedules that apply an across-the-board rate for most products, with some exceptions. Chile recently lowered its uniform rate from 9 to 8 percent on January 1, 2001. It is scheduled to continue lowering the rate until it reaches 6 percent in 2003. Other countries tend to apply a wider range of rates, with the highest duties applied to sensitive products. Brazil, for instance, charges its highest duties (35 percent) on automobile parts; Nicaragua charges rates between 45 to 55 percent on certain types of corn and rice imports; and Canada maintains out-of-quota tariff rates of over 250 percent on certain dairy imports. In addition, only Canada, Costa Rica, El Salvador, Panama, and the United States have agreed to eliminate tariffs on certain high-technology products through the WTO Information Technology Agreement.
These products are important exports for the United States, and some countries, including Brazil, maintain high tariffs on them. In addition to lowering the tariff rates that exporters are currently charged, the FTAA would also commit countries to not raise these rates in the future. Through their WTO commitments, FTAA countries have already bound most of their tariffs at certain levels. However, in many cases these bound rates are relatively high and countries charge much lower rates in practice. A World Bank study found that for the 10 Latin American countries it examined, the overall trade-weighted average bound rates ranged between 25 and 57 percent and were at least two or three times as high as the current rates the countries charged. Under WTO rules, countries can increase their current rates to their bound levels at any time. The FTAA would reduce these higher bound rates, in many cases to zero, and provide additional certainty for FTAA exporters. U.S. agricultural exporters also stand to gain from tariff elimination through the FTAA. Since agricultural tariffs are generally higher than those on industrial goods, the FTAA may lead to more substantial changes in agricultural trade than in other sectors. For example, Costa Rica’s average tariff on imports of manufactured goods is 5.4 percent compared with 16.8 percent on agricultural goods. Agricultural tariffs in Barbados, Belize, Guyana, and Jamaica all average over 20 percent and are generally twice the average tariff on industrial goods. Table 4 shows simple average tariff rates across FTAA countries for all agricultural and industrial goods. The pattern of higher tariffs on agricultural goods holds true for all FTAA countries except Chile and the Mercosur countries of Argentina, Brazil, and Uruguay. For Mercosur countries, protection of certain industrial goods, such as automobiles, raises their average rates on industrial goods slightly higher than tariffs on agricultural goods. 
The bound rates that countries committed to in the WTO are even higher, with the average across all agricultural goods in South America at 39 percent, in Central America at 54 percent, and in the Caribbean Islands at 86 percent. The U.S. Department of Agriculture’s Economic Research Service estimated that elimination of agricultural barriers would lead to the expansion of U.S. exports to FTAA countries by 8 percent in the first 5 years and an increase in U.S. imports from FTAA countries by 6 percent. The study predicted that U.S. exports of wheat to Brazil and exports of corn, soybeans, and cotton across the hemisphere would increase. Certain agricultural products also tend to face trade-distorting measures, such as price bands and export subsidies, which the FTAA may address. For example, the United States has initiated a WTO dispute case against Canada over its export programs involving dairy products and has initiated a review of the Canadian wheat marketing board to determine how its status as a state-trading monopoly may restrict competition and harm U.S. producers. Chile has used complex price bands, which apply additional duties on imports, to maintain domestic prices within a certain range for wheat, wheat flour, edible vegetable oils, and sugar. For example, due to recent low international prices for wheat products, Chile applies duties as high as 90 percent on imports of wheat, a key U.S. export. FTAA negotiations may address these and other agricultural trade practices that distort domestic and international markets. Negotiators already have reached agreement on the elimination of agricultural export subsidies, another trade-distorting practice. The WTO agriculture agreement allows export subsidies, but only if the WTO is notified and the subsidies are reduced over time. Proponents of the FTAA argue that it will eliminate the disadvantage U.S. exporters face from subregional agreements within the hemisphere and help them maintain or expand market share. 
Subregional trade agreements have proliferated in the Western Hemisphere as part of a larger reform process undertaken by many FTAA countries that has included lowering tariffs on all partners. However, for those countries that are members of free trade agreements or customs unions, duties are even lower or are eliminated. The United States is party to only one (NAFTA) of the numerous trade agreements in the region. USTR, the U.S. Department of Agriculture, and some business associations have cited the Chile-Canada free trade agreement as an important example of how U.S. exports are disadvantaged because the agreement provides preferential access for Canadian products in sectors such as forest products, wheat, vegetable oils, and potatoes. Both Canada and the United States are major producers of these products and compete in Chile and elsewhere. In addition, the EU has recently concluded a free trade agreement with Mexico and is pursuing negotiations with Chile and Mercosur. An FTAA might, however, undercut existing U.S. trade preference programs by eliminating similar disadvantages faced by some FTAA countries in the U.S. market compared with Canada and Mexico through NAFTA. For example, Congress recently improved textile and apparel access for Caribbean Basin countries through the Caribbean Basin Trade Partnership Act, partly to match the expanded access Mexico achieved through NAFTA. Andean Community countries have also sought similar provisions, citing lost sales to Caribbean competitors. For most U.S. sectors, tariff liberalization through the FTAA would likely have a limited impact on U.S. imports. This is because the U.S. market is already relatively open to imports from FTAA nations. For example, in 2000, most FTAA goods entered the United States duty-free or at very low rates. The overall average tariff rate on products entering the U.S. market is less than 5 percent. 
However, the United States also provides countries with further tariff reductions on certain products through several specialized programs. These include nonreciprocal trade preference programs, such as the Generalized System of Preferences, and reciprocal trade preference programs, such as the Agreement on Trade in Pharmaceutical Products. Most of these programs offer duty-free entry or very low tariff rates on a range of products. Therefore, the average tariff rates facing many countries’ products imported by the United States are even lower than the normal average U.S. tariff rate. For example, the trade-weighted average U.S. tariff rate on imports from FTAA countries is only 0.79 percent. However, the trade-weighted tariff rates vary across countries and regional groups depending on the types of products imported by the United States. Table 5 shows that NAFTA countries face the lowest U.S. tariff rates, while Central American countries face the highest overall average. About 87 percent ($376 billion) of FTAA imports entered the United States duty-free in 2000. Another 7 percent paid duties between 0 and 5 percent, and only about 3 percent faced duties of above 15 percent. Through NAFTA, Canada and Mexico had an even higher share of their products (94 percent) enter duty-free. The Andean Community and Central American nations had the lowest shares of duty-free products, with about 40 percent of their imports facing no duties. Table 6 shows the share of imports facing different ranges of tariff rates by each regional group. The United States maintains high tariffs on certain sensitive products whose production may decline if current trade barriers are reduced and competition from imports increases. The tariffs on some of these products are as high as 48 percent, and some products are subject to tariff-rate quotas with out-of-quota duties as high as 350 percent. 
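The distinction between a simple average tariff and a trade-weighted average tariff can be made concrete with a small worked example. The numbers below are hypothetical, chosen only for illustration (they are not the report's data): each product line's tariff rate is weighted by that line's share of total import value, which is why a market where most imports enter duty-free shows a trade-weighted average far below its simple average.

```python
def simple_average(rates):
    """Unweighted mean of tariff rates (in percent)."""
    return sum(rates) / len(rates)

def trade_weighted_average(rates, import_values):
    """Mean of tariff rates weighted by each line's import value."""
    total = sum(import_values)
    return sum(r * v for r, v in zip(rates, import_values)) / total

# Hypothetical product lines: tariff rate (percent) and import value ($ millions).
rates = [0.0, 0.0, 5.0, 20.0]           # most lines enter duty-free
values = [300.0, 500.0, 150.0, 50.0]    # duty-free lines dominate by value

print(simple_average(rates))                    # 6.25
print(trade_weighted_average(rates, values))    # (5*150 + 20*50) / 1000 = 1.75
```

Here the simple average (6.25 percent) overstates the burden on actual trade flows, while the trade-weighted figure (1.75 percent) reflects that high-volume duty-free goods pull the average down, mirroring the pattern of a sub-1-percent weighted U.S. rate on FTAA imports despite individual tariff peaks.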
Central American FTAA countries have the largest share of imports facing tariffs greater than 15 percent. A large portion of these products is accounted for by textile and apparel goods, which until recently had only limited coverage under preference programs. Tariff rates on these products generally are between 20 and 33 percent. Textile and apparel products are important exports for Mexico, Caribbean Basin (including Central America) countries, and Andean Community countries. Mexico and the Caribbean Basin each accounted for about 14 percent ($9.7 billion) of U.S. apparel imports in 2000 (Andean exports are very small in comparison). Mexico has preferential access through NAFTA, and Caribbean Basin exports have recently gained preferential access through the Caribbean Basin Trade Partnership Act. However, the United States offers the Caribbean Basin Trade Partnership Act unilaterally and can withdraw or modify it. Some FTAA countries would prefer to lock in access to the U.S. market through a reciprocal agreement like the FTAA. In addition to textiles and apparel, several agricultural products also receive protection through higher U.S. tariff rates and tariff-rate quotas. These include products such as tobacco, sugar, peanuts, dairy, and citrus products. For example, the out-of-quota tariff rate is 350 percent for tobacco and is above 100 percent for peanut products. The United States also provides domestic support programs for some of these industries, particularly dairy, sugar, and peanuts. Since limiting access to the market is essential for maintaining certain price levels, removal of trade barriers and increased competition from FTAA suppliers would affect these programs. The U.S. Department of Agriculture’s Economic Research Service reported that, while providing consumers access to more inexpensive imports, the FTAA might lead to significant declines in U.S. prices and production in the sugar, peanut, and orange juice markets. 
Brazil is a major producer of both sugar and orange juice, and Argentina already supplies over 85 percent of U.S. peanut imports. Sugar also has been a sensitive product in trade negotiations among Brazil, Argentina, and Chile. More restrictive rules on antidumping investigations under the FTAA may have mixed effects on the United States because it is both the largest initiator and defendant in these cases in the hemisphere. Protections for import-competing U.S. industries, such as steel and fertilizers, might be more limited. However, U.S. exporters facing antidumping measures abroad might benefit. U.S. consumers and producers that would gain access to relatively cheaper imports may also benefit. Although antidumping duties are only applied on specific products and generally involve a small share of overall imports, they affect sensitive goods, and the threat of an investigation may lead exporters to restrain shipments. Proponents of antidumping argue that it is important to some industries in the United States as protection against unfair competition when other trade barriers are lowered. The degree to which the FTAA agreement augments or modifies WTO provisions in these areas will affect how important the agreement is to current FTAA countries’ practices. The United States has argued for no changes that would restrain the use of antidumping by FTAA countries (see ch. 3 for more information on antidumping negotiations). Until the early 1980s, Canada and the United States were the primary users of antidumping measures. However, Argentina, Brazil, and Mexico recently have become important users of antidumping measures. Of the 485 antidumping investigations initiated by one FTAA country against another between 1987 and 2000, the United States was the largest initiator (with 30 percent of the cases) and the largest defender (with 38 percent of the cases). 
Brazil, which was the fourth largest initiator (with 8 percent of the cases), was the second largest defender (with 21 percent of the cases). Figure 10 shows the number of cases initiated and defended by the top five FTAA users of antidumping. U.S. antidumping orders in effect as of April 2001 against FTAA countries include (1) certain steel products from Argentina, Brazil, Canada, and Mexico; (2) frozen concentrated orange juice from Brazil; and (3) salmon from Chile. The duties placed on these imports can be substantial and vary by product. For example, duties on imports from Brazil of silicon metal ranged between 87 and 94 percent in 2000, and duties on frozen concentrated orange juice were about 15 percent in 2001. The largest number of antidumping investigations against the United States have been initiated by its NAFTA partners, Canada and Mexico, which account for 73 percent of all cases. Brazil is third with 14 percent, including measures in effect on certain chemical imports from the United States. The relatively small size of U.S. merchandise trade with non-NAFTA FTAA countries compared with the size of the U.S. economy will limit the overall impact of FTAA tariff liberalization on the U.S. economy. U.S. trade with FTAA countries outside of NAFTA was $123 billion in 2000, only about 1 percent of the approximately $10 trillion U.S. economy. However, certain sectors that are relatively more protected (both in the United States and abroad) may see significant changes in trade flows. For example, some economic impact studies that use economywide models have found that U.S. exports in sectors such as furniture, textiles, and clothing and some agricultural products could increase substantially, while exports in sectors such as mining, base metals, and petroleum could fall slightly. Likewise, imports of certain metals, nonelectrical machinery, and leather products could increase, while paper and wood imports could fall. 
Overall, these studies estimate a small but positive impact of an FTAA, with an increase in U.S. output annually of about 1 percent or less. However, the models focus only on tariff elimination and do not generally include other aspects of the FTAA agreement that are difficult to quantify, such as services and IPR. Some supporters of the FTAA argue that U.S. exports to non-NAFTA countries could grow significantly with a free trade agreement, just as U.S. exports to Mexico did through NAFTA. While U.S. exports to Mexico and other FTAA (non-NAFTA) countries in 1990 were about 7 and 6 percent of total U.S. exports, respectively, exports to Mexico rose to 14 percent ($100 billion) of total U.S. exports by 2000. The FTAA negotiations involve several areas in which the United States has important commercial interests but has few multilateral and bilateral agreements with FTAA countries. The United States is the world’s leading exporter of services and one of the largest investors in Latin America. FTAA countries began liberalizing their service sectors and opening their economies to foreign investment as part of their economic reform programs. However, these reforms are relatively new and are not yet bound by international or bilateral commitments with the United States, as they would be under an FTAA. Also, because the United States is the leading producer of software, pharmaceuticals, and other cutting-edge technologies, protection of IPR is an area of commercial importance to the United States. FTAA countries have been implementing relatively new multilateral and bilateral IPR commitments, and the United States is seeking to expand these in the FTAA. Finally, no FTAA countries besides the United States and Canada are party to the WTO Government Procurement Agreement. FTAA negotiations in this area present an opportunity to provide new commitments that would guarantee the opportunity for U.S. merchandise and services suppliers to compete for contracts in regional procurement markets. U.S. 
service providers stand to gain from liberalization in the FTAA as existing trade barriers are lowered in the region. The service sector is a commercially important area for the United States. World services exports were $1.3 trillion in 1998, and the United States was the largest exporter, accounting for nearly 20 percent of services exports. By comparison, Canada, the next largest FTAA exporter, accounted for just 2.5 percent. Domestically, services account for 78 percent of the U.S. gross domestic product and a growing share (21 percent) of U.S. exports. Many FTAA countries’ service sectors, such as telecommunications and energy, have traditionally been highly regulated and controlled by monopoly or state enterprises. However, as part of their larger reform process, many FTAA countries have begun privatizing state enterprises and opening some sectors to increased international competition. For example, Brazil, Argentina, Chile, and Venezuela recently have begun privatizing segments of their telecommunications sector, and Argentina opened its domestic telecommunications market to full competition in 2000. Countries have initiated these reforms partly because service sectors provide resources for other elements of the economy and can be important engines for economic development. However, the reforms are relatively new and countries have made only limited multilateral commitments in the WTO. For example, the number of commitments most FTAA countries (except for Canada, Argentina, and the United States) made in the WTO services agreement was ranked in a recent OAS study as moderately low to very low. Additional multilateral agreements on basic telecommunications and financial services have since been concluded and enjoyed greater participation. Also, U.S. service providers may receive less favorable treatment and access than other competitors. FTAA countries have engaged in numerous subregional trade agreements, many of which include services. 
These include Mercosur, CARICOM, the Andean Community, and many Mexican and Chilean bilateral free trade agreements. As they do for merchandise trade, such subregional agreements put countries not party to the agreement at a disadvantage. A comprehensive FTAA could provide new protections for existing and future U.S. investments. The United States is one of the largest investors in Latin America, and the stock of U.S. FDI in FTAA countries accounted for about a quarter ($265 billion) of all U.S. FDI abroad. Canada was the largest recipient, followed by Brazil and Mexico. FTAA countries, primarily Canada, accounted for only about 10 percent of FDI in the United States. Although FTAA countries have sought to attract more investment in their economies, the United States has investment treaties protecting the rights of investors with only a few FTAA countries. An FTAA agreement could provide stronger protections for U.S. investment in Latin America. Along with direct investment, capital also is provided to countries through short-term portfolio investments such as stocks and bonds. The United States has proposed that each form of investment be covered under the FTAA agreement. Investment is a commercially important area to the United States and one that is increasingly interconnected with trade. About 35 percent of goods exports and about 19 percent of services exports were related to U.S. investments abroad from 1990 to 1997. Multinational companies choose between producing a product in the domestic market and exporting it across the border, locating in the foreign market and producing the product there, or producing a product jointly in several countries. When trade barriers are high, the incentive to locate abroad is increased. Free trade agreements coupled with investment provisions can enable businesses to make more efficient investment decisions that are not distorted by government policies. 
Also, for some industries, particularly certain service sectors, local production of the service is preferred due to legal reasons and because it is more efficient. U.S. service sales through U.S.-owned foreign affiliates recently exceeded U.S. cross-border sales. Worldwide, sales by multinational corporations in the 1990s expanded at a much faster rate than global exports, and their levels of production grew from 5 percent of gross domestic product in 1982 to 10 percent in 1999, according to the United Nations. Sales of foreign affiliates worldwide ($14 trillion in 1999 and $3 trillion in 1980) are now nearly twice as high as global exports. Economic reforms in many FTAA countries have recently opened some markets to increased investment, and the privatization of state-owned firms also has drawn significant foreign capital from the United States, Europe (particularly Spain), and elsewhere. Brazil was the largest recipient of new FDI in Latin America, receiving 41 percent ($30 billion) in 2000, with Mexico second at 18 percent ($13 billion). A high percentage of these investments went toward acquiring assets in the telecommunications, energy, and finance sectors. FDI in Brazil has been particularly large because the government privatized state-owned electric power companies, banks, and retail establishments. Brazil also changed its constitution to allow foreign investment in petroleum, shipping, telecommunications, and natural gas sectors and passed patent reform legislation that increased incentives for direct investment, according to the U.S. Department of Commerce. FDI in Argentina, Chile, and Venezuela also has increased substantially in recent years due to acquisitions of state-owned service enterprises. Overall, new FDI in Latin America was $74 billion in 2000. For U.S. investors, many of these new investments are not covered by bilateral or multilateral investment agreements. 
The United States has investment agreements in force with only 10 countries in the FTAA region. NAFTA provides strong protections for investments in Canada and Mexico, and the other eight agreements are bilateral investment treaties. Not covered by any bilateral agreement with the United States are Brazil, which accounts for $35 billion (13 percent) of the U.S. stock of FDI in FTAA countries; Chile, which accounts for about $10 billion (4 percent); and Venezuela, which accounts for $7 billion (3 percent). Multilateral provisions covering investments are limited. The WTO includes provisions related to investment in three of its agreements: services, goods, and IPR. However, there is no broad multilateral agreement that protects investment specifically. The Organization for Economic Cooperation and Development began negotiations on such an agreement, but these were suspended over differences among countries and complaints from civil society groups. On the other hand, numerous subregional trade agreements and bilateral investment treaties exist within the FTAA region that do not include the United States. Countries have liberalized their investment provisions to encourage reform and competition and to attract needed capital for economic development. Currently, all but three countries in the region (the Bahamas, St. Kitts and Nevis, and Suriname) have signed at least one bilateral investment treaty with another FTAA country. Many Caribbean countries have signed bilateral investment treaties with European countries, and Brazil has one with the EU. Subregional trade agreements also have included investment provisions, including NAFTA, Mercosur, the Andean Community, and several of the Mexican and Chilean bilateral free trade agreements. 
As a leader in several areas of technology and medicine and with large investments in the research and development of new products and processes, the United States has important commercial interests in promoting the protection and enforcement of IPR abroad. The existing WTO agreement on IPR provides important disciplines that protect copyrights, patents, and other intellectual properties. The United States seeks to expand these provisions in the FTAA to provide greater protections. Cross-border transfers of royalties and licenses provide one measure of international sales of intellectual properties. These are fees collected by those who sell the rights to use industrial processes, techniques, formulas, and designs; copyrights and trademarks; business format franchising rights; broadcast rights; and the right to distribute and use computer software. In 1999, U.S. exports of these intangible intellectual properties amounted to $4.3 billion to FTAA countries, with Canada accounting for 39 percent; Mexico, 18 percent; and Brazil, 12 percent. U.S. imports of intellectual properties were only $844 million and came primarily from Canada (72 percent). Software licensing was one of the fastest growing segments of trade in intangible intellectual properties. In addition to intangible intellectual properties, the United States also is a large exporter of numerous products that embody intellectual properties such as videos, recordings, software, pharmaceuticals, chemicals, and other physical products. According to the U.S. International Trade Commission, the continued growth of U.S. intellectual property exports depends, in part, on the ability of U.S. trade partners to protect such properties. Also, a country’s ability to attract foreign investment is partly tied to the strength of its protections on intellectual properties because companies want assurance that the intellectual properties they transfer to their new operations will be protected. Some U.S. 
industry associations have identified piracy and lost sales in the region due to IPR problems. For example, piracy losses for software in Latin America were estimated by the industry at around $870 million in 2000. At 58 percent, Latin America had the second highest piracy rate of all world regions, behind Eastern Europe. Also, the pharmaceutical industry attributes lost sales ranging from $66 to $82 million in Argentina and Brazil to inadequate intellectual property protections. Although FTAA countries’ adoption of laws protecting IPR has improved significantly over the past decade, problems still exist. For example, the United States has initiated WTO dispute settlement procedures against Argentina and Brazil over limitations in their IPR laws. Also, Paraguay was designated a priority foreign country under Special 301 provisions of the Trade Act of 1974 because of its role as a regional center for piracy, particularly of optical media. Other FTAA countries designated in the USTR Special 301 report on intellectual property protections included the Dominican Republic, Guatemala, and Peru. Government procurement is a relatively important component of many FTAA countries’ economies. FTAA countries’ government expenditures comprise 10 to 15 percent of the gross domestic product and can be higher for some smaller economies. The IDB estimated the size of the Latin American procurement market in 1996 at between $131 billion and $197 billion. Currently, only the United States and Canada are party to the WTO Government Procurement Agreement. NAFTA also provides some government procurement access among the United States, Canada, and Mexico. The United States does not have any multilateral or bilateral agreements with the remaining FTAA countries. Therefore, the FTAA could provide significant new access to procurement markets for U.S. exporters of goods and services. U.S. 
agricultural and electrical manufacturing and pharmaceutical companies are among those supporting stronger government procurement provisions through the FTAA. Also, FTAA governments could benefit from reduced expenses as procurement becomes more competitive and products become less expensive. Some civil society and labor groups argue that the FTAA should allow for government discretion for social or environmental reasons when making procurement decisions, and some U.S. companies favor maintaining programs that provide preferences for domestic suppliers. Since most FTAA countries have not yet made bilateral or multilateral commitments in this area, the degree to which they will grant access to their procurement markets in an FTAA is unclear.

The 34 democratic countries of the Western Hemisphere pledged in December 1994 to form a Free Trade Area of the Americas (FTAA) no later than 2005. The FTAA agreement would eliminate tariffs and create common trade and investment rules among the 34 democratic nations of the Western Hemisphere. When completed, the FTAA agreement will cover about 800 million people, more than $11 trillion in production, and $3.4 trillion in world trade. The five FTAA negotiating groups pursuing liberalization of trade and investment--market access, agriculture, investment, services, and government procurement--have submitted initial proposals and agreed on a date to begin market access negotiations, but the groups face short-term and long-term issues. In the short term, these groups must resolve several practical issues in order to begin negotiations on market access schedules no later than May 15, 2002, and to narrow differences and prepare revised trade rule chapters by August 2002. Over the long term, these market-opening groups face fundamental questions about how much and how fast to liberalize. 
Narrowing outstanding differences may be difficult for the four other negotiating groups, which have made initial proposals on rules governing intellectual property; subsidies, antidumping, and countervailing duties; competition policy; and dispute settlement. Some groups face fundamental differences. Other negotiating groups have reached agreement on basic principles but disagree on key details. Two of the three crosscutting themes--smaller economies and civil society--have proven controversial. Because the FTAA's smaller economies are concerned about their capacity to implement such a vast agreement and its potential economic effects on their countries, they have been seeking assurances of technical assistance and other special treatment. The FTAA process has been viewed as not sufficiently open to the public, and past efforts to include nongovernmental interests, such as business, labor, the environment, and academia, have been widely seen as ineffective. Some steps have been taken to address these concerns, and other steps are being considered. As a comprehensive agreement, the FTAA could have wide-ranging effects on U.S. trade and investment with other Western Hemisphere countries. The elimination of tariff and nontariff barriers would improve U.S. market access; put U.S. exporters on an equal footing with competitors in FTAA markets; and expand trade, particularly in highly protected sectors such as agriculture. On the other hand, some protected U.S. sectors, including textiles, apparel, and agriculture, may face increased import competition and declining production if barriers were lowered. |
Anthrax is an acute infectious disease caused by the spore-forming bacterium called Bacillus anthracis. The bacterium is commonly found in the soil and forms spores (like seeds) that can remain dormant for many years. Although anthrax can infect humans, it occurs most commonly in plant-eating animals. Human anthrax infections are rare in the United States and have normally resulted from occupational exposure to infected animals or contaminated animal products, such as wool, hides, or hair. Infection can occur in three forms, two of which are relevant to this testimony. They are (1) cutaneous, which usually occurs through a cut or abrasion, and (2) inhalation, which results from breathing aerosolized anthrax spores into the lungs. Aerosolization occurs when anthrax spores become airborne, thus enabling a person to inhale the spores into the lungs. After the spores enter the body, they can germinate into bacteria, which then multiply and secrete toxins that can produce local swelling and tissue death. The symptoms are different for each form of infection and are thought to appear within about 7 days of exposure, although individuals have contracted inhalation anthrax as long as 43 days after exposure. Depending on the extent of exposure and its form, a person can be exposed to anthrax without developing an infection. Before the 2001 incidents, the fatality rate for inhalation anthrax was approximately 75 percent, even with appropriate antimicrobial medications. People coming in contact with anthrax in its natural environment have generally not been at risk for inhalation anthrax, and before 2001, no cases of inhalation anthrax had been reported in the United States since 1976, although 224 cases of cutaneous anthrax were diagnosed between 1944 and 1994. Fatalities are rare for cutaneous anthrax. Because so few instances of inhalation anthrax have occurred, scientific understanding about the number of spores needed to cause the disease is still evolving. 
Before the 2001 incidents, it was estimated that a person would need to inhale thousands of spores to develop inhalation anthrax. However, based on the cases that occurred during the fall of 2001, experts now believe that the number of spores needed to cause inhalation anthrax could be very small, depending on a person’s health status and the aerosolization capacity of the anthrax spores. In total, the contaminated letters caused 22 illnesses and resulted in 5 deaths from inhalation anthrax. Numerous postal facilities were also contaminated. The first two cases of disease involved media employees in Florida. The employees—one of whom died—contracted inhalation anthrax and were thought to have contracted the disease through proximity to opened letters containing anthrax spores. Media employees also developed anthrax in New York—the second location known to be affected. The initial cases in New York were all cutaneous and were also thought to have been associated with opened envelopes containing anthrax spores. The initial cases at the next site—New Jersey—involved postal employees with cutaneous anthrax. The postal employees were believed to have contracted the disease through handling the mail—as opposed to opening or being exposed to opened letters containing anthrax spores. Unlike the incidents at other locations, which began when cases of anthrax were detected, the incident at the Hart Building—the fourth location—began with the opening of a letter containing anthrax spores and the resulting exposure to the contamination. The discovery of inhalation anthrax in the first postal worker from Brentwood revealed that even individuals who had been exposed only to taped and sealed envelopes containing anthrax could contract the inhalation form of the disease. 
Subsequent inhalation cases in Washington, D.C.; New Jersey; New York; and Connecticut—the sixth location affected—underscored that finding and also demonstrated that exposure and illness could result from cross contamination of mail. (See app. I for a time line of selected events related to the anthrax incident in the fall of 2001.) On or about October 9, 2001, at least two letters containing anthrax spores entered the U.S. mail stream—one was addressed to Senator Thomas Daschle, the other to Senator Patrick Leahy. The letters were mailed in Trenton, New Jersey, and forwarded to the Brentwood facility in Washington, D.C., where they were processed on high-speed mail sorting machines and further processed in the facility’s government mail section before delivery. On October 15, a staff member in Senator Daschle’s office opened the contaminated envelope. The envelope contained a powdery substance, which the accompanying letter identified as anthrax, that was released in a burst of dust when the envelope was opened. The U.S. Capitol Police were notified, and the substance was quickly tested and confirmed to be anthrax. Brentwood managers analyzed the path of the letter through the facility. Although the machine that processed the letter was reportedly shut off—at least for a period of time—the facility itself was not closed or evacuated at that time. Within days, a Brentwood employee was suspected of having contracted inhalation anthrax. The Postal Service closed the facility on October 21, 2001, after CDC confirmed that the employee had the disease. Thereafter, two other Brentwood employees, Mr. Curseen, Jr., and Mr. Morris, Jr., died. Both were subsequently found to have died of inhalation anthrax. The Brentwood facility is a large 2-story facility that operated 24 hours a day, 7 days a week. About 2,500 employees worked at Brentwood, processing mail on one of three shifts. 
Brentwood processed all the mail delivered to addresses on Capitol Hill, including the Hart Building. Brentwood was the second processing and distribution center closed for an extended period because of anthrax contamination. The Postal Service reported that it plans to reopen the facility in phases; by late November administrative personnel will begin working in the facility and limited mail processing operations will begin shortly after that. Brentwood is expected to be fully operational by spring 2004. The other facility—the Trenton Processing and Distribution Center—located in Hamilton, New Jersey, was closed 3 days before Brentwood on October 18, 2001, after CDC confirmed that a New Jersey postal employee had cutaneous anthrax. It is in the process of being decontaminated. The Postal Service’s decision to wait for CDC’s confirmation of a case of inhalation anthrax before closing Brentwood and referring the facilities’ employees for medical treatment was consistent with the public health advice the Postal Service received and the health risk information available at the time. However, the Postal Service’s decision contrasted with the more immediate decision to close the Hart Building after anthrax contamination occurred. As a result, postal employees questioned whether the Postal Service’s decision adequately protected their health. The Postal Service’s decision to wait for CDC’s confirmation of a case of inhalation anthrax before closing Brentwood and referring its employees for medical treatment was consistent with the advice provided by CDC and the D.C. Department of Health, as well as the available health risk information. CDC called for such confirmation before closing a facility or recommending medical treatment because, at the time, public health authorities believed postal employees were unlikely to contract inhalation anthrax from exposure to contaminated mail. Postal officials reported that they consulted CDC and the D.C. 
Department of Health about the possible health risks to Brentwood employees after learning that Senator Daschle’s letter—opened on October 15, 2001—contained anthrax. Even though the letter would have passed through Brentwood, the public health authorities said that they did not consider the facility’s employees at risk, given the results of ongoing investigations of anthrax incidents in Florida and New York and the scientific understanding at that time. Specifically, as discussed, no postal employees were known to have developed symptoms of anthrax after contaminated letters had passed through the postal system on the way to destinations in Florida and New York, and anthrax spores were not considered likely to leak out, or escape from, a taped and well-sealed envelope in sufficient quantities to cause inhalation anthrax. Accordingly, the Postal Service reported that it kept the Brentwood facility open in order to keep the mail moving. This goal was important to managers whom we interviewed, who cited the psychological importance of keeping the mail flowing in the aftermath of the September 11 terrorist attacks. On October 18, 2001, CDC confirmed that a postal employee in New Jersey had cutaneous anthrax. On that day, the Postal Service, in consultation with the New Jersey Department of Health and Senior Services, closed the Trenton Processing and Distribution Center. According to New Jersey public health officials, the facility was closed to facilitate environmental testing of the Trenton facility. While the contaminated letters to Senator Daschle and Senator Leahy were both processed through the Trenton and Brentwood facilities, it is not clear why the Postal Service did not take the same precautionary measures at Brentwood. We are pursuing this issue as part of our ongoing work. 
Although the Postal Service followed CDC’s advice and kept Brentwood open until CDC confirmed a case of inhalation anthrax, the Postal Service took interim steps to protect its employees. First, the Postal Service arranged for a series of environmental tests at the Brentwood facility, even though it reported that CDC had advised the Postal Service that it did not believe such testing was needed at that time. The results of the first test—taken and available on October 18, 2001—were from a quick test conducted by a local hazardous materials response team. The results were negative. Three days later, on October 21, 2001, CDC confirmed that a Brentwood employee had inhalation anthrax, and the Postal Service closed the facility and referred its employees for medical treatment. The positive results of more extensive environmental testing—also conducted on October 18, 2001—were not available until October 22, after the facility had already closed. In addition, Postal Service managers said they asked the D.C. Department of Health three times before October 21 for nasal swabs and antibiotics for Brentwood employees; however, the health department said the swabs and antibiotics were unnecessary. We have not yet been able to confirm this information with the D.C. Department of Health. Finally, the Postal Service took actions to protect its employees from low-level environmental risks. For example, it provided protective equipment such as gloves and masks and, according to postal managers, shut down the mail-sorting machine that processed the Daschle letter, at least for a time. Additionally, the Postal Service provided information on handling suspicious packages and required facility emergency action plans to be updated. In 1999, the Postal Service developed guidance for responding to anthrax and other hazardous incidents. 
The guidance, which was developed in response to hundreds of hoaxes, includes steps for notifying first responders, evacuating employees, and providing information and medical care to employees. The Postal Service reported that the guidance deals with observable events—specifically, spills—not events that are not observable, such as aerosolization of powders. As a result, the Postal Service said that it did not view the guidance as being applicable to the situation that occurred at Brentwood. Given that the situation at Brentwood differed from the situation contemplated in its guidance, the Postal Service sought advice from CDC and others. According to CDC officials, the health and safety of postal employees was always the first concern of postal managers during discussions with CDC. Furthermore, they said that the Postal Service was receptive to their advice about the need to close Brentwood to protect postal employees after a diagnosis of inhalation anthrax was confirmed. The Postal Service’s decision to wait for a confirmed case of inhalation anthrax before closing the facility and referring employees for medical treatment differed from the decision to implement precautionary measures immediately after anthrax contamination was identified at the Hart Building. The decisions differed, in part, because there was an observable incident at the Hart Building, but not at Brentwood. In addition, different parties made the decisions. At Brentwood, the Postal Service made the decision in consultation with CDC and the D.C. Department of Health. These parties were not involved in the decision-making at the Hart Building. Instead, because the Hart Building is one of many congressional offices surrounding the U.S. Capitol, the Attending Physician for the U.S. Congress—who functions independently from the District of Columbia— provided advice and made decisions about how to deal with the contamination there. 
The incident at the Hart Building was immediately viewed as high risk to employees there because the envelope opened in Senator Daschle’s office contained a visible white powder that the accompanying letter identified as anthrax, which was quickly confirmed by testing of the substance. Consequently, the Office of the Attending Physician of the U.S. Congress arranged for congressional employees to receive antibiotics immediately and advised closure of the Hart Building the following day. Since 2001, the Postal Service has developed new guidance to address security risks in the mail. Its Interim Guidelines for Sampling, Analysis, Decontamination, and Disposal of Anthrax for U.S. Postal Service Facilities—first issued in November 2001—states that postal facilities will be closed if a confirmed case of inhalation anthrax is identified or when evidence suggests that anthrax has been aerosolized in a postal facility. The Postal Service said that it plans to complete an update to these guidelines soon, and we plan to determine whether the new guidelines will adequately address the situation that occurred at Brentwood as part of our ongoing work. In addition, the Postal Service has tested and begun to install new biodetection technology in postal facilities. This technology is designed to enhance safety by quickly identifying unobservable evidence of aerosolized anthrax, thereby allowing for a prompt response. We plan to review the guidance associated with this technology as we complete our work. The Postal Service communicated health risk and other information to its Brentwood employees during the anthrax crisis, but some of the information it initially provided changed as public health knowledge evolved, intensifying employees’ concerns about whether adequate measures were being taken to protect them. 
Most significantly, information on the amount of anthrax necessary to cause inhalation anthrax and the likelihood of postal employees’ contracting the disease turned out to be incorrect. Other factors, including difficulties in communicating the uncertainty associated with health recommendations and employees’ long-standing distrust of postal managers, also challenged efforts to communicate effectively. The Postal Service has made additional efforts to communicate with Brentwood employees since the facility’s closure, but challenges remain, particularly the need to effectively communicate information on any possible residual risks. The Postal Service used a wide variety of methods to communicate information to employees; however, some of the information it initially provided changed with changes in public health knowledge. For example, on the basis of the science at that time, the Postal Service and CDC initially informed employees that an individual would need to be exposed to 8,000 to 10,000 spores to contract inhalation anthrax. This view turned out to be incorrect when two women in New York and Connecticut died from inhalation anthrax in October and November 2001 without a trace of anthrax spores being found in their environments. Their deaths caused experts to conclude that the number of spores needed to cause the disease could be very small, depending on a person’s health status and the aerosolization capacity of the spores. Postal employees were also told that they were at little risk of contracting inhalation anthrax because, in the view of public health officials, anthrax was not likely to escape from a taped and well-sealed envelope in sufficient amounts to cause inhalation anthrax. In addition, on October 12, 2001, CDC issued a health advisory, which the Postal Service distributed to its employees, indicating that it is very difficult to refine anthrax into particles small enough to permit aerosolization. 
This information also proved to be incorrect when the U.S. Army Medical Research Institute of Infectious Diseases’ analyses of the anthrax in Senator Daschle’s letter in mid-October 2001 revealed that the substance was not only small enough to escape from the pores of a taped and well-sealed envelope but also highly refined and easily dispersed into the air. Finally, an error occurred on October 10, when the Postal Service instructed employees to pick up suspicious letters and isolate them in sealed containers. The message was corrected within a few days when employees were instructed not to touch suspicious letters. Nevertheless, Brentwood employees we spoke with cited the miscommunication as an indication that the Postal Service was not concerned about their safety. As a result of these and other issues, union and management officials report lingering bitterness between Brentwood employees and postal management. Communicating information proved challenging for several reasons. First, the incidents occurred in the turbulent period following the terrorist attacks of September 11, 2001, when the nation was focused on the response to those events. In addition, the anthrax incidents were unprecedented. The response was coordinated by the Department of Health and Human Services, primarily through CDC, and CDC had never responded simultaneously to multiple disease outbreaks caused by the intentional release of an infectious agent. Furthermore, when the incidents began, CDC did not have a nationwide list of outside experts on anthrax, and it had not yet compiled all of the relevant scientific literature. Consequently, CDC had to do time-consuming research to gather background information about the disease before it could develop and issue guidance. Moreover, since anthrax was virtually unknown in clinical practice, many clinicians did not have a good understanding of how to diagnose and treat it. 
As a result, public health officials at the federal, state, and local levels were basing their health-related actions and recommendations on information that was constantly changing. According to the testimony of CDC’s Associate Director for Science, National Institute for Occupational Safety and Health, before a Subcommittee of this Committee last year, CDC “clearly did not know what we did not know last October and this is the cardinal sin that resulted in tragic deaths.” Effective communications were further complicated by the evolving nature of the incidents and the media’s extensive coverage of the response to anthrax at other localities. Comparing the various actions taken by officials at different points in time and in different locations confused postal employees and the public and caused them to question the consistency and fairness of actions being taken to protect them. For example, when employees at the Brentwood postal facility received doxycycline for prophylaxis instead of ciprofloxacin, they incorrectly concluded that they were receiving an inferior drug. In part, this was because the media had characterized ciprofloxacin as the drug of choice for the prevention of inhalation anthrax. Ciprofloxacin also had been used as the primary medication in earlier responses, including the response to anthrax at the Hart Building. CDC initially recommended ciprofloxacin for several reasons; however, when CDC subsequently determined that the anthrax was equally susceptible to doxycycline and other drugs, it began recommending the use of doxycycline instead. The switch to doxycycline was considered desirable for a variety of reasons, including its (1) lower risk for side effects, (2) lower cost, and (3) greater availability. Local and CDC officials we spoke with told us that they were challenged to explain the switch in medications and to address perceptions of differential treatment. 
Additional misunderstandings arose over the administration of nasal swabs to postal employees. Nasal swabs are samples taken from the nasal passages soon after a possible exposure to contamination to determine the location and extent of exposure at a site, but not to diagnose infection. Nasal swabs were administered to congressional employees on October 15 after the contaminated letter was opened to determine which employees might have been exposed and, based on this, where and how far the aerosolized anthrax spores had spread. Some Brentwood employees questioned why they did not also receive nasal swabs at this time and saw this difference as evidence of disparate medical treatment. As noted, the Postal Service reported requesting nasal swabs for its employees, but CDC and the D.C. Department of Health did not consider them necessary. Nasal swabs were then provided to at least some employees after Brentwood was closed on October 21. However, further confusion appears to have occurred about the purpose of the nasal swabs when employees who were tested did not receive the results of the swabs. The confusion occurred partly because the Postal Service issued a bulletin dated October 11, 2001, that incorrectly indicated that nasal swabs were useful in diagnosing anthrax and the media described nasal swabs as the “test” for anthrax. The bulletin was subsequently corrected, but the media continued to refer to the swabs as a test. Public health officials acknowledged that this confusion about the purpose of the nasal swabs created a great deal of anxiety within the postal community and the public. As a result, public health entities continued to collect the samples when people asked for them, simply to allay the individuals’ fears. Another area of confusion relates to the process used to administer the anthrax vaccine to interested postal employees. 
When the vaccine used by the military became available in sufficient quantities that it could be provided to others, CDC offered it to postal employees and congressional staff. While considered safe, it had not been approved for use in postexposure situations. Consequently, the Food and Drug Administration required CDC to administer the vaccine using extensive protocols related to the distribution of an “investigational new drug.” These protocols required postal employees to complete additional paperwork and undergo additional monitoring, which, according to some Brentwood employees, gave some employees the impression that they were being used as “guinea pigs” for an unsafe treatment. CDC officials acknowledged that CDC did not effectively communicate information about the vaccine program and that, in hindsight, these deficiencies probably resulted in the “wrong perception.” CDC officials have also acknowledged that they were unsuccessful in clearly communicating the degree of uncertainty associated with the health information they were providing, which was evolving during the incidents. For example, although there were internal disagreements within CDC over the appropriate length of prophylaxis, this uncertainty was not effectively conveyed to postal employees and the public. Consequently, in December 2001, when postal employees and others were finishing their 60-day antimicrobial regimen called for in CDC’s initial guidance, they questioned CDC’s advice about the need to consider taking the drugs for an additional 40 days. CDC officials have since acknowledged the need to clearly state when uncertainty exists about the information distributed to the public and to appropriately caveat the agency’s statements. CDC, local public health officials, union representatives, and postal officials told us that employees’ mistrust of postal managers complicated efforts to communicate information to them. 
According to these parties, postal employees were often suspicious of management’s motives and routinely scrutinized information they received for evidence of any ulterior motives. This view appears consistent with the results of our past work, which has identified persistent workplace problems exacerbated by decades of adversarial labor-management problems. These problems were so serious that in 2001, we reported that long-standing and adversarial labor-management relations affected the Postal Service’s management challenges. The need to address this long-standing issue was also raised in the July 2003 report of the President’s Commission on the U.S. Postal Service. According to postal managers, the Postal Service has made additional efforts to communicate with the employees who were at Brentwood, including holding “town hall” meetings to explain the facility’s decontamination process to postal employees and the public. The Postal Service has reported that it is also updating its 1999 guidance for responding to anthrax and other hazardous materials. At present, however, the revision of the guidance has not yet been completed and it is, therefore, unclear whether the revisions will address the issues that occurred at Brentwood. Nevertheless, the Postal Service assisted the National Response Team—a group of 16 federal agencies with responsibility for planning, preparing, and responding to activities related to the release of hazardous substances—in the development of improved guidance entitled Technical Assistance for Anthrax Response. 
This guidance provides a number of recommendations about communicating information during emergency situations, including the need for agencies to “admit when you have made a mistake or do not know the information.” While information on the process and outcome of decontamination efforts is technically complex and therefore challenging to present clearly to the public, the revised guidelines may be helpful in future discussions about the safety of a facility. We have not reviewed the details of the facility’s decontamination or its subsequent testing and, therefore, cannot comment on the effectiveness of decontamination efforts. However, in general, discussions about the success of decontamination and any residual risk to individuals center on two related topics. The first topic entails a discussion of the degree to which contamination has been reduced, bearing in mind that all sampling and analytical methods have a limit of detection below which spores may be present but undetected. Against that backdrop, it is also important to discuss how many anthrax spores are required to infect humans and to explain that the number is variable, depending upon the route of infection (e.g., skin contact or inhalation) and the susceptibility of each individual to infection. In light of this, it is particularly important to properly communicate to Brentwood employees a clear understanding of the decontamination approach that was undertaken at the facility and the nature and extent of any residual risk there. Likewise, the Postal Service’s communications to employees must be clear and unbiased to (1) clearly communicate the limitations of testing and the associated risks while, at the same time, (2) avoid inducing unnecessary fear or concern. If provided with clear and unbiased information, employees will be able to make informed decisions about their health and future employment. 
In this regard, the Postal Service has given employees who worked at Brentwood an opportunity to be reassigned to certain other mail processing centers in the region if they do not want to return to Brentwood. In our view, providing complete information to employees is important for them to make informed decisions about working at Brentwood. According to recent information that the Postal Service provided to its employees, the facility, which public health authorities have certified as safe for occupancy, is “100 percent free of anthrax contamination” and there is “no remaining health risk” at the facility. This latter information is not consistent with what CDC’s Associate Director for Science, National Institute for Occupational Safety and Health, told this Committee’s Subcommittee on the District of Columbia in July 2002. Specifically, she said that while a science-based process can allow workers to safely return to Brentwood, it is not possible to eliminate risk entirely or to guarantee that a building is absolutely free of risk. We discussed our concerns with Postal Service officials about their characterization of the facility as completely free of anthrax contamination, and they agreed to revise their statements to indicate that it is not possible to guarantee that a building is absolutely risk free. According to the Postal Service, a misunderstanding resulted in the incorrect information being distributed to employees before the document had been fully reviewed. The Postal Service said that it would correct the information and distribute the new information to employees who worked at Brentwood within the next 2 weeks. The Postal Service, CDC, and others have learned a great deal from the 2001 anthrax incidents and have taken various steps to address the problems that occurred and to enhance their preparedness for any future incidents. 
Among the lessons learned are that the risk to employees of contracting anthrax through contaminated mail is greater than was previously believed and more caution is needed to respond to that greater risk. It is now clear, for example, that anthrax spores can be released in the air, or aerosolized, when sealed letters pass through the Postal Service’s processing equipment and that a limited number of anthrax spores can cause inhalation anthrax in susceptible individuals. This increased risk of contracting inhalation anthrax indicates that decisions about closing facilities need to consider other factors as well as the presence of an observable substance, such as a powder. The Postal Service and CDC have responded to this need for greater caution by developing guidance for closing a facility that establishes evidence of aerosolization, as well as confirmation of a diagnosis of inhalation anthrax, as a criterion for closure. We have not yet evaluated this guidance to determine whether it is specific enough to make clear the circumstances under which a postal facility should be closed to adequately protect employees and the public. We recognize that developing such guidance is difficult, given that the Postal Service experiences many hoaxes and needs to accomplish its mission as well as ensure adequate protection of its employees’ health. Another important lesson learned during the 2001 anthrax incidents is that clear and accurate communication is critical to managing the response to an incident. Because the risk information that was provided to employees changed over time and some of the information was communicated in ways that employees reportedly found confusing or difficult to understand, the fears that would naturally accompany a bioterrorism incident were intensified and distrust of management, which already existed in the workplace, was exacerbated. 
CDC, in particular, has recognized the importance of communicating the uncertainty associated with scientific information to preserve credibility in the event that new findings change what was previously understood. In this regard, our work on the sampling and analytical methodologies used to test for and identify anthrax contamination addresses the uncertainty involved in these efforts. The Postal Service agrees that although the Brentwood facility has been tested and certified as safe for occupancy, the Postal Service cannot assert that the building is 100 percent free of anthrax contamination. Accordingly, the Postal Service stated that it would inform Brentwood employees before opening the facility that the Postal Service cannot guarantee that the building is absolutely risk free. This concludes my prepared statement. I will be happy to respond to any questions you or other members of the Committee may have. Should you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or Keith Rhodes at (202) 512-6412. I can also be reached by e-mail at [email protected]. Individuals making key contributions to this testimony were Don Allison, Hazel Bailey, Jeannie Bryant, Derrick Collins, Dwayne Curry, Elizabeth Eisenstadt, and Kathleen Turner. Drs. Jack Melling and Sushil Sharma provided technical expertise. Events Occurring on That Date Terrorist attacks on the World Trade Center and Pentagon prompt heightened concerns about possible bioterrorism. In Florida, an American Media Inc. (AMI) employee is admitted to the hospital with a respiratory condition. The Centers for Disease Control and Prevention (CDC) issues an alert about bioterrorism, providing information about preventive measures for anthrax. CDC and the Florida Department of Health announce that AMI employee has inhalation anthrax. AMI employee dies of inhalation anthrax. 
The Postmaster General announces that Postal Inspection Service is working with other law enforcement agencies on the Florida incident. The Postal Service begins nationwide employee education on signs of anthrax exposure and procedures for handling mail to avoid anthrax infection. In NY, the New York City Department of Health (NYCDOH) announces the confirmation of a case of cutaneous anthrax in an NBC employee. The Postal Service says that it will offer gloves and masks to all employees who handle mail. (On or about) Daschle letter passes through Brentwood. Boca Raton post office, which had direct access to the AMI mail, is tested for anthrax and Palm Beach County Department of Health administers nasal swabs and offers a 15-day supply of ciprofloxacin to postal employees. On Capitol Hill, an employee opens a letter addressed to Senator Daschle. Staff in that office, an adjacent office, and first responders are given nasal swabs and a 3-day supply of antibiotics. In NJ, State Department of Health and Senior Services (NJDHSS) assures Trenton employees that they have a low risk of contracting anthrax. Anthrax is confirmed at Boca Raton post office. Part of the Hart Senate Office Building is closed in the morning, and the remainder of the building is closed in the evening. Over the next 3 days, all Hart building and other Capitol Hill employees who request them are given nasal swabs and a 3-day supply of antibiotics. The Postal Service arranges for environmental testing at Brentwood. A local hazardous materials response team conducts “quick tests” of Brentwood, which are negative for anthrax. A contractor conducts more extensive testing in the evening. Postmaster General Potter holds a press conference at Brentwood, in part to reassure employees they are at low risk. CDC confirms cutaneous anthrax in New Jersey postal employee, and a second suspected case is identified. In NJ, the Trenton facility is closed. Employees are sent home. 
In NY, NYCDOH announces another case of cutaneous anthrax, in a CBS employee. In Florida, the Postal Service cleans two postal facilities contaminated with anthrax spores. CDC distributes a press release announcing that the Food and Drug Administration has approved doxycycline for postexposure prophylaxis for anthrax. In DC, a postal employee who works at the Brentwood facility seeks medical attention. In NJ, the NJDHSS refers postal employees to their private physicians for medical treatment. Employees begin seeking treatment at a local hospital. In DC, a postal employee who works at Brentwood is admitted to a hospital with suspected inhalation anthrax. In NJ, laboratory testing confirms cutaneous anthrax in a second postal employee who works at the Trenton postal facility. In DC, another postal employee who works at the Brentwood facility is admitted to a hospital with a respiratory condition. CDC arrives at the Brentwood facility to meet with Postal Service management. In DC, the postal employee who was admitted to the hospital on 10/19/01 is confirmed to have inhalation anthrax. In DC, Brentwood is closed. Evaluation and prophylaxis of employees begin. In DC, a Brentwood employee who had initially sought medical attention on 10/18/01 is admitted to a hospital with suspected inhalation anthrax and becomes the first postal employee (and second anthrax victim) to die. In DC, another postal employee who worked at the Brentwood facility seeks medical attention at a hospital. His chest X-ray is initially determined to be normal, and he is discharged. In DC, the postal employee who worked at the Brentwood facility and who sought medical attention on 10/21/01 and was discharged is readmitted to the hospital with suspected inhalation anthrax, and becomes the second postal employee (and third anthrax victim) to die. In DC, prophylaxis is expanded to include all employees and visitors to nonpublic areas at the Brentwood facility. 
The postal employee who was admitted to the hospital on October 20 is confirmed to have inhalation anthrax. The Postal Service learns that environmental tests of Brentwood are positive for anthrax. In NJ, a postal employee at Trenton is confirmed to have inhalation anthrax. In NY, preliminary tests indicate anthrax in a hospital employee who was admitted with suspected inhalation anthrax on 10/28/01. The hospital where she works is temporarily closed, and NYCDOH recommends prophylaxis for hospital employees and visitors. In NJ, laboratory testing confirms cutaneous anthrax in a woman who receives mail directly from the Trenton facility. The woman originally sought medical attention on 10/18/01 and was admitted to the hospital on 10/22/01 for a skin condition. In NJ, laboratory testing confirms a second case of inhalation anthrax, in a Trenton postal employee who initially sought medical attention on 10/16/01 and was admitted to the hospital on 10/18/01 with a respiratory condition. In NY, the hospital employee becomes the fourth anthrax victim to die. In NY, NYCDOH announces another case of cutaneous anthrax, in a New York Post employee. In Connecticut, an elderly woman, who was admitted to the hospital for dehydration on 11/16/01, becomes the fifth anthrax victim to die. The Connecticut Department of Public Health, in consultation with CDC, begins prophylaxis for postal employees working in the Wallingford postal facility. CDC offers the anthrax vaccine to postal employees. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | On October 21, 2001, the U.S. 
Postal Service closed its Brentwood mail processing facility after the Centers for Disease Control and Prevention (CDC) confirmed that an employee there had contracted inhalation anthrax, an often-fatal form of the disease. On October 21 and 22, two other Brentwood employees died of inhalation anthrax. The contamination was linked to a letter that passed through the facility on or about October 12, before being opened in the office of Senator Daschle in the Hart Senate Office Building on October 15. The Hart Building was closed the next day. The Brentwood facility has since been decontaminated and will soon reopen. This testimony, which is based on ongoing work, provides GAO's preliminary observations on the decisions made in closing the facility and problems experienced in communicating with employees, as well as lessons learned from the experience. The Postal Service's decision to wait to close the Brentwood facility and refer employees for medical treatment until CDC confirmed that a postal employee had contracted inhalation anthrax was consistent with the advice the Postal Service received from public health advisers and the information about health risk available at the time. However, because circumstances differed at Brentwood and the Hart Building--an observed spill at the Hart Building and no observable incident at Brentwood--the Postal Service's response differed from the response at Capitol Hill, leading some Brentwood employees to question whether the Postal Service was taking adequate steps to protect their health. The Postal Service communicated information to its Brentwood employees during the anthrax incident, but some of the health risk information changed over time, exacerbating employees' concerns about the measures being taken to protect them. Notably, employees later learned that their risk of contracting the disease was greater than originally stated. 
Other factors, including difficulties in communicating the uncertainty associated with health recommendations and employees' distrust of postal managers, also challenged efforts to communicate effectively. Recently, the Postal Service informed employees that Brentwood, which has been tested and certified as safe for occupancy, is "100 percent free of anthrax contamination." However, in discussions with GAO, the Service agreed to revise future communications to acknowledge that although any remaining risk at the facility is likely to be low, complete freedom from risk cannot be guaranteed. The Postal Service and others have learned since the 2001 anthrax incidents that (1) the risk of contracting anthrax through the mail is greater than was previously believed and more caution is needed to respond to that greater risk and (2) clear, accurate communication is critical to managing the response to an incident and its aftermath. The Postal Service is revising its guidance to respond more quickly and to communicate more effectively to employees and the public in the event of a future incident. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellites (POES), managed by the National Oceanic and Atmospheric Administration (NOAA) and the Defense Meteorological Satellite Program (DMSP), managed by the Department of Defense (DOD). The satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products and are the predominant input to numerical weather prediction models. These images, products, and models are all used by weather forecasters, the military, and the public. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies, such as climate monitoring. Unlike geostationary satellites, which maintain a fixed position above the earth, polar-orbiting satellites constantly circle the earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the earth rotates beneath it, each satellite views the entire earth’s surface twice a day. Currently, there are two operational POES satellites and two operational DMSP satellites that are positioned so that they can observe the earth in early morning, mid morning, and early afternoon polar orbits. Together, they ensure that, for any region of the earth, the data provided to users are generally no more than 6 hours old. Figure 1 illustrates the current operational polar satellite configuration. Besides the four operational satellites, six older satellites are in orbit that still collect some data and are available to provide some limited backup to the operational satellites should they degrade or fail. 
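The coverage figures above can be cross-checked with some back-of-the-envelope arithmetic. Only the "about 14 orbits a day" figure comes from the text; everything derived below is an illustrative approximation, not an official orbital parameter:

```python
# Rough check of the polar-satellite coverage figures. Only the
# 14-orbits-per-day figure is taken from the text; the derived values
# are approximations for illustration.

MINUTES_PER_DAY = 24 * 60

def orbital_period_minutes(orbits_per_day):
    """Period of a satellite completing the given number of orbits per day."""
    return MINUTES_PER_DAY / orbits_per_day

period = orbital_period_minutes(14)  # roughly 103 minutes per orbit

# One polar satellite views any given point about twice a day (one
# ascending and one descending pass), so a single satellite's data for a
# region can be up to roughly 12 hours old; a constellation of satellites
# in staggered local-time orbits brings that down toward the ~6-hour
# maximum data age the text cites.
single_satellite_revisit_hours = 24 / 2
```

The ~6-hour figure in the text is thus a property of the constellation as a whole, not of any single satellite.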
In the future, both NOAA and DOD plan to continue to launch additional POES and DMSP satellites every few years, with final launches scheduled for 2007 and 2011, respectively. Each of the polar satellites carries a suite of sensors designed to detect environmental data that are either reflected or emitted from the earth, the atmosphere, and space. The satellites store these data and then transmit them to NOAA and Air Force ground stations when the satellites pass overhead. The ground stations then relay the data via communications satellites to the appropriate meteorological centers for processing. The satellites also broadcast a subset of these data in real time to tactical receivers all over the world. Under a shared processing agreement among four satellite data processing centers—NOAA’s National Environmental Satellite Data and Information Service (NESDIS), the Air Force Weather Agency, the Navy’s Fleet Numerical Meteorology and Oceanography Center, and the Naval Oceanographic Office—different centers are responsible for producing and distributing, via a shared network, different environmental data sets, specialized weather and oceanographic products, and weather prediction model outputs. Each of the four processing centers is also responsible for distributing the data to its respective users. For the DOD centers, the users include regional meteorology and oceanography centers, as well as meteorology and oceanography staff on military bases. NESDIS forwards the data to NOAA’s National Weather Service for distribution and use by government and commercial forecasters. The processing centers also use the Internet to distribute data to the general public. NESDIS is responsible for the long-term archiving of data and derived products from POES and DMSP. 
In addition to the infrastructure supporting satellite data processing noted above, properly equipped field terminals that are within a direct line of sight of the satellites can receive real-time data directly from the polar- orbiting satellites. There are an estimated 150 such field terminals operated by U.S. and foreign governments and academia. Field terminals can be taken into areas with little or no data communications infrastructure—such as on a battlefield or a ship—and enable the receipt of weather data directly from the polar-orbiting satellites. These terminals have their own software and processing capability to decode and display a subset of the satellite data to the user. Figure 2 depicts a generic data relay pattern from the polar-orbiting satellites to the data processing centers and field terminals. Given the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program, NPOESS, is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2020. To manage this program, DOD, NOAA, and the National Aeronautics and Space Administration (NASA) formed a tri-agency Integrated Program Office, located within NOAA. Within the program office, each agency has the lead on certain activities. NOAA has overall program management responsibility for the converged system and for satellite operations; DOD has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the costs of funding NPOESS, while NASA funds specific technology projects and studies. 
Figure 3 depicts the organizations comprising the Integrated Program Office and lists their responsibilities. Program acquisition plans call for the procurement and launch of six NPOESS satellites over the life of the program, as well as the integration of 13 instruments, consisting of 10 environmental sensors and 3 subsystems. Together, the sensors are to receive and transmit data on atmospheric, cloud cover, environmental, climate, oceanographic, and solar-geophysical observations. The subsystems are to support nonenvironmental search and rescue efforts, sensor survivability, and environmental data collection activities. According to the program office, 7 of the 13 planned NPOESS instruments involve new technology development, whereas 6 others are based on existing technologies. In addition, the program office considers 4 of the sensors involving new technologies critical, because they provide data for key weather products; these sensors are shown in bold in table 1, which lists the planned instruments and the state of technology on each. In addition to the sensors and subsystems listed above, in August 2004, the President directed NASA and the Departments of Defense, the Interior, and Commerce to place a LANDSAT-like imagery capability on the NPOESS platform. This new capability is to collect imagery data of the earth’s surface similar to the current LANDSAT series of satellites, which are managed by the Department of the Interior’s U.S. Geological Survey and are reaching the end of their respective lifespans. One of these satellites was launched in 1984 and is now long past its 3-year design life; the newer satellite is not fully operational. LANDSAT is an important tool in environmental monitoring efforts, including land cover change, vegetation mapping, and wildfire effects. The decision to add a LANDSAT-like sensor to the NPOESS platform is currently being revisited by the President’s Office of Science and Technology Policy and the Office of Management and Budget. 
In addition, the NPOESS Preparatory Project (NPP), which is being developed as a major risk reduction and climate data continuity initiative, is a planned demonstration satellite to be launched several years before the first NPOESS satellite is to be launched. It is planned to host three of the four critical NPOESS sensors (the visible/infrared imager radiometer suite, the cross-track infrared sounder, and the advanced technology microwave sounder), as well as a noncritical sensor (the ozone mapper/profiler suite). NPP will provide the program office and the processing centers an early opportunity to work with the sensors, ground control, and data processing systems. Specifically, this satellite is expected to demonstrate the validity of about half of the NPOESS environmental data records and about 93 percent of its data processing load. NPOESS is a major system acquisition that consists of three key phases: the concept and technology development phase, which lasted from roughly 1995 to early 1997; the program definition and risk reduction phase which began in early 1997 and ended in August 2002; and the engineering and manufacturing development and production phase, which began with the award of the development and production contract in August 2002 and will continue through the end of the program. Before the contract was awarded in 2002, the life cycle cost estimate for the program was estimated to be $6.5 billion over the 24-year period from the inception of the program in 1995 through 2018. Shortly after the contract was awarded, the life cycle cost estimate grew to $7 billion. When the NPOESS development contract was awarded, program officials identified an anticipated schedule and funding stream for the program. The schedule for launching the satellites was driven by a requirement that the satellites be available to back up the final POES and DMSP satellites should anything go wrong during the planned launches of these satellites. 
In general, program officials anticipate that roughly 1 out of every 10 satellites will fail either during launch or during early operations after launch. Early program milestones included (1) launching NPP by May 2006, (2) having the first NPOESS satellite available to back up the final POES satellite launch in March 2008, and (3) having the second NPOESS satellite available to back up the final DMSP satellite launch in October 2009. If the NPOESS satellites were not needed to back up the final predecessor satellites, their anticipated launch dates would have been April 2009 and June 2011, respectively. In 2003, we reported that these schedules were subsequently changed as a result of changes in the NPOESS funding stream. A DOD program official reported that between 2001 and 2002 the agency experienced delays in launching a DMSP satellite, causing delays in the expected launch date of another satellite. In late 2002, DOD shifted the expected launch date for the final satellite from 2009 to 2010. As a result, the department reduced funding for NPOESS by about $65 million between fiscal years 2004 and 2007. According to program officials, because NOAA is required to provide the same level of funding that DOD provides, this change triggered a corresponding reduction in funding by NOAA for those years. As a result of the reduced funding, program officials were forced to make difficult decisions about what to focus on first. The program office decided to keep NPP as close to its original schedule as possible, because of its importance to the eventual NPOESS development, and to shift some of the NPOESS deliverables to later years. This shift affected the NPOESS deployment schedule. To plan for this shift, the program office developed a new program cost and schedule baseline. 
After this new baseline was completed in 2004, we reported that the program office increased the NPOESS cost estimate from about $7 billion to $8.1 billion, and delayed key milestones, including the planned launch of the first NPOESS satellite—which was delayed by 7 months. The cost increases reflected changes to the NPOESS contract as well as increased program management funds. According to the program office, contract changes included extension of the development schedule, increased sensor costs, and additional funds needed for mitigating risks. Increased program management funds were added for non-contract costs and management reserves. We also noted that other factors could further affect the revised cost and schedule estimates. Specifically, the contractor was not meeting expected cost and schedule targets of the new baseline because of technical issues in the development of key sensors. Based on its performance through May 2004, we estimated that the contractor would most likely overrun its contract at completion in September 2011 by $500 million. In addition, we reported that risks associated with the development of the critical sensors, integrated data processing system, and algorithms, among other things, could contribute to further cost increases and schedule slips. Over the past year, NPOESS cost increases and schedule delays have demonstrated worsening trends. NPOESS has continued to experience problems in the development of a key sensor, resulting in schedule delays and anticipated cost increases. Further, contractor data show that costs and schedules are likely to continue to increase in the future. Our trend analysis shows that the contractor will most likely overrun costs by $1.4 billion, resulting in a life cycle cost of about $9.7 billion, unless critical changes are made. Program risks, particularly with the development of critical sensors, could further increase NPOESS costs and delay schedules. 
Management problems at multiple levels—subcontractor, contractor, program office, and executive leadership—have contributed to these cost and schedule issues. NPOESS has continued to experience problems in the development of a key sensor, resulting in schedule delays and anticipated cost increases. In early 2005, the program office learned that a subcontractor could not meet cost and schedule due to significant technical issues on the visible/infrared imager radiometer suite (VIIRS) sensor—including problems with the cryoradiator, excessive vibration of sensor parts, and errors in the sensor’s solar calibration. These technical problems were further complicated by inadequate process engineering and management oversight by the VIIRS subcontractor. To address these issues, the program office provided additional funds for VIIRS, capped development funding for the conical-scanned microwave imager/sounder (CMIS) and the ozone mapper/profiler suite sensors, and revised its schedule in order to keep the program moving forward. By the summer of 2005, the program office reported that significant technical issues had been resolved—but they had a significant impact on the overall NPOESS program. Regarding NPOESS schedule, the program office anticipated at least a 10-month delay in the launch of the first satellite (totaling at least a 17-month delay from the time the contract was awarded) and a 6-month delay in the launch of the second satellite. A summary of recent schedule changes is shown in table 2. The effect of these delays is evident in the widening gap between when the last POES satellite is expected to launch and when the first NPOESS satellite could be available if needed as a backup. This is significant because if the last POES satellite fails on launch, it will be at least 3 years before the first NPOESS satellite could be launched. 
During that time, critical weather and environmental observations would be unavailable—and military and civilian weather products and forecasts would be significantly degraded. As for NPOESS costs, program officials reported that the VIIRS development problems caused the program to overrun its budget, and that they need to reassess options for funding the program. They did not provide an updated cost estimate, noting that new cost estimates are under development. A summary of recent program cost growth is shown in table 3. In addition to the overall program office cost and schedule estimates, it is valuable to assess contractor data to monitor the contractor’s progress in meeting deliverables since contractor costs comprise a substantial portion of the overall program costs. NPOESS contractor data show a pattern of cost and schedule overruns—and a most likely contract cost growth of about $1.4 billion. One method project managers use to track contractor progress on deliverables is earned value management. This method, used by DOD for several decades, compares the value of work accomplished during a given period with that of the work expected in that period. Differences from expectations are measured in both cost and schedule variances. Cost variances compare the earned value of the completed work with the actual cost of the work performed. For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a –$1.7 million cost variance. Schedule variances are also measured in dollars, but they compare the earned value of the work completed to the value of work that was expected to be completed. For example, if a contractor completed $5 million worth of work at the end of the month, but was budgeted to complete $10 million worth of work, there would be a –$5 million schedule variance. Positive variances indicate that activities are costing less or are completed ahead of schedule. 
Negative variances indicate that activities are costing more or are falling behind schedule. These cost and schedule variances can then be used in estimating the cost and time needed to complete the program. Using contractor-provided data, our analysis indicates that NPOESS cost performance continues to experience negative variances. Figure 4 shows the 6-month cumulative cost variance for the NPOESS contract. From March 2005 to September 2005, the contractor exceeded its cost target by $103.7 million, which is about 9 percent of the contractor’s budget for that time period. The contractor has incurred a total cost overrun of $253.8 million with NPOESS development only about 36 percent complete. This information is useful because trends often tend to continue and can be difficult to reverse unless management attention is focused on key risk areas and risk mitigation actions are aggressively pursued. Studies have shown that, once programs are 15 percent complete, the performance indicators are indicative of the final outcome. Based on contractor performance from March 2005 to September 2005, we estimate that the current NPOESS contract will overrun its budget—worth approximately $3.4 billion—by between $788 million and $2 billion. Our projection of the most likely cost overrun is about $1.4 billion. The contractor, in contrast, estimates about a $371 million overrun at completion of the NPOESS contract. Adding our projected $1.4 billion overrun to the prior $8.1 billion life cycle cost estimate and the project office’s estimated need for $225 million in additional management costs brings the total life cycle cost of the program to about $9.7 billion. Our analysis also indicates that the contract is showing a negative schedule variance. Figure 5 shows the 6-month cumulative schedule variance of NPOESS. From March 2005 to September 2005, the contractor was unable to complete $27.8 million worth of scheduled work. 
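The variance arithmetic described above can be sketched in a few lines. The formulas are standard earned value management; the dollar figures below are the testimony's own worked examples, not actual NPOESS contract data, and the CPI-based projection is one common method, not necessarily the exact one GAO used:

```python
# Standard earned value management (EVM) formulas, applied to the worked
# examples in the text. EV = earned value of completed work, AC = actual
# cost of work performed, PV = planned (budgeted) value of work scheduled.

def cost_variance(ev, ac):
    """CV = EV - AC; negative means completed work cost more than budgeted."""
    return ev - ac

def schedule_variance(ev, pv):
    """SV = EV - PV (measured in dollars); negative means work is behind schedule."""
    return ev - pv

def estimate_at_completion(bac, ev, ac):
    """A common CPI-based projection: EAC = BAC / CPI, where CPI = EV / AC."""
    return bac / (ev / ac)

# The testimony's examples, in $ millions:
cv = cost_variance(5.0, 6.7)       # about -1.7: the work cost $1.7M more than earned
sv = schedule_variance(5.0, 10.0)  # -5.0: $5M of planned work was not completed
```

Projections such as GAO's $1.4 billion most-likely overrun are typically built from indices like these (for example, dividing the remaining budget by the cost performance index), which is why sustained negative variances at 36 percent completion are treated as predictive.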
In September, the contractor was able to improve its overall schedule performance because of an unexpectedly large amount of work being completed on the spacecraft (as opposed to the sensors). It was not a reflection of an improvement in the contractor’s ability to complete work on the critical sensors. Specifically, performance on the development of critical sensors over the past 6 months continued to be poor, which indicates that schedule performance will likely remain poor in the future. This is of concern because an inability to meet contract schedule performance could be a predictor of future rising costs, as more spending is often necessary to resolve schedule overruns. Risk management is a leading management practice that is widely recognized as a key component of a sound system development approach. An effective risk management approach typically includes identifying, prioritizing, resolving, and monitoring project risks. Program officials reported that they recognize several risks with the overall program and critical sensors that, if not mitigated, could further increase costs and delay the schedule. In accordance with leading management practices, the program office developed a NPOESS risk management program that requires assigning a severity rating to risks that bear particular attention, placing these risks in a database, planning response strategies for each risk in the database, and reviewing and evaluating risks in the database during monthly program risk management board meetings. The program office identifies risks in two categories: program risks, which affect the whole NPOESS program and are managed at the program office level, and segment risks, which affect only individual segments and are managed at the integrated product team level. The program office has identified 17 program risks, including 10 medium to medium-high risks. 
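The risk-tracking scheme the program office describes (severity ratings, a risk database, response strategies, and program- versus segment-level ownership with monthly board review) can be sketched as a simple data structure. All names, ratings, and entries below are illustrative assumptions, not the program office's actual schema:

```python
# Illustrative sketch of a risk register like the one the NPOESS program
# office describes. The Severity/Scope values and example entries are
# hypothetical, chosen to mirror the testimony's descriptions.

from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    MEDIUM_HIGH = 3
    HIGH = 4

class Scope(Enum):
    PROGRAM = "program"   # affects the whole program; managed by the program office
    SEGMENT = "segment"   # affects one segment; managed by an integrated product team

@dataclass
class Risk:
    name: str
    scope: Scope
    severity: Severity
    response_strategy: str = ""

def risks_for_board_review(register, min_severity=Severity.MEDIUM):
    """Risks a monthly risk management board would focus on."""
    return [r for r in register if r.severity.value >= min_severity.value]

# Example entries drawn from risks named in the testimony:
register = [
    Risk("VIIRS sensor delivery", Scope.PROGRAM, Severity.MEDIUM_HIGH,
         "Added funding; revised schedule"),
    Risk("CMIS sensor delivery", Scope.PROGRAM, Severity.MEDIUM_HIGH),
    Risk("Algorithm performance", Scope.PROGRAM, Severity.MEDIUM),
]
```

The two-tier scope distinction matters operationally: segment risks stay with the integrated product teams, while anything rated medium or above at program scope gets planned response strategies and monthly board review.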
Some of these risks include the delivery of four sensors (VIIRS, CMIS, the cross-track infrared sounder and the ozone mapper/profiler suite) and the integrated data processing system; and the uncertainty that algorithms will meet system performance requirements. Figure 6 identifies the 17 program risks and their assigned levels of risk. Managing the risks associated with the development of VIIRS, the ozone mapper/profiler suite, the cross-track infrared sounder, the integrated data processing system, and algorithm performance is of particular importance because these are to be demonstrated on the NPP satellite that is currently scheduled for launch in April 2008. The risks with the development of CMIS are also important because CMIS is one of the four critical sensors providing data for key weather products. At present, the program office considers two critical sensors—VIIRS and CMIS—to present key program risks because of technical challenges that each is facing. In addition to the previously reported VIIRS problems, the sensor continues to experience significant problems dealing with the technical complexity of the ground support equipment. The testing of optical and solar diffuser components has also been more challenging than expected and is taking longer than planned to complete. In addition, the delivery of components for integration onto the sensor, including the electronics material from two subcontractors, has been behind schedule due to technical challenges. Until the current technical issues are resolved, delays in the VIIRS delivery and integration onto the NPP satellite remain a potential threat to the expected launch date of the NPP. The CMIS sensor is experiencing schedule overruns that may threaten its expected delivery date. Based on the prime contractor’s analysis, late deliveries of major CMIS subsystems will occur unless the current schedule is extended. 
For example, the simulator hardware is already expected to be delivered late, based on the current contractual requirement of December 2006. CMIS also continues to experience technical challenges in the design of the radio frequency receivers, the structure, and the antenna. In addition, extensive effort has been expended to resolve system reliability and thermal issues, among other things. To the program office’s credit, it is aware of these risks and is using its risk management plans to help mitigate them. Problems involving multiple levels of management—including subcontractor, contractor, program office, and executive leadership—have played a role in bringing the NPOESS program to its current state. As noted earlier, VIIRS sensor development issues were attributed, in part, to the subcontractor’s inadequate project management. Specifically, after a series of technical problems, internal review teams sent by the prime contractor and the program office found that the VIIRS subcontractor had deviated from a number of contract, management, and policy directives set out by the main office and that both management and process engineering were inadequate. Neither the contractor nor the program office recognized the underlying problems in time to fix them. After these issues were identified, the subcontractor’s management team was replaced. Further, in January 2005, the NPOESS Executive Committee (Excom) called for an independent review of the VIIRS problems. This independent review, delivered in August 2005, reported that the program management office did not have the technical system engineering support it needed to effectively manage the contractor, among other things. Additionally, the involvement of NPOESS executive leadership has wavered from frequent heavy involvement to occasional meetings with few resulting decisions. Specifically, the Excom has met five times over the last 2 years. 
Most of these meetings did not result in major decisions, but rather triggered further analysis and review. For instance, program officials and the program’s Tri-agency Steering Committee identified five options to present at the executive committee meeting in mid-August 2005 and expected to receive direction on how to proceed with the project. The Excom did not select an option. Instead, it requested further analysis of the options by another independent review team, and an independent cost estimate by DOD’s Cost Analysis Improvement Group. Sound management is critical to program success. In our reviews of major acquisitions throughout the government, we have reported that sound program management, contractor oversight, risk identification and escalation, and effective and timely executive level oversight are key factors determining a project’s ability to be delivered on time, within budget, and with promised functionality. Given the history of large cost increases and the factors that could further affect NPOESS costs and schedules, continued oversight, strong leadership, and timely decision making are more critical than ever. In August 2005, the program office briefed its Executive Committee on the program’s cost, schedule, and risks. The program office noted that the budget for the program was no longer executable and offered multiple alternatives for reconfiguring the program. Specifically, the program office and contractor developed 26 options during the March to August 2005 timeframe. Of these options, the Tri-agency Steering Committee selected five options, shown in table 4. All of these options alter the costs, schedules, and deliverables for the program. While the options’ preliminary life cycle cost estimates range from $8.8 billion to $9.2 billion, they all involve reductions in functionality and limited probabilities for meeting schedules within the cited budgets. 
None of the options presented discussed the potential for adding funding in the short term to hold off longer-term life cycle cost increases. Project officials anticipated that at its August meeting, the Excom would decide on an option and provide directions for keeping the project moving. However, Excom officials requested further analysis and detailed cost estimates, and they deferred a decision among alternatives until December 2005. Last week, we learned that in addition to the five options presented in August 2005, program executives are considering nine new options. While we were not provided any details about the nine new options, program officials informed us that they too will affect NPOESS costs, schedule, and promised functionality for system users—although their full impact is not yet clear. Program officials expect the Excom to decide on a limited number of options on November 22, 2005, and to obtain independent cost estimates of those options and make a decision to implement one of the options in December 2005. After a decision is made, the prime contractor will need time to develop more precise cost estimates and the program office will need to renegotiate the contract. Until a decision is made, the program remains without a plan for moving forward. Further, there are opportunity costs in not making a decision—that is, some options may no longer be viable, contractors are not working towards a chosen solution, and other potential options become more difficult to implement. Clearly, timely decisions are needed to allow the program to move forward and for satellite data users to start planning for any data shortfalls they may experience. Until a decision is made on how the program is to proceed, the contractor and program office cannot start to implement the chosen solution and some decisions, such as the ability to hold schedule slips to a minimum, become much more difficult. In summary, NPOESS is a program in crisis.
Over the last few years, it has been troubled by technical problems, cost increases, and schedule delays. Looking forward, technical challenges persist; costs are likely to grow; and schedule delays could lead to gaps in satellite coverage. Program officials and executives are considering various options for dropping functionality in order to handle cost and schedule increases, but the full impact of these options is not clear. Moving forward, continued oversight, strong leadership, and informed and timely decision making are more critical than ever. This concludes my statement. I would be pleased to respond to any questions that you or other members of the Committee may have at this time. If you have any questions regarding this testimony, please contact David Powner at (202) 512-9286 or by email at [email protected]. Individuals making contributions to this testimony include Carol Cha, Neil Doherty, Joanne Fiorino, Kathleen S. Lovett, Colleen Phillips, and Karen Richey. Our objectives were to (1) discuss the National Polar-orbiting Operational Environmental Satellite System (NPOESS) program’s schedule, cost, trends, and risks and (2) describe plans and implications for moving the program forward. To accomplish these objectives, we focused our review on the Integrated Program Office, the organization responsible for the overall NPOESS program. We also met with officials from the Department of Defense, the National Aeronautics and Space Administration, and NOAA’s National Weather Service and National Environmental Satellite Data and Information Service to discuss user needs for the program. To identify schedule and cost changes, we reviewed program office contract data, the Executive Committee minutes and briefings, and an independent review team study, and we interviewed program officials. 
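Cost and schedule trends drawn from contractor performance data of this kind are typically quantified with standard earned value formulas, which yield a range of estimates at completion; a minimal sketch in Python, using illustrative figures rather than actual program data:

```python
# Earned value analysis sketch; the figures are illustrative, not NPOESS data.
# PV: budgeted cost of work scheduled; EV: budgeted cost of work performed;
# AC: actual cost of work performed; BAC: total budget at completion.
def eac_estimates(pv, ev, ac, bac):
    cpi = ev / ac                        # cost performance index
    spi = ev / pv                        # schedule performance index
    remaining = bac - ev
    low = ac + remaining                 # assumes remaining work goes as budgeted
    likely = ac + remaining / cpi        # assumes current cost efficiency persists
    high = ac + remaining / (cpi * spi)  # cost and schedule trends both persist
    return low, likely, high

low, likely, high = eac_estimates(pv=1000.0, ev=900.0, ac=1100.0, bac=3400.0)
```

With a cost performance index below 1, the three formulas bracket the projected overrun, and the CPI-based middle value is conventionally treated as the most likely outcome.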
We compared changes in NPOESS cost and schedule estimates to prior cost and schedule estimates as reported in our July 2002 and July 2003 testimonies and in our September 2004 report. To identify trends that could affect the program baseline in the future, we assessed the prime contractor’s cost and schedule performance. To make these assessments, we applied earned value analysis techniques to data from contractor cost performance reports. We compared the cost of work completed with the budgeted costs for scheduled work for a 6-month period, from March to September 2005, to show trends in cost and schedule performance. We also used data from the reports to estimate the likely costs at the completion of the prime contract through established earned value formulas. This resulted in three different values, with the middle value being the most likely. We used the base contract without options for our earned value assessments. To identify risks, we reviewed program risk management documents and interviewed program officials. Further, we evaluated earned value cost reports to determine the key risks that negatively affect NPOESS’s ability to maintain the current schedule and cost estimates. To assess options and implications for moving the program forward, we reviewed the five options presented at the Executive Committee briefing and met with representatives of the National Weather Service and National Environmental Satellite Data and Information Service to obtain their views on users’ needs and priorities for satellite data. NOAA officials generally agreed with the facts presented in this statement and provided some technical corrections, which we have incorporated. We performed our work at the Integrated Program Office, DOD, NASA, and NOAA in the Washington, D.C., metropolitan area, between June 2005 and November 2005, in accordance with generally accepted government auditing standards. This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Polar-orbiting environmental satellites provide data and imagery that are used by weather forecasters, climatologists, and the military to map and monitor changes in weather, climate, the oceans, and the environment. Our nation's current operational polar-orbiting environmental satellite program is a complex infrastructure that includes two satellite systems, supporting ground stations, and four central data processing centers. In the future, the National Polar-orbiting Operational Environmental Satellite System (NPOESS) is to combine the two current systems into a single, state-of-the-art environment-monitoring satellite system. This new satellite system is considered critical to the United States' ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2020. GAO was asked to discuss the NPOESS program's schedule, cost, trends, and risks, and to describe plans and implications for moving the program forward. The NPOESS program has experienced continued schedule delays, cost increases, and technical challenges over the last several years. The schedule for the launch of the first satellite has been delayed by at least 17 months (until September 2010 at the earliest), and this delay could result in a gap in satellite coverage of at least 3 years if the last satellite in the prior series fails on launch. Program life cycle cost estimates have grown from $6.5 billion in 2002 to $8.1 billion in 2004 and are still growing. 
While the program is currently reassessing its life cycle cost estimates, our analysis of contractor trends as of September 2005 shows a likely $1.4 billion contract cost overrun--bringing the life cycle cost estimate to about $9.7 billion. Technical risks in developing key sensors continue, and could lead to further cost increases and schedule delays. As a result of expected program cost growth, the Executive Committee responsible for the program is evaluating options for moving the program forward--and new cost estimates for those options. Key options under consideration in August 2005 included removing a key sensor from the first satellite, delaying launches of the first two satellites, and not launching a preliminary risk-reduction satellite. All of these options impact the program's cost, schedules, and the system users who rely on satellite data to develop critical weather products and forecasts--although the full extent of that impact is not clear. Further, last week GAO was informed that there are nine new options now under consideration, and that they are likely to impact costs, schedules, and system users. Until a decision is made, the program remains without a plan for moving forward. Further, there are opportunity costs in not making a decision--some options are lost and others may become more difficult. Given the history of large cost increases and the factors that could further affect NPOESS costs and schedules, continued oversight, strong leadership, and timely decision making are more critical than ever. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The FBF, which is administered by GSA, is an intragovernmental revolving fund authorized and established by the Public Buildings Amendments of 1972. Beginning in 1975, the FBF replaced appropriations to GSA as the primary means of financing the operating and capital costs associated with federal space owned or managed by GSA. GSA charges federal agencies rent, and the receipts from the rent are deposited in the FBF. Congress exercises control over the FBF through the appropriations process that sets annual limits on how much of the fund can be expended for various activities. In addition, Congress may appropriate additional amounts for the FBF. The FBF operates as follows. Initially, as part of the President’s budget preparation process, GSA estimates the rental revenue the FBF is expected to receive. The rent estimate is prepared about 18 months in advance of the fiscal year. Through the appropriation process, Congress establishes annual limits on how much of the fund can be expended for various activities. As revenues are received, they are deposited into the FBF, and, subsequently, GSA is to fund various projects and programs within the limits set by Congress. Descriptions for some of these budget activities are shown in table 1. Our first objective was to verify, to the extent practical, the amounts GSA attributed to the individual reasons for overestimation of the FBF rental revenue projections for fiscal years 1996, 1997, and 1998. To do this, we developed an understanding of the rental revenue estimation process that PBS used. We (1) discussed with PBS program officials and staff the basic steps involved in the process used for fiscal years 1996 through 1999; and (2) reviewed studies of the process done by an internal PBS review team, two consulting firms, and GSA’s Inspector General.
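The fund mechanics described above (rent receipts deposited into a revolving fund, with spending on each budget activity capped by limits set through appropriations) can be sketched as a small model; the activity names and dollar amounts here are hypothetical:

```python
# Minimal model of a revolving fund like the FBF: rent receipts accumulate
# in the fund, and obligations for each budget activity must stay within
# congressionally set limits. Activity names and amounts are hypothetical.
class RevolvingFund:
    def __init__(self, limits):
        self.limits = dict(limits)             # activity -> annual limit
        self.balance = 0.0                     # deposited rent receipts
        self.spent = {a: 0.0 for a in limits}

    def deposit_rent(self, amount):
        self.balance += amount

    def obligate(self, activity, amount):
        # An obligation must fit both the fund balance and the activity limit.
        if amount > self.balance:
            raise ValueError("insufficient fund balance")
        if self.spent[activity] + amount > self.limits[activity]:
            raise ValueError("exceeds limit for " + activity)
        self.balance -= amount
        self.spent[activity] += amount

fund = RevolvingFund({"construction": 600.0, "building operations": 1500.0})
fund.deposit_rent(2000.0)                      # rent receipts deposited
fund.obligate("construction", 400.0)           # spending within the limit
```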
Further, we examined documents that supplied supporting details, such as a PBS listing of buildings associated with a particular reason, and we discussed each reason for the overestimation and the amount attributed to it with PBS program officials and staff. Our second objective was to determine whether PBS’ corrective actions appeared to address GSA’s identified reasons for the overestimation. We also determined if the corrective actions addressed the weaknesses in the estimation process that we and others identified. To do this, we interviewed PBS officials and staff, reviewed documentation associated with the actions, and observed the operation of a new management information system PBS is developing to help it estimate rental revenues, among other things. On the basis of our knowledge of the estimation system and the proposed or actual corrective actions to the system, we determined whether the corrective actions appeared to address GSA’s identified reasons for the overestimation and other identified weaknesses. Our third objective was to determine the budgetary impact of the overestimation on projects and programs in the FBF. To accomplish this, we developed an understanding of the process by which PBS identified sources of obligational authority that had the potential for inclusion in the fiscal year 1997 obligational reserve. Specifically, through interviews with PBS officials and review of documentation they maintained about the process, we developed an understanding of how PBS became aware of the magnitude of the overestimation problem—$680.5 million—and the action those officials took to identify specific sources of obligational authority. We reviewed the process that PBS used to identify unobligated balances that could be included in the reserve. 
Both new construction and modernization projects potentially could be included because such projects were experiencing delays that made it unlikely that they would need the obligational authority available in fiscal year 1997. We further developed information on how PBS officials narrowed the pool of potential new construction and repair and alteration projects to the final 11 new construction projects included in the reserve. Concerning the sources of the unobligated fiscal year 1996 balances included in the reserve, we obtained both the regional and headquarters final fiscal year 1996 allowances and the end-of-year obligated balances. However, we did not verify the data on allowances and the end-of-year obligated balances with regional officials or regional records. Finally, PBS headquarters officials provided us with the reasons they believed the unobligated balances existed. In reviewing the budgetary impact of the overestimation on projects and programs, we determined if PBS’ claim that none of the new construction projects included in the reserve were delayed from awarding a construction contract because they were included in the reserve. We did so by discussing the projects with PBS headquarters and regional officials as well as staff of the Administrative Office of the United States Courts (AOUSC) to obtain general background information on the projects and the dates and reasons given for schedule delays. We did not do a detailed review of the project files or the history of the projects before they were included in the reserve. Also, we reviewed the GSA and OMB statements that the impact of the funding problem on the FBF would be eliminated by the end of fiscal year 1998. We verified that GSA had proposed a fiscal year 1998 program of new construction and modernization projects and that GSA’s fiscal year 1998 appropriation did not provide obligational authority for that program. 
We discussed the impact of the deletion of funding for new construction projects with AOUSC officials to identify the impact on the courts’ immediate and long-range construction programs because the courts’ projects constituted the bulk of PBS’ proposed $594.5 million in fiscal year 1998 funding for new construction. We did not attempt to estimate the dollar impact on specific projects as a result of lack of fiscal year 1998 funding because GSA’s proposed program of projects may have been altered by OMB and congressional reviews prior to obligational authority being provided in GSA’s appropriation law. We did our work primarily at GSA headquarters in Washington, D.C., between July 1997 and June 1998, in accordance with generally accepted government auditing standards. On July 30, 1998, we requested comments on a draft of this report from GSA’s Administrator. GSA’s comments are discussed at the end of this report. Beginning with fiscal year 1994 and continuing through fiscal year 1997, PBS’ actual annual rental revenues were less than the estimated rent revenue PBS projected for budget and appropriation purposes. PBS, in fiscal year 1997 and 1998, took two actions to deal with the overestimation. First, PBS refrained from using about $680.5 million in obligational authority that Congress had previously provided. Second, PBS reduced operating expenses by deferring planned expenditures until later years. It also took steps to address the weaknesses that were identified in the process used to estimate rental revenues for the budget. Figure 1 shows FBF’s estimated and actual income for fiscal years 1990 through 1997. The FBF’s actual rent revenue has grown from about $2.5 billion in fiscal year 1987 to about $4.8 billion in fiscal year 1997. 
GSA’s historical trends of estimated rental revenue versus actual rental revenue show that actual rental revenues were less than estimated rental revenues for each of fiscal years 1994 through 1997, by amounts ranging from about $110.7 million, or 2.4 percent of the estimate in fiscal year 1995, to about $422.1 million, or 8.2 percent of the estimate in fiscal year 1996. For fiscal years 1994 and 1995, PBS’ overestimation of rental revenue was a combined total of $308.1 million. According to its Chief Financial Officer in fiscal years 1994 and 1995, PBS absorbed the overestimation by reducing planned expenditures and using unobligated carryover balances without the need for congressional action. In January 1997, PBS informed Congress that it expected its total overestimation of rental revenue for fiscal years 1996 and 1997 to be $847 million. As shown in table 2, PBS identified seven reasons for the overestimation and linked specific dollar amounts to each reason. In July 1997, PBS increased the overestimation figure for fiscal year 1997 by $86.8 million and reported a potential overestimation in fiscal year 1998 of about $109.2 million. As a result, the total anticipated overestimation for fiscal years 1996 through 1998 was about $1.04 billion. However, after it closed its fiscal year 1997 books, PBS reported the actual budget impact of its overestimation to be $634.4 million for fiscal years 1996 and 1997 and reduced its fiscal year 1998 overestimation to $28.3 million. In our March 1998 testimony on PBS’ overestimation of the FBF rental revenue projections, we reported that PBS provided documentation supporting the amount of the overestimation for six of the seven reasons shown in table 2. Although we examined the documentation PBS provided to explain its overestimation, we did not trace all the data compiled by PBS back to the original source documents. 
PBS could not provide documentation showing how it developed the $86 million attributed to the reason that the original fiscal year 1995 rent revenue estimate was higher than actual fiscal year 1995 revenues. We also reported in our testimony that during the course of our work, we determined that weaknesses in PBS’ estimation process contributed to the rental income overestimation. Through discussions with PBS staff and review of studies done by (1) the firms of Ernst and Young and Arthur Andersen—consultants hired by PBS, (2) the GSA Inspector General, and (3) the Rent Revenue Forecasting GO Team—an internal GSA review team established to look at PBS’ rental revenue estimation process—we identified several weaknesses in the process for estimating rental revenues. These weaknesses included the following: lack of documented policy and procedures for the estimating process; unclear lines of responsibility and accountability for revenue estimates below the level of the PBS Commissioner; lack of supporting documentation necessary to verify forecast information and assumptions; and use of national averages, rather than project-specific data, to forecast occupancy schedules and rental rates. Finally, we reported that GSA was aware of the identified weaknesses in its revenue estimation process and had corrective actions to improve this process either already under way or planned. These corrective actions included the following: Documentation is to be required for all decisions, assumptions, and steps involved in the rental revenue estimation process. The Office of Financial and Information Systems, with overall responsibility for the rental revenue forecasting process, was established. Project-specific data are to be used in occupancy schedules and rental rates instead of national averages. A new information system is being implemented to manage, track, and access data, with plans for a revenue forecasting module to be added to the system.
We concluded that it appeared that the actions PBS had under way and planned to improve the process it uses to estimate rental revenue address the weaknesses that we and others had identified. If effectively implemented, these actions should help improve future revenue estimates. However, as PBS points out, because its rental revenue estimate is a forecast, it is unlikely to produce a figure that is identical to actual rental revenue. Although some variance is to be expected in any estimating process, variances that go beyond a certain level can be indicative of estimating problems that need to be addressed. In this regard we stated in our testimony that PBS had not established an acceptable margin of error against which it could measure the success of its estimation process. We said that having such a benchmark would put PBS in a better position to identify variances that need to be investigated so that it can explore and fix the causes of excessive variances, improve its estimation process, and determine its effectiveness over time. We recommended that the PBS Commissioner establish an acceptable margin of error for its rental revenue estimates, as well as a process for exploring and resolving causes of variances outside the margin adopted. In a letter dated June 11, 1998, the GSA Administrator notified us that PBS had established 2 percent as a reasonable margin of error and is developing a reconciliation process. Considering the need to prepare estimates 18 months in advance and the steps involved in the estimating process, such as identifying revenue changes for each building, 2 percent does not seem to be an unreasonable margin of error. In late spring 1996, PBS identified a potential revenue gap for fiscal years 1996 and 1997. During fiscal year 1997, PBS officials acted to address the FBF overestimation problem by preventing the use of the FBF obligational authority that could not be met from the FBF resources. 
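The 2 percent margin of error adopted above lends itself to a mechanical reconciliation check: compute each year's forecast variance and flag years that fall outside the margin. A sketch follows; the estimate figures are back-computed from the reported shortfalls and percentages, for illustration only:

```python
# Flag fiscal years whose rent revenue forecast variance exceeds a margin
# of error. Amounts are in millions; the estimates are back-computed from
# the shortfalls and percentages reported in the text, for illustration.
def variance_pct(estimate, actual):
    return (estimate - actual) / estimate * 100.0

def outside_margin(years, margin_pct=2.0):
    return [y for y, (est, act) in years.items()
            if abs(variance_pct(est, act)) > margin_pct]

years = {
    1995: (4612.5, 4501.8),  # shortfall of about $110.7 million (~2.4%)
    1996: (5147.6, 4725.5),  # shortfall of about $422.1 million (~8.2%)
}
flagged = outside_margin(years)  # both years exceed the 2 percent margin
```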
PBS determined the size of the obligational authority that was in excess of the FBF resources using both actual fiscal year 1996 operating data and estimates for fiscal year 1997 (see table 3). To address the $680.5 million in obligational authority in excess of available resources, PBS officials created an obligational reserve at the beginning of fiscal year 1997. The intent of the reserve was to ensure that available obligational authority would not be used until revenue was available to cover those obligations. The reserve was composed of funds from the four FBF budget activities, as shown in table 4. To identify sources of obligational authority that could potentially be included in the reserve, PBS officials told us that they initially identified the FBF activities that had unobligated balances at the close of fiscal year 1996. As a result of those efforts, PBS officials identified and included in the reserve $176 million. To identify the additional $504.5 million needed for the reserve, in October and November 1996, PBS officials analyzed the FBF new construction and acquisition, and repair and alteration budget activities. They identified 11 new construction projects, with $591.6 million in unobligated funds, for inclusion in the reserve. Details of the sources of the funds included in the reserve are discussed below. To fund development of some facilities, PBS initially borrows the required funds and subsequently makes regular payments to the lender. The FBF spending authority that funds these annual payments is the installment acquisition payment budget activity. In fiscal years 1996 and 1997, the new obligation authority appropriated for this budget activity amounted to about $182 million and $173 million, respectively. 
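The reserve arithmetic reported above (about $176 million in year-end unobligated balances, $504.5 million more needed to reach the $680.5 million target, and $591.6 million ultimately reserved from 11 new construction projects) can be checked directly:

```python
# Reserve arithmetic from the figures in the text (millions of dollars).
target_reserve = 680.5
unobligated_balances = 176.0               # fiscal year 1996 year-end balances
still_needed = target_reserve - unobligated_balances   # 504.5
reserved_from_projects = 591.6             # 11 new construction projects
cushion = reserved_from_projects - still_needed  # authority reserved beyond the need
```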
PBS officials told us that when they initially reviewed the various FBF budget activities for available fiscal year 1996 unobligated balances, the installment acquisition payment budget activity had an unobligated balance of about $12 million. We discussed the reasons for this unobligated balance with PBS officials who told us that it was partially a result of lower interest rates for short-term construction loans on projects and for the long-term 30-year notes on the facilities. In addition, they told us that total interest needs were lower than they had budgeted for because the projects had been slower to use borrowed funds. They said that their estimates of both interest rates and the rate at which funds would be needed by projects had projected higher interest costs than actually were incurred. Therefore, the budget activity had closed the fiscal year with an unobligated balance. The PBS officials told us that the $12 million pertained to transactions involving the following nine lease-purchase projects: Foley Square, New York; Woodlawn, Maryland, Health Care Financing Administration; Chamblee, Georgia, Centers for Disease Control Offices; Memphis, Tennessee, Internal Revenue Service; Atlanta, Georgia, Centers for Disease Control; Miami, Florida, Federal Building; Chicago, Illinois, Federal Building; Oakland, California, Federal Building; and District of Columbia, Ronald Reagan Federal Building and International Trade Center. They told us that without a detailed funding analysis of each project, including the funding used versus what was budgeted and the interest rate incurred versus what was budgeted, they could not assign portions of the unobligated balance to each project.
PBS officials told us that when they initially reviewed the various FBF budget activities for unobligated balances at the end of fiscal year 1996, the rental of space budget activity had an unobligated balance of about $71 million, an accumulation of fiscal years 1995 and 1996 unobligated balances. They said $68 million of the $71 million would be used as part of the reserve. PBS officials told us that having an unobligated balance in a budget activity is not unusual because regional offices do not have to obligate the entire allowance they receive. Regarding the specific reasons why the rental of space budget activity had an unobligated balance at the close of fiscal year 1996, PBS officials cited incorrect estimates of when leases would start to incur obligations so that lease payments were lower than anticipated. Another reason provided by PBS officials involved the number of lease cancellations. They said there were more cancellations than PBS had budgeted, which resulted in lower obligations. However, they were not able to provide specific dollar amounts by lease. Rather, PBS officials provided us with a breakdown of the fiscal year 1996 regional allowances and unobligated balances (see table 5). PBS staff advised us that although the actual figure, about $71 million, was a little higher than the $68 million included in the reserve, their plan at the time the reserve was established was to include only $68 million in the reserve. However, events during fiscal year 1997 precluded using most of the $68 million for funding of the reserve. In particular, in August 1997, PBS sought congressional approval to transfer about $110 million in funds within the FBF budget activities to meet needs it considered crucial for rental of space. In September 1997, congressional committees approved the transfer request but directed that PBS use $54 million in fiscal year 1996 unobligated balances, which was part of the reserve, to fund part of the transfer. 
PBS officials told us that the $54 million was used in fiscal year 1997, and additional unobligated construction and acquisition of facilities budget activity funds were used to replace the $54 million in the reserve to maintain full funding of the $680.5 million reserve. PBS funds the operations of government-owned and -leased facilities and pays other government agencies for building operations performed by them in GSA-controlled facilities through the building operations budget activity. Functions budgeted from this activity include cleaning services, utilities, and protection services for facilities. PBS officials told us that when they reviewed the budget activities at the close of fiscal year 1996, the building operations activity had an unobligated balance of about $51 million. This was combined with $45 million in unapportioned fiscal year 1997 funds for a total unobligated balance in the building operations budget activity of $96 million. The officials explained that on a fiscal year basis, a portion of the overall appropriation available for regional building operations is divided into initial allowances against which regions plan and operate their programs. During a fiscal year, according to PBS officials, the initial allowance may be revised to reflect unforeseen needs. These adjustments are funded from money held back by PBS headquarters when the initial allowances are given to the regions. PBS officials told us that the existence of an unobligated balance in a budget activity at the close of a fiscal year is not unusual because regional offices do not have to obligate the entire allowance they receive. At the end of fiscal year 1996, building operations’ unobligated balance was about $51 million. According to a PBS document, the balances were associated with delays in moves, deferred equipment purchases, delays in contract awards, delays in new workload coming on line, and savings achieved through cost-containment measures. 
This amount, along with $45 million in unapportioned fiscal year 1997 funds, created an unobligated balance of $96 million in the building operations budget activity. Table 6 presents the unobligated balance on a region-by-region basis. According to PBS staff, the FBF’s construction and acquisition of facilities budget activity involves large unobligated balances from year to year, and thus this budget activity became the focus of planners for funding the balance of the $680.5 million obligational reserve. According to PBS officials, early in fiscal year 1997 they were looking to identify about $504.5 million in obligational authority to complete the reserve. Initially, PBS officials considered both the construction and the modernization programs in developing a list of potential projects for funding the reserve. They evaluated individual projects using the following three criteria: (1) the project had not proceeded to construction contract award; (2) obligational authority for the project had not been allotted to a regional office for obligation; and (3) both regional and headquarters officials believed the project would not meet a planned fiscal year 1997 construction contract award schedule. As a result of their analysis, PBS officials developed a list of new construction and modernization projects with obligational authority totaling about $1.5 billion. Recognizing that the list of potential projects resulted in obligational authority in excess of the $504.5 million required, PBS officials told us that the decision was made to exclude modernization projects from the reserve and to focus solely on new construction projects. PBS officials pointed out that this decision provided enough funding for PBS’ priority of maintaining the buildings already in the inventory. Table 7 lists the new construction projects from which obligational authority was reserved, showing the project location, the amount of the full appropriation, and the amount available for reserve.
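The three screening criteria amount to a simple filter over the candidate project list; the project records below are hypothetical, not PBS data:

```python
# Screen candidate projects against the three criteria described above:
# no construction contract awarded, authority not yet allotted to a region,
# and no award expected within the fiscal year. Records are hypothetical.
def eligible_for_reserve(project):
    return (not project["contract_awarded"]
            and not project["allotted_to_region"]
            and not project["award_expected_this_fy"])

projects = [
    {"name": "Courthouse A", "contract_awarded": False,
     "allotted_to_region": False, "award_expected_this_fy": False,
     "unobligated": 120.0},
    {"name": "Federal Building B", "contract_awarded": False,
     "allotted_to_region": True, "award_expected_this_fy": True,
     "unobligated": 80.0},
]
pool = [p for p in projects if eligible_for_reserve(p)]
total = sum(p["unobligated"] for p in pool)  # authority available for the reserve
```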
PBS officials told us that the obligational authority reserved, $591.63 million, represented their estimate of the funding necessary to meet the $680.5 million before they knew how much would be available in end-of-fiscal-year unobligated carryover funds from other budget activities. PBS officials told us that, as of November 1996, it was their opinion that each of the 11 projects listed above had a probability of experiencing a schedule slippage that would move the planned construction contract award date beyond fiscal year 1997. Therefore, they felt that reserving the obligational authority of these projects would not delay their overall progress. Our discussions with PBS officials, both in headquarters and the regional offices, and with officials of AOUSC confirmed that with one exception, discussed below, the schedule slippage on each project was sufficient to delay the construction contract award past the close of fiscal year 1997. In the one instance where the delay was solely because the project’s funding was moved to the reserve—the Las Vegas, Nevada, courthouse—the delay of the construction contract award was about 3 weeks, from September 26 to October 16, 1997. The GSA Project Manager told us that the delay did not affect the construction award amount because the contractor agreed to a contract at the price he bid in September 1997. The scheduled construction contract award dates at the time each project was identified for possible inclusion in the reserve, the current construction contract award dates as of the spring of 1998, and reasons for the delays are presented in table 8. Congress provided new obligational authority for the projects and programs in the $680.5 million reserve for fiscal year 1998. Therefore, the FBF revenues received in fiscal year 1998 are now available to be obligated for the budget activities used to create the $680.5 million reserve in fiscal year 1997.
OMB and PBS officials have stated that the actions taken through the fiscal year 1998 budget will eliminate the impact of the rent estimating problem on the FBF. However, as noted below, elimination of funding for new construction and modernization and reduced funding for building operations and basic building repair and alteration for fiscal year 1998 could have adverse effects on the FBF. In September 1996, GSA submitted proposed new construction and modernization programs for fiscal year 1998 to OMB totaling about $1.4 billion. However, according to GSA officials, OMB budget decisions required that $680.5 million of fiscal year 1998 budget authority be used to offset the funds reserved in fiscal year 1997 so that previously funded projects could proceed. Congress appropriated no fiscal year 1998 funding for new construction or modernization. In addition, in discussing the impact of the fiscal year 1998 budget decision, a GSA official, in responding to a question during an April 24, 1997, congressional hearing, stated “Absent direct appropriations and with the requirement to earmark $680 million in FY 98 Federal Building Fund budget authority to prior year capital projects, GSA will operate below prudent funding levels for building operations and repair and alterations for FY 98.” It is not clear how many, if any, of the proposed new construction or modernization projects would have been included in the President’s budget or funded by Congress in fiscal year 1998 had it not been for the overestimation problem. However, to the extent the overestimation problem resulted in lack of funding for new projects and these proposed projects are funded in the future, the government could experience cost changes. For example, additional costs could occur from price changes in the future, which could, of course, vary depending upon general and local economic and construction industry conditions. 
In addition, delays in basic repair and alteration work could also result in additional future cost to the extent prices for these services increase in the future and to the extent delays cause further deterioration. The maintenance of government-owned assets has been a long-standing concern. In 1993, the U.S. Advisory Commission on Intergovernmental Relations reported that maintenance often does not receive adequate attention, especially in times of tight budgets, and that deferring maintenance can result in poor-quality facilities, reduced public safety, higher subsequent repair cost, and poor service to the public. As we stated in our testimony on March 5, 1998, the actions PBS has under way and planned to improve its rental revenue estimation process address the weaknesses that we and others have identified and, if effectively implemented, these actions should help improve future revenue estimates. The actions taken by PBS to establish an obligational reserve to prevent the overobligation of the FBF revenue did not delay 10 of the 11 new construction projects included in the reserve. The construction contract award amount for one project, which was delayed for about 3 weeks, was not affected by the delay. Finally, although both OMB and PBS have stated that the impact of the FBF funding problem will be resolved by the end of fiscal year 1998, we believe that it could affect the FBF obligational authority beyond fiscal year 1998. We did not quantify the possible obligational impact; however, the delay in construction and modernization projects could result in price changes in the future, which could vary depending upon general and local economic and construction industry conditions. In addition, deferred maintenance could result in increased future cost. On July 30, 1998, we requested comments on a draft of this report from the Administrator, GSA. 
On August 6, 1998, we received oral comments from the Chief Financial Officer, Public Buildings Service, and other PBS staff. These officials generally agreed with the information in the report. We are sending copies of this report to the Ranking Minority Member of your Subcommittee; the Chairmen and the Ranking Minority Members of the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure; and the Administrator of GSA. Copies will be made available to others upon request. Major contributors to this report are Ronald King, Assistant Director; Thomas Johnson, Evaluator-in-Charge; Thomas Keightley, Evaluator-in-Charge; and Hazel Bailey, Communications Analyst. If you have any questions about the report, please call me on (202) 512-8387. Bernard L. Ungar Director, Government Business Operations Issues The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
| Pursuant to a congressional request, GAO reviewed the General Services Administration's (GSA) actions in responding to and managing the recent funding problems experienced by its Federal Buildings Fund (FBF), focusing on: (1) verifying, to the extent practical, the amounts GSA attributed to each reason for overestimation of the FBF rental revenue projections for fiscal years 1996, 1997, and 1998; (2) whether the Public Buildings Service's corrective actions appeared to address GSA's identified reasons for the overestimation; and (3) the budgetary impact of the overestimation on projects and programs in the FBF. GAO noted that: (1) GSA informed Congress that it expected the total overestimation of rental revenue for fiscal years 1996 and 1997 to be $847 million; (2) GAO verified, to the extent practical given available support, six of GSA's identified seven reasons for the overestimation and the linkage of specific dollar amounts of the overestimation to each of the six reasons; (3) GSA was unable to provide documentation showing how it developed the $86 million it attributed to the remaining reason--the fiscal year (FY) 1995 rent revenue estimate being higher than actual revenues; (4) GAO and others identified several weaknesses in GSA's rental revenue estimation process, such as the lack of documented policy and procedures for the rental revenue estimation process and the lack of supporting documentation necessary to verify forecast information and assumptions; (5) GSA has taken or plans to take corrective actions that, if effectively implemented, should help improve future rental revenue estimates; (6) for FY 1997, GSA took action to prevent the overobligation of FBF revenue by creating a reserve to ensure that obligational authority totaling $680.5 million would not be used until revenue was available to cover those obligations; (7) this action had the potential to affect the projects and programs from which obligational authority was withheld; (8) recent
statements by GSA and Office of Management and Budget officials indicated that the impact of the rent estimating problem on the FBF will be resolved by actions taken through the FY 1998 budget; (9) although the $680.5 million appropriated in FY 1998 replenishes the $680.5 million to prior projects, GAO does not believe it necessarily mitigates the effects of not funding GSA's proposed FY 1998 program of new construction and modernization work; (10) GSA has stated that the overestimation problem contributed to a reduction in funding for building operations and basic building repair and alteration; and (11) this reduction could also result in changes in future costs for the same reasons previously mentioned as well as increased repair costs due to more extensive deterioration over time. |
Time—specifically the period that begins with the submission to FDA of a new drug application (NDA) and that ends when a final decision is made on that application (the period known as the NDA review phase of drug development)—is the focus of this report. At your request, we have assembled data on all new drug applications submitted to FDA in 1987-94 to answer three questions: Has the timeliness of the review and approval process for new drugs changed in recent years? What factors distinguish NDAs that are approved relatively quickly from those that take longer to be approved? What distinguishes NDAs that are approved from those that are not? Additionally, as you asked, we obtained the most recently available data on how long it takes for drugs to be approved in the United Kingdom and compared them with approval times in the United States. Because GAO has access to all applications, both those that have been approved and those that have not, our report is the first to present comprehensive data on review time for all NDAs submitted to FDA. The process of bringing a drug to market is lengthy and complex and begins with laboratory investigations of the drug’s potential. For drugs that seem to hold promise, preclinical animal studies are typically conducted to see how a drug affects living systems. If the animal studies are successful, the sponsoring pharmaceutical firm designs and initiates clinical studies in which the drug is given to humans. At this point, FDA becomes directly involved for the first time. Before any new drug can be tested on humans, the drug’s sponsor must submit an investigational new drug application to FDA that summarizes the preclinical work, lays out a plan for how the drug will be tested on humans, and provides assurances that appropriate measures will be taken to protect them. Unless FDA decides that the proposed study is unsafe, clinical testing may begin 31 days after this application is submitted to FDA. 
While clinical trials progress through several phases aimed at establishing safety and efficacy, the manufacturer develops the processes necessary to produce large quantities of the drug that meet the quality standards for commercial marketing. When all this has been done, the pharmaceutical firm submits an NDA that includes the information FDA needs to determine whether the drug is safe and effective for its intended use and whether the manufacturing process can ensure its quality. The first decision FDA must make is whether to accept the NDA or to refuse to file it because it does not meet minimum requirements. Once FDA has accepted an NDA, it decides whether to approve the drug on the basis of the information in the application and any supplemental information FDA has requested. FDA can approve the drug for marketing (in an “approval letter”) or it may indicate (in an “approvable letter”) that it can approve the drug if the sponsor resolves certain issues. Alternatively, FDA may withhold approval (through a “nonapprovable letter” that specifies the reasons). Throughout the process, the sponsor remains an active participant by responding to FDA’s inquiries and concerns. The sponsor has the option, moreover, of withdrawing the application at any time. For each NDA submitted between 1987 and 1994, we obtained from FDA information on the dates of its significant events between initial submission and final decision as well as the last reported status of the application as of May 1995. To ensure that the data were valid, we independently checked them against values in published reports and other sources. (The variables that we used in our analysis and the procedures that we used to validate the data can be found in appendix I.) We computed time by measuring the interval between all significant events. Results using other ways to calculate review time are compared to ours in appendix II. 
We used regression analysis to determine the factors that were significantly related to time and to determine which factors were significantly related to approval. (The results of the regression analyses on time are in appendix IV, on approval in appendix V.) Some of our analyses include all the NDAs, while others focus on specific subgroups. Most notably, we restricted analyses of overall time to NDAs that had been submitted by the end of 1992 to avoid the bias introduced by including applications that have had an insufficient time to “mature.” (Appendix VI describes the implications of this decision for our results.) Because our analyses of final decisions concentrate on NDAs submitted through the end of 1992, the data we present do not address the consequences of the full implementation of the Prescription Drug User Fee Act of 1992. Our findings pertain only to FDA’s Center for Drug Evaluation and Research and do not reflect the activities of the agency’s five other centers. We focused only on the NDA review phase—the final critical step of bringing a drug to market. We did not address the lengthier process of initial exploration and clinical testing, which together with the NDA phase average more than a decade, nor did we study the phase that follows a drug’s approval, during which additional studies can be conducted and attention paid to potential adverse events associated with its widespread use in the general population. FDA received 905 NDAs in 1987-94. The total number of NDAs fell from 1987 but remained relatively stable in the ensuing years through 1994 (with the exception of the uncharacteristically small number of submissions in 1993). The number of NDAs for new molecular entities (NMEs) and priority NDAs remained relatively stable over the years. Overall, 17 percent of the NDAs were for priority drugs. (See table 1.) A large percentage of the applications were not approved. 
Only 390 of the 700 NDAs submitted through 1992 had been approved by May 16, 1995. In other words, 44 percent of the applications submitted were for drugs that FDA did not find to be safe and effective or that sponsors chose not to pursue further. NMEs were approved at a higher rate than non-NMEs (64 percent to 52 percent), and priority drugs were approved more often than standard drugs (76 percent to 52 percent). This means that whether an NDA is or is not ultimately approved is as relevant a question as how long approval takes. (See table 2.) The data in table 2 show that NDAs that are submitted by experienced sponsors and priority NDAs are more likely to be approved than standard NDAs or NDAs submitted by sponsors with little experience with the process. These results are supported by a regression analysis that shows that both the NDA’s priority and the sponsor’s experience are statistically significant predictors of outcome (see appendix I for our definition of sponsor experience and appendix V for the regression analysis). The regression analysis found that, statistically controlling for the effects of the other explanatory variables in the model, priority NDAs are four times more likely to be approved than standard NDAs and that applications submitted by the most experienced companies are three times more likely to be approved than those submitted by less experienced sponsors. Table 3 shows for 1987-92 the average time (in months) from when NDAs were first submitted to when final decisions were made for both NDAs that were approved and those that were not. The table also distinguishes between all NDAs and those that were approved in three categories: new molecular entities, priority applications, and standard applications. As can be seen from the table, the processing time for all eight categories of NDAs fell considerably (from 33 to 18 months, or 45 percent, for all NDAs, or from 33 to 19 months, or 42 percent for approved NDAs). 
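The approval shares cited above follow directly from the reported counts; a minimal sketch using only figures stated in the text:

```python
# Approval outcomes for NDAs submitted through 1992 (counts from the text,
# status as of May 16, 1995).
submitted = 700
approved = 390
not_approved = submitted - approved

pct_not_approved = round(100 * not_approved / submitted)

# Subgroup approval rates as stated above, for reference.
approval_rate_pct = {"NME": 64, "non-NME": 52, "priority": 76, "standard": 52}

print(not_approved, pct_not_approved)  # 310 44
```

The 44 percent figure in the text is simply the 310 unapproved applications as a share of the 700 submitted.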
In addition, the reductions in time came for NDAs submitted throughout the period of our study. This finding is consistent with FDA’s statements that review time has decreased in recent years. Alternative presentations of the data demonstrate the same result. For example, table 4 shows that the number of months that passed before half of all submissions were approved declined from 58 months for NDAs submitted in 1987 to 33 months for 1992 submissions. Since just 56 percent of the NDAs submitted between 1987 and 1992 were approved, this measure captures the approval period for almost all the approvals that will ultimately be granted. Similarly, table 4 shows that the proportion of submitted NDAs that were approved within 2 years increased from 23 percent for NDAs submitted in 1987 to 39 percent for NDAs submitted in 1992. Closer examination of the individual NDAs shows that they differed considerably in how long it took before a final decision was made. Some NDAs were approved within a few months (the shortest was 2 months); others took years (the slowest was 96 months). The variation was similar among applications that were not approved. Some were withdrawn on the day they were submitted. The longest outstanding application was 92 months old. This considerable variation raises the question of what differentiates one NDA from the next: Do some factors predict the time it will take to reach a final decision? When we tested potential explanatory variables, we found that the priority FDA assigned to an application and the sponsor’s experience in submitting NDAs were statistically significant predictors of how long review and approval took. (See appendix IV.) 
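The table 4 measure, the number of months until half of a cohort's submissions have been approved, can be sketched as follows (the cohort data below are hypothetical, for illustration only):

```python
# Sketch of the table 4 measure: months elapsed until at least half of ALL
# submissions in a cohort have been approved. Unapproved NDAs count in the
# denominator, which is why a cohort with under 50 percent approvals never
# reaches this milestone.
def months_until_half_approved(approval_lags_months, cohort_size):
    """approval_lags_months: months from submission to approval, one per approved NDA."""
    target = cohort_size / 2
    approved_so_far = 0
    for lag in sorted(approval_lags_months):
        approved_so_far += 1
        if approved_so_far >= target:
            return lag
    return None  # fewer than half of the cohort was ever approved

# Hypothetical cohort: 10 submissions, 6 eventually approved.
# The 5th approval (at 33 months) is the one that reaches the halfway mark.
lags = [12, 18, 24, 30, 33, 40]
print(months_until_half_approved(lags, 10))  # 33
```

Because only 56 percent of the 1987-92 NDAs were approved, this milestone captures nearly all the approvals a cohort will ultimately receive, which is why the report treats it as a useful summary of cohort timeliness.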
More specifically, controlling for the effects of the other explanatory variables in the model, our regression analysis found that priority NDA applications are approved 10 months faster than standard applications and that applications from the most experienced sponsors are approved 4 months faster than applications from less experienced sponsors. The interval between first submission and final decision indicates how long the public must wait for drugs after sponsors believe they have assembled all the evidence to support an approval decision. Alternative measures provide insight into what happens to an NDA before FDA approves it. One such measure is the extent to which FDA is “on time” in making decisions. We examined both the degree to which FDA was on time and the factors that influenced whether it made its decisions on time. The criteria for “on time” performance that we used in this analysis were established under the Prescription Drug User Fee Act of 1992. Although on-time performance may be seen as one indicator of FDA’s efficiency, it is important to note that FDA is not required to meet these criteria until 1997. Of all the decisions FDA made on the NDAs submitted between 1987 and 1993, 67 percent were on time. Simpler decisions (for example, refusals to file) were made on time more often than relatively complex decisions (for example, priority applications in which the first decision was an approval). Overall, the on-time percentage remained relatively stable, varying between a low of 62 percent for NDAs submitted in 1992 and a high of 72 percent for NDAs submitted in 1987. In sharp contrast to the decline in overall time between submission and final decision shown in table 3, this stability shows that there is little relationship between the time FDA takes to reach a final decision and whether or not it meets its deadlines for specific actions. 
Another process measure of review time is based on where responsibility lies for different parts of the process—with FDA for the intervals during which it acts on an application, or with the sponsor, for the intervals during which FDA waits for the sponsor to provide additional information or to resubmit the application. Figure 2 shows how their relative times were distributed for approved NDAs submitted between 1987 and 1992. As can be seen from the figure, sponsors accounted for approximately 20 percent of the time in the NDA phase for applications that FDA approved. Importantly, the time for both sponsors and FDA diminished for NDAs submitted between 1987 and 1992. Regulatory processes similar to FDA’s have been mentioned as models for reforming FDA. The one most often mentioned is the United Kingdom’s. Proponents of FDA reform have argued that the British counterpart to the FDA, the Medicines Control Agency, performs reviews of equivalent quality and does so significantly more quickly. Comparisons between the Medicines Control Agency and FDA are difficult because the workload, approval criteria, and review procedures followed by the agency may not be exactly the same as FDA’s and because its reports cover a slightly different period than FDA’s. However, the most recent data show that overall approval times are actually somewhat longer in the United Kingdom than they are in this country. For the 12-month period ending September 30, 1994, the Medicines Control Agency reported that the median approval time for applications that were apparently equivalent to NMEs was 30 months. The average time was 24 months. The fastest approval was granted in about 4 months, the slowest in 62 months. According to FDA, the median approval time for NMEs approved in the United States in calendar year 1994 was 18 months, the average about 20 months. The fastest FDA approval took about 6 months and the slowest about 40 months. (See appendix VII for a fuller comparison.) 
Aside from shedding light on the central issue of time, the data we assembled provide some interesting but rarely mentioned facts about FDA’s drug review and approval process. First, nearly half the NDAs submitted to FDA are not approved for marketing. The 44 percent of NDAs that were not approved in our sample either were not judged by FDA to be safe and effective or were not pursued by their sponsors. Second, the percentage of NDAs for drugs that are viewed by FDA as offering an important therapeutic advance is relatively small. As we pointed out in table 1, only 17 percent of all NDAs were given priority status. Third, our data on drug review and approval show that approximately one fifth of the time in that process comprises activities for which sponsors are responsible. With respect to time, NDAs are moving more quickly through the drug review and approval process. Whether this improvement is because of actions by FDA or the pharmaceutical industry or some other factors is an issue that is beyond the scope of this report. However, the consistency of all our results supports the conclusion that the reduction in time is real and not an artifact of how time is measured. Further, the magnitude of the reduction—more than 40 percent—should be considered in the ongoing discussions of the need to change the NDA review process or the agency in order to speed the availability of drugs to patients. FDA officials reviewed a draft of this report and discussed their comments with us. They generally agreed with our analytic methods and findings. However, they expressed concerns about some aspects of our analysis of FDA’s on-time performance. These comments, and our responses to them, appear in appendix II. FDA also provided a number of specific technical comments that have been incorporated into the report where appropriate. 
As we agreed with your offices, we plan no further distribution of this report until 30 days from its date of issue, unless you publicly announce its contents earlier. We will then send copies to the Secretary of Health and Human Services, the Commissioner of Food and Drugs, and to others who are interested. We will also make copies available to others upon request. If you have any questions regarding our report, please call me at (202) 512-2900 or George Silberman, Assistant Director, at (202) 512-5885. At our request, FDA provided detailed information about all new drug applications, totaling 905, initially submitted between January 1, 1987, and December 31, 1994. This included the contents and date of all FDA decisions and all major communications between FDA and the NDA sponsors through May 16, 1995. The variables we used in our analysis are described in the next section. Our choice of this time period has important implications for the analysis of drug review time. First, we started with 1987 because that was the first full year following a major change in FDA’s drug review procedures. We do not believe that examining new drug applications from before 1987 would shed any light on FDA’s current activities. Second, most reports of drug approval times, including those published by FDA, measure time for drugs approved during a particular period, regardless of when they were submitted. Some approved drugs may have been submitted much earlier. By limiting our analysis to new drug applications submitted (but not necessarily approved) in 1987 and later, we have limited the maximum value of review time. However, we do not believe that this has significantly biased our findings, since relatively few drugs win approval after exceptionally long review periods. (Appendix VI describes the outcomes of the review process as a function of year of approval in our sample.) 
While we were unable to independently verify the accuracy of all the data FDA provided, we did undertake a number of validation procedures to ensure the quality of the data. First, we performed extensive checks of the internal consistency of the databases FDA provided. In several cases, we uncovered discrepancies in the level of detail for different categories of drugs and between the information contained in one data file and that contained in another file. We resolved all these inconsistencies with FDA. Second, we compared the information in the data files with published sources where possible. For approved drugs, many reports (by FDA and by others) list the names, submission dates, and approval dates. We were able to resolve with FDA the few inconsistencies we discovered through this method. However, it is important to note that we were unable to do this for nonapproved drugs because there are no published reports on them. Third, for an earlier report, we had already obtained documentation for all NDAs for NMEs submitted in 1989. We compared those documents with the data FDA provided us for this report, and we were able to resolve all apparent inconsistencies. This section describes the variables we used in our analyses. Our definitions of the variables do not necessarily agree with FDA’s practice. FDA provided some of the variables directly to us; we computed others from the data FDA provided and from other sources. Priority drugs. Those that FDA determines to represent a significant therapeutic advance, either offering important therapeutic gains (such as the first treatment for a condition) or reducing adverse reactions. Nonpriority, or standard, drugs offer no therapeutic advantage over other drugs already on the market. New molecular entities. Drugs with molecular structures that have not previously been approved for marketing in this country, either as a separate drug or as part of a combination product. 
Drugs that are not NMEs are from one of six categories defined by FDA: a new ester or salt, a new dosage form or formulation of a previously approved compound, a new combination of previously approved compounds, a new manufacturer of a previously approved drug, a new indication for an already approved drug, or drugs already marketed but without an approved NDA (that is, drugs first marketed before FDA began reviewing NDAs). Initial submission. The first submission of the application to FDA. Resubmission. After a sponsor has withdrawn an application or FDA has refused it for filing, sponsors can resubmit it. Major amendments. Substantial submissions of new information by the sponsor to FDA, either of the sponsor’s own volition or in response to an FDA query. Refusal to file. After FDA receives a new drug application, the agency first determines if the application is sufficiently complete to allow a substantive review. If not, FDA can refuse to file it. Since the implementation of user fees in 1993, applications must be rejected if the sponsor has failed to pay the appropriate fee to FDA. These applications are categorized as “unacceptable for filing,” not refusal to file. Approval. If FDA is satisfied that a drug is safe and effective, it approves the drug for marketing for its intended use as described in the label. Approvable. FDA determines that a drug is approvable if there is substantial evidence that it is safe and effective, but the sponsor must either supply additional information or agree to some limiting conditions before FDA grants final approval. Not approvable. If FDA determines that the evidence submitted by the sponsor to show that the drug is safe and effective is insufficient, the agency notifies the sponsor that the drug is not approvable. Withdrawal. The sponsor of an NDA may withdraw it at any time for any reason. Final status. We examined the data file for each NDA to see if the drug had ever been approved. 
If not, we searched the file for the last event that was a withdrawal, not approvable, approvable, or a refusal to file, and we identified that event as the application’s final status. However, since FDA never definitively rejects applications, some whose final status is other than approval may ultimately be approved. (See appendix III.) Year of submission. The calendar year in which an application is first submitted to FDA. Review time. The period between the date of the initial submission of an NDA, even if FDA refuses to file it, and the date of the application’s final status in the data file. For approved drugs, review time is the period between the initial submission and the date of approval. FDA time and sponsor time. For some of the analyses, we divided the total review time into time that is FDA’s responsibility and time that is the sponsor’s responsibility. FDA time consists of periods that begin when the agency has the information it has requested from the sponsor for that stage of the review and that end when FDA issues a judgment of refusal to file, approval, approvable, or not approvable or the application is withdrawn. Sponsor time consists of periods when FDA is waiting for the sponsor to provide additional information or to resubmit the application. FDA time and sponsor time are complementary and together sum to total review time. Review cycles. Each period of FDA time is one review cycle. FDA’s on-time performance. The Prescription Drug User Fee Act of 1992 established specific performance goals for each review cycle. The agency must issue refusals to file within 60 days of submission and must reach all other decisions for priority drugs within 6 months and for standard drugs within 12 months. We applied these guidelines retroactively to identify actions as either on time or not on time for each review cycle for NDAs submitted between 1987 and 1994. Experience. 
We divided the sponsoring pharmaceutical companies into four groups, based on their activities between 1987 and 1994. We defined the most experienced companies as those that submitted 9 or more NDAs to FDA during this period (that is, at least one per year). Those that submitted between 5 and 8 NDAs in that period made up the middle-experience group. The two least experienced groups submitted 4 or fewer NDAs. We further divided the least experienced companies into one group with affiliations with other companies that sponsored NDAs during this period and another group without such affiliations. Affiliation meant that another sponsoring company had a significant ownership stake in the sponsor of the NDA. We identified affiliations by reviewing business and financial directories. Most of our statistical analyses consist simply of listing average review times, or the number of NDAs with a particular characteristic, separately by year of submission or by the outcome of review. However, we also conducted two regression analyses, one to identify variables related to the length of the review process and another to identify factors related to drug approval. (See appendixes IV and V.) This allowed us to isolate the effects of one variable (for example, drug priority) while statistically holding constant the other predictor variables (for example, year of submission and the experience of the sponsoring company). All our statements about statistical significance are based on the results of the regressions, which answer the question: If there were no differences among these NDAs except, for example, drug priority, does drug priority influence the chances of approval? We performed our work in accordance with generally accepted government auditing standards. The key statistics presented in this report are the average times to final decisions for NDAs submitted in consecutive calendar years from 1987 onward. 
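The on-time test applied to each review cycle, as defined in the glossary above, can be sketched as a small function (an illustrative sketch; the act's deadlines are calendar months, so the 30-day month used here is a simplifying assumption):

```python
# Sketch of the Prescription Drug User Fee Act on-time criteria described
# above: refusals to file within 60 days of submission; all other decisions
# within 6 months for priority NDAs or 12 months for standard NDAs.
def on_time(action, priority, elapsed_days):
    """Return True if a review-cycle action met the applicable deadline.

    Assumption: a month is approximated as 30 days for illustration.
    """
    if action == "refusal_to_file":
        return elapsed_days <= 60
    limit_months = 6 if priority else 12
    return elapsed_days <= limit_months * 30

print(on_time("refusal_to_file", False, 45))  # True
print(on_time("approval", True, 200))         # False (past the ~180-day priority limit)
print(on_time("approvable", False, 300))      # True
```

Applying a rule of this form retroactively to each review cycle for 1987-94 NDAs is, in outline, how the on-time percentages in this report were produced.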
Previous reports on time have presented other results, sometimes relying on slightly different measures of time, sometimes reporting other statistics (medians rather than averages), and usually constructing cohorts based on the years in which the NDAs were approved rather than the years in which they were submitted. In the sections that follow, we place our work in the context of other studies of drug review and approval time by examining the differences in approach. In our study, review time begins with the first submission of the NDA to FDA. FDA’s statistical reports, in contrast, start the clock with the submission of an “accepted” NDA. The two measures would provide similar results if the NDA were accepted on the first submission or, if FDA refused to file it, the sponsor never resubmitted the application. However, in any situation in which FDA refused to file the NDA and the sponsor eventually resubmitted it, our measure of review time would be longer by the interval between the first submission and the date of an accepted submission. Approximately 1 in 10 NDAs (9.4 percent) fall into this category. The average time to resubmission for these applications was a little less than 2 months (1.7 months). Therefore, our review times are slightly longer on average than those reported by FDA. Another approach to time measurement is to be less concerned with how long the process took than with whether it was completed within a specified period. FDA takes this approach when it reports the extent to which the agency meets its user fee performance goals as referenced in the Prescription Drug User Fee Act. Data on our measure of on-time performance appear in the body of this report. Table II.1 shows an annual breakdown of “on time” performance (percent of actions taken on time, for actions taken as of May 16, 1995). As can be seen from table II.1, the percentages have changed little over the years.
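The size of the measurement difference described earlier (starting the clock at the first submission rather than at an accepted submission) can be estimated from the figures quoted: 9.4 percent of NDAs were refused and later resubmitted, taking an average of 1.7 months to resubmission. A back-of-envelope calculation:

```python
share_resubmitted = 0.094        # ~1 in 10 NDAs refused, then resubmitted
avg_months_to_resubmission = 1.7

# Average extra time added across all NDAs by starting the clock at the
# first submission rather than at the accepted submission:
extra_months = share_resubmitted * avg_months_to_resubmission
print(round(extra_months, 2))  # 0.16
```

So the two measures differ by roughly a sixth of a month on average, consistent with review times that are only "slightly longer" than those FDA reports.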
Interestingly, this is in contrast to the reduction in total review time (the entire interval between submission and approval) during this period. Seemingly, FDA has managed to reduce the overall time even though it has not increased the proportion of specific actions taken on time. In commenting on a draft of this report, FDA officials agreed with our general conclusions but made two points regarding our analysis of on-time performance. First, FDA emphasized that the 6- and 12-month guidelines used in our analysis were not in effect during the years we studied and that FDA is not required to meet them until 1997. Second, while FDA believes that its review cycle on-time performance may not have improved, the agency cautioned that the nature of its actions has changed with the initiation of the user fee program, particularly for not-approvable letters. Prior to the initiation of user fees, not-approvable letters were not necessarily a complete listing of all the deficiencies in the NDA. For example, FDA may have sent one not-approvable letter when the review of one section of the NDA was complete and additional not-approvable letters as other sections of the review were completed. After user fees, FDA is required to take complete actions, so a not-approvable letter must contain all the deficiencies FDA identifies. In other words, FDA must now complete more work to satisfy a post-user fee deadline than it had to before user fees were introduced. We agree with FDA’s first point. FDA’s second point argues for caution in making comparisons of on-time performance between different years. We agree that changes in procedure would invalidate such comparisons. For that reason, we did not use this measure as an indicator of whether the overall timeliness of the drug approval process had improved. Rather, we included the trends in on-time performance in the report in order to be comprehensive in presenting all measures of time that others had reported.
Throughout this report, we have reported the average times for NDA review. An alternative is to report the median review time, the time for the 50th percentile application. In this case, medians reduce the influence of drugs with unusually long review periods and are therefore usually somewhat lower than average review times. Table II.2 lists the average and median approval times for the drugs we examined by year of submission. While the median values are generally slightly lower, they show the same pattern of consistent decrease as the average values. FDA and others frequently report time statistics for NDAs that group the applications by the year in which they were approved rather than the year in which they were submitted. To some extent, this reflects FDA’s general orientation away from publishing data on submissions (given that much of that information is proprietary until they are approved). Table II.3 compares the average approval times we computed using year of submission with the average approval times FDA computed using year of decision. The discussion that follows the table indicates why grouping NDAs by year of submission is preferable for our purpose. (Values for 1993 and 1994 are not presented because they may be biased as a result of the censoring problem discussed in appendix VI.) Table II.3 shows an obvious difference between the decrease in approval times when NDAs are grouped by year of submission and the stability when they are grouped by year of approval. This difference arises because grouping by year of approval incorporates into the calculation whatever backlog of NDAs existed at FDA. For example, several NDAs submitted in 1987 that had very lengthy 5-year reviews would increase the average review time in 1987 for year-of-submission statistics but would add to the average review time in 1992 for year-of-approval figures.
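The backlog effect just described can be illustrated with invented numbers. In this sketch (all figures hypothetical), one slow 1987 NDA inflates the 1992 average when NDAs are grouped by year of approval, but not when they are grouped by year of submission:

```python
from statistics import mean

# (year submitted, review months) - hypothetical values only
ndas = [(1987, 60), (1987, 20), (1992, 12), (1992, 14)]

by_submission, by_approval = {}, {}
for year, months in ndas:
    by_submission.setdefault(year, []).append(months)
    by_approval.setdefault(year + months // 12, []).append(months)

print({y: mean(v) for y, v in sorted(by_submission.items())})
# {1987: 40, 1992: 13} - year-of-submission averages
print({y: mean(v) for y, v in sorted(by_approval.items())})
# {1988: 20, 1992: 60, 1993: 13} - the 5-year review lands in 1992
```

Grouping by year of submission isolates current performance; grouping by year of approval mixes in whatever backlog happened to clear that year.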
Thus, whenever the possibility of a backlog exists, basing time on year of approval is a less appropriate way to measure current practice because it incorporates the older applications. In contrast, time based on year of submission eliminates the confounding effects of the backlog and, therefore, is the preferable measure for assessing the current performance of the agency. In 1987, the first year in our study, FDA had a considerable backlog of NDAs submitted in 1986 and earlier and that backlog affected times throughout nearly the entire period of our study. This can be seen from table II.4. As the table shows, a considerable proportion of the approvals in every year except for 1994 were for older NDAs that had been under review for a long time. The first years in which FDA seemed to make progress in reducing the backlog were 1992 and 1993, when larger percentages of older applications were approved. This progress was reflected in the smaller percentage of older NDAs that were approved in 1994 and in the sharp drop in times measured by year of approval between 1993 and 1994 (see table II.3). The decrease from 33 to 26 months indicates that the backlog may have finally passed through the system. In this appendix, we present data on what happens to the NDAs as they move through the review process, focusing on three kinds of activities: first actions, review cycles, and major amendments. Table III.1 shows the first action taken on NDAs submitted in each successive year. It can be seen that approval is the initial decision for relatively few NDAs. Given that approximately 55 percent of all NDAs are ultimately approved, the data in table III.1 also show that such “negative” decisions as refusal to file, not approvable, and withdrawal are not necessarily fatal to an application. Of the 110 NDAs submitted from 1987 to 1992 that FDA initially refused to file, 35 (32 percent) were ultimately approved. 
Similarly, 43 percent of the NDAs that had a not-approvable first action were ultimately approved, and 27 percent of the withdrawals were resubmitted and approved. Overall, 43 percent of the 390 drugs submitted from 1987 to 1992 that were approved were refused, withdrawn, or found not approvable at some point on their way to approval. FDA reports the review cycles that an NDA goes through in its yearly Statistical Reports. A cycle starts with the submission or resubmission of an NDA and ends with the withdrawal of the NDA, a refusal to file decision, or an approval, approvable, or not-approvable letter. Each new cycle starts the review clock anew. Table III.2 shows the number of cycles for various types of NDAs. As can be seen from table III.2, some types of NDAs are more likely to go through multiple review cycles than others. Approved NDAs go through more cycles on average than applications that get dropped along the way; priority NDAs go through fewer cycles on average than standard NDAs; and, similarly, NMEs go through fewer cycles on average than non-NMEs. The number of cycles for both approved NDAs and all NDAs has decreased for submissions since 1987. This decrease is consistent with the decrease in time to final decisions. FDA has questions about almost all NDAs and requires sponsors to submit additional data in response to those questions. The sponsors submit these data in the form of amendments. Relatively small amounts of data (for example, clarification of a point or correction of a value) are classified as minor amendments, and relatively large amounts of data (for example, a reanalysis or results of an additional study) are classified as major amendments. Table III.3 shows the number of amendments for different types of NDAs. As expected, NDAs that are pursued through to approval have more major amendments on the average than NDAs that drop out of the process.
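The cycle definition above (a cycle runs from a submission or resubmission to a withdrawal, a refusal to file, or an approval, approvable, or not-approvable letter) can be sketched as a counter over an NDA's event history. The event vocabulary here is an assumption for illustration; the study's data file encodes these actions in its own format.

```python
CYCLE_END = {"withdrawal", "refusal to file", "approval", "approvable", "not approvable"}

def count_cycles(events):
    # Count review cycles in a chronological list of event names:
    # a submission or resubmission opens a cycle, and the next
    # cycle-ending action closes it.
    cycles = 0
    open_cycle = False
    for event in events:
        if event in {"submission", "resubmission"}:
            open_cycle = True
        elif event in CYCLE_END and open_cycle:
            cycles += 1
            open_cycle = False
    return cycles

history = ["submission", "not approvable", "resubmission", "approvable",
           "resubmission", "approval"]
print(count_cycles(history))  # 3
```

An NDA refused on its first submission and never resubmitted counts as a single cycle under this rule.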
NDAs for priority drugs and for NMEs required more amending on average than applications for standard drugs and non-NMEs. As with the data on cycles, table III.3 shows a decrease in the number of amendments for submissions since 1987. These data, along with those in table III.1 showing a steady decrease in the numbers of not approvables and in table III.2 showing fewer cycles, suggest that the drug review and approval process is getting “cleaner.” This change may result from different applications submitted by the sponsors of new drugs, different FDA review procedures, or both. Without additional study, it is not possible to identify the reasons for this. However, all three sets of data (on first action, cycles, and major amendments) are consistent with a quicker review process. We conducted two regression analyses predicting review time, one for approved new drug applications and the other for applications that were not approved. As table IV.1 shows, we found that the length of time until approval was significantly affected by three factors—year of submission, drug priority, and sponsor experience. Applications submitted in later years were approved much faster than earlier applications (for example, 11 months quicker in 1992 than in 1987). Drug applications given therapeutic priority by FDA were approved nearly 10 months faster than standard drugs. Applications from sponsors that submitted many NDAs were approved more quickly than applications from relatively inexperienced sponsors (for example, applications from the most experienced sponsors were approved 4 months faster than those from inexperienced sponsors that were not affiliated with other sponsoring companies). (Table IV.1 predictors: year of submission, vs. 1987; priority drugs, vs. standard; new molecular entity, vs. not; sponsor experience, vs. inexperienced and unaffiliated. For applications first submitted from 1987 to 1992, N = 390 and R-squared = 0.24; the mean review time is 26.36 months.)
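The approval-time effects reported above can be combined into a rough predictor. The 33-month 1987 baseline is an approximation drawn from the report's figures on 1987 submissions, and applying the effects additively, as if independent, is a simplifying assumption rather than the study's actual model:

```python
def predicted_approval_months(submitted_1992, priority, most_experienced):
    months = 33.0                 # approximate 1987 average approval time
    if submitted_1992:
        months -= 11              # 1992 vs. 1987 submissions
    if priority:
        months -= 10              # priority vs. standard drugs
    if most_experienced:
        months -= 4               # most vs. inexperienced, unaffiliated sponsors
    return months

print(predicted_approval_months(True, False, False))  # 22.0
print(predicted_approval_months(True, True, True))    # 8.0
```

Even this crude sketch shows how a priority drug from an experienced sponsor, submitted late in the period, could move from a roughly 3-year review toward a 1-year review.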
In contrast, for drugs that were not approved, the only significant factor was year of submission. Applications submitted in later years were acted on more quickly than those submitted earlier (see table IV.2). Neither therapeutic priority nor the experience of the sponsor affected review time. It is important to reiterate that FDA does not definitively reject applications it does not approve. Therefore, FDA may take further action on some of the applications in this analysis. (Table IV.2 predictors: year of submission, vs. 1987; priority drugs, vs. standard; new molecular entity, vs. not; sponsor experience, vs. inexperienced and unaffiliated. For applications first submitted from 1987 to 1992, N = 308 and R-squared = 0.16; the mean review time is 24.93 months.) Table V.1 presents the results of a logistic regression analysis predicting NDA approval. The outcome variable is dichotomous: “1” indicates that the drug has been approved, “0” that it has not been approved. Fifty-six percent of the NDAs were approved. The data set for the regression consists of the 698 drugs first submitted between 1987 and 1992 that had final status values as of May 16, 1995 (two applications were pending). (Table V.1 predictors: year of submission, vs. 1987; priority drug, vs. standard; new molecular entity, vs. not; sponsor experience, vs. inexperienced and unaffiliated.) The regression uncovered two statistically significant factors—drug priority and sponsor experience. Priority drugs were approved at nearly four times the rate of nonpriority drugs. Applications from sponsors that submitted many NDAs during this period were approved more often than applications from relatively inexperienced sponsors (applications from the most experienced sponsors were approved three times more often than applications from inexperienced sponsors that were not affiliated with other sponsoring companies; applications from companies with mid-levels of experience were approved nearly twice as often).
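The odds ratios quoted above translate into approval probabilities as follows. The 1:1 baseline odds are an assumption chosen for illustration (roughly consistent with the 56 percent overall approval rate), and the exact ratios are approximations; the report gives only "nearly four times" and "three times" figures.

```python
def approval_probability(baseline_odds, *odds_ratios):
    # Multiply baseline odds by each odds ratio, then convert odds
    # to a probability: p = odds / (1 + odds).
    odds = baseline_odds
    for ratio in odds_ratios:
        odds *= ratio
    return odds / (1 + odds)

print(round(approval_probability(1.0), 2))            # 0.5  - baseline
print(round(approval_probability(1.0, 4.0), 2))       # 0.8  - priority drug
print(round(approval_probability(1.0, 4.0, 3.0), 2))  # 0.92 - priority drug, most experienced sponsor
```

This is why multiplicative odds ratios can look dramatic even when the underlying probability shift is bounded: the second ratio moves the probability from 0.8 to 0.92, not to 2.4.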
As mentioned in appendix II, basing our selection of NDAs for analysis on the year of submission has one significant advantage over the more traditional approach of examining NDAs by year of approval. That is, our approach avoids the contamination of the averages by whatever backlog exists. However, relying on year of submission can introduce another form of bias in that averages for approval time computed from all the 1993 and 1994 cohorts incorporate only a highly selective group of NDAs from those 2 years. As table VI.1 shows, the final status distribution for NDAs submitted in 1993 and 1994 is radically different from that for NDAs submitted earlier. Clearly, this is because many of the applications had not had time to “mature” by the time we collected our data. While more than 50 percent of NDAs submitted in every year from 1987 to 1992 were approved by May 1995, comparatively few of the NDAs submitted in 1993 and 1994 had been approved. Most importantly, the only NDAs from 1993 and 1994 that were approved were those that had been approved relatively quickly. As a result, the average approval time for NDAs submitted in 1987-92 is 26.4 months, while the average time for approved NDAs submitted in 1993 and 1994 is 12.6 months. Because of this bias, we excluded NDAs submitted after 1992 whenever we examined final status. (Table VI.1 reports final status as of May 16, 1995. Percentages may not total 100 because of rounding; percentages for 1993 and 1994 also exclude NDAs found “unacceptable for filing” for nonpayment of user fees.) However, we included NDAs from 1991 and 1992 because we found no evidence that including these years risks exposure to the censoring bias found in 1993 and 1994. As table VI.1 shows, the approval rates for 1991 and 1992 are equivalent to those from earlier years. That is, almost all the NDAs from 1991 and 1992 for which approval ultimately would be expected have already been approved by FDA.
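The censoring bias described above can be illustrated with invented numbers: if a cohort is observed only 16 months after submission, only its fast approvals are visible, and the observed average understates the cohort's true average.

```python
from statistics import mean

# Hypothetical approval times (months) for one submission cohort.
true_times = [8, 12, 15, 24, 30, 48]
cutoff = 16                      # months of observation since submission
observed = [t for t in true_times if t <= cutoff]

print(round(mean(true_times), 1))  # 22.8 - the mature cohort's average
print(round(mean(observed), 1))    # 11.7 - the censored average
```

This mirrors the gap between the 26.4-month average for mature 1987-92 cohorts and the 12.6-month average for the still-maturing 1993-94 cohorts.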
Approval times for those years are not likely to increase much. The question that remains is whether the trend in decreasing time that we observed for submissions between 1987 and 1992 continued for 1993 and 1994 submissions. That question cannot be answered definitively until the 1993 and 1994 cohorts have had time to mature. However, preliminary evidence suggests that the trend continues. Table VI.2 compares the percentage of all applications submitted before 1993 that were approved quickly to the same percentage for NDAs submitted in 1993 and 1994. As table VI.2 shows, approximately the same percentages of NDAs were approved quickly both before and after 1992. From this evidence, we have no reason to suspect that the trend of speedier drug approval for 1987-92 submissions was reversed for 1993-94 submissions. The United Kingdom’s equivalent of FDA is the Medicines Control Agency (MCA). MCA publishes information similar to that contained in FDA’s statistical reports, including data on workload (number and type of submissions) and time (how long it takes to review applications). MCA’s 1994-95 annual report indicates that the assessment of an application for a new active substance (the apparent equivalent of what FDA terms a new molecular entity) took an average of 56 working days. This figure stands in sharp contrast to FDA’s reports that show an average approval time of 20 months for applications for NMEs approved in 1994. No doubt, the sharp contrast in these two averages is one factor creating the impression that approval times are much shorter in the United Kingdom than they are in this country. However, closer examination of the data in MCA’s annual report shows that they should be compared to our data on FDA with caution. Most importantly, the drug review process in the United Kingdom is very different from that in the United States. In the United Kingdom, MCA’s assessment is only the first step in a multistage process of drug review and approval. 
All applications for new active substances are also automatically referred to a government body called the Committee on the Safety of Medicines (CSM). CSM’s expert subcommittees also assess the application, and these assessments, along with those from MCA, are provided to CSM. CSM then provides advice to the Licensing Authority, which actually grants or denies the product license. However, the rate of rejection of applications or requests for modifications or additional information is very high (99 percent for applications submitted 1987-89), although many of these issues are minor and quickly resolved. Applications with remaining unresolved issues then go through a formal appeals process that may involve additional work on the part of the applicant, reassessment by MCA or CSM, and, in rare cases, the involvement of another body called the Medicines Commission. Thus, the total time until the license is actually granted is considerably longer than the period of initial assessment by MCA. In contrast, the time FDA reports includes all the steps between an accepted NDA and the final decision on it. When one examines total time for both processes, the United Kingdom does not appear to be dramatically faster than the United States. One recent study compared approval times for 11 drugs that were approved in both countries during the period 1986-92. The median time in the United States (about 23 months) was 15 percent longer than the median time in the United Kingdom (20 months). The most recent data from MCA show that overall approval times are actually somewhat longer than that. These data indicate that MCA granted licenses for applications representing 32 new active substances during the 12-month period ending September 30, 1994. The median time for granting a license was 30 months and the average was 24 months. The fastest license was granted in about 4 months, the slowest in 62 months. 
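The 15 percent figure above can be verified directly from the quoted medians (23 months in the United States versus 20 months in the United Kingdom for the 11 drugs approved in both countries during 1986-92):

```python
us_median_months = 23
uk_median_months = 20

percent_longer = (us_median_months / uk_median_months - 1) * 100
print(round(percent_longer))  # 15
```
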
FDA’s data for the calendar year ending December 31, 1994, indicate that the agency approved a total of 22 new molecular entities. The median approval time was 18 months, average approval time about 20 months. The fastest approval reported by FDA took about 6 months and the slowest about 40 months. Thus, the most recent data show that approval times for NMEs are actually shorter in the United States. In addition, a broader perspective shows that approval processes in many industrialized nations may be converging. Approval times over the past 10 years for France, Germany, Japan, the United Kingdom, and the United States all seem to be moving toward the 2-year point. This report was prepared by Martin T. Gahart, Michele Orza, George Silberman, and Richard Weston of the Program Evaluation and Methodology Division.

FDA User Fees: Current Measures Not Sufficient for Evaluating Effect on Public Health (GAO/PEMD-94-26, July 22, 1994).
FDA Premarket Approval: Process of Approving Lodine as a Drug (GAO/HRD-93-81, April 12, 1993).
FDA Regulations: Sustained Management Attention Needed to Improve Timely Issuance (GAO/HRD-92-35, February 21, 1992).
FDA Drug Review: Postapproval Risks 1976-1985 (GAO/PEMD-90-15, April 26, 1990).
FDA Resources: Comprehensive Assessment of Staffing, Facilities and Equipment Needed (GAO/HRD-89-142, September 15, 1989).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Room 1100, 700 4th St.
NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Summary: Pursuant to a congressional request, GAO provided data on the Food and Drug Administration's (FDA) new drug application (NDA) process, focusing on: (1) whether the timeliness of the review and approval process for new drugs changed in recent years; (2) the factors that distinguish NDAs that are approved quickly from those that take longer to approve; (3) what distinguishes NDAs that are approved from those that are not; and (4) how the FDA drug approval process compares with the approval process in the United Kingdom. GAO found that: (1) the average number of months for NDAs to be approved by FDA decreased from 33 months in 1987 to 19 months in 1992; (2) the overall decrease in approval times was achieved through gradual reductions in approval times for NDAs submitted from 1987 to 1992; (3) the priority FDA assigns to an NDA and the experience of its sponsor determine the timeliness and likelihood of approval; and (4) although comparable data are limited, the review times for FDA and its counterpart agency in the United Kingdom are similar.
The Creekbed facility was built in 1937 as a German air force hospital. The U.S. military acquired it at the conclusion of World War II and used it as a hospital until the late 1990s. The facility was slated to revert to the German government in 2000. From 2000 to 2001, State conducted discussions with the German government to acquire the property. In July 2002, Creekbed was officially transferred from the German government to the State Department for a cost of $30.3 million. Since July 2002, OBO has been determining which renovations, including security and safety enhancements, will be necessary to prepare the facility to house the U.S. government’s Consulate General in Frankfurt. The design and renovation cost for the facility is estimated at $49.8 million, bringing total project costs to an estimated $80.1 million. State estimates that, if Creekbed had not been available, acquiring a site and building a comparable facility to meet U.S. government needs in Frankfurt would have cost roughly $260 million. The facility consists of 13 major interconnected buildings that will provide 325,000 square feet of usable office space. In addition, an 85,000-square-foot warehouse will be built on the property. The site also contains significant areas of land that can be used for construction and future expansion of operations if necessary. OBO stressed that the renovation will focus on building a perimeter wall, warehouse, and access controls; and performing basic renovation, such as painting and installing upgraded wiring. OBO does not plan to tear down walls, install air conditioning, or do other extensive work. Renovation of the facility is scheduled from September 2003 to March 2005. State projects that by mid-2005, Creekbed will be fully operational. According to State’s business plan to purchase the facility, the Creekbed project had four fundamental objectives.
First, the renovated facility would provide secure office space that is a vast improvement over security afforded by existing facilities in Frankfurt. Second, Creekbed would provide space for operations currently located at the Rhein Main Air Force Base, which the U.S. government has agreed to vacate in 2005 and return to the German government. Third, Creekbed would provide office space for staff currently working at the U.S. embassy in Berlin who will not have space in the new U.S. embassy building that is scheduled for construction. Finally, Creekbed has space to accommodate a number of regional staff from outside Germany who are assigned to embassies and consulates with security vulnerabilities. In its business plan, State identified several agencies from outside Germany that would be considered for relocation to Frankfurt. According to State, the Consul General in Frankfurt, and officials at each of the agencies in Frankfurt that we visited, Frankfurt is considered a good location as a regional hub because of its location and transportation links. They also noted that many of the offices currently assigned to the U.S. consulate have regional responsibilities. Developing the Frankfurt facility as a regional center is consistent with recommendations of the Overseas Presence Advisory Panel calling for use of regional centers and relocation of personnel to reduce security vulnerabilities at overseas posts. It is also consistent with a rightsizing framework we developed to support decision-making on overseas staffing. The framework encourages decisions to be based on a full consideration of the security, mission, and cost factors associated with each agency’s presence and outlines rightsizing options, including regionalization of operations. OMB also cited this project as allowing U.S. agencies to put in one central location appropriate administrative functions now performed in multiple posts around Europe and beyond. 
Furthermore, the House Conference Report for the Consolidated Appropriations Resolution 2003 stated that the conferees support “the Department’s effort to initiate a consolidation, streamlining and regionalization of country and multi-regional staffing in Frankfurt, Germany.” The report also said, “The success of this initiative will be measured largely by the staffing reductions made possible at less secure locations throughout Germany, Europe, Eurasia, Africa and the Near East.” State indicated it has renewed its efforts to identify staff from posts outside Germany who could be relocated to the new Frankfurt regional center. According to State, this process will consider rightsizing factors such as security, mission requirements, and costs as well as possible changes in functions that would make operations more efficient. State’s earlier efforts were prematurely halted in August/September 2002 because staffing planners mistakenly interpreted space planning estimates as indicating the regional center would be fully occupied. However, in May 2003, we analyzed State’s staffing requirements for Creekbed in relation to the facility’s capacity and found additional space was available. We briefed both State and OMB officials on the capacity issue. OMB urged State to reopen the staffing process and to consider relocating more regional staff to Frankfurt. In May 2003, State announced that it had restarted a process to identify staff from posts outside Germany who could be relocated to take advantage of Creekbed’s available office space and enhanced security. State is reassessing the facility’s space plans and staffing projections for all agencies and is focusing on identifying which additional regional activities might be moved to the Frankfurt center, especially where this action would improve security for U.S. government personnel. State also indicated that it would pursue a rigorous rightsizing and regionalization strategy in staffing the Frankfurt facility.
State has said that under its new effort, it will analyze security, mission, and cost factors associated with each agency’s regional operations at posts in Europe, Eurasia, Africa, and the Near East. On June 12, 2003, State sent formal guidance to the ambassadors at each post, directing them to identify staff who might transfer to the regional center in Frankfurt. To help the posts identify positions for relocation, State plans to conduct a detailed, Web-based survey based on our rightsizing framework. State plans to have revised staffing estimates for Frankfurt at the end of 2003. The Frankfurt facility will have a capacity of about 1,100 desk positions. The facility will have sufficient space to consolidate existing diplomatic operations in Frankfurt as well as bring in significant numbers of personnel from posts outside Germany to expand regional operations. Positions currently in Germany envisioned to relocate to the Frankfurt regional center include a total of about 900 personnel from the current Frankfurt consulate, offices at the Rhein Main Air Force Base, and the embassy in Berlin. Based on current capacity estimates, there is also desk space for about 200 staff who could be relocated from other posts. To help address staffing decisions, State also plans to undertake what it characterizes as a “think outside the box” exercise by asking embassies to examine whether any functions in Europe or elsewhere can be reengineered to be more effective. Our rightsizing framework encourages decision makers to consider reengineering actions such as competitively sourcing support functions, regionalizing contract activities, and centralizing warehouse operations. This kind of reengineering, which could help reduce costs of support functions and staffing requirements for embassies, should be weighed along with the options for relocating staff to regional centers. 
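The cost and capacity figures cited in this section can be cross-checked with simple arithmetic (all values are the report's estimates, in millions of dollars and desk positions):

```python
purchase_cost = 30.3             # transfer from the German government, July 2002
design_and_renovation = 49.8     # OBO estimate
total_project = purchase_cost + design_and_renovation
print(round(total_project, 1))   # 80.1 - the estimated total project cost

new_construction_estimate = 260.0
print(round(new_construction_estimate - total_project, 1))  # 179.9 avoided

desk_capacity = 1100
relocating_from_germany = 900    # consulate, Rhein Main, and Berlin overflow
print(desk_capacity - relocating_from_germany)  # 200 desks for staff from other posts
```

The roughly $180 million difference and the 200 spare desks are the quantitative basis for State's regionalization case.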
Although State has renewed its process for staffing Creekbed, its comments on a draft of this report lead us to question State’s commitment to the process. State’s comments and our evaluation of them are discussed in more detail on page 10. Although substantial space exists for relocating staff from other posts, State documents indicate that the department may encounter some resistance among agencies identified to relocate. While some agencies and offices agree that relocation would improve their security, State anticipates that they will raise concerns about their relative ability to effectively carry out their mission from Frankfurt, the cost of relocating staff from other locations, the convenience of airline connections, and costs related to living and operating out of Germany. These issues indicate that State and other agencies will have to carefully weigh the security, mission, and cost trade-offs associated with staffing relocation decisions. In some cases, security issues may be so compelling that some staff will have to be relocated. From September 2001 to August 2002, State tried to identify positions with regional responsibilities that could be relocated to Creekbed. Although State initially identified potential positions, State halted its efforts in August/September 2002. In September 2001, State initiated discussions with key agencies operating at its European posts and asked them to consider relocating to Frankfurt if it would be substantially more secure than their current facilities. This process was more formally articulated in a March 2002 State cable to 48 European and Eurasian posts having regional coverage, asking ambassadors to review their staffing with an eye toward relocating to Frankfurt staff whose primary responsibilities were regional. Although many of the posts were slow to respond, some listed possible candidates for relocation. 
For example, one post identified three agencies with a combined total of more than 50 staff members whom the ambassador believed should be considered for relocation. Although this effort initially identified positions for possible relocation, it was halted when planners in State's Bureau of European and Eurasian Affairs received a document from OBO in August 2002 stating that "the facility is at 100% occupancy" based on a projected staffing level of about 900 desks. OBO later explained that this document meant that the facility was filled to the requirements level of 900 positions but did not mean the facility was filled to capacity. OBO acknowledged that the wording of the document was confusing. However, State officials told us that based on that document, the department concluded there would be no additional room in the facility for staff beyond the 900-desk staffing level. (The 900-desk projection only included staff currently in the Frankfurt consulate offices, staff currently at the Rhein Main Air Force Base, newly created staff positions, and staff "overflow" from the U.S. embassy in Berlin, Germany.) As a consequence, in August/September 2002, State stopped its efforts to relocate staff from posts outside Germany. For example, in September 2002, State's Under Secretary for Management sent a letter to the U.S. Agency for International Development, one of the key agencies initially identified by State as having staff potentially available for relocation from outside Germany, indicating that the Frankfurt facility would be fully occupied. Beginning in March 2003, we performed a detailed analysis of State's staffing requirements for Creekbed in relation to the facility's capacity. We found that the facility had substantial additional capacity beyond the 900-desk level, affording opportunity for the relocation of personnel from posts outside Germany.
Before visiting the Frankfurt facility in early May 2003, we interviewed the private contractor officials responsible for the space planning and concept design for Creekbed, who confirmed that there was space available for additional staff. While at the facility, we examined space allotted for two agencies and found the space significantly exceeded the number of positions slated to fill it. For example, one agency projected 28 office personnel for the facility but was allotted space for about 38 offices. Another agency also projected 28 office personnel but was allotted space for about 50 offices. In addition, we found that there was potentially more office space available at Creekbed because some agencies did not conduct a rigorous staffing process before submitting their staff projections. During our fieldwork in Frankfurt, we reviewed the documented 2002 staffing projections with the agencies in Frankfurt that will be moving into Creekbed and found that some agencies disputed their earlier projections. Some agencies had overestimated their individual staffing requirements, which were eventually curtailed by their headquarters in Washington, D.C. We have previously reported that U.S. agencies do not take a systematic approach to determining long-term staffing needs for embassy buildings scheduled for construction. We discussed these issues with the Consul General and the facility manager in Frankfurt, who agreed that the facility had substantial space to accommodate staff from other posts. When we completed our fieldwork in May 2003, we also discussed our observations with officials in State’s Bureau of European and Eurasian Affairs, the Office of Management Policy, and OBO; and with OMB. They, too, agreed that there was additional space. State then announced that it was renewing its efforts to regionalize operations in Frankfurt. 
In a May 2003 letter to OMB, State’s Under Secretary for Management said that the department was reopening the space plan for the facility and anticipated that Creekbed would accommodate significant additional positions. State indicated that it took this action because OMB urged it to do so. In a June 2003 cable to all posts, State said that it is considering which additional activities might be relocated to Creekbed. State emphasized that its renewed effort is part of its overall rightsizing strategy. Successful staffing of the Frankfurt facility consistent with State’s regionalization goals is a critical step in efforts to rightsize U.S. overseas operations. In fact, it may be the single most visible and concrete example of a rightsizing initiative by the U.S. government in the near term. We believe that the revised staffing plans for Creekbed will provide State a significant opportunity to work with other agencies to regionalize diplomatic operations in Europe and develop a more rational, secure, and cost-effective overseas presence. The facility has ample, available office and other space that, when fully renovated, will provide a secure alternative location to conducting regional operations at embassies and consulates with physical security deficiencies. Deciding which U.S. government positions will be relocated to the facility will require a careful consideration of the security, mission, and cost factors associated with agencies’ presence at individual posts. In some situations, State may encounter agency resistance to relocation. However, security considerations may be so compelling that relocation of certain staff may be necessary. In other cases, State and other agencies will have to work hard to reach agreement on the relative importance of the security, mission, and cost factors associated with the relocation decision and how the factors should be weighed. 
More importantly, it will require a strong and continual commitment by State to the broader objective of rightsizing the U.S. overseas presence. OMB and the Department of State provided written comments on a draft of this report (see apps. I and II). OMB said that it is working closely with State to develop a plan of action to appropriately staff the new facility, to assess if staff could be shifted from their current overseas location to Frankfurt, and to discuss potential moves to Frankfurt with headquarters staff at all agencies. OMB also expressed the hope that this facility will serve as an example of a best practice for the development of other regional centers around the world. State said that OBO’s estimate that the facility could accommodate about 1,100 desk positions represented a maximum theoretical capacity and that the actual capacity would probably be less. We subsequently asked OBO, which is State’s expert on overseas real estate and facility issues, if it was confident of its capacity estimate. OBO reiterated its estimate stating that it has identified space in the facility for about 1,100 personnel. However, even if the capacity of the facility were slightly less, there would still be ample room to accommodate some staff currently assigned to other locations outside Germany. State also noted that our report did not identify specific agencies or staff that we believe should be relocated to Frankfurt. State said this suggested that we do not believe that there are suitable candidates for relocation. This is not the case. As we noted in this report, State’s business plan for the purchase of the facility indicated it has space to accommodate regional staff from outside Germany who are assigned to embassies with security vulnerabilities. Moreover, State’s plan identified 73 staff from five agencies at posts outside Germany for potential relocation. 
As further noted in this report, State's subsequent efforts at its European and Eurasian posts identified suitable candidates for relocation, but that exercise was halted because State mistakenly believed that the facility did not have sufficient space. Our work at the four posts outside Germany validated the existence of significant numbers of staff with regional responsibilities, many of whom were located in buildings with substandard security. We did not identify specific candidates for relocation in this report because State said that it was conducting a full assessment of staffing options for Frankfurt, and we did not want to preempt that assessment. However, in our briefings with State and OMB officials, we discussed our fieldwork observations and told them that there were many staff who could be considered for relocation. For example, there were at least 87 staff with regional responsibilities in Vienna and Budapest who were assigned to space with substandard security. Furthermore, we noted that in 2002, we had identified regional positions in Paris that could be considered for relocation to Frankfurt based on security, mission, and/or cost factors. State also said that it believes, based on its follow-up to the 1999 Overseas Presence Advisory Panel report, that the U.S. government's overseas presence is already rightsized. We have previously pointed out the substantial weaknesses in the pilot studies that provided the basis of State's follow-up. State subsequently indicated that it intended to reinvigorate the rightsizing process consistent with the President's Management Agenda, OMB's directives, and our rightsizing framework. In our view, State's comments are inconsistent with its (1) stated expectations that the Frankfurt project will achieve the department's key rightsizing and regionalization goals and (2) plans to conduct a full assessment of staffing options for the Frankfurt regional center.
In addition, State’s comments lead us to question whether the department seriously intends to implement its business plan for the Frankfurt center regarding relocating regional staff, as well as its commitment to the overall rightsizing process. We believe that State’s actions regarding staffing of the facility warrant oversight. State also provided technical comments that we have incorporated into this report, as appropriate. In view of State’s comments on a draft of this report and the continued importance of rightsizing the overseas U.S. presence consistent with security, mission, and cost factors, the Congress may wish to direct the Secretary of State to submit a detailed staffing plan for the Frankfurt facility that specifically lists positions to be relocated to Frankfurt. To determine State’s process for creating staffing projections for the Frankfurt regional center, we reviewed documents and interviewed officials in State’s Bureau of European and Eurasian Affairs, OBO, and Office of Management Policy. We visited the current consulate facilities in Frankfurt and spoke with the Consul General and appropriate State officers about the current security status of their consulate buildings as well as the multiple projections of staff relocating to the facility. We spoke to representatives from agencies that will be moving to the Creekbed facility. We also toured the facilities at the Rhein Main Air Force Base that are scheduled to be relocated by June 2005 as well as the currently empty Frankfurt regional center facility. In addition, we visited other posts in Europe—Paris, Rome, Budapest, and Vienna—to determine (1) the extent to which each has agencies and personnel performing regional functions that could be considered for relocation to Frankfurt based on the nature of their mission and/or their security vulnerability and (2) what actions these embassies had taken to identify staff who could be considered for relocation to the Frankfurt facility. 
Specifically, at these posts, we interviewed not only the agencies that were earlier identified by State or by their ambassadors as potential candidates for relocation, but also officials from other agencies with regional responsibilities. To determine the facility's capacity to accommodate staff from outside Germany, we interviewed the private contractor officials in Albany, New York, responsible for the initial feasibility design to discuss their space planning and concept design for the Frankfurt center. We also compared OBO's capacity estimates with staffing requirements for the facility. In addition, during our visit to Creekbed, we compared the size of office space allocated to two different agencies in Frankfurt with the number of people in those agencies. We also met with officials in OMB to obtain documentation on the plans for purchasing the facility and to discuss State's approach to staffing it. We conducted our work from February 2003 through August 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Director of OMB and the Secretary of State. We are also sending copies of this report to other interested Members of Congress. Copies will be made available to others upon request. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. John Brummet, Janey Cohen, Lynn Moore, Ann M. Ulrich, and Joseph Zamoyta made key contributions to this report. | The State Department plans to spend at least $80 million to purchase and renovate a multibuilding facility in Frankfurt, Germany. The facility, known as Creekbed, is scheduled to open in mid-2005. The project is a key rightsizing initiative under the President's Management Agenda to reassess and reconfigure the staffing of the U.S. overseas presence.
Creekbed is expected to achieve the department's major rightsizing and regionalization goals. The Office of Management and Budget expects the project to serve as a model for developing other regional centers. GAO was asked to determine whether State fully examined the potential for relocating regional staff from outside Germany to Creekbed. The Department of State indicated it is currently renewing earlier efforts to relocate staff from outside Germany to the new Frankfurt regional center. State said it would pursue a rigorous rightsizing and regionalization strategy in staffing the Frankfurt facility. State prematurely stopped its earlier efforts to relocate regional staff from other posts in August/September 2002 because staffing planners interpreted space planning estimates as indicating that the regional center would be fully occupied. However, according to GAO analysis, the facility was not full and significant additional space existed. After touring the facility and studying staffing requirements and space allocated for specific agencies, GAO found there was space available for additional staff. Successfully staffing the Frankfurt regional facility has the potential to optimize its use and achieve broader regionalization objectives. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Forest Service, within the Department of Agriculture, manages 191 million acres of national forests and grasslands for multiple uses under a wide and complex set of laws and regulations. For fiscal year 1993, the Forest Service reported selling 4.5 billion board feet of timber from the lands for a total bid value of $774.9 million. Developing allowable sale quantities (ASQs) is part of a legislatively required process specified in the Forest and Rangeland Renewable Resources Planning Act (RPA) of 1974 (16 U.S.C. 1600-1614), as amended by the National Forest Management Act (NFMA) of 1976 (16 U.S.C. 1600-1614). RPA requires the Forest Service to develop long-range planning goals for activities on rangelands and in national forests, and NFMA directs the Forest Service to develop detailed management plans for national forests and to regulate timber harvests to ensure the protection of other resources. The Forest Service has supplemented this guidance with regulations, first issued in 1979 and revised in 1982, and with a manual and handbooks for forest-level use. (See apps. I and II for further discussion of these laws, regulations, and policy guidance.) The Forest Service also has management responsibilities that extend beyond timber production, including such other activities as protecting natural resources like air, water, soils, plants, and animals for current and future generations. The Multiple Use-Sustained Yield Act of 1960 (16 U.S.C. 528-531) gives the Forest Service authority to manage lands for multiple uses and to sustain in perpetuity the outputs of various renewable natural resources. In carrying out its responsibilities, the Forest Service must also comply with other requirements for identifying and considering the effects that activities may have on natural resources. For example, the National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.)
requires the preparation of environmental impact statements for major actions that may significantly affect the quality of the human environment. National forest management can be divided into three main processes—planning, budgeting, and (for timber resources) preparing timber sales. These processes are summarized below and explained further in appendix III. Forest Service officials use the guidance in federal laws and Forest Service regulations and policies to develop a forest-specific plan for managing lands and resources (forest plan) that explains how the various forest resources will be managed for the next 10 to 15 years. The planning process is complex, involving extensive surveys of forest resources, the use of computer models, the development of management alternatives, and substantial public participation. The process is also lengthy, taking generally 3 to 10 years to complete. Part of this process involves developing the ASQ, which is the Forest Service’s estimate of the maximum harvest consistent with sustaining many other uses of the forest. Although the ASQ covers the first 10 years of the forest plan, it is usually expressed as an annual average (i.e., one-tenth of the total ASQ). Timber sales in any year may fluctuate above or below the average annual ASQ as long as the cumulative sales for the 10-year period do not exceed the total ASQ—that is, the maximum amount to be sold over the 10-year period. Each forest’s ASQ is affected by factors unique to that forest, such as the species of trees, the proportion of the acreage devoted to timber production (as compared with other uses), and the market demand for timber. When the forest plan has been completed and put in place, forest officials monitor and evaluate the results so that the effects of implementing the plan can be measured, the measurements can be analyzed, and necessary changes, such as a change in the ASQ, can be made. 
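The ASQ accounting described above (a 10-year ceiling usually quoted as an annual average, with yearly sales free to fluctuate around that average) can be sketched as a simple check. The function and the sample volumes below are hypothetical illustrations, not figures or methods from this report:

```python
# Illustrative sketch (not from the report): the ASQ is a decade-long
# ceiling usually expressed as an annual average; individual years may
# exceed the average as long as cumulative sales stay within the total.

def within_asq(annual_sales, average_annual_asq):
    """Return True if cumulative sales over the plan period stay within
    the total ASQ (average annual ASQ x number of plan years)."""
    total_asq = average_annual_asq * len(annual_sales)
    return sum(annual_sales) <= total_asq

# Hypothetical volumes in million board feet over a 10-year plan period:
sales = [210, 170, 195, 180, 200, 185, 175, 190, 182, 178]
print(within_asq(sales, 189))  # -> True: several years exceed the
                               # 189 average, but the decade total complies
```

The point of the sketch is only that compliance is judged against the cumulative total, not against any single year's average.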
Generally, 2 to 3 years before the fiscal year in which the funds will actually be spent, each of the Forest Service’s nine regions develops a budget request for its national forests. The budget requests are based partly on the overall objectives for each forest plan as well as guidance from the administration. These requests are then aggregated at the national level, where they are subject to review and change by Forest Service headquarters, the Department of Agriculture, the Office of Management and Budget, and the Congress. Yearly congressional appropriations are then passed down from Forest Service headquarters to the regions, and then from the regions to the individual forests. Preparing timber sales usually takes 3 to 8 years and consists of six steps, or “gates.” The early steps involve identifying the timber to be offered for sale and conducting environmental studies of the areas to be affected; the later steps involve advertising and selling the timber. Because timber is offered for sale from most forests each year, in any given year timber sales may be found at various steps in the process; some sales are at the beginning and others are at the last step before the timber is made available for harvest. Several factors contributed to bringing timber sales below average annual ASQs from fiscal years 1991 through 1993 at all five of the national forests we reviewed. At four of these five forests, timber sales also decreased over the 3-year period. (See app. IV for forest-by-forest totals.) For example, at the Mt. Hood National Forest, which had an average annual ASQ of 189 million board feet, ASQ-related timber sales were approximately 51 million board feet in 1991 and 38 million board feet in 1993. The Ouachita National Forest was the only forest whose timber sales were higher in 1993 than in 1991. Its ASQ is approximately 147 million board feet, and it had ASQ-related timber sales of about 40 million board feet in 1991 and 131 million board feet in 1993. 
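Using the figures reported above, the gaps between sales and ASQs can be expressed as a share of each forest's average annual ASQ. This is a minimal illustrative calculation, not a method used in the report:

```python
# Illustrative arithmetic using the report's figures: fiscal year 1993
# sales as a percentage of each forest's average annual ASQ.

def pct_of_asq(sales_mmbf, avg_annual_asq_mmbf):
    """Sales volume as a rounded percentage of the average annual ASQ."""
    return round(100 * sales_mmbf / avg_annual_asq_mmbf)

print(pct_of_asq(38, 189))   # -> 20  (Mt. Hood, FY1993)
print(pct_of_asq(131, 147))  # -> 89  (Ouachita, FY1993)
```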
Factors contributing to differences between ASQs and timber sales at the five forests we reviewed included limitations in data and estimating techniques, the emergence of new forest management issues and changing priorities, and rising or unanticipated costs associated with preparing and administering timber sales. At four of the five forests, officials said the precision of the ASQ was affected by limitations in data and estimating techniques. To develop the ASQ, officials said they had used the best information available at the time and a variety of estimating and computer modeling techniques. However, they noted that these estimating and computer modeling techniques carry an inherent risk of imprecision. For example, estimates of timber volumes may be based on analysis of aerial photographs and sample tracts within a forest. More detailed, on-the-ground analysis may later reveal that actual timber volumes differ somewhat from the estimated quantities, as the following examples show: After estimating ASQ volumes for planning purposes, officials at the Deschutes National Forest discovered that they had overestimated the size of the timber inventory in timber harvest areas. They had based their inventory on an average volume that might have been accurate for the forest as a whole but was not accurate within specific areas where sales were planned. To correct this weakness, they redesigned the inventory process and began implementing the changes in 1993. At the Chattahoochee-Oconee National Forest, officials said that they had identified limitations in their original estimates of the timber yield. Forest officials had included all potentially saleable trees of all species (the forest has about 40 different species of trees) in their estimates of the timber yield during the planning process. However, as they began to implement their forest plan, they found that buyers desired only some of the species.
In addition, the ASQ included yields from some forest land—such as areas next to visually sensitive travelways—that could not be fully harvested. Forest officials acknowledged that including these possible yields lowered the accuracy of their ASQ estimate. To correct these problems, forest officials plan to adjust their yield estimates to include only timber with established markets and to develop a more precise way to identify acres available for harvest. Officials at the Gifford Pinchot National Forest said they believe their ASQ could have been based on an overestimate of the number of acres available for timber production. In later analyzing timber management areas, forest officials found that fewer acres were available for harvest than originally estimated. The forestwide estimates used to develop the ASQ did not consider some factors—such as wildlife habitat, sensitive plant species, or campground uses—later encountered in on-the-ground examination while preparing timber for sale. To improve the accuracy of their estimates, forest officials have proposed collecting more information before determining the number of acres available for timber production. The forest plan, which incorporates the ASQ, reflects the Forest Service’s determination at the time the plan is developed of how timber production and other uses of the forest will be managed over the next 10 to 15 years. After these decisions have been made and an ASQ has been established, however, new forest management issues and changing priorities often emerge that directly affect how the forest will be managed. These changes may also affect the amount of timber that can be sold. The most dramatic example of such changes for the forests we reviewed occurred in the Pacific Northwest Region. 
In mid-1990, when the forest plans containing the ASQs for the three Pacific Northwest forests were ready to be implemented, the Department of the Interior’s Fish and Wildlife Service announced its decision to list the northern spotted owl as a threatened species under the provisions of the Endangered Species Act. Much of the land inhabited by the spotted owl is managed by the Forest Service. Several environmental groups challenged the process used to implement spotted owl management, and on May 23, 1991, many timber harvests in the three forests were halted by a court injunction. Forest Service officials said this injunction and similar legal challenges were primarily responsible for the difference between ASQs and timber sales in all Pacific Northwest forests. Sharp declines in the volume of timber sold from the Gifford Pinchot National Forest illustrate the effects of challenges and the court injunction on timber sales. This forest had an average annual ASQ of 334 million board feet. In fiscal year 1991, the forest sold 110.2 million board feet of timber that was chargeable to the ASQ and had been harvested outside the owl habitat. In fiscal year 1992, that total dropped to 19.8 million board feet, and in fiscal year 1993 it further declined to 14.8 million board feet. According to the forest’s monitoring report for 1993, “the shortfall continues to be the result of the owl controversy and recent court decisions.” While the Southern forests we reviewed were not affected by an event as sweeping as the spotted owl controversy, their harvests were likewise affected by events that reflected changes in the relative priorities assigned to timber sales and other uses of the forest. 
These changes generally did not result in court challenges but rather in appeals filed by individuals or groups during an administrative process established by the Forest Service to review challenges to its decisions on issues ranging from the size of a forest’s ASQ to aspects of a particular timber sale. Under this process, Forest Service personnel review and decide on the appeals. At the Chattahoochee-Oconee National Forest, for example, the majority of appeals challenged individual timber sales that were below cost or had been designed without proper environmental evaluations. According to a forest official, in fiscal year 1993 a total of 10 appeals challenged 8 proposed timber sales, and in fiscal year 1994 (through June 29), a total of 44 appeals challenged 22 proposed timber sales. The Forest Service is revising its policies to respond more effectively to changing priorities for uses of the nation’s forests. On June 4, 1992, the Chief of the Forest Service announced a new policy of multiple-use ecosystem management for the national forests and grasslands. Four of the five forests in our review are included in pilot projects proposed for fiscal year 1995 as tests of ecosystem management’s potential to better ensure the sustainable long-term use of natural resources. One project addresses common problems associated with air and water quality, conservation, biological diversity, and sustainable economic growth in the southern Appalachian highlands, a region that includes the Chattahoochee-Oconee forest. In an August 1994 report on ecosystem management, we concluded that such projects afford an opportunity to test this approach to land management. The three Pacific Northwest forests we reviewed are included in another ecosystem management pilot project that could affect the current process for developing ASQs. 
In response to the spotted owl controversy, the administration created an interagency team to develop alternatives that would “attain the greatest economic and social contribution from the forests of the region and meet the requirements of the applicable laws and regulations.” In April 1994, the interagency team produced a land management plan based on broad land areas, such as river basins and watersheds. Forest Service officials indicated that under the new plan, although an ASQ would still be developed in order to comply with the requirements of the National Forest Management Act of 1976, individual revised forest plans might also include a “probable sale quantity” to reflect the uncertainty associated with selling timber at the ASQ. For example, for the three Pacific Northwest forests we reviewed, the new land management plan identifies an average annual probable sale quantity of 157 million board feet, as compared with the existing average annual ASQ of 621 million board feet. The difference is due primarily to the allocation of fewer acres for timber production. Forest Service officials cite the timing of the budget process, as well as new forest management issues and changing priorities, as contributing to the shortfall in the moneys available to prepare timber sales and administer harvests at ASQ levels. According to these officials, budget requests must be prepared 2 to 3 years before the funds are actually received, and emerging issues and changing priorities may render the original request insufficient, as in the following instances: At the Chattahoochee-Oconee National Forest, officials estimated that the costs per million board feet to prepare timber sales and administer harvests rose by approximately 36 percent between 1988 and 1993 when the Forest Service began to reduce its use of clearcutting and increase its use of other harvesting methods. 
These other harvesting methods, such as single-tree and group selection methods, require Forest Service personnel to mark each tree planned for harvest. Because this and other activities increase the cost and time associated with preparing each timber sale, available staff and funds cannot be spread over as many sales as originally planned. At the Mt. Hood National Forest, officials said that in recent years they had underestimated their costs to prepare timber sales and administer harvests when developing their annual budget requests. They noted that between fiscal years 1990 and 1991, preparation and administration costs rose by about 39 percent, and between fiscal years 1991 and 1992, these costs rose by an additional 147 percent. Factors contributing to these increases in costs included requirements for (1) conducting surveys of cultural and historical resources and of threatened and endangered species that took more time and resources than had been anticipated and (2) switching from clearcutting to other harvesting methods and shifting timber harvests out of owl habitat to comply with court injunctions. While preparation and administration costs increased by only 8 percent between fiscal years 1992 and 1993, forest officials believe that they will increase by another 51 percent between fiscal years 1993 and 1995 as the new Pacific Northwest forest plan is implemented. Given the uncertainties inherent in developing ASQs, shortfalls between ASQs and timber sales should be expected. An ASQ is, to some extent, imprecise because it is based on estimating techniques and forestwide data rather than on detailed, on-the-ground data from the timber sale area. Even more significantly, however, an ASQ represents a planning “snapshot” that can quickly become outdated as new forest management issues emerge and priorities change. 
As the value placed on timber production shifts toward other forest uses, ASQs established under earlier, somewhat different priorities may no longer reflect estimated sale quantities. Although forest planning allows ASQs to be updated as needed, the experience of the five forests we reviewed indicates that events may quickly overtake even revised ASQs. We discussed the facts and observations contained in a draft of this report with officials from Forest Service headquarters, including the Deputy Director, Budget Analyst, Staff Assistant, and Interdisciplinary Forester (Forest Plans) within the Timber Management Staff; the Planning Specialist within the Land Management Planning Staff; and the Interdisciplinary Analyst within the Program Planning and Development Staff. We also discussed the facts and observations with senior regional and forest officials from the two regions that we visited. In general, these officials agreed that the information was accurate, and we have incorporated changes that they suggested where appropriate. To determine why timber sales often fall short of ASQs, we met with Timber Management, Program Development and Budget, and Land Management Planning officials from Forest Service headquarters; the Pacific Northwest Regional Office in Portland, Oregon; and the Southern Regional Office in Atlanta, Georgia. We also met with Forest Service officials from the Chattahoochee-Oconee, Deschutes, Gifford Pinchot, Mt. Hood, and Ouachita National Forests. We selected these two regions because they had the largest timber sales for fiscal year 1993. We judgmentally selected the specific forests because of their geographical proximity to the regional offices. In addition, we selected the Ouachita National Forest because it had begun to practice ecosystem management before the Forest Service decided to implement this land management approach agencywide. 
We reviewed documentation provided by these officials, including forest plans, budget requests, and monitoring reports. We did not, however, evaluate the ASQ calculations made for the five forests but used the figures cited in the forest plans as a starting point for discussing how the figures were determined. We also discussed the budgeting process with officials from the Office of Management and Budget and the Department of Agriculture in Washington, D.C. We discussed forest planning procedures with representatives of the Congressional Research Service and reviewed additional documents on forest planning from the Office of Technology Assessment. In addition, to determine the role the Congress plays in the budget deliberations, we met with staff from both the House and Senate appropriations subcommittees who review the Forest Service’s budget requests. We conducted our review between August 1993 and August 1994 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees, the Secretary of Agriculture, and the Chief of the Forest Service. We will make copies available to others upon request. This work was done under the direction of James K. Meissner, Associate Director for Timber Management Issues, who may be reached at (206) 287-4810. Other major contributors to this report are listed in appendix V.

Purposes of the principal statutes governing national forest management:
- To provide the President with the authority to create forest reserves out of forested public domain lands.
- To identify purposes for creating forest reserves, including improving and protecting forests within reservations, protecting water supplies, and providing the public with a continuous supply of timber.
- To provide a constant source of funding for the reforestation of harvested lands and to protect and improve nontimber resources in timber sale areas.
- To ensure the management of national forest resources and products for multiple uses and sustained yield.
- To preserve natural areas of national forests for recreation and other uses. Prohibits timber harvesting in these areas.
- To preserve certain rivers and surrounding areas. Limits timber harvesting in the surrounding areas.
- National Environmental Policy Act (NEPA): To require federal agencies to evaluate and document the impact on the environment of significant land management activities.
- To protect plant and animal species whose survival is in jeopardy.
- Forest and Rangeland Renewable Resources Planning Act (RPA): To provide guidance for establishing long-range resource planning goals for the national forests.
- National Forest Management Act (NFMA): To provide guidance for developing forest plans, regulating activities, and allowing public participation in planning.
- To place limits on activities that would exceed federal or state water quality standards in order to enhance water quality.

The Forest and Rangeland Renewable Resources Planning Act (RPA) of 1974, as amended by the National Forest Management Act (NFMA) of 1976, provides the basic legislative guidance to the Forest Service for planning and managing resources in the national forests. RPA requires the Forest Service to develop long-range planning goals for activities on rangelands and in national forests, and NFMA directs the Forest Service to develop detailed management plans for national forests and to regulate timber harvests to ensure the protection of other resources. NFMA also required the Forest Service to develop regulations for implementing the planning goals established in RPA and NFMA. RPA makes resource management unit plans a statutory requirement through which the Forest Service will provide comprehensive information on the forest’s abilities to produce resources, such as fish and wildlife habitat, and goods and services, such as wood for lumber and opportunities for recreation. RPA directs the Forest Service to establish long-term resource planning goals for rangelands and forests.
It requires the Forest Service to (1) assess the renewable resources on all lands every 10 years, (2) recommend a program for renewable resource activities on Forest Service lands every 5 years, and (3) annually report on the implementation of the recommended program and the accomplishments of the program relative to the assessment. RPA also requires the President to submit to the Congress, together with the assessment and the recommended program, a statement of policy that will guide the Forest Service’s budget requests for implementing the 5-year recommended program. In 1975, the Circuit Court of Appeals for the Fourth Circuit affirmed a 1973 district court decision constraining the Monongahela National Forest in West Virginia to sell only individually marked “dead, physiologically mature, and large growth” trees. The Forest Service decided to extend this decision to all nine national forests under the circuit court’s jurisdiction. The Forest Service estimated that the decision, which was based on the circuit court’s interpretation of the Organic Act of 1897, would reduce national forest timber harvests by 50 percent if applied nationwide. To preclude this reduction and to ensure the use of scientifically accepted forestry measures to sustain the yield of natural resources, the Congress enacted NFMA. All but 1 of the first 12 sections of NFMA amend RPA. For example, NFMA provides more specific guidance to the Secretary of Agriculture and the Forest Service for developing and implementing long-range planning goals for national forests. NFMA goals include improving the management of national forests and facilitating the public’s involvement in and congressional oversight of the process. 
Specifically, NFMA requires that the Forest Service (1) develop integrated land and resource management plans (forest plans) for national forests using interdisciplinary teams, (2) regulate timber management activities in order to protect other resources, and (3) allow the public to participate in the development, review, and revision of the forest plans. In addition, NFMA requires that the Forest Service limit the sale of timber from each national forest to no more than an amount that could be harvested annually on a long-term sustained-yield basis. NFMA also requires the Secretary of Agriculture to develop and issue planning regulations to assist Forest Service regions and national forests in developing and maintaining forest plans. The regulations—completed in 1979 and revised in 1982—establish a process for developing, adopting, and revising forest plans. The regulations also provide guidance on the type of information to be included in the plans, such as multiple-use goals and objectives. In addition, they establish 14 principles to guide planning, including the following:
- Recognize that the national forests are ecosystems and their management for goods and services requires an awareness and consideration of the interrelationships among plants, animals, soil, water, air, and other environmental elements within such ecosystems.
- Protect and, where appropriate, improve the quality of renewable resources.
- Preserve important historic, cultural, and natural aspects of our national heritage.
- Provide for the safe use and enjoyment of the forest resources by the public.
- Use a systematic, interdisciplinary approach to ensure coordination and integration of planning activities for multiple-use management.
- Encourage early and frequent public participation.
- Respond to changing conditions of the land and other resources and to changing social and economic demands of the American people.
The regulations also define the allowable sale quantity (ASQ) as the amount of timber that could be planned for sale from the area of suitable land during the first period of the forest plan—one decade. Essentially, the ASQ is the amount of timber that could be sold and harvested during the first decade without exceeding the amount of timber that could be harvested on a long-term sustained-yield basis. The Forest Service developed and included guidance in its manual and handbooks to provide national forest personnel with further direction for implementing RPA and NFMA. The manual contains general policy rules for forest planning, while the handbooks provide detailed instructions for developing and implementing forest plan activities. For example, the Forest Service manual requires that national forests use FORPLAN, a Forest Service analytical model, as the primary analytical tool for assessing management activities during forest planning, while the resource inventory handbook provides standards, definitions, and specifications for conducting timber inventories. Each Forest Service region provides additional guidance to the forests under its jurisdiction to clarify general guidance from headquarters and to suggest ways of incorporating factors that are unique to the region and its forests. For example, the Pacific Northwest Region provides the forests with guidance on identifying spotted owl habitat within their boundaries and on ensuring that Columbia Basin forests have a consistent approach in developing habitat capability indicators for smolt (young salmon migrating to the sea). National forest management can be divided into three main processes: (1) planning, (2) budgeting, and (3) for timber resources, preparing timber sales. In addition, forest managers monitor and evaluate the results of their activities and use this information to determine whether changes in their management plans are needed. 
Timber is one of many resources assessed in a forest’s land and resource management plan (forest plan). Besides timber, a forest plan includes such other resources as (1) outdoor recreational facilities (for example, campgrounds and hiking trails), (2) rangelands for providing forage to livestock and wildlife, and (3) wildlife and fish habitat for the various species dependent on the forest environment. The plan specifies how these multiple resources are to be managed so as to maximize net public benefits in an environmentally sound manner. To develop forest plans, the Forest Service follows a complicated process set forth in the laws, regulations, and policies discussed in appendixes I and II. A plan’s development rests mainly with an interdisciplinary team of biologists, foresters, soil specialists, and others. The forest supervisor—the person in direct charge of a forest—also provides considerable direction in determining what issues and concerns the team will address. In addition, public participation is sought at various stages throughout the process. For planning purposes, the ASQ is the maximum amount of timber that can be sold from the forest for the next 10 years on a sustained-yield basis. However, in day-to-day usage, the ASQ is usually expressed as an average annual ASQ—that is, as one-tenth of the total. Actual timber sales can fluctuate above or below this average annual amount as long as the sales for the 10-year period do not exceed the total ASQ. To develop the ASQ, the interdisciplinary team determines such information as the species, age, size, number, and location of the trees in the forest. This information helps the team identify land capable of producing trees of commercial value within the period covered by the plan.
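The accounting rule described above—annual sales may fluctuate so long as the 10-year total stays within the total ASQ—can be sketched in a few lines. The function name and the sale figures below are invented for illustration; only the rule itself comes from the text:

```python
def within_asq(annual_sales, total_asq):
    """Return True if cumulative sales over the plan period stay
    within the total allowable sale quantity (ASQ)."""
    return sum(annual_sales) <= total_asq

total_asq = 1_000                  # hypothetical 10-year ASQ, million board feet
avg_annual_asq = total_asq / 10    # the "average annual ASQ" is one-tenth of the total

# Individual years may exceed the annual average without violating the ASQ,
# as long as the decade's total does not.
sales = [120, 90, 80, 110, 95, 100, 105, 85, 100, 110]
print(within_asq(sales, total_asq))  # sum is 995 -> prints True
```

A schedule of 120 million board feet every year, by contrast, would total 1,200 and exceed the same 10-year ASQ even though no single year looks unusual.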
Because Forest Service regulations require the team to have access to the best available inventory data in preparing the ASQ, the Forest Service may have to conduct special inventories or studies to assemble adequate information. Identifying land suitable for timber production is part of an overall analysis that considers timber production in relation to other forest resources. This analysis responds to the legal requirement to maximize net public benefits—that is, the long-term value to the nation of all outputs and positive effects (benefits) minus the associated inputs and negative effects (costs). As specified in Forest Service planning regulations, lands are not considered suitable for timber production if (1) less than 10 percent of the area has trees, (2) the area cannot begin regrowing trees within 5 years of the harvest, (3) irreversible damage will occur to the land or other resources if the trees are harvested, or (4) land has been withdrawn from timber production by an Act of Congress, the Secretary of Agriculture, or the Chief of the Forest Service. Because maximizing net public benefits often involves making choices between various goals, the initial outcome of this overall analysis is a broad range of alternatives describing the different ways the forest can be managed to address and respond to major public issues, management concerns, and resource opportunities. The primary purpose in developing alternatives is to provide an adequate basis for identifying the alternative that comes nearest to maximizing net public benefits. Under these criteria, the alternatives list (1) the multiple-use goals and objectives that describe the desired future condition of the forest, (2) the goods and services expected to be produced, (3) the standards and guidelines for managing resources, and (4) the conditions and uses that result from the planned activities, such as timber sales. 
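The four suitability tests from the planning regulations, described above, amount to a simple screening check. A minimal sketch, with the function and parameter names invented for this illustration:

```python
def suitable_for_timber(pct_treed, regrowth_years, irreversible_damage, withdrawn):
    """Apply the four unsuitability tests paraphrased from the Forest
    Service planning regulations. All names here are illustrative."""
    if pct_treed < 10:
        return False  # less than 10 percent of the area has trees
    if regrowth_years > 5:
        return False  # cannot begin regrowing trees within 5 years of harvest
    if irreversible_damage:
        return False  # harvest would irreversibly damage land or other resources
    if withdrawn:
        return False  # withdrawn by Congress, the Secretary, or the Chief
    return True

print(suitable_for_timber(60, 3, False, False))  # prints True
print(suitable_for_timber(60, 3, False, True))   # prints False: withdrawn land
```

Any one failed test is enough to exclude the land from the suitable base, which is why the checks short-circuit rather than being weighed against one another.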
As part of its discussion of land management objectives, each alternative includes an ASQ. Each alternative specifies a particular emphasis, such as protecting wildlife habitat or promoting recreation, and each alternative may have a different ASQ. For example, an alternative that emphasizes wilderness protection will have a lower ASQ than an alternative that emphasizes timber production. The ASQ for each alternative is calculated using a forest planning model called FORPLAN. The model will help analyze such factors as the forest’s ability to supply goods and services in response to society’s demands, as well as each land management alternative’s effects, such as present net value, social and economic impacts, and outputs of goods and services. The team supplements the FORPLAN results, as needed, with input from forestry experts and from the public. The planning process culminates in the selection of an alternative for implementation. The team estimates and compares the physical, biological, economic, and social effects of implementing each alternative. The team looks at such things as the expected outputs for the planning periods, the direct and indirect benefits and costs, and the resource trade-offs and opportunity costs associated with achieving the objectives. The team then makes recommendations to the forest supervisor, who reviews the recommendations and forwards a preferred alternative to the regional forester, who is in charge of all of the forest supervisors in the Forest Service region. Once the regional forester approves the preferred alternative, the forest plan is completed, and the ASQ is established for the next 10 years. Although this process has clearly defined requirements, it is also open-ended in that the ASQ as well as other elements of the forest plan can be changed at any time during the 10-year period if the forest supervisor determines that a change is necessary. 
Changes are made through amendments or revisions to the forest plan to accommodate such things as shifts in land management policy or other significant changes. Before forest officials develop their budget requests, they receive written instructions from Forest Service headquarters on what to include in their requests. These instructions communicate the agency’s priorities in light of such factors as the administration’s guidance on the agency’s budget targets. The administration’s guidance can be as specific as a letter from the President or as general as a forecasted budget total for the agency. The instructions are also formulated with input from regional foresters, who recommend to the Chief of the Forest Service which program goals should be emphasized—for example, ecosystem management or the operation and maintenance of recreational facilities. Regional foresters also identify levels of data to be collected and (until fiscal year 1996) specific resource targets. For fiscal year 1996, specific resource targets were eliminated. After receiving these instructions, forest officials develop their budget requests. The budget process actually begins 2 to 3 years before the fiscal year in which the funds will be spent. For example, the process for developing a forest’s fiscal year 1995 budget request probably began in fiscal year 1993 or earlier. Forest officials also develop their requests as a range of funding alternatives in accordance with headquarters guidance. For example, fiscal year 1995 budget submissions from Pacific Northwest forests included three funding levels: (1) a base level equal to the fiscal year 1992 appropriation, adjusted for inflation; (2) a reduced level, 5 percent lower than the base level; and (3) an increased level, 20 percent higher than the base level. Budgets prepared for fiscal years up to 1995 also included a funding level based on the amount the forest supervisor believed would be necessary to implement the forest plan’s objectives. 
The budget request for each forest is subject to successive levels of internal Forest Service review. The request is first forwarded to the regional office, where it is reviewed for conformity with budget instructions and regional priorities. The regional office makes any changes it deems necessary, consolidates the request for the forest with those for other forests in the region, and adds the regional office’s own estimated costs for supporting the forests and implementing the regional office’s own actions and program initiatives. The completed request, which displays the request for each forest as well as the aggregated numbers, is forwarded to headquarters. There, a similar review of regional requests is conducted. The regional budgets approved by headquarters are aggregated, and headquarters adds the costs it expects to incur in carrying out its administrative and monitoring activities and in initiating any national programs. This process results in an overall Forest Service request. This request may be changed by the Department of Agriculture (the Forest Service’s parent agency), the Office of Management and Budget, or the Congress through the appropriations process. However, budget reviewers at these levels do not have forest-level data to determine the funds needed to attain the goals for the individual forests; instead they review overall agency goals. For example, according to an official from the Department of Agriculture, the agency considers such things as the number of Forest Service employees, the agency’s programs, and national goals like implementing ecosystem management in the Pacific Northwest. According to an official from the Office of Management and Budget, the agency considers whether, in areas such as timber production, the budget reflects policies that are consistent with the administration’s broader policies and objectives.
The Office of Management and Budget also reviews the cost-effectiveness of the Forest Service’s production of timber for sale by comparing projected cost estimates with the most recent actual costs. At the congressional level, the administration’s request is subject to change in the committee process and in floor debate. Once a funding level for the Forest Service is approved, the appropriations information is then passed in reverse, from the Congress down to headquarters, along with congressional directives specifying how some of the funds will be spent. Headquarters divides and allocates the funds to the regions, and, in turn, each region allocates funds to each forest, usually well into the fiscal year. Until the actual funding is received, forests will use the region’s estimated appropriation level as a base, as well as the forest plan’s priorities and historical trends. Before fiscal year 1993, in providing funds for preparing and administering timber sales, the Congress also specified the volume of timber it expected the Forest Service to offer for sale. Now, the expected volume is based on each forest’s ability to sell and harvest timber. Regulations require that each forest plan contain a 10-year timber sale schedule identifying the quantity of timber planned for sale from an area of suitable forest land in order to attain the ASQ. Individual timber sales are prepared using a six-step process, referred to as the timber sale gate system. Table III.1 summarizes the six gates:
- Gate 1: The timber the forest intends to sell is identified, and a position statement is developed setting forth the purpose and reasons for the timber sale.
- Gate 2: For continuing sales, timber sale design alternatives are developed, a site-specific environmental and economic analysis is completed for the proposed sale, and the approving official decides whether to proceed with the proposed sale.
- Gate 3: The sale area is physically marked, and data are collected to help prepare the timber appraisal, contract, offering, and sale area improvement plan.
- Gate 4: The timber is appraised and advertised, and a sample contract is prepared.
- Gate 5: Bids by potential buyers are reviewed, and an auction is held if required.
- Gate 6: The contract is signed by both the timber purchaser and the Forest Service.
The entire gate process for selling timber normally takes 3 to 8 years, depending on the size, location, and complexity of the sale; access to the area; and the design of the transportation system. Basic decisions about whether to continue the sale occur both at gate 1 and gate 2. Gate 1 generally occurs in the first year; gate 2 usually occurs between the second and fifth year of sales that continue beyond gate 1. Public comments are actively sought by the Forest Service throughout gates 1 and 2. Comment after a decision has been made comes through the administrative appeal system, once a decision notice has been signed by the approving official at gate 2. According to a forest official, administrative appeals or lawsuits can add 4 months to 4 years to the entire process. Gate 3 usually occurs during the third to eighth year of the sale, depending on the complexity of the sale. The remaining gates generally take place during the last year of the sale process. Once the timber contract is awarded in gate 6, the timber purchaser prepares the site to harvest the timber—a process that can take 3 to 5 years to complete. Timber management is not completed when the timber is sold. Forest officials track the results of their planning and timber management activities so that the effects of implementing the plan can be measured, the measurements can be analyzed, and necessary changes can be made.
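The timelines reported above imply rough bounds on how long a sale can take from start to harvest. A small sketch of that arithmetic, assuming (as a simplification) that an unappealed sale adds zero appeal time and that the ranges simply add end to end:

```python
# Rough bounds on years from starting a timber sale to harvest,
# using only the ranges reported in the text (values in years).
gate_process = (3, 8)  # gates 1 through 6
appeals      = (0, 4)  # assumed 0 if no appeal; appeals add 4 months to 4 years
site_prep    = (3, 5)  # purchaser prepares the site after the contract is awarded

low  = gate_process[0] + appeals[0] + site_prep[0]
high = gate_process[1] + appeals[1] + site_prep[1]
print(f"{low} to {high} years")  # prints "6 to 17 years"
```

Even without any appeal, then, at least about six years separate the start of the gate process from harvest, and a complex, litigated sale could stretch well past a decade and a half.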
Within the Forest Service, forest supervisors use monitoring information—as well as Forest Service reports and special studies or litigation and appeal results—to evaluate whether the implementation process has achieved the forest plan’s objectives. If the evaluation indicates that the implementation process has failed to achieve the plan’s objectives or if new information—such as a decrease in wildlife habitat—indicates that the plan’s objectives should be revised, then the forest supervisor may amend or revise the forest plan. If the forest supervisor decides that an event—such as a decrease in the forest’s ability to produce the ASQ—is significant, then forest officials must follow the same procedure as is required to develop and approve a forest plan. If the event is insignificant—such as the acquisition of additional forest land—then such an extensive effort is not required and the amendment can be implemented after the public has been properly notified and NEPA procedures have been satisfactorily completed. NFMA requires that a forest plan be revised at least every 15 years; however, the plan can be revised at any time. A forest supervisor can request a plan’s revision when forest conditions or demands have changed significantly or when changes in RPA policies, goals, and objectives significantly affect the forest’s programs. Revisions have to be in accordance with the requirements for developing and approving a forest plan, through the completion of the entire forest plan process, and must be approved by the regional and headquarters offices. Table IV.1 shows the volume of timber sold (not including sales of forest products such as Christmas trees and firewood) and the average annual ASQ for the two Southern Region forests we reviewed. These two forests implemented their ASQs in 1986 and 1987. Timber sales were below average annual ASQs in all years since the ASQs were implemented except (for the Ouachita National Forest) in fiscal years 1987 and 1988. 
Table IV.1: Comparison of Average Annual ASQ and ASQ-Related Timber Sale Volumes for Southern Region Forests in GAO’s Review. [Table notes: fiscal year in which the ASQ was implemented; not applicable because the ASQ was not implemented until 1987. Table data not reproduced here.] Table IV.2 shows the volume of timber sold (not including sales of forest products such as Christmas trees and firewood) and the average annual ASQ for the three Pacific Northwest Region forests we reviewed. These forests implemented their ASQs in 1991. Timber sales were below average annual ASQs in all years since the ASQs were implemented. [Table IV.2 reports volume in millions of board feet for the Deschutes (1991), Gifford Pinchot (1991), and Mt. Hood (1991) National Forests; table data not reproduced here.]
Glossary: The maximum volume of timber that may be sold on a sustained-yield basis from the area of suitable land covered by the forest plan for a time period specified by the plan. This volume is usually expressed on an annual basis as the “average annual allowable sale quantity.” A board foot, a standard measure of timber, equals the amount of wood in an unfinished board 1 inch thick, 12 inches long, and 12 inches wide. Clearcutting is a harvesting method that involves removing all trees from a timber harvest site at one time. Ecosystem management is a new, broader approach to managing the nation’s lands and natural resources. Ecosystem management recognizes that plant and animal communities are interdependent and interact with their physical environment (soil, water, and air) to form distinct ecological units called ecosystems that span federal and nonfederal lands. Any species of animal or plant as defined by the Endangered Species Act that is in danger of extinction throughout all or a significant portion of its range. Land at least 10 percent occupied by forest trees of any size or formerly having had such tree cover and not currently developed for nonforest use. A land management plan designed and adopted to guide forest management activities on a national forest.
A method of harvesting timber in which small groups of trees are removed from an area annually or periodically. A group of people trained in different scientific disciplines assembled to solve a problem or perform a task. The team is assembled out of recognition that no one discipline can provide the broad background needed to adequately solve the complex problem. The management of the various renewable resources of the national forest system to ensure their use in a combination that will best meet the needs of the public. A best assessment of the average amount of timber likely to be available for sale annually in a planning area over the next 10 years. A resource that may be used indefinitely if the rate of use does not exceed the resource’s ability to renew the supply. The quantity of timber planned for sale, by time period, from an area of suitable land covered by a forest plan. The first period, usually a decade, provides the allowable sale quantity. The harvesting of selected individual trees of all sizes. The appropriateness of applying certain resource management practices to a particular area of land, as determined by an analysis of the economic and environmental consequences and of the alternative uses forgone. The volume of timber that a forest can produce continuously from a given intensity of management. Any species of animal or plant as defined by the Endangered Species Act that is likely to become an endangered species throughout all or a significant portion of its range within the foreseeable future. Administering sale or use conditions, monitoring effects, and harvesting and removing forest products. A listing of the location, quantity, condition, and growth of trees on forest lands. Preparing and offering timber for sale and awarding a sale. The volume of timber expected to be produced under a certain set of conditions.
| Pursuant to a congressional request, GAO provided information on timber sales in five national forests between 1991 and 1993, focusing on: (1) whether the Forest Service met its allowable sale quantity (ASQ) for the five forests; and (2) why the quantity of timber sold from the national forests was sometimes substantially below ASQ. GAO found that: (1) timber sales for each of the 5 forests reviewed were significantly below the average ASQ between 1991 and 1993; (2) factors contributing to the Forest Service's inability to meet ASQ included the lack of adequate data and estimating techniques on which to base ASQs, the emergence of new and changing forest management priorities, and rising or unanticipated costs associated with preparing and administering timber sales; (3) forest officials at one of the five forests overestimated the size of the timber inventory and improperly based the inventory on average volumes rather than on the specific parts of the forest where timber sales were being prepared; and (4) ASQs were reduced in Pacific Northwest forests after the northern spotted owl was listed as an endangered species and much of the proposed harvest areas were set aside for its habitat.
Wildland fires triggered by lightning are both natural and inevitable and play an important ecological role on the nation’s landscapes. These fires shape the composition of forests and grasslands, periodically reduce vegetation densities, and stimulate seedling regeneration and growth in some species. Over the past century, however, various land use and management practices—including fire suppression, grazing, and timber harvesting—have reduced the normal frequency of fires in many forest and rangeland ecosystems and contributed to abnormally dense, continuous accumulations of vegetation. Such accumulations not only can fuel uncharacteristically large or severe wildland fires, but also—with more homes and communities built in or near areas at risk from wildland fires— threaten human lives, health, property, and infrastructure. The Forest Service and four Interior agencies—the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service—are responsible for wildland fire management. These five agencies manage about 700 million acres of land in the United States, including national forests, national grasslands, Indian reservations, national parks, and national wildlife refuges. The federal wildland fire management program has three major components: preparedness, suppression, and fuel reduction. To prepare for a wildland fire season, the agencies acquire firefighting assets— including firefighters, engines, aircraft, and other equipment—and station them either at individual federal land management units (such as national forests or national parks) or at centralized dispatch locations. The primary purpose of these assets is to respond to fires before they become large—a response referred to as initial attack—thus forestalling threats to communities and natural and cultural resources. The agencies fund the assets used for initial attack primarily from their wildland fire preparedness accounts. 
When a fire starts, current federal policy directs the agencies to consider land management objectives—identified by land and fire management plans developed by each local unit, such as a national forest or a Bureau of Land Management district—and the structures and resources at risk when determining whether or how to suppress it. A wide spectrum of fire response strategies is available to choose from, and the manager at the affected local unit—known as a line officer—is responsible for determining which strategy to use. In the relatively rare instances when fires escape initial attack and grow large, the agencies respond using an interagency system that mobilizes additional firefighting assets from federal, state, and local agencies, as well as private contractors, regardless of which agency or agencies have jurisdiction over the burning lands. Federal agencies typically fund the costs of these activities from their wildland fire suppression accounts. In addition to preparing for and suppressing fires, the agencies attempt to reduce the potential for severe wildland fires, lessen the damage caused by fires, limit the spread of flammable invasive species, and restore and maintain healthy ecosystems by reducing potentially hazardous vegetation that can fuel fires. The agencies generally remove or modify hazardous vegetation using prescribed fire (that is, fire deliberately set in order to restore or maintain desired vegetation conditions), mechanical thinning, herbicides, certain grazing methods, or combinations of these and other approaches. The agencies fund these activities from their fuel reduction accounts. Congress, the Office of Management and Budget, federal agency officials, and others have expressed concern about mounting federal wildland fire expenditures. 
Federal appropriations to the Forest Service and the Interior agencies to prepare for and respond to wildland fires, including appropriations for reducing fuels, have more than doubled, from an average of $1.2 billion from fiscal years 1996 through 2000 to an average of $2.9 billion from fiscal years 2001 through 2007 (see table 1). Adjusting for inflation, the average annual appropriations to the agencies for these periods increased from $1.5 billion to $3.1 billion (in 2007 dollars). The Forest Service received about 70 percent and Interior about 30 percent of the appropriated funds. The Forest Service and the Interior agencies have improved their understanding of wildland fire’s role on the landscape and have taken important steps toward improving their ability to cost-effectively protect communities and resources. Although the agencies have long recognized that fire could provide ecological benefits in some ecosystems, such as certain grassland and forest types, a number of damaging fires in the 1990s led them to develop the Federal Wildland Fire Management Policy. The policy formally recognizes not only that wildland fire can be beneficial in some areas, but also that fire is an inevitable part of the landscape and, moreover, that past attempts to suppress all fires have been in part responsible for making recent fires more severe. Under this policy, the agencies abandoned their attempt to put out every wildland fire, seeking instead to (1) make communities and resources less susceptible to being damaged by wildland fire and (2) respond to fires so as to protect communities and important resources at risk but also to consider both the cost and long-term effects of that response. 
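The inflation adjustment reported above can be reproduced with simple arithmetic. As a minimal sketch, the price-index ratios below are back-solved from the figures in the testimony, not official deflators, and the function name is ours:

```python
# Reproducing the testimony's inflation adjustment. The nominal
# averages come from the text; the implied index ratios are
# back-solved from the reported 2007-dollar figures, not taken
# from an official price index.

def to_2007_dollars(nominal_billions, index_ratio):
    """Scale a nominal average by the ratio of the 2007 price level
    to the average price level of the period being adjusted."""
    return nominal_billions * index_ratio

# FY1996-2000: $1.2B nominal, reported as about $1.5B in 2007 dollars,
# implying an index ratio of roughly 1.5 / 1.2 = 1.25.
early = to_2007_dollars(1.2, 1.5 / 1.2)

# FY2001-2007: $2.9B nominal, reported as about $3.1B in 2007 dollars.
late = to_2007_dollars(2.9, 3.1 / 2.9)

print(round(early, 1), round(late, 1))  # 1.5 3.1
```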
By emphasizing firefighting strategies that focus on land management objectives, rather than seeking to suppress all fires, the agencies are increasingly using less aggressive firefighting strategies—strategies that can not only reduce costs but also be safer for firefighters by reducing their exposure to unnecessary risks, according to agency fire officials. To help them better achieve the federal wildland fire management policy’s vision, the Forest Service and the Interior agencies in recent years have taken several steps to make communities and resources less susceptible to damage from wildland fire. These steps include reducing hazardous fuels, in an effort to keep wildland fires from spreading into the wildland-urban interface and to help protect important resources by lessening a fire’s intensity. As part of this effort, the agencies reported they have reduced fuels on more than 29 million acres from 2001 through 2008. The agencies have also nearly completed their geospatial data and modeling system, LANDFIRE, as we recommended in 2003. LANDFIRE is intended to produce consistent and comprehensive maps and data describing vegetation, wildland fuels, and fire regimes across the United States. Such data are critical to helping the agencies (1) identify the extent, severity, and location of wildland fire threats to the nation’s communities and resources; (2) predict fire intensity and rate of spread under particular weather conditions; and (3) evaluate the effect that reducing fuels may have on future fire behavior. LANDFIRE data are already complete for the contiguous United States, although some agency officials have questioned the accuracy of the data, and the agencies expect to complete the data for Alaska and Hawaii in 2009. The agencies have also begun to improve their processes for allocating fuel reduction funds to different areas of the country and for selecting fuel reduction projects, as we recommended in 2007. 
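The shift away from "allocation by tradition" toward a risk-informed process can be illustrated with a small sketch. The factor names, weights, and regional scores below are invented for illustration; they are not the agencies' actual allocation criteria:

```python
# A minimal sketch of a risk-informed fund-allocation step of the
# kind the testimony describes. Weights, factor names, and regional
# scores are hypothetical, not the agencies' actual criteria.

WEIGHTS = {"risk": 0.5, "treatment_effectiveness": 0.3, "wui_exposure": 0.2}

regions = {
    "Region A": {"risk": 0.9, "treatment_effectiveness": 0.6, "wui_exposure": 0.8},
    "Region B": {"risk": 0.4, "treatment_effectiveness": 0.9, "wui_exposure": 0.3},
}

def composite_score(factors):
    # Weighted sum of normalized (0-1) factor scores.
    return sum(WEIGHTS[name] * value for name, value in factors.items())

def allocate(budget, regions):
    # Split the budget in proportion to composite scores, rather than
    # relying on historical funding patterns alone.
    total = sum(composite_score(f) for f in regions.values())
    return {name: budget * composite_score(f) / total
            for name, f in regions.items()}

allocation = allocate(100.0, regions)
# Region A, the higher-risk area, receives the larger share.
```

The point of the sketch is only the structure of the decision: each area's funding follows a transparent composite of risk and treatment-effectiveness measures instead of last year's allocation.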
The agencies have started moving away from “allocation by tradition” toward a more consistent, systematic allocation process. That is, rather than relying on historical funding patterns and professional judgment, the agencies are developing a process that also considers risk, effectiveness of fuel reduction treatments, and other factors. Despite these improvements, further action is needed to ensure that the agencies’ efforts to reduce hazardous fuels are directed to areas at highest risk. The agencies, for example, still lack a measure of the effectiveness of fuel reduction treatments and therefore lack information needed to ensure that fuel reduction funds are directed to the areas where they can best minimize risk to communities and resources. Forest Service and Interior officials told us that they recognize this shortcoming and that efforts are under way to address it; these efforts are likely to be long term, involving considerable research investment, but they have the potential to improve the agencies’ ability to assess and compare the cost-effectiveness of potential treatments in deciding how to optimally allocate scarce funds. The agencies have also taken steps to foster fire-resistant communities. Increasing the use of protective measures to mitigate the risk to structures from wildland fire is a key goal of the National Fire Plan. The plan encourages, but does not mandate, state or local governments to adopt laws requiring homeowners and homebuilders to take measures—such as reducing vegetation and flammable objects within an area of 30 to 100 feet around a structure, often called creating defensible space, and using fire-resistant roofing materials and covering attic vents with mesh screens—to help protect structures from wildland fires. 
Because these measures rely on the actions of individual homeowners or homebuilders, or on laws and land-use planning affecting private lands, achieving this goal is primarily a state and local government responsibility. Nonetheless, the Forest Service and the Interior agencies have helped sponsor the Firewise Communities program, which works with community leaders and homeowners to increase the use of fire-resistant landscaping and building materials in areas of high risk. Federal and state agencies also provide grants to help pay for creating defensible space around private homes. In addition, the agencies have made improvements laying important groundwork for enhancing their response to wildland fire, including: Implementing the Federal Wildland Fire Management Policy. The Federal Wildland Fire Management Policy directs each agency to develop a fire management plan for all areas they manage with burnable vegetation. Without such plans, agency policy does not allow the use of the entire range of wildland fire response strategies, including less aggressive strategies, and therefore the agencies must attempt to suppress a fire regardless of any benefits that might come from allowing it to burn. We reported in 2006 that about 95 percent of the agencies’ 1,460 individual land management units had completed the required plans. The policy also states that the agencies’ responses to a wildland fire are to be based on the circumstances of a given fire and the likely consequences to human safety and natural and cultural resources. Interagency guidance on implementing the policy, adopted in 2009, clarifies that the full range of fire management strategies and tactics are to be considered when responding to every wildland fire, and that a single fire may be simultaneously managed for different objectives. 
Both we and the Department of Agriculture’s Inspector General had criticized the previous guidance, which required each fire to be managed either for suppression objectives—that is, to put out the fire as quickly as possible—or to achieve resource benefits—that is, to allow the fire to burn to gain certain benefits such as reducing fuels or seed regeneration. By providing this flexibility, the new guidance should help the agencies better achieve management objectives and help contain the long-term costs of fire management. Improving fire management decisions. The agencies have recently undertaken several efforts to improve decisions about firefighting strategies. In one such effort, the agencies in 2009 began to use a new analytical tool, known as the wildland fire decision support system. This new tool helps line officers and fire managers analyze various factors—such as the fire’s current location, adjacent fuel conditions, nearby structures and other highly valued resources, and weather forecasts—in determining the strategies and tactics to adopt. For example, the tool generates a map illustrating the probability that a particular wildland fire, barring any suppression actions, will burn a certain area within a specified time, and the structures or other resources that may therefore be threatened. Having such information can help line officers and fire managers understand the resources at risk and identify the most appropriate response—for example, whether to devote substantial resources in attempting full and immediate suppression or to instead take a less intensive approach, which may reduce risks to firefighters and cost less. Other efforts include (1) establishing experience and training requirements for line officers to be certified to manage fires of different levels of complexity, and (2) forming four teams staffed with some of the most experienced fire managers to assist in managing wildland fires. 
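The kind of decision logic the wildland fire decision support system informs can be sketched as a simple rule over the tool's outputs. The thresholds, strategy names, and inputs below are invented for illustration; the actual system analyzes many more factors:

```python
# Illustrative sketch of probability-driven strategy selection of the
# kind the decision support system informs. Thresholds and strategy
# labels are hypothetical, not the agencies' actual decision rules.

def recommend_strategy(p_reach_structures, firefighter_risk):
    """Pick a response from the modeled probability (0-1) that the
    fire reaches structures and an assessed risk to firefighters."""
    if p_reach_structures >= 0.6:
        return "full suppression"      # high threat to communities
    if p_reach_structures >= 0.2 and firefighter_risk < 0.5:
        return "point protection"      # defend specific resources
    return "monitor and confine"       # low threat; lower cost and risk

print(recommend_strategy(0.8, 0.3))  # full suppression
print(recommend_strategy(0.1, 0.9))  # monitor and confine
```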
The Forest Service has also experimented in recent years with several approaches for identifying ongoing fires where suppression actions are unlikely to be effective and for influencing strategic decisions made during those fires, in order to help contain costs and reduce risk to firefighters. Although these efforts are new, and we have not fully evaluated them, we believe they have the potential to help the agencies strengthen how they select firefighting strategies. By themselves, however, these efforts do not address certain critical shortcomings. We reported in 2007, for example, that officials in the field have few incentives to consider cost containment in making critical decisions affecting suppression costs, and that previous studies had found that the lack of a clear measure to evaluate the benefits and costs of alternative firefighting strategies fundamentally hindered the agencies’ ability to provide effective oversight. Acquiring and using firefighting assets effectively. The agencies have continued to make improvements—including better systems for contracting with private vendors to provide firefighting assets and for dispatching assets to individual fires—in how they determine the firefighting assets they need and in how they acquire and use those assets, although further action is needed. For example, although the agencies in 2009 began deploying an interagency budget-planning system known as fire program analysis (FPA) to address congressional direction that they improve how they determine needed firefighting assets, our 2008 report on FPA’s development identified several shortcomings that limit FPA’s ability to meet certain key objectives. 
FPA was intended to help the agencies develop their wildland fire budget requests and allocate funds by, among other objectives, (1) providing a common budget framework to analyze firefighting assets without regard for agency jurisdictions; (2) examining the full scope of fire management activities; (3) modeling the effects over time of differing strategies for responding to wildland fires and treating lands to reduce hazardous fuels; and (4) using this information to identify the most cost-effective mix and location of federal wildland fire management assets. We reported in 2008 that FPA shows promise in achieving some of the key objectives originally established for it but that the approach the agencies have taken hampers FPA from meeting other key objectives, including the ability to project the effects of different levels of fuel reduction and firefighting strategies over time. We therefore concluded that agency officials lack information that would help them analyze the extent to which increasing or decreasing funding for fuel reduction and responding more or less aggressively to fires in the short term could affect the expected cost of responding to wildland fires over the long term. Senior agency officials told us in 2008 that they were considering making changes to FPA that may improve its ability to examine the effects over time of different funding strategies. The exact nature of these changes, or how to fund them, has yet to be determined. Officials also told us the agencies are currently working to evaluate the model’s performance, identify and implement needed corrections, and improve data quality and consistency. The agencies intend to consider the early results of FPA in developing their budget requests for fiscal year 2011, although officials told us they will not rely substantially on FPA’s results until needed improvements are made. 
As we noted in 2008, the approach the agencies took in developing FPA provides considerable discretion to agency decision makers and, although providing the flexibility to consider various options is important, doing so makes it essential that the agencies ensure their processes are fully transparent. In addition, previous studies have found that agencies sometimes use more, or more-costly, firefighting assets than necessary, often in response to political or social pressure to demonstrate they are taking all possible action to protect communities and resources. Consistent with these findings, fire officials told us they were pressured in 2008 to assign more firefighting assets than could be effectively used to fight fires in California. More generally, previous studies have found that air tankers may be used to drop flame retardants when on-the-ground conditions may not warrant such drops. Aviation activities are expensive, accounting for about one-third of all firefighting costs on a large fire. We believe that providing clarity about when different types of firefighting assets can be used effectively could help the agencies resist political and social pressure to use more assets than they need. Despite the important steps the agencies have taken, much work remains. We have previously recommended several key actions that, if completed, would improve the agencies’ management of wildland fire. Specifically, the agencies need to: Develop a cohesive strategy. Completing an investment strategy that lays out various approaches for reducing fuels and responding to wildland fires and the estimated costs associated with each approach and the trade-offs involved—what we have termed a cohesive strategy—is essential for Congress and the agencies to make informed decisions about effective and affordable long-term approaches for addressing the nation’s wildland fire problems. 
The agencies have concurred with our recommendations to develop a cohesive strategy but have yet to develop a strategy that clearly formulates different approaches and associated costs, despite our repeated calls to do so. In May 2009, agency officials told us they had begun planning how to develop a cohesive strategy but were not far enough along in developing it to provide further information. Because of the critical importance of a cohesive strategy to improve the agencies’ overall management of wildland fire, we encourage the agencies to complete one and begin implementing it as quickly as possible. The Federal Land Assistance, Management, and Enhancement Act, introduced in March 2009 and sponsored by the chairman of this committee, would require the agencies to produce, within 1 year of the act’s enactment, a cohesive strategy consistent with our previous recommendations. Although they have yet to complete a cohesive strategy, the agencies have nearly completed two projects—LANDFIRE and FPA—they have identified as being necessary to development of a cohesive strategy. However, the shortcomings we identified in FPA may limit its ability to contribute to the agencies’ development of a cohesive strategy. Establish a cost-containment strategy. We reported in 2007 that although the Forest Service and the Interior agencies had taken several steps intended to help contain wildland fire costs, they had not clearly defined their cost-containment goals or developed a strategy for achieving those goals—steps that are fundamental to sound program management. The agencies disagreed, citing several agency documents that they argued clearly define their goals and objectives and make up their strategy to contain costs. Although these documents do provide overarching goals and objectives, they lack the clarity and specificity needed by land management and firefighting officials in the field to help manage and contain wildland fire costs. 
Interagency policy, for example, established an overarching goal of suppressing wildland fires at minimum cost, considering firefighter and public safety and importance of resources being protected, but the agencies have established neither clear criteria for weighing the relative importance of the often-competing elements of this broad goal, nor measurable objectives for determining if the agencies are meeting the goal. As a result, despite the improvements the agencies are making to policy, decision support tools, and oversight, we believe that managers in the field lack a clear understanding of the relative importance that the agencies’ leadership places on containing costs and—as we concluded in our 2007 report—are therefore likely to continue to select firefighting strategies without duly considering the costs of suppression. Forest Service officials told us in July 2009 that although they are concerned about fire management costs, they are emphasizing the need to select firefighting strategies that will achieve land management objectives and reduce unnecessary risks to firefighters, an emphasis they believe may, in the long run, also help them contain costs. Nonetheless, we continue to believe that our recommendations, if effectively implemented, would help the agencies better manage their cost-containment efforts and improve their ability to contain wildland fire costs. Clearly define financial responsibilities for fires that cross jurisdictions. Protecting the nation’s communities is both one of the key goals of wildland fire management and one of the leading factors contributing to rising fire costs. A number of relatively simple steps—such as using fire-resistant landscaping and building materials—can dramatically reduce the likelihood of damage to a structure from wildland fire. 
Although nonfederal entities—including state forestry entities and tribal, county, city, and rural fire departments—play an important role in protecting communities and resources and responding to fires, we reported in 2006 that federal officials were concerned that the existing framework for sharing suppression costs among federal and nonfederal entities insulated state and local governments from the cost of providing wildland fire protection in the wildland-urban interface. As a result, there was less incentive for state and local governments to adopt laws—such as building codes requiring fire-resistant building materials in areas at high risk of wildland fires—that, in the long run, could help reduce the cost of suppressing wildland fires. We therefore recommended that the federal agencies work with relevant state entities to clarify the financial responsibility for fires that burn, or threaten to burn, across multiple jurisdictions and develop more specific guidance as to when particular cost-sharing methods should be used. The agencies have updated guidance on when particular cost-sharing methods should be used, although we have not evaluated the effect of the updated guidance; the agencies, however, have yet to clarify the financial responsibility for fires that threaten multiple jurisdictions. Without such clarification, the concerns that the existing framework insulates nonfederal entities from the cost of protecting the wildland-urban interface from fire—and that the federal government, therefore, would continue to bear more than its share of that cost—are unlikely to be addressed. Mitigate effects of rising fire costs on other agency programs. The sharply rising costs of managing wildland fires have led the Forest Service and the Interior agencies to transfer funds from other programs to help pay for fire suppression, disrupting or delaying activities in these other programs. 
Better methods of estimating the suppression funds the agencies request, as we recommended in 2004, could reduce the likelihood that the agencies would need to transfer funds from other accounts, yet the agencies continue to use an estimation method with known problems. A Forest Service official told us the agency had analyzed alternative methods for estimating needed suppression funds but determined that no better method was available. Because the agencies have had to transfer funds in each of the last 3 years, however, a more accurate method for estimating suppression costs may still be needed. To further reduce the likelihood of transferring funds from the agencies’ other programs to cover suppression costs, our 2004 report also noted, Congress could consider establishing a reserve account to fund emergency wildland firefighting. Congress, for example, could provide either a specified amount (known as a definite appropriation) or as much funding as the agencies need to fund emergency suppression (known as an indefinite appropriation). Establishing a reserve account with a definite appropriation would provide the agencies with incentives to contain suppression costs within the amount in the reserve account, but depending on the size of the appropriation and the severity of a fire season, suppression costs could still exceed the funds reserved, and the agencies might still need to transfer funds from other programs. An account with an indefinite appropriation, in contrast, would eliminate the need for transferring funds from other programs but would offer no inherent incentives for the agencies to contain suppression costs. Furthermore, both definite and indefinite appropriations could raise the overall federal budget deficit, depending on whether funding levels for other agency or government programs are reduced. 
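The trade-off between the two reserve-account designs described above can be made concrete with a small sketch. The season costs and reserve size are invented for illustration (amounts in millions):

```python
# Hypothetical sketch contrasting the two reserve-account designs the
# testimony describes for emergency suppression funding. Season costs
# and the reserve size are invented; amounts are in millions.

def fund_season(cost, reserve, indefinite=False):
    """Return (amount_covered_by_reserve, transfers_from_other_programs)."""
    if indefinite or cost <= reserve:
        # An indefinite appropriation covers whatever suppression
        # requires, so no funds are transferred from other programs.
        return cost, 0.0
    # A definite appropriation caps the reserve; in a severe season
    # the shortfall must still be transferred from other programs.
    return reserve, cost - reserve

mild = fund_season(250.0, 300.0)                        # (250.0, 0.0)
severe = fund_season(450.0, 300.0)                      # (300.0, 150.0)
uncapped = fund_season(450.0, 300.0, indefinite=True)   # (450.0, 0.0)
```

The sketch captures the testimony's point: a definite appropriation creates an incentive to stay within the reserve but cannot rule out transfers, while an indefinite appropriation eliminates transfers at the cost of that incentive.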
The Federal Land Assistance, Management, and Enhancement Act proposes establishing a wildland fire suppression reserve account; the administration’s budget overview for fiscal year 2010 also proposes a $282 million reserve account for the Forest Service and a $75 million reserve account for Interior to provide funding for firefighting when the appropriated suppression funds are exhausted. We are making no new recommendations at this time. Rather, we believe that our previous recommendations—which the agencies have generally agreed with—could, if implemented, substantially assist the agencies in capitalizing on the important progress they have made to date in responding to the nation’s growing wildland fire problem. We discussed the factual information in this statement with agency officials and incorporated their comments where appropriate. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or [email protected], or Robin M. Nazzaro, Director, at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Steve Gaty, Assistant Director; David P. Bixler; Ellen W. Chu; Jonathan Dent; and Richard P. Johnson made key contributions to this statement. Wildland Fire Management: Interagency Budget Tool Needs Further Development to Fully Meet Key Objectives. GAO-09-68. Washington, D.C.: November 24, 2008. Wildland Fire Management: Federal Agencies Lack Key Long- and Short-Term Management Strategies for Using Program Funds Effectively. GAO-08-433T. Washington, D.C.: February 12, 2008. Wildland Fire Management: Better Information and a Systematic Process Could Improve Agencies’ Approach to Allocating Fuel Reduction Funds and Selecting Projects. GAO-07-1168. 
Washington, D.C.: September 28, 2007. Wildland Fire Management: Lack of Clear Goals or a Strategy Hinders Federal Agencies’ Efforts to Contain the Costs of Fighting Fires. GAO-07-655. Washington, D.C.: June 1, 2007. Wildland Fire Suppression: Lack of Clear Guidance Raises Concerns about Cost Sharing between Federal and Nonfederal Entities. GAO-06-570. Washington, D.C.: May 30, 2006. Wildland Fire Management: Update on Federal Agency Efforts to Develop a Cohesive Strategy to Address Wildland Fire Threats. GAO-06-671R. Washington, D.C.: May 1, 2006. Wildland Fire Management: Important Progress Has Been Made, but Challenges Remain to Completing a Cohesive Strategy. GAO-05-147. Washington, D.C.: January 14, 2005. Wildfire Suppression: Funding Transfers Cause Project Cancellations and Delays, Strained Relationships, and Management Disruptions. GAO-04-612. Washington, D.C.: June 2, 2004. Wildland Fire Management: Additional Actions Required to Better Identify and Prioritize Lands Needing Fuels Reduction. GAO-03-805. Washington, D.C.: August 15, 2003. Western National Forests: A Cohesive Strategy Is Needed to Address Catastrophic Wildfire Threats. GAO/RCED-99-65. Washington, D.C.: April 2, 1999. | The nation's wildland fire problems have worsened dramatically over the past decade, with more than a doubling of both the average annual acreage burned and federal appropriations for wildland fire management. 
The deteriorating fire situation has led the agencies responsible for managing wildland fires on federal lands--the Forest Service in the Department of Agriculture and the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service in the Department of the Interior--to reassess how they respond to wildland fire and to take steps to improve their fire management programs. This testimony discusses (1) progress the agencies have made in managing wildland fire and (2) key actions GAO believes are still necessary to improve their wildland fire management. This testimony is based on issued GAO reports and reviews of agency documents and interviews with agency officials on actions the agencies have taken in response to previous GAO findings and recommendations. The Forest Service and Interior agencies have improved their understanding of wildland fire's ecological role on the landscape and have taken important steps toward enhancing their ability to cost-effectively protect communities and resources by seeking to (1) make communities and resources less susceptible to being damaged by wildland fire and (2) respond to fire so as to protect communities and important resources at risk while also considering both the cost and long-term effects of that response. To help them do so, the agencies have reduced potentially flammable vegetation in an effort to keep wildland fires from spreading into the wildland-urban interface and to help protect important resources by lessening a fire's intensity; sponsored efforts to educate homeowners about steps they can take to protect their homes from wildland fire; and provided grants to help homeowners carry out these steps. 
The agencies have also made improvements that lay important groundwork for enhancing their response to wildland fire, including adopting new guidance on how managers in the field are to select firefighting strategies, improving the analytical tools that assist managers in selecting a strategy, and improving how the agencies acquire and use expensive firefighting assets. Despite the agencies' efforts, much work remains. GAO has previously recommended several key actions that, if completed, would substantially improve the agencies' management of wildland fire. Specifically, the agencies should: (1) Develop a cohesive strategy laying out various potential approaches for addressing the growing wildland fire threat, including estimating costs associated with each approach and the trade-offs involved. Such information would help the agencies and Congress make fundamental decisions about an effective and affordable approach to responding to fires. (2) Establish a cost-containment strategy that clarifies the importance of containing costs relative to other, often-competing objectives. Without such clarification, GAO believes managers in the field lack a clear understanding of the relative importance that the agencies' leadership places on containing costs and are therefore likely to continue to select firefighting strategies without duly considering the costs of suppression. (3) Clarify financial responsibilities for fires that cross federal, state, and local jurisdictions. Unless the financial responsibilities for multijurisdictional fires are clarified, concerns that the existing framework insulates nonfederal entities from the cost of protecting the wildland-urban interface from fire--and that the federal government would thus continue to bear more than its share of the cost--are unlikely to be addressed. (4) Take action to mitigate the effects of rising fire costs on other agency programs. 
The sharply rising costs of managing wildland fires have led the agencies to transfer funds from other programs to help pay for fire suppression, disrupting or delaying activities in these other programs. Better methods of predicting needed suppression funding could reduce the need to transfer funds from other programs. |
The Homeland Security Act of 2002 and subsequently enacted laws—including the Intelligence Reform and Terrorism Prevention Act of 2004 and the 9/11 Commission Act—assigned DHS responsibility for sharing information related to terrorism and homeland security with its state, local, and tribal partners, and authorized additional measures and funding in support of carrying out this mandate. DHS designated I&A as having responsibility for coordinating efforts to share information that pertains to the safety and security of the U.S. homeland across all levels of government, including federal, state, local, and tribal government agencies. In June 2006, DHS tasked I&A with the responsibility for managing DHS’s support to fusion centers. I&A established a State and Local Fusion Center Joint Program Management Office as the focal point for supporting fusion center operations and to maximize state and local capabilities to detect, prevent, and respond to terrorist and homeland security threats. The office was also established to improve the information flow between DHS and the fusion centers, as well as provide fusion centers with access to the federal intelligence community. Two DHS components—CBP and ICE—have responsibilities for securing the nation’s land borders against terrorism and other threats to homeland security. Specifically, CBP’s Border Patrol agents are responsible for preventing the illegal entry of people and contraband into the United States between ports of entry. This includes preventing terrorists, their weapons, and other related materials from entering the country. Border Patrol’s national strategy calls for it to improve and expand coordination and partnerships with state, local, and tribal law enforcement agencies to gain control of the nation’s borders. ICE is charged with preventing terrorist and criminal activity by targeting the people, money, and materials that support terrorist and criminal organizations. 
According to the agency’s 2008 annual report, ICE recognizes the need for strong partnerships with other law enforcement agencies, including those on the local level, in order to combat criminal and terrorist threats. The FBI serves as the nation’s principal counterterrorism investigative agency, and its mission includes protecting and defending the United States against terrorist threats. The FBI conducts counterterrorism investigations through its field offices and Joint Terrorism Task Forces. In addition, each FBI field office has established a Field Intelligence Group, which consists of intelligence analysts and special agents who gather and analyze information related to identified threats and criminal activity, including terrorism. Each group is to share information with other Field Intelligence Groups across the country, FBI headquarters, and other federal, state, and local law enforcement and intelligence agencies to fill gaps in intelligence. Fusion centers serve as the primary focal points within the state and local environment for the receipt and sharing of information related to terrorist and homeland security threats. In March 2006, DHS released its Support and Implementation Plan for State and Local Fusion Centers. In this plan, DHS describes its responsibility to effectively collaborate with its federal, state, and local partners to share information regarding these threats. To facilitate the effective flow of information among fusion centers, DHS, other federal partners, and the national intelligence community, the plan calls for DHS to assign trained and experienced operational and intelligence personnel to fusion centers and includes the department’s methodology for prioritizing the assignments. The plan also notes that identifying, reviewing, and sharing fusion center best practices and lessons learned is vital to the success of DHS’s overall efforts. 
Accordingly, it recommends that DHS develop rigorous processes to identify, review, and share these best practices and lessons learned. In December 2008, DHS issued a document entitled Interaction with State and Local Fusion Centers Concept of Operations. According to the document, each DHS component field office whose mission aligns with the priorities of the fusion center is to establish a relationship with that center. This relationship should include but not be limited to routine meetings and consistent information sharing among DHS and state and local personnel assigned to each center. The FBI’s role in and support of individual fusion centers varies depending on the level of functionality of the fusion center and the interaction between the particular center and the local FBI field office. FBI efforts to support fusion centers include assigning special agents and intelligence analysts to fusion centers, sharing information, providing space or rent for fusion center facilities in some locations, and ensuring that state and local personnel have appropriate security clearances as well as access to FBI personnel. Since September 11, 2001, several statutes have been enacted into law designed to enhance the sharing of terrorism-related information among federal, state, local, and tribal agencies, and the federal government has developed related strategies, policies, and guidelines to meet its statutory obligations. Regarding border threats, the 9/11 Commission Act contains several provisions that address the federal government’s efforts to share information with state and local fusion centers that serve border communities. 
For example, the act provides for the Secretary of DHS to assign, to the maximum extent practicable, officers and intelligence analysts from DHS components—including CBP and ICE—to state and local fusion centers participating in DHS’s State, Local, and Regional Fusion Center Initiative, with priority given to fusion centers located along borders of the United States. The act provides that federal officers and analysts assigned to fusion centers in general are to assist law enforcement agencies in developing a comprehensive and accurate threat picture, and to create intelligence and other information products for dissemination to law enforcement agencies. In addition, federal officers and analysts assigned to fusion centers along the borders are to have, as a primary responsibility, the creation of border intelligence products that (1) assist state, local, and tribal law enforcement agencies in efficiently helping to detect terrorists and related contraband at U.S. borders; (2) promote consistent and timely sharing of border security-relevant information among jurisdictions along the nation’s borders; and (3) enhance DHS’s situational awareness of terrorist threats in border areas. The act further directed the Secretary of DHS to create a mechanism for state, local, and tribal law enforcement officers to provide voluntary feedback to DHS on the quality and utility of the intelligence products developed under these provisions. Also, in October 2007, the President issued the National Strategy for Information Sharing. According to the strategy, an improved information sharing environment is to be constructed on a foundation of trusted partnerships at all levels of government, based on a shared commitment to detect, prevent, disrupt, preempt, and mitigate the effects of terrorism. 
The strategy identifies the federal government’s information sharing responsibilities to include gathering and documenting the information that state, local, and tribal agencies need to enhance their situational awareness of terrorist threats and calls for authorities at all levels of government to work together to obtain a common understanding of the information needed to prevent, deter, and respond to terrorist attacks. Specifically, the strategy requires that state, local, and tribal law enforcement agencies have access to timely, credible, and actionable information and intelligence about individuals and organizations intending to carry out attacks within the United States; their organizations and their financing; potential targets; activities that could have a nexus to terrorism; and major events or circumstances that might influence state, local, and tribal actions. The strategy also recognizes that fusion centers are vital assets that are critical to sharing information related to terrorism, and will serve as primary focal points within the state and local environment for the receipt and sharing of terrorism-related information. In October 2001, we reported on the importance of sharing information about terrorist threats, vulnerabilities, incidents, and lessons learned. Specifically, we identified best practices in building successful information sharing partnerships that could be applied to entities trying to develop the means of appropriately sharing information. Among the best practices we identified were (1) establishing trusted relationships with a wide variety of federal and nonfederal entities that may be in a position to provide potentially useful information and advice; (2) agreeing to mechanisms for sharing information, such as outreach meetings and task forces; and (3) institutionalizing roles to help ensure continuity and diminish reliance on a single individual. 
Since we designated terrorism-related information sharing a high-risk area in January 2005, we have continued to monitor federal information sharing efforts. Also, as part of this monitoring, in April 2008, we reported on our assessment of the status of fusion centers and how the federal government is supporting them. Our fusion center report and subsequent testimony highlighted continuing challenges—such as the centers’ ability to access information and obtain funding—that DHS and DOJ needed to address to support the fusion centers’ role in facilitating information sharing among federal, state, and local governments. We also recognized the need for the federal government to determine and articulate its long-term fusion center role and whether it expects to provide resources to help ensure their sustainability, and we made a recommendation to that effect to which DHS agreed. At the time of this review, DHS was in the process of implementing the recommendation. In general, local and tribal officials in the border communities we contacted who reported to us that they received information directly from the local office of Border Patrol, ICE, or the FBI said it was useful for enhancing their situational awareness of crimes along the border and potential terrorist threats. Overall, where information sharing among federal, local, and tribal agencies along the borders occurred, local and tribal officials generally said they had discussed their information needs with federal agencies in the vicinity and had established information sharing partnerships with related mechanisms to share information with federal officials—consistent with the National Strategy for Information Sharing—while the agencies that reported not receiving information from federal agencies generally said they had not discussed their needs and had not established partnerships. 
Officials from three-quarters (15 of 20) of the local and tribal law enforcement agencies in the border communities we contacted said they received information directly from the local office of at least one federal agency (Border Patrol, ICE, or the FBI), and 9 of the 20 reported receiving information from the local office of all three of these federal agencies. However, 5 of the 20 reported that they did not receive information from any of these three agencies, in part because information sharing partnerships and related mechanisms to share information did not exist. We discuss information sharing partnerships and other factors that affect information sharing between federal agencies and local and tribal agencies in border communities later in this report. Figure 1 shows the number of local and tribal agencies that reported receiving information directly from the local office of Border Patrol, ICE, and the FBI. Overall, the local and tribal law enforcement agencies we contacted that received information from federal agencies in the vicinity found it useful in enhancing their situational awareness of border crimes and potential terrorist threats. Local and tribal law enforcement officials in 14 of 20 border communities we contacted said they received a range of information directly from local Border Patrol officials, including incident reports and alerts regarding specific individuals with potential links to criminal activity—such as illegal immigration and drug trafficking—as well as border-related threat assessments and reports of suspicious activity. According to the local and tribal officials, they received this information through direct outreach or visits, phone calls, and e-mails, as well as through issued alerts and bulletins. 
Of the 14 local and tribal officials that reported receiving information from Border Patrol officials in the vicinity, 12 said it was useful and enhanced their situational awareness of criminal activities and potential terrorist threats along the border and 2 did not take a position when asked about the information’s usefulness. For example, one tribal police department official reported that Border Patrol provides an area assessment that specifically targets the illicit smuggling of humans and contraband in and around the tribal lands, and depicted the threat posed by illegal activity occurring in the area. The official said that this assessment helped the department identify and emphasize those areas on which to focus. Local and tribal officials from the remaining 6 border communities we contacted said they did not receive any information directly from Border Patrol officials in the vicinity, in part because information sharing partnerships and related mechanisms to share information did not exist. Border Patrol officials in the communities we visited said they shared information related to various types of crimes with their local and tribal partners, including information related to illegal immigration and drug trafficking. The officials said this information is shared primarily through established information sharing partnerships and related mechanisms, including joint border operations and task forces, such as Integrated Border Enforcement Teams. The officials noted that they generally did not have specific terrorism-related information to share with local and tribal agencies, but that the information they share is intended to enhance situational awareness of border crimes that terrorists could potentially exploit, such as illegal immigration. 
Local and tribal law enforcement officials in 10 of 20 border communities we contacted said they received information from ICE officials in the vicinity, including specific persons of interest they should be on the lookout for, as well as information on drug smuggling and drug cartel activities, human smuggling, and other crimes. The officials said such information is important because it provides information that is pertinent to their immediate area. These agencies reported receiving information by e-mail or in person, as well as through participation in task forces, such as Border Enforcement Security Task Forces. For example, in one southwest border location, law enforcement officials said that the department receives information about potential criminal activities in their jurisdiction from ICE based on joint investigations it has conducted with the agency. Of the 10 local and tribal officials that reported receiving information from local ICE officials, 8 said it was useful and enhanced their situational awareness of criminal activities and potential terrorist threats along the border and 2 did not take a position when asked about the information’s usefulness. Officials from the remaining 10 local and tribal agencies we contacted said they did not receive any information from local ICE officials, in part because information sharing partnerships and related mechanisms to share information did not exist. According to ICE headquarters officials, in addition to sharing information at the local level, ICE has significantly expanded its interaction with state, local, and tribal law enforcement officials through automated systems that allow these officials to access and search certain DHS and ICE law enforcement and investigative information. 
Local and tribal law enforcement officials in 13 of 20 border locations we contacted said they received a range of information directly from local FBI officials, including intelligence assessments and bulletins, threat assessments and terrorism-related alerts, and information on criminal activity. Of the 13 local and tribal officials that reported receiving information from local FBI officials, 12 said it was useful and enhanced their situational awareness of potential terrorist threats along the border and 1 did not take a position when asked about the information’s usefulness. Local and tribal officials in 7 of the 20 border locations we contacted said they did not receive any information directly from local FBI officials, in part because information sharing partnerships and related mechanisms to share information did not exist. FBI officials in the border communities we visited said that they understood the desire of local and tribal law enforcement agencies to receive terrorism-related information that is specific to the border or to their geographic area in particular. However, the officials explained that in many cases, such information is classified, so the FBI can only share it with officials that have a need to know the information and have the requisite security clearances, as well as secure systems, networks, or facilities to safeguard the information. FBI officials also said that information related to ongoing investigations is generally only shared with local officials that participate in an FBI Joint Terrorism Task Force, since sharing the information outside the task force could jeopardize the investigations. Finally, the officials said that at times, terrorism-related information that is specific to the border simply may not exist. 
Local and tribal law enforcement officials we met with recognized that the FBI has limits on what it can share—including information that is classified—and said they had no intention of interfering with ongoing investigations. However, they also thought the FBI could better communicate when these limits were in effect and when the agency simply had no information to share. We discuss the importance of establishing information sharing partnerships to facilitate discussions between the parties and minimize expectation gaps later in this report. According to FBI officials at the locations we contacted, information that is not related to ongoing investigations is shared with local and tribal agencies through a variety of mechanisms, including task forces (e.g., Safe Trails Task Forces) and working groups; periodic outreach meetings the FBI conducts with local and tribal agencies to both share and solicit information; and through ongoing information sharing partnerships. FBI headquarters officials noted that each FBI field office—through its Field Intelligence Group—is to routinely assess the terrorism and criminal threats and risks in its geographic area of responsibility and report the results to FBI headquarters. The officials said that the assessments incorporate border-specific issues when appropriate, such as the illegal entry of possible terrorists, identification of human smuggling organizations, and the smuggling of weapons and other material which could be employed in terrorist attacks. However, the officials said that the results of the assessments are classified and are generally not shared with local and tribal officials, although in some cases selected information is declassified and distributed through alerts and bulletins. 
Further, according to FBI headquarters, much of the FBI’s information sharing with other law enforcement entities occurs at the officer or investigator level, often without the specific knowledge of the state and local personnel we interviewed for this report. The FBI also emphasized that most Indian Reservations and tribal law enforcement agencies are located in remote areas of the United States—100 miles or more away from an FBI office—where information sharing between FBI agents and tribal law enforcement occurs on an ad hoc basis, usually focused on investigations of crimes occurring on Indian reservations. We recognize that information sharing can occur at the officer or investigator level and on an ad hoc basis. However, as discussed later in this report, limiting information sharing to the officer and investigator level will not ensure that information sharing partnerships are established between agencies. Rather, discussions at senior levels—including the county sheriffs, local police chiefs, and tribal police chiefs we met with—could help ensure continuity in information sharing and diminish reliance on any one individual, which is a best practice in building successful information sharing partnerships. FBI headquarters also noted that in addition to sharing information directly with local and tribal officials in border communities, the FBI disseminates information to these officials through information systems—such as the FBI’s Law Enforcement Online and eGuardian system—and the FBI’s participation in state and local fusion centers and other interagency task forces and intelligence centers throughout the country (e.g., High Intensity Drug Trafficking Area Investigative Support Centers). The FBI noted that it is through these venues that the FBI also accomplishes its information sharing responsibilities to other federal, state, and local partners.
The National Strategy for Information Sharing identifies the federal government’s information sharing responsibilities to include gathering and documenting the information that state, local, and tribal agencies need to enhance their situational awareness of terrorist threats. Figure 2 shows the number of local and tribal agencies in the border communities we contacted that reported discussing their information needs with federal officials in the vicinity. Overall, where local and tribal law enforcement officials in border communities had discussed their information needs with federal officials in the vicinity, they also reported receiving useful information from the federal agencies that enhanced their situational awareness of border crimes and potential terrorist threats. Specifically:

- Officials from 7 of the 11 localities that had discussed their information needs with Border Patrol officials in the vicinity also reported receiving useful information from them.
- Officials from each of the 9 localities that had discussed their information needs with ICE officials in the vicinity reported receiving useful information from them.
- Officials from each of the 8 localities that had discussed their information needs with FBI officials in the vicinity reported receiving useful information from them.

Local and tribal officials in the border communities we contacted said they shared their information needs with federal officials through a variety of methods, including regularly scheduled meetings, periodic outreach performed by federal agencies, ad hoc meetings, and established working relationships. For example, one police chief along the southwest border said that he discussed his need for real-time information about border crimes that could affect his area with local federal agency officials. He noted that after he held these discussions, the federal officials took steps to provide his department with this type of information.
Nevertheless, as shown in figure 2 above, officials from about one-half of the local and tribal agencies in the border communities we contacted reported that federal officials had not discussed information needs with them, as called for in the National Strategy for Information Sharing. Our discussions with local and tribal officials revealed that where the needs were not discussed, local and tribal agencies also were less likely to have received information from federal agencies than in the localities where needs were discussed. Specifically:

- Officials from 4 of the 7 localities that had not discussed their information needs with Border Patrol officials in the vicinity also reported not receiving information from them, while the other 3 had received information from Border Patrol.
- Officials from each of the 9 localities that had not discussed their information needs with ICE officials in the vicinity also reported not receiving information from them.
- Officials from 7 of the 11 localities that had not discussed their information needs with FBI officials in the vicinity also reported not receiving information from them, while the other 4 reported receiving information from the FBI.

While the data above show that federal agencies shared information with local and tribal officials in several cases where information needs had not been discussed, identifying these needs could better support federal agency efforts to provide local and tribal agencies with useful information that is relevant to their jurisdiction. A primary reason why federal agencies had not identified the information needs of local and tribal agencies in many of the border communities we visited was that the methods federal agencies used to solicit the needs, while effective for some localities, were not effective for others.
Specifically, Border Patrol and ICE officials said that the information needs of these agencies were generally identified through outreach meetings or through working relationships with local and tribal law enforcement officers. Where these interactions did not exist, the federal agencies generally had not identified the information needs of local and tribal agencies. Also, according to a local police chief, while information needs may be discussed between local officers and federal agents on an ad hoc basis, his department cannot rely on these interactions to ensure that federal agencies have identified the overall information needs of the department. According to FBI headquarters officials, in developing field office area assessments, Field Intelligence Group personnel are required to gather information on terrorism and criminal threats and risks from local and tribal law enforcement agency officials, wherein the information needs of these agencies would be identified. FBI headquarters also noted that through outreach meetings and participation in task forces and working groups, FBI field offices continually evaluate the information needs of their local and tribal partners, as well as their own, and take actions to identify and fill any information gaps. Despite these efforts, less than one-half of the local and tribal agencies we contacted reported discussing their information needs with FBI officials in the vicinity. By more consistently and more fully identifying the information needs of local and tribal agencies in border communities, as called for in the National Strategy for Information Sharing, federal agencies could be better positioned to provide these local and tribal agencies with useful information that enhances their situational awareness of border crimes and potential terrorist threats.
The National Strategy for Information Sharing recognizes that effective information sharing comes through strong partnerships among federal, local, and tribal partners. In addition, the current strategic plans of DHS and the FBI both acknowledge the need to establish information sharing partnerships with state, local, and tribal law enforcement agencies to help the agencies fulfill their missions, roles, and responsibilities. Figure 3 shows the number of local and tribal agency officials in the border communities we contacted that reported having established or were developing an information sharing partnership with Border Patrol, ICE, and FBI officials in the vicinity. Overall, where local and tribal law enforcement officials in border communities had established or were developing information sharing partnerships with federal officials in the vicinity, they also reported receiving information from the federal agencies that enhanced their situational awareness of border crimes and potential terrorist threats. Specifically:

- Officials from 13 of the 14 localities that had or were developing an information sharing partnership with Border Patrol officials in the vicinity also reported receiving information from them.
- Officials from 10 of the 13 localities that had an information sharing partnership with ICE officials in the vicinity also reported receiving information from them, while the other 3 were not receiving information from ICE.
- Officials from each of the 11 localities that had an information sharing partnership with FBI officials in the vicinity also reported receiving information from them.
The local and tribal agencies that had developed partnerships with federal agencies in the vicinity had established a variety of mechanisms to share information, including regularly scheduled meetings, periodic outreach performed by federal agencies, ad hoc meetings, task forces and working groups, established working relationships, phone calls, e-mails, and issued alerts and bulletins. In some locations, Border Patrol and local law enforcement officials worked together in operational efforts that provided opportunities for federal and local officials to develop information sharing partnerships. For example, Operation Border Star in Texas—a state-led, multiagency effort focused on reducing crime, such as illegal immigration and drug trafficking, in targeted regions along the Texas–Mexico border— draws resources from local law enforcement agencies, the Texas Department of Public Safety, and others to support Border Patrol. Also, in upstate New York, a county sheriff’s department conducted joint patrols with Border Patrol, which extended into Canada. The patrols are designed to prevent the illegal entry of individuals into the United States and the smuggling of contraband. These operations provide an opportunity for officers from all of the agencies to work together and facilitate information sharing. Most of the local and tribal officials that had developed information sharing partnerships with ICE officials reported establishing them through personal contacts made either while working on various task forces alongside ICE personnel or between agents and officers in both agencies. For example, one tribal police chief said that his department has a memorandum of understanding with ICE, which allows the tribal police to perform certain ICE duties in the enforcement of customs laws and facilitates information sharing between the agencies. 
Nevertheless, as shown in figure 3 above, officials from several local and tribal agencies in the border communities we contacted reported that they had not established information sharing partnerships with Border Patrol, ICE, or FBI officials in the vicinity. Where partnerships were not established, local and tribal agencies also were less likely to have received information from federal agencies than in the localities where partnerships were established. Specifically:

- Officials from each of the 5 localities that did not have an information sharing partnership with local Border Patrol officials in the vicinity also reported they had not received information from them.
- Officials from each of the 7 localities that did not have an information sharing partnership with ICE officials in the vicinity also reported they had not received information from them.
- Officials from 7 of the 9 localities that said they did not have an information sharing partnership with FBI officials also reported they had not received information from the FBI.

The local and tribal officials that did not have a partnership with federal officials and were not receiving information said that effective mechanisms for sharing information—a best practice in building successful information sharing partnerships—had not been established. One reason the officials said established mechanisms were not effective was that they did not have enough resources or funding to participate in the regular meetings or forums that Border Patrol, ICE, and FBI officials in the vicinity used to share information, establish face-to-face contact, and build trusting relationships. For example, an official from one local police department said he was aware of Border Patrol’s efforts to share information through such meetings, but the department did not have the resources needed to participate, since doing so would leave the office short one out of eight patrol officers.
Officials at another location said they no longer received invitations to Border Patrol meetings. Similarly, local and tribal officials in other localities said they did not have enough resources to send individuals to participate in outreach meetings that FBI officials said were used to share information, because in some cases the meetings were held more than 100 miles away. A local county sheriff also said that the FBI’s meetings were initially productive but interest faded because of the lack of useful information that was shared during the meetings. An FBI official from another locality noted that FBI officials are sometimes limited in what they can discuss during these meetings if the local and tribal representatives in attendance do not have the appropriate security clearances or do not have a need to know about the information. These examples illustrate the importance of establishing partnerships to facilitate discussions between the parties and minimize expectation gaps regarding the availability of and limits in sharing information. Border Patrol, ICE, and FBI officials also said that information is shared with local and tribal agencies through multiagency task forces, such as ICE Border Enforcement Security Task Forces and FBI Joint Terrorism Task Forces. However, local and tribal officials—especially those in small departments in rural border communities—said these mechanisms to share information were not effective for them, because they did not have enough resources to dedicate personnel to the task forces. Police chiefs and other senior local and tribal officials recognized that ad hoc discussions between officers and investigators are also mechanisms federal agencies in the vicinity use to share information with local and tribal agencies. 
The officials noted, however, that limiting information sharing to the officer and investigator level is not sufficient to ensure that senior-level department officials are aware of the information, which in turn could be disseminated to other personnel within the department. For example, a police chief in a local community along the southwest border said that he does not need the FBI to brief his entire department, but the FBI should at least brief the police chief. Best practices in building information sharing partnerships call for institutionalizing information sharing through discussions at senior levels to ensure continuity in sharing and diminish reliance on single individuals. We recognize that developing and maintaining information sharing partnerships with the numerous local and tribal law enforcement agencies along the borders is a significant challenge, and that Border Patrol, ICE, and the FBI have made progress in this area. However, additional efforts by these federal agencies to periodically assess the extent to which partnerships and related mechanisms to share information exist, fill gaps, and address barriers to establishing such partnerships and mechanisms could help ensure that information is shared with local and tribal law enforcement agencies that enhances their situational awareness of border crimes and potential terrorist threats. Federal agencies at two of the five fusion centers we visited were supporting fusion center efforts to develop border intelligence products that enhanced local and tribal agencies’ situational awareness of border crimes and potential terrorist threats. DHS recognizes that it needs to add personnel to fusion centers in border states to support the creation of such products, and is developing related plans, but cited funding issues and competing priorities as barriers to deploying such personnel.
Further, additional DHS and FBI actions to (1) identify and market promising practices from fusion centers that develop border intelligence products and (2) obtain feedback from local and tribal officials on the utility and quality of the products and use the feedback to improve those products would strengthen future fusion center efforts to develop such products. Federal personnel at two of the five fusion centers we visited—the Arizona Counterterrorism Information Center and the New York State Intelligence Center—were routinely contributing to border intelligence products that were designed to enhance local and tribal law enforcement agencies’ situational awareness of border crimes and potential terrorist threats. Fusion center officials in these states emphasized that the physical presence of federal personnel at the fusion center—including intelligence analysts from I&A, Border Patrol, ICE, and the FBI—was critical to developing the border products, in part because their presence facilitated regular meetings with center personnel and access to federal information systems. According to local and tribal officials in the border communities we contacted in Arizona and New York, the border intelligence products they received generally enhanced their situational awareness of border-related crimes that could have a nexus to terrorism, such as drug trafficking and illegal immigration. However, the border products usually did not contain terrorism-related information that was specific to the border because such information did not exist or a link between a border crime and terrorism had not been established, according to fusion center officials. The two fusion centers also routinely generated terrorism information products that were provided to local and tribal agencies throughout the state to enhance their situational awareness of terrorist threats. 
Officials from the two fusion centers said that any terrorism-related information that is specific to the border would be included in both the border product and terrorism product. Below is additional information about the border intelligence products developed by the two fusion centers:

Arizona Counterterrorism Information Center: The center issues a border-specific product (the “Situational Awareness Bulletin”) twice a week with input from the state’s Department of Public Safety and numerous federal agencies, including DHS’s I&A, Border Patrol, and ICE, and the FBI. The center initiated the bulletin in 2008 to enhance the situational awareness of local law enforcement officials along the Arizona border as drug-related violence on the Mexican side of the border increased. The bulletin now provides information about all types of crimes occurring in the vicinity of the border, as well as incidents from around the country and around the world. Topics have included immigration issues, burglaries at public safety offices, suspicious activities around critical infrastructure, stolen military uniforms, and stolen blank vehicle certificates of title.

New York State Intelligence Center: The center’s Border Intelligence Unit issues a border-specific report quarterly with input from the New York State Police and numerous federal agencies, including DHS’s I&A, Border Patrol, and ICE, and the FBI. The report is intended to compile information on all types of crimes along the entire border between New York and Canada into one product for the convenience of local and tribal law enforcement agencies. This report covers crimes—such as illegal immigration and drug trafficking—and includes the results of joint federal and state operations conducted along the border. The report also contains news and updates on policies related to border security.
According to center officials, the report grew out of recognition that various federal component agencies have offices that cover the border territory and could, therefore, collectively provide consistent intelligence information that would be helpful in enhancing the situational awareness of law enforcement agencies in border communities throughout the state. The Border Intelligence Unit also issues bulletins with actionable information on border-related crimes on an as-needed basis. In addition to the benefits that officials from the two fusion centers cited from having on-site input and collaboration from representatives of three DHS components, the FBI, and other agencies, the majority of local and tribal agencies in the border communities we contacted found the border intelligence products to be useful. Specifically, six of the seven local and tribal law enforcement agencies we contacted in Arizona and New York were receiving border intelligence products from the fusion center in their state and all six found that the products were useful or met their information needs. For example, one local law enforcement official said that his agency receives the quarterly border report developed by the New York fusion center and that he finds it useful as it sometimes contains issues directly related to his jurisdiction. The remaining locality did not comment on why the products were not received. According to officials from the other three fusion centers we visited, the presence of additional federal personnel would support their efforts to develop border intelligence products that help to provide local and tribal law enforcement agencies along the borders with situational awareness of potential terrorist threats. For example:

Washington Fusion Center: The Washington state fusion center is colocated with the local Joint Terrorism Task Force, which facilitates access to FBI information, and has representatives from DHS’s I&A and ICE.
According to Border Patrol headquarters officials, as of August 2009, the agency was in the process of assigning a full-time representative to the fusion center. The fusion center director noted that this official, once integrated into the center’s report development process, would contribute greatly towards producing a border intelligence product. The fusion center director added that the border intelligence product would focus on all border crime issues, including any suspected terrorist activity.

Montana All Threat Intelligence Center: The Montana All Threat Intelligence Center is colocated with the local Joint Terrorism Task Force, which facilitates access to FBI information. According to the fusion center director, a CBP analyst has supported the center part-time, though most of the time that person is working at the CBP office located 90 miles away. In August 2009, Border Patrol headquarters officials said that a full-time representative had been assigned to the fusion center. The fusion center director said that he expected an analyst from I&A to be assigned to the fusion center, but was unsure when that would happen. According to the director, additional federal personnel and their ability to analyze border-related information would enhance the fusion center’s efforts to routinely produce a border intelligence product.

Texas Intelligence Center: The Texas Intelligence Center is located within the Texas Department of Public Safety, and currently has representatives from I&A, ICE, and the FBI. Although the center prepares and disseminates a number of products, including a daily brief covering, among other issues, significant arrests, seizures, and homeland security, it does not prepare an intelligence product that focuses on border issues. Officials at the center said that the state’s Border Security Operations Teams located along the border distribute information on border security issues to local and tribal agencies.
According to the officials, the center will consider developing a border intelligence product once personnel from other appropriate agencies, such as Border Patrol, are in place at the fusion center. The director of I&A’s State and Local Fusion Center Program Management Office—the office responsible for managing the relationship between I&A and fusion centers—acknowledged the value of having personnel from DHS components physically present at fusion centers, not only for state and local law enforcement but for federal agencies as well. The director noted that deploying DHS analysts to fusion centers is critical to developing trusted partnerships, which in turn will facilitate collaboration and information sharing among federal, state, local, and tribal officials. However, the director explained that, to date, the office has not received the funding needed to deploy the personnel to other centers and has other competing priorities. DHS has had a plan for deploying personnel from its component agencies to fusion centers since June 2006, when the DHS Secretary signed the Support Implementation Plan for State and Local Fusion Centers. The plan calls for embedding DHS personnel with access to information, technology, and training in fusion centers to form the basis of a nationwide homeland security information network for collaboration and information sharing. According to the director of I&A’s State and Local Fusion Center Program Management Office, in part because of limited resources, the department is taking a risk-based approach to determining where to deploy officers and analysts. As such, the department considers several factors in addition to available funding, including population density, the number of critical infrastructure facilities, and the results of fusion center assessments the office conducts to determine the readiness of the center to use the department’s resources.
Senior I&A officials noted that the department places some priority on deploying DHS personnel to state and local fusion centers located in border states, but that other factors also have to be considered under the department’s risk-based approach. According to DHS, as of September 2009, I&A had deployed 41 intelligence analysts to state, local, and regional fusion centers. DHS plans to have a total of about 70 I&A analysts at fusion centers by the end of fiscal year 2010 and an equal number of officers and analysts from DHS component agencies (e.g., Border Patrol and ICE). Figure 4 shows DHS personnel that were assigned to fusion centers in the 14 land border states as of August 2009. According to CBP headquarters officials, the agency has only a limited number of Border Patrol intelligence analysts, and is currently working with I&A to identify priority fusion centers. Officials from ICE’s Office of Intelligence also said that the agency is working with I&A to develop a strategy to enhance ICE participation at state and local fusion centers. Further, although the 9/11 Commission Act included an authorization for $10 million for each of the fiscal years 2008 through 2012 for DHS to carry out the State, Local, and Regional Fusion Center Initiative—including the assignment of CBP, ICE, and other DHS stakeholder personnel to fusion centers—DHS did not specifically request funding for the initiative and no funds were appropriated for fiscal years 2008 or 2009 for this specific purpose. Rather, for fiscal years 2008 and 2009, DHS reprogrammed funds from other activities to support the fusion center initiative. According to the director of I&A’s State and Local Fusion Center Program Management Office, DHS requested funding for the initiative in its fiscal year 2010 budget. 
Although the 9/11 Commission Act did not address FBI participation at fusion centers, FBI intelligence analysts and special agents were dedicated to fusion centers in 8 of the 14 land border states as of September 2009, in addition to FBI personnel at Joint Terrorism Task Forces or Field Intelligence Groups that were colocated with these fusion centers. The FBI noted that it has committed millions of dollars over the years to ensure that its classified computer system and other databases and equipment were deployed to support FBI personnel assigned on a full- or part-time basis to fusion centers. According to the FBI, the bureau has worked with DHS to develop uniform construction standards and security protocols specifically designed to facilitate the introduction of federal classified computer systems in fusion centers. Further, the FBI noted that it has deployed the eGuardian system—an unclassified counterterrorism tool—to fusion centers and other entities. Border intelligence products—such as those developed by the Arizona and New York fusion centers—represent potential approaches that other border state fusion centers could use to target products for local and tribal law enforcement agencies in border communities. I&A has a framework in place to identify and collect promising practices at fusion centers nationwide, as called for in the department’s March 2006 Support Implementation Plan for State and Local Fusion Centers and the December 2008 Interaction with State and Local Fusion Center Concept of Operations. Specifically, the implementation plan for fusion centers recommended that rigorous processes be used to identify, review, and share information regarding promising practices and lessons learned.
Consistent with that recommendation, the concept of operations identifies leveraging promising practices for information sharing and revising existing processes when necessary and advisable as one of the guiding principles of interaction with fusion centers. However, as of July 2009, I&A had not yet identified or explored promising practices related to fusion center efforts to develop border intelligence products. According to the director of I&A’s Border Security Division, such analysis has potential value but has not yet occurred because the division has been focusing on developing its own products and providing other support to fusion centers. While it is understandable that I&A would focus on its own activities, DHS could benefit from identifying promising practices related to fusion center border intelligence products because of the importance the federal government places on fusion centers to facilitate the sharing of information. By identifying such practices, DHS would be better positioned to leverage existing resources and help ensure that local and tribal agencies in border communities receive information that enhances their situational awareness of potential terrorist threats. Also, DHS had not obtained feedback on the utility and quality of the border intelligence products that its analysts in fusion centers have helped to develop. The 9/11 Commission Act requires DHS to (1) create a voluntary feedback mechanism for state, local, and tribal law enforcement officers and other consumers of the intelligence and information products developed by DHS personnel assigned to fusion centers under the act and (2) provide annual reports to committees of Congress describing the consumer feedback obtained and, if applicable, how the department has adjusted its own production of intelligence products in response to that consumer feedback. 
However, DHS’s December 2008 and August 2009 reports to Congress did not describe the feedback obtained on the intelligence products that its analysts in fusion centers helped to produce—including border intelligence products—or adjustments made in response to the feedback. DHS recognizes that it needs to take additional actions to obtain feedback from local and tribal law enforcement officers who are consumers of the intelligence products that I&A produces. For example, in mid-2009, I&A hired a contractor to initiate feedback pilot projects, including one currently underway to evaluate and implement processes for gathering and evaluating feedback responses. However, these projects are designed to solicit feedback on products developed by I&A and do not specifically include products that DHS personnel in fusion centers help to develop, including border intelligence products. Therefore, these projects may not support I&A efforts to obtain feedback under the 9/11 Commission Act on products that DHS personnel in fusion centers help to develop. DHS’s August 2009 report to Congress generally illustrates the value in obtaining feedback on intelligence products. For example, in one instance, the report notes that a state fusion center expressed concerns that the perspectives of three southwest border state fusion centers were not included in an assessment that I&A headquarters produced on border violence. The feedback resulted in teleconferences and other I&A actions to ensure that state and local perspectives are included in future assessments of border violence. Similarly, obtaining feedback on the border intelligence products that DHS analysts in fusion centers help to produce would support other fusion center efforts to develop such products and the department’s efforts to adjust its own production of intelligence products in response to that consumer feedback. 
The two fusion centers we contacted that were creating border intelligence products with the support of DHS personnel (Arizona and New York) had established their own mechanisms for obtaining feedback from local and tribal consumers of the products. Specifically, the fusion centers attached feedback forms to the border products, but have received low response rates, according to center officials. As a result, the fusion centers took other actions to solicit feedback on the border products, such as through direct outreach with local and tribal consumers of the information. Officials from both fusion centers said that the feedback has generally been positive and that the border products have been modified in response to this feedback. According to the officials, since these products are developed by the fusion centers, the centers do not routinely provide related feedback to DHS on the value of the contributions of its staff and intelligence input. However, the fusion centers’ efforts to obtain feedback on the border intelligence products—in addition to using feedback forms—demonstrate the feasibility of DHS taking additional actions to collect feedback on the products and report its findings to congressional committees under the 9/11 Commission Act. DHS agrees that it could take additional actions to collect this feedback, which could be done as part of the department’s ongoing feedback pilot projects. By working with fusion centers to obtain feedback on the border intelligence products developed, DHS could better support fusion center efforts to maintain and improve the utility and quality of information provided to local and tribal law enforcement agencies along the borders. 
This information could also be useful to I&A in modifying its own border intelligence products to better meet the needs of fusion centers, assist the department in making decisions on how to best utilize its limited resources at fusion centers, and be responsive to its statutory reporting requirements. Detecting the warning signs of potential terrorist activities and sharing the information with the proper agencies provides an opportunity to prevent a terrorist attack. However, most of the local and tribal officials in the border communities we contacted did not clearly know what suspicious activities federal agencies and fusion centers wanted them to report, how to report them, or to whom. The federal government is working with state and local entities to develop a standardized suspicious activity reporting process that, when implemented, could help address these issues. In the meantime, providing local and tribal officials with suspicious activity indicators that are associated with criminal activity along the borders could assist the officials in identifying potential terrorist threats. According to an October 2008 intergovernmental report on suspicious activities, fundamental to local and tribal law enforcement agencies’ efforts to detect and mitigate potential terrorist threats is ensuring that front-line personnel recognize and have the ability to document behaviors and incidents indicative of criminal activity associated with international terrorism. Unlike behaviors, activities, or situations that are clearly criminal in nature—such as car thefts, burglaries, or assaults—suspicious activity reporting involves suspicious behaviors that have been associated with terrorist activities in the past and may be predictive of future threats to public safety. Examples include surveillance, photographing of facilities, site breaches or physical intrusion, cyber attacks, and the probing of security. 
To varying degrees, federal agencies and fusion centers provided local and tribal agencies in the border communities we contacted with alerts, warnings, and other information that enhanced the local and tribal agencies’ situational awareness of potential terrorist threats. As an additional tool, the FBI and fusion centers in two of the five states we contacted had developed lists of suspicious activities—in the form of reference cards or brochures—to help local and tribal agencies determine what behaviors, activities, or situations are indicators of potential terrorist activities and should be reported for further analysis. However, officials from 13 of the 20 local and tribal agencies we contacted said they did not recall being provided with a list of the suspicious activities or indicators that rise to the level of potential terrorist threats and should be reported, while officials from 7 of the 20 agencies said they had received such indicators from either the FBI, the state fusion center, or another entity. According to the October 2008 intergovernmental report on suspicious activities, local law enforcement agencies are critical to efforts to protect local communities from another terrorist attack. The report also notes that to effectively conduct these duties, it is critical that the federal government ensure that local law enforcement personnel can recognize and have the ability to document behaviors and incidents indicative of criminal activity associated with domestic and international terrorism. While federal agencies and fusion centers had taken steps to disseminate or discuss terrorism-related indicators with local and tribal officials—such as through mass mailings and during outreach meetings and law enforcement conferences—these actions did not ensure that local and tribal agencies were aware of them, in part because the mechanisms used to share information were not always effective, as discussed earlier in this report. 
As a result of not being aware of the suspicious activity indicators, local officials in three border communities we contacted said they did not clearly know what information federal agencies and fusion centers wanted them to collect and report. Increased awareness of these indicators would better position local and tribal agencies along the border to identify and report behaviors and incidents indicative of criminal activity associated with terrorism. Also, in about half of the border communities we contacted, local and tribal agency officials were not aware of the specific processes they were to use to report terrorism-related suspicious activities or to whom this information should be reported because federal agencies had not yet defined such processes. Absent defined processes, the local and tribal officials had independently developed policies and procedures for gathering and reporting suspicious activities and they provided varying responses regarding how and to whom they would submit suspicious activities that may have a nexus to terrorism. Responses included reporting suspicious activities to a fusion center, the FBI, or another federal agency. Several local and tribal officials we contacted said they would report this information to the local federal official—e.g., Border Patrol, ICE, or the FBI—with whom they had developed a relationship. By defining reporting processes, federal, local, and tribal agencies would be in a better position to conduct more efficient collection and analysis of suspicious activities and share the results on a regional or national basis. 
Also, internal control standards call for management to ensure that there are adequate processes for communicating with, and obtaining information from, external stakeholders that may have a significant effect on the agency’s ability to achieve its goals, and that information is recorded and communicated to the entities that need it in a form, and within a time frame, that enables them to carry out their responsibilities. At the national level, the federal government is working with state and local law enforcement entities on the National Suspicious Activity Reporting Initiative to standardize the reporting of suspicious activities that may be related to terrorism. The long-term goal of the initiative is to develop and implement consistent national policies, processes, and best practices by employing a standardized, integrated approach to gathering, documenting, processing, analyzing, and sharing information about suspicious activity that is potentially related to terrorism. One of the immediate goals of the initiative is to help ensure that suspicious activity reports with a potential connection to terrorism are expeditiously provided by local and tribal law enforcement agencies to the FBI. As of September 2009, related pilot projects were ongoing at fusion centers in 12 major cities. According to the DOJ official who is overseeing the initiative, an evaluation of the pilots will be completed by late 2009, but fully implementing the initiative across the country could take up to 2 years. Until the National Suspicious Activity Reporting Initiative is fully implemented, additional federal agency efforts to establish defined processes for local and tribal officials in border communities to report suspicious activities could help ensure that information is collected and shared with the most appropriate entity.
According to the director of I&A’s Border Security Division, senior intelligence officials at fusion centers in two of the five border states we contacted, and other subject matter experts—including federal and state officials who were involved in developing suspicious activity indicators for local and tribal agencies in border communities—the suspicious activity indicators could be more useful if they also contained terrorism-related behaviors, activities, or situations that were more applicable to the border or border crimes and were periodically updated to reflect current threats. Officials from three of the local law enforcement agencies we contacted also suggested that border-specific indicators would help them link potential terrorism-related activities to crimes they are more likely to encounter along the border, such as illegal immigration and currency smuggling. However, our review found that the suspicious activity indicators being utilized by the National Suspicious Activity Reporting Initiative, as well as those developed by the FBI and fusion centers, generally did not include indicators that were specific to the border. According to the DOJ official who was overseeing the implementation of the national initiative, the primary suspicious activity indicators that were validated by the law enforcement and intelligence community for use in the major city pilot projects were designed to be general and applicable to local and tribal officials located anywhere in the country. The official noted that the automated system that is being used by law enforcement agencies to record the suspicious activities during the pilot projects was designed to accommodate “sub-lists” that contain indicators that are applicable to specific sectors, such as the critical infrastructure sector. The official said that there was not a sub-list for border-specific indicators, but that he saw the potential for developing such a list.
The official said that I&A would be the entity with the requisite expertise for developing such a list. In April 2009, I&A deployed an intelligence analyst from its Border Security Division to DHS’s Homeland Security Intelligence Support Team to develop terrorism indicators that are specific to the southwest border. According to the director of the Border Security Division, the analyst is looking for trends and patterns in terrorism-related incident reports that are generated by local and tribal law enforcement officials along the southwest border. The director said that I&A has not yet determined a final date for developing the suspicious activity indicators since there is a lot of information that has to be analyzed. The official noted that I&A is considering deploying another intelligence analyst to the northern border to perform similar analyses. According to the director of I&A’s Border Security Division, in his former position as a border analyst in the intelligence community, he worked with CBP and ICE to develop border-related indicators that were potential precursors to terrorist activities. The official noted the importance of periodically updating and consistently disseminating these indicators of terrorism-related behaviors, activities, or situations that reflect current border threats. According to Border Patrol and ICE headquarters and field personnel, neither agency had developed suspicious activity indicators that were specific to the borders. Additional DHS and FBI actions to develop, periodically update, and consistently disseminate indicators of terrorism-related activities that focus on border threats could help to maximize the utility of suspicious activity indicators as a counterterrorism tool in border communities. As discussed in the National Strategy for Information Sharing, state, local, and tribal government officials are critical to our nation’s efforts to prevent future terrorist attacks. 
Because these officials are often in the best position to identify potential threats that exist within their jurisdictions, they must be partners in information sharing that enhances situational awareness of border crimes and potential terrorist threats. In border communities, this partnership is particularly important because of the vulnerability to a range of criminal activity that exists along our nation’s borders. Therefore, a more robust effort by federal agencies to identify the information needs of local and tribal law enforcement agencies along the borders and periodically assess the extent to which partnerships exist and related mechanisms to share information are working—and fill gaps and address barriers where needed—could better enable federal agencies to provide useful information to their local and tribal partners that enhances situational awareness. The work of state-run fusion centers is also critical to the nation’s efforts to prevent terrorist attacks. Fusion centers in the border states we visited demonstrated a range of practices related to developing border intelligence products that could serve as a model for other fusion centers. By identifying and sharing these promising practices, DHS and the FBI could help strengthen the work of fusion centers nationally in addition to enhancing situational awareness of local and tribal law enforcement. Also, by working with the centers to obtain feedback on border intelligence products, DHS and the FBI could enhance the utility of those products that fusion centers share with local and tribal law enforcement agencies. 
Finally, until a national suspicious activity reporting process is in place, more consistently providing local and tribal officials in border communities with information on the suspicious terrorism-related activities they should report—including those related to border threats— and establishing processes for reporting this information could help ensure that critical information is reported and reaches the most appropriate agency to take action. To help ensure that local and tribal law enforcement agencies in border communities receive information from local federal agencies that enhances their situational awareness of border crimes and potential terrorist threats, we recommend that the Secretary of Homeland Security and Director of the FBI, as applicable, require Border Patrol, ICE, and FBI offices in border communities to take the following two actions: (1) more consistently and fully identify the local and tribal agencies’ information needs and (2) periodically assess the extent to which partnerships and related mechanisms to share information exist, fill gaps as appropriate, and address barriers to establishing such partnerships and mechanisms. To promote future efforts to develop border intelligence products within fusion centers, we recommend that the Secretary of Homeland Security and the Director of the FBI collaborate with fusion centers to take the following two actions: (1) identify and market promising practices used to prepare these products and (2) take additional actions to solicit feedback from local and tribal officials in border communities on the utility and quality of the products generated. 
To maximize the utility of suspicious activity indicators as a counterterrorism tool, we recommend that the Secretary of Homeland Security and the Director of the FBI collaborate with fusion centers to take the following two actions: (1) take steps to ensure that local and tribal law enforcement agencies in border communities are aware of the specific types of suspicious activities related to terrorism that they are to report and the process through which they should report this information and (2) consider developing, periodically updating, and consistently disseminating indicators of terrorism-related activities that focus on border threats. On November 10, 2009, we provided a draft of this report to DHS and DOJ for comment. In its written response, DHS noted that CBP, ICE, and I&A are continuing and expanding efforts to share information. DHS agreed with all of our recommendations in this report. Specifically, DHS agreed with our recommendation related to the need for Border Patrol and ICE to (1) more fully identify the information needs of local and tribal agencies along the borders and (2) periodically assess the extent to which partnerships and related mechanisms to share information exist. For example, CBP agreed that a systematic and standardized process to disseminate information and receive feedback is vital to situational awareness for local and tribal law enforcement partners who are within the immediate areas adjacent to the border. CBP noted that Border Patrol plans to develop a list of individuals who will serve as liaisons to local and tribal agencies and also develop a list of local and tribal contacts. According to CBP, the Border Patrol liaisons will then make initial efforts to assess the information needs of the law enforcement partners and take other actions to determine and publish guidance on information sharing. To ensure that information shared is useful, Border Patrol plans to conduct annual surveys of its partners. 
Border Patrol envisions that this standardized process will be in place by the end of fiscal year 2010. When implemented, the Border Patrol’s actions should meet the intent of our recommendation. ICE also agreed with the recommendation and plans to work with CBP and the FBI to enhance local and tribal law enforcement agencies’ situational awareness, but ICE did not provide details on the specific actions it will take. I&A provided, or otherwise highlighted, additional information on the current status of information sharing among federal, state, and local agencies as it pertains to border security. DHS also agreed with our recommendation related to the need for DHS and the FBI to collaborate to (1) identify and market promising practices used to prepare border intelligence products within fusion centers and (2) take additional actions to solicit feedback from local and tribal officials on the utility and quality of the products generated. According to I&A—the DHS component that has the lead in addressing this recommendation—the department has initiated the creation of a broad Joint Fusion Center Program Management Office, which represents a departmentwide effort that seeks to more closely coordinate support to fusion centers with department component agencies, including CBP and ICE. I&A also noted that its intelligence specialists in fusion centers act as conveyors of information about promising practices for developing border information products. Finally, I&A noted that the department hosts the Lessons Learned and Best Practices Web site, which can be utilized to promote future efforts to develop border intelligence products within fusion centers. While these actions could potentially support DHS efforts to identify and market promising practices used to prepare border intelligence products within fusion centers, I&A did not provide any specific information on the extent to which such practices have been identified and marketed.
I&A’s comments also did not address what actions, if any, are ongoing or planned to solicit feedback from local and tribal officials on the utility and quality of the products generated. ICE also agreed with the recommendation and noted that it will work with the FBI to implement it, but ICE did not provide details on the specific actions it will take. Finally, DHS agreed with our recommendation related to the need for DHS and the FBI to collaborate to (1) ensure that local and tribal law enforcement agencies in border communities are aware of the suspicious activities related to terrorism they are to report and the process for reporting this information and (2) consider developing and disseminating indicators of terrorism-related activities that focus on border threats. ICE agreed with the recommendation but deferred to I&A on the implementation specifics. I&A provided additional information on the status of the National Suspicious Activity Reporting Initiative and efforts to test and evaluate related policies, procedures, and technology. According to I&A, the evaluation phase of the initiative at participating sites concluded at the end of September 2009 and a final report will be issued that will document lessons learned and best practices. I&A noted that the initiative will then be transitioned from a preoperational environment to a broader nationwide implementation. However, as discussed in our report, the DOJ official who is overseeing the initiative noted that the nationwide implementation could take up to 2 years. Therefore, our recommendation is intended for DHS and the FBI to take interim actions until the national initiative is fully implemented, such as more consistently providing local and tribal officials in border communities with information on the suspicious terrorism-related activities they should report—including those related to border threats— and establishing processes for reporting this information. 
The full text of DHS's written comments is reprinted in appendix II. DHS also provided technical comments, which we incorporated in this report where appropriate. On December 8, 2009, DOJ’s Audit Liaison Office, within the Justice Management Division, stated by e-mail that the department will not be submitting technical or formal comments on the draft report. As agreed with your office, we plan no further distribution of this report until 30 days from its date, unless you publicly announce its contents earlier. At that time, we will send copies to the Secretary of Homeland Security, the Attorney General, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives of our review were to determine the extent to which (1) local and tribal law enforcement agencies in border communities are receiving information from their federal partners that enhances the agencies’ situational awareness of border crimes and potential terrorist threats; (2) federal agencies are assisting fusion centers’ efforts to develop border intelligence products that enhance local and tribal agencies’ situational awareness of border crimes and potential terrorist threats; and (3) local and tribal law enforcement agencies in border communities are aware of the specific types of suspicious activities related to terrorism they are to report and to whom, and the process through which they should report this information.
To identify criteria for answering these questions, we analyzed relevant laws, directives, policies, and procedures related to information sharing, such as the October 2007 National Strategy for Information Sharing and the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act). The 9/11 Commission Act provides for the establishment of a State, Local, and Regional Fusion Center Initiative at the Department of Homeland Security (DHS) and contains numerous provisions that address the federal government’s information sharing responsibilities to state and local fusion centers, including those that serve border communities. To examine the information sharing that occurs between local and tribal law enforcement agencies in border communities and federal agencies that have a local presence in these communities—U.S. Customs and Border Protection’s Border Patrol and U.S. Immigration and Customs Enforcement (ICE), and the Federal Bureau of Investigation (FBI)—we conducted site visits to five states that are geographically dispersed along the northern and southwest borders (Arizona, Montana, New York, Texas, and Washington). 
Within these states, we selected a nonprobability sample of 23 local and tribal law enforcement agencies to visit based on one or more of the following characteristics: locations known to be or suspected of being particularly vulnerable to illegal entry or criminal activity; land ports of entry with heavy inbound passenger traffic; locations in proximity to areas at the border where there is little or no continuous federal border enforcement presence; locations that include Native American tribal communities with lands that abut the border; locations where federal, state, and local communities have in the past worked, or are currently working, with federal agencies to support border security either informally or through pilot programs for sharing information; locations in proximity to federal agencies at the border; and geographically dispersed locations along the northern and southwest land borders. We met with county sheriffs, local police chiefs, and tribal police chiefs from the 23 law enforcement agencies and asked them about the information they received from federal agencies in their localities. We also asked whether federal officials had discussed local and tribal officials’ information needs and had established information sharing partnerships and related mechanisms to share information with them—consistent with the National Strategy for Information Sharing and best practices described in GAO reports. After our visits, we sent follow-up questions to all 23 local and tribal agencies we visited in order to obtain consistency in how we requested and obtained information for reporting purposes. Three agencies did not respond to our follow-up efforts and were excluded from our analysis. Thus, our analysis and reporting are based on our visits and subsequent activities with the 20 local and tribal agencies that responded to our follow-up questions.
We also met with local representatives of Border Patrol, ICE, and the FBI to discuss their perspectives on the information sharing that occurred, and compared this information to that provided by local and tribal agencies in order to identify barriers to sharing and related causes. Because we selected a nonprobability sample of agencies in border communities to contact, the information we obtained at these locations may not be generalized across the wider population of law enforcement agencies in border communities. However, because we selected these border communities based on the variety of their geographic location, proximity to federal agencies, and other factors, the information we gathered from these locations provided us with a general understanding of information sharing between federal agencies and state, local, and tribal law enforcement agencies along the border. To assess the extent to which federal agencies assisted fusion centers in developing border intelligence products, as discussed in the 9/11 Commission Act, we reviewed products developed by fusion centers to determine the extent to which they provided border security–relevant information. We also met with and conducted subsequent follow-up conversations with fusion center directors and other senior fusion center officials in the five states we visited (Arizona, Montana, New York, Texas and Washington) and obtained their views on the importance of developing such products and about the level of support federal agencies were providing in developing these products. We asked each of the 20 local and tribal law enforcement agencies we contacted whether they received border intelligence products from their state’s primary fusion center and, if so, we discussed their views on the usefulness of such products. 
We also interviewed senior officials from DHS’s Office of Intelligence and Analysis—the office responsible for coordinating the federal government’s support to fusion centers—and headquarters and field components of Border Patrol, ICE, and the FBI to discuss their efforts to support fusion centers’ development of border intelligence products, identify promising practices for developing such products, and obtain feedback from local and tribal officials on the usefulness of the products. We also reviewed applicable documents that address fusion centers, including the 9/11 Commission Act, the National Strategy for Information Sharing, fusion center guidelines, and DHS planning documents and reports. Finally, to determine the extent to which local and tribal agencies in border communities were aware of the suspicious activities they are to report, we asked officials from the 20 agencies what, if any, information federal agencies or fusion centers had provided them on the kinds of suspicious activities that could be indicators or precursors to terrorism and what processes they had in place for reporting information on these activities. In general, suspicious activity is defined as observed behavior or incidents that may be indicative of intelligence gathering or preoperational planning related to terrorism, criminal, espionage, or other illicit intentions. We also reviewed the Findings and Recommendations of the Suspicious Activity Report (SAR) Support and Implementation Project to determine the extent to which the federal government recognizes the role of suspicious activity reporting for detecting and mitigating potential terrorist threats. We compared the processes for reporting suspicious activities with GAO’s Standards for Internal Control in the Federal Government. We also examined indicators of various suspicious activities the FBI and fusion centers developed to determine if they contained border-specific content. 
We interviewed Department of Justice officials who were leading the national initiative to standardize suspicious activity reporting—as well as those from headquarters components of DHS and the FBI—to discuss the status of the national initiative and whether border-specific indicators were needed and are being considered as part of this initiative. We conducted this performance audit from October 2007 through December 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the person named above, Eric Erdman, Assistant Director; Frances Cook; Cindy Gilbert; Kristen Hughes; Christopher Jones; Thomas Lombardi; Ronald Salo; Edith Sohna; Adam Vogt; and Maria Wallace made key contributions to this report.

Information is a crucial tool in securing the nation's borders against crimes and potential terrorist threats, with the Department of Homeland Security's (DHS) Border Patrol and Immigration and Customs Enforcement (ICE), and the FBI, having key information sharing roles. GAO was asked to assess the extent to which (1) local and tribal officials in border communities received useful information from their federal partners, (2) federal agencies supported state fusion centers'—where states collaborate with federal agencies to improve information sharing—efforts to develop border intelligence products, and (3) local and tribal agencies were aware of the suspicious activities they are to report.
To conduct this work, GAO analyzed relevant laws, directives, policies, and procedures; contacted a nongeneralizable sample of 20 agencies in border communities and five fusion centers (based on geographic location and size); and interviewed DHS and FBI officials. Officials from 15 of the 20 local and tribal law enforcement agencies in the border communities GAO contacted said they received information directly from at least one federal agency in the vicinity (Border Patrol, ICE, or the FBI) that was useful in enhancing their situational awareness of border crimes and potential terrorist threats. Nine of the 20 agencies reported receiving information from all three federal agencies. Overall, where federal officials had discussed local and tribal officials' information needs and had established information sharing partnerships and related mechanisms to share information with them—consistent with the National Strategy for Information Sharing and best practices—the majority of the local and tribal officials reported receiving useful information. However, most local and tribal officials who reported federal agencies had not discussed information needs and had not established partnerships with them also said they had not received useful information. By more fully identifying the information needs of local and tribal agencies along the borders and establishing information sharing partnerships, federal agencies could be better positioned to provide local and tribal agencies with information that enhances their situational awareness of border crimes and potential terrorist threats. Federal officials at two of the five state fusion centers we visited were supporting fusion center efforts to develop border intelligence products or reports that contained information on border crimes and potential terrorist threats, as discussed in the Implementing Recommendations of the 9/11 Commission Act of 2007.
DHS recognizes that it needs to add personnel to other fusion centers in border states to, among other things, support the creation of such products, and is developing plans to do so, but cited funding issues and competing priorities as barriers. The creation of border intelligence products—such as those developed by two of the fusion centers we visited—represents potential approaches that DHS and the FBI could use to identify promising practices that other fusion centers could adopt. Identifying such practices is important because of the central role the federal government places on fusion centers to facilitate the sharing of information. Also, DHS had not obtained feedback from local and tribal officials on the utility and quality of the border intelligence products that its analysts in fusion centers have helped to develop. Additional efforts to obtain such feedback would support DHS and FBI efforts to improve the utility and quality of future products. Officials from 13 of the 20 local and tribal agencies in the border communities we contacted said that federal agencies had not defined what suspicious activities or indicators rise to the level of potential terrorist threats and should be reported to federal agencies or fusion centers. Recognizing this problem, federal agencies are participating in national efforts to standardize suspicious activity reporting. Until such efforts are implemented, defining suspicious activity indicators and current reporting processes would help better position local and tribal officials along the borders to identify and report incidents indicative of criminal activity associated with terrorist threats.
The FCS concept is designed to be part of the Army’s Future Force, which is intended to transform the Army into a more rapidly deployable and responsive force that differs substantially from the large division-centric structure of the past. The FCS family of weapons is now expected to include 14 manned and unmanned ground vehicles, air vehicles, sensors, and munitions that will be linked by an advanced information network. Fundamentally, the FCS concept is to replace mass with superior information—allowing soldiers to see and hit the enemy first rather than to rely on heavy armor to withstand a hit. The Army envisions a new way of fighting that depends on networking the force, which involves linking people, platforms, weapons, and sensors seamlessly together in a system of systems. Within the FCS program, eight types of manned ground vehicles are being developed, each having a common engine, chassis, and other components. One of the other common components is a hit avoidance system that features a set of capabilities to detect, avoid, and/or defeat threats against the manned ground vehicles. One of its subsystems is the APS, which is intended to protect a vehicle from attack by detecting a threat in the form of an incoming round or rocket propelled grenade (threat) and launching an interceptor round from the vehicle to destroy the incoming weapon. An APS system consists of a radar to detect the incoming weapon, a launcher, an interceptor or missile, and a computing system. The Army has employed a management approach for FCS that centers on a lead systems integrator to provide significant management services to help the Army define and develop FCS and reach across traditional Army mission areas. Boeing, along with its subcontractor, the Science Applications International Corporation (SAIC), serves as the lead systems integrator for the FCS system development and demonstration phase of acquisition, which is expected to extend until 2014. 
The lead systems integrator has a close partner-like relationship with the Army, and its responsibilities include requirements development, design, and source selection of major system and subsystem subcontractors. In the case of APS, the first-tier subcontractors are the manned ground vehicle integrators, BAE and General Dynamics Land Systems, who are responsible for developing individual systems. BAE was designated the hit avoidance integrator, a role that covers more than active protection, and was responsible for awarding the subcontract to the APS developer. This subcontract has three elements: a base contract, option A to support the current force (the short-range solution), and option B to support the FCS manned ground vehicles (short- and long-range solution). Figure 1 illustrates these relationships. A separate initiative involving active protection resulted from a Joint Urgent Operational Needs Statement, issued by Central Command and the Multi-National Corps in Iraq in April 2005, which requested 14 special-equipped vehicles with a host of distinctive capabilities, one of which was an APS. The need statement called for a capability to field a combination of near-term technologies that would be useful in conducting force protection missions, reconnaissance, and crowd control in Iraq and an evaluation of an active protection capability against rocket-propelled grenades as part of this suite of capabilities. To respond to this need statement, the Joint Rapid Acquisition Cell, a group within the Office of the Secretary of Defense (OSD) that seeks solutions to urgent needs and focuses on near-term or off-the-shelf equipment to meet these needs, provided funding to the Army, which worked with the Office of Force Transformation (OFT) to evaluate various technologies, including an APS, for inclusion on the vehicles. The OFT was also an office within the OSD, and its role was to examine unanticipated needs and experiment with innovative technologies that could be used to meet warfighter needs.
Both the process for evaluating APS sources and concepts to meet FCS needs and the process for meeting Central Command’s urgent needs occurred nearly simultaneously, and many events took place at the same time, as shown in figure 3. The lead systems integrator for FCS completed its subcontractor selection for APS shortly before decisions were made on the near-term system being considered to meet the Central Command need. The Trophy system was evaluated as a candidate system in both processes. In choosing the developer for the APS system, the FCS lead systems integrator, with Army support and concurrence, conducted a source selection and followed the FCS lead systems integrator subcontract provisions for avoiding organizational conflicts of interest. The purpose was to select the subcontractor for the APS that would be best able to develop the overall APS architecture to address the FCS requirements to defeat the short- and long-range antiarmor threats as well as meet the current force needs for defeating short-range rocket-propelled grenade attacks. The subcontractor selected would support the hit avoidance integrator in integrating APS technology into the FCS manned ground vehicles and also apply this architecture to the Army’s current force. The contract included two options that were to supply the specific design for the APS system: Option A for the short-range APS for the current force; and Option B for the short- and long-range solution for the FCS. These options would be awarded later, based on the results of trade studies subsequently performed. To protect against organizational conflicts of interest, contracts between the FCS lead systems integrator and its subcontractors preclude a subcontractor from conducting or participating in a source selection for other FCS subcontracts if any part of its organization submits a proposal.
Under normal circumstances, since the APS would be part of the hit avoidance system of the FCS manned ground vehicles, the hit avoidance integrator, BAE, would have had the primary responsibility to issue the requests for proposals, conduct the source selection evaluation, and award the contract. In this capacity, BAE issued a draft request for proposals for the APS in April 2005. When the firm subsequently decided to submit a proposal on the APS subcontract, it was required, under the FCS lead systems integrator subcontract organizational conflict of interest provisions, to notify the lead systems integrator, Boeing, of its intention. BAE did so and the lead systems integrator reissued the request for proposals for APS in September 2005 and assumed the source selection responsibilities. BAE submitted its proposal but then had no further role in the evaluation of proposals or the actual source selection. After the source selection was complete, the lead systems integrator transferred contract responsibility to BAE, and BAE assumed the responsibility for awarding and administering the APS contract. From our review, the documentation from the APS source selection process shows that (1) no officials from the offering companies participated in the source selection process, and (2) all offerors were evaluated based on the same criteria contained in the request for proposals. In response to this request for proposals, four proposals were received. Three proposals were considered competitive, while the fourth was eliminated from consideration as it was considered “unsatisfactory” in technical merit and its architectural approach did not meet the requirements. Proposals from the remaining three companies—BAE, Raytheon, and General Dynamics Land Systems—were evaluated in the source selection process and no officials from these companies were on the evaluating or selecting teams. 
The source selection evaluation team consisted of 53 members, with 27 lead systems integrator representatives and 26 government representatives, including personnel from the FCS program manager’s office, Army research centers, and the Defense Contract Management Agency. After evaluating each of the proposals against the criteria spelled out in the request for proposals, the source selection evaluation team made its recommendation to the lead systems integrator source selection executive, who accepted its recommendation. Our review of the documentation shows that the criteria were ranked in order of importance, with technical merit considered most important, then cost, management/schedule, and finally past performance. The technical merit criteria were divided into six sub-factors: systems engineering and architecture; expertise in APS technologies; simulation, modeling and test; fratricide and collateral damage; specialty engineering; and integration capability. Cost criteria were based on the realism, reasonableness, completeness, and affordability of the proposal. Management/schedule criteria included such areas as expertise and experience in key positions. The past performance risk rating category was based on whether the respondents’ past performance raised doubts about their being able to perform the contract. Since all three proposals were deemed comparable in the areas of cost, management/schedule, and past performance, the primary discriminating factor became technical merit. According to the evaluation documentation, the technical merit scores were assessed based on whether the proposal demonstrated that the contractor understood the requirements and on its approach to meeting these requirements in each of the six technical merit sub-factors. Also, part of the technical score was a proposal risk evaluation, defined as the degree to which any proposal weaknesses could cause disruption of schedule, increase in cost, or degradation in performance.
While the source selection’s stated purpose was to choose the company best able to develop the APS and not a specific design, each proposal used a specific APS system as an “artifact” to illustrate how the company intended to meet the requirements. Even though, in theory, one company could have been chosen as the APS developer while another company’s preferred design could have been selected for development, much of the source selection assessment of technical merit was based on the “artifact” used for illustration. For example, in the technical merit category of APS expertise, the source selection evaluation of Raytheon states that “the vertical launch concept solves several design and integration problems.” Similarly, the BAE evaluation in the APS expertise category states that “the proposed long-range countermeasure…design has effectiveness against the full spectrum of threats.” The General Dynamics Land Systems evaluation discusses the relatively high technology readiness level (TRL) of the “proposed Trophy system.” Therefore, while each company’s proposed solution was not the only aspect of the proposals to be evaluated, the evaluation documentation shows that the technical merit category was a key factor in the evaluation. The source selection evaluation team decided that the BAE and Raytheon proposals had the highest technical merit. BAE had a lower-risk approach and its solution had been tested in a relevant environment; however, the source selection evaluation team stated that this low-risk approach could prevent BAE from considering higher-risk options that would enable it to meet the full range of the performance requirements, such as protection from top-attack weapons. In addition, the source selection evaluation team determined that, while both Raytheon and BAE could develop the design presented in the BAE proposal, Raytheon would have the advantage if the vertical launch design was chosen.
The evaluation team concluded that the Raytheon approach would have the best chance of meeting all the requirements. Based on the team’s recommendation, the lead systems integrator selected Raytheon. The integrator accepted the higher risk because it concluded that the Raytheon proposal had excellent technical merit and the firm would be better able to develop the vertical launch technology, if that were the design decided upon in the trade study. The APS development contract required the winner of the source selection to perform a trade study identifying and assessing competing APS alternatives. The trade study used a methodology consistent with Army guidance to evaluate all alternatives, ultimately selecting Raytheon’s vertical launch as the best design. According to the Army and the lead systems integrator, conducting the trade study after choosing the APS subcontractor could have resulted in selecting a different concept than Raytheon’s vertical launch design. However, in our view, this possibility appears remote given that the selection of Raytheon as APS developer was based largely on the technical merits of its vertical launch design and the fact that it would be best able to develop that design. The development contract’s terms required the source selection winner to perform a trade study that would identify and assess APS alternatives and select an APS design from among competing alternatives. Therefore, once Raytheon won the development contract in March 2006, it was required to conduct the trade study rather than simply develop its own design. Since the trade study was not a source selection, FAR contract provisions regarding organizational conflicts of interest did not apply and Raytheon was free to participate in the study as the responsible contractor. 
The trade study’s specific objective was to choose a single short-range APS architecture (launcher and interceptor) that best met active protection requirements for FCS manned ground vehicles, with consideration for application to the current force. The study was conducted in May 2006 and Raytheon’s vertical launch concept was selected as the design. Based on the trade study documentation, the study was conducted using a methodology prescribed by Army guidance and this methodology was applied consistently to all APS alternatives. Seven alternatives survived a screening process and were then evaluated against a set of weighted criteria. The study concluded that Raytheon’s vertical launch was the best design approach. According to general Army guidance for trade studies, steps in the trade study process should include such elements as incorporating stakeholders, identifying assumptions, determining criteria, identifying alternatives, and conducting comparative analyses. The APS trade study process consistently applied such methodology to all APS alternatives by using separate, independent roles for a technical team and stakeholders; operating under a set of assumptions; using validated, protected technical data on each alternative; having a screening process to filter out non-viable alternatives; and using a set of weighted criteria to assess alternatives that survived the screening process. The trade study was performed by a technical team and stakeholders—each having separate roles and operating independently from one another. The technical team provided technical input and expertise to the stakeholders, who were the voting members of the study and made the final selection. The technical team, 21 members from industry and government as shown in table 1, included individuals who were subject matter experts as well as those from organizations participating in development of the short-range APS. 
Raytheon had 11 members on the technical team—the most from any single organization. The Army stated that this representation included administrators and observers and occurred because Raytheon had been designated APS developer, was thus required to conduct the trade study, and could gain knowledge from attending subject matter experts. The stakeholders made the final selection. The composition and number of stakeholders are shown in table 2. The stakeholders were program leads from the Army, lead systems integrator, and subcontractors responsible for integrating the FCS manned ground vehicles. According to the Army, Raytheon’s APS program manager was included as a stakeholder because Raytheon as developer had responsibility for developing the design chosen by the trade study process. The technical team and stakeholders operated the trade study under assumptions that set parameters for screening and evaluating each alternative. These assumptions were tied to such areas as performance and threat. Additionally, they conducted the study using data that was previously validated and remained protected throughout the study’s course. The primary source of the data was the Army Research, Development, and Engineering Command’s APS database, which contained data gathered and validated by the Command’s subordinate labs. This data was protected by third parties, including the Department of Energy’s Idaho National Lab, to ensure it was not changed during the study. The technical team used initial screening processes to eliminate four alternatives and identify seven viable alternatives for further assessment. The screening process filtered out the four alternatives that could not meet one or both of two criteria: (1) ability to grow to meet 360-degree hemispherical requirements, and (2) ability to be procured within a program schedule that would meet the need for prototype delivery of a short-range solution to the current force in fiscal year 2009. 
The seven alternatives that survived the screening process are shown in table 3, along with the respective government organizations and industry associated with each. The technical team assessed the seven alternatives against a set of five weighted criteria. According to the Army, these were the same top-level criteria mandated in all FCS trade studies, and their weights were assigned by FCS chief engineers. Table 4 defines each of the criteria and provides information on respective weights. The vertical launch concept scored highest in every category of criteria except risk. The Army indicated that the concept had about one-third better overall weighted performance than the other alternatives. Army officials described the vertical launch design as having technical advantages over the other alternatives—including the need for less space, weight, and power—as well as cost benefits. The Army and lead systems integrator officials told us that the trade study could have resulted in the selection of a design other than Raytheon’s. They also stated that, had this occurred, Raytheon as APS developer would have been required to develop this design rather than the vertical launch. While in theory the APS source selection chose a developer and the trade study chose the design to develop, in reality it is difficult to separate the trade study results and the source selection decision. In our view, in both the source selection and trade study, criteria related to technical aspects of the designs were deciding factors. Considering that the source selection evaluation relied on artifacts representing specific systems—and Raytheon won the source selection based in large part on the technical merit of its artifact—it seems unlikely that the APS trade study would have resulted in the selection of any system other than Raytheon’s vertical launch. 
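The two-stage evaluation described above, a pass/fail screening followed by scoring against weighted criteria, can be sketched in code. Note that the gate names, criterion labels, weights, and scores below are purely hypothetical illustrations: the report states only that five weighted criteria were used and that their weights were assigned by FCS chief engineers, so none of the actual values are reproduced here.

```python
# Hypothetical sketch of a two-stage trade study evaluation:
# (1) pass/fail screening gates, (2) weighted-criteria scoring.
# All gate names, criteria, weights, and scores are illustrative
# assumptions, not the actual FCS trade study values.

SCREENING_GATES = ("grows_to_360_coverage", "meets_fy2009_schedule")

CRITERIA_WEIGHTS = {   # assumed weights; they must sum to 1.0
    "performance": 0.35,
    "cost": 0.20,
    "schedule": 0.15,
    "integration": 0.15,
    "risk": 0.15,
}

def screen(alternatives):
    """Drop any alternative that fails either pass/fail screening gate."""
    return {name: alt for name, alt in alternatives.items()
            if all(alt["gates"][g] for g in SCREENING_GATES)}

def weighted_score(alt):
    """Combine per-criterion scores (0-10 scale) into one weighted total."""
    return sum(w * alt["scores"][c] for c, w in CRITERIA_WEIGHTS.items())

def rank(alternatives):
    """Screen the alternatives, then return survivors sorted best-first."""
    survivors = screen(alternatives)
    return sorted(survivors,
                  key=lambda name: weighted_score(survivors[name]),
                  reverse=True)

# Notional inputs: "vertical_launch" leads everywhere except risk,
# mirroring the pattern the report describes.
alts = {
    "vertical_launch": {
        "gates": {"grows_to_360_coverage": True, "meets_fy2009_schedule": True},
        "scores": {"performance": 9, "cost": 8, "schedule": 7,
                   "integration": 8, "risk": 4},
    },
    "lateral_launch": {
        "gates": {"grows_to_360_coverage": True, "meets_fy2009_schedule": True},
        "scores": {"performance": 6, "cost": 7, "schedule": 8,
                   "integration": 6, "risk": 8},
    },
    "non_viable_concept": {
        "gates": {"grows_to_360_coverage": False, "meets_fy2009_schedule": True},
        "scores": {"performance": 5, "cost": 5, "schedule": 5,
                   "integration": 5, "risk": 5},
    },
}

ranking = rank(alts)   # screening removes "non_viable_concept" first
```

Under these assumed inputs, an alternative can lose on one criterion (here, risk) and still rank first overall if its weighted advantages elsewhere are large enough, which is the dynamic the report attributes to the vertical launch selection.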
Although the trade study concluded that vertical launch was a high-payoff approach, it also noted that it was a high risk due to its low technology maturity. At the time of the trade study, as shown in table 5, the vertical launch was less technologically mature than the other alternatives except for one. The Army expects the design to reach TRL 6 (system model or prototype demonstration in a relevant environment) by August or September 2007. The Army expects the vertical launch concept to be available for prototype delivery to current force combat vehicles in fiscal year 2009 and for testing on a FCS vehicle in 2011. These estimates appear optimistic. At a TRL 5, the vertical launch will require additional technology development and demonstration before it is ready for either application. Also, the FCS vehicles have not been fully developed yet. Assuming all goes as planned, most FCS vehicle prototypes are expected to be available in 2011 for developmental testing. As we noted in our March 2007 report, the Army has in general been accepting significant risks with immature technologies for the FCS program, coupled with compressed schedules for testing and evaluating prototypes. The Army and the lead systems integrator were both extensively involved in preparing for and conducting the APS subcontractor selection and the trade study. Prior to the selection, FCS program officials assisted in APS requirements development and reviewed and approved the scope of work, schedule, and evaluation criteria for the request for proposals. After the proposals were received, FCS program officials, technical experts from various Army research centers, representatives of the Tank-Automotive and Armaments Command and the Training and Doctrine Command were active participants in the selection evaluation team and reviewed the proposals along with the lead systems integrator members. 
The Source Selection Advisory Council, which advises the Source Selection Executive, provided oversight to the evaluation team and also had representatives from the FCS program manager’s office and the Army research community. Similarly, Army FCS officials, as well as technical experts from Army research centers, were members of the trade study technical team and also concurred in the choice of the vertical launch concept. The co-lead of the trade study was an FCS official. The lead systems integrator’s office assumed responsibility for the selection process, was the selection executive, and made the final choice of an APS developer. In addition to its lead role in the APS subcontractor selection, the lead systems integrator was represented on the trade study technical team and was one of the stakeholders. As our previous body of work on the FCS program has shown, the Army’s participation in the APS subcontractor selection and trade study is consistent with the Army’s general approach to FCS. Army leadership set up the FCS program in such a way that it would create more competition and have more influence over the selection of suppliers below the lead systems integrator. In setting up FCS, Army leadership noted that traditionally, once the Army hired a prime contractor, that contractor would bring its own supplier chains. The Army was not very involved in the choice of the suppliers. In FCS, the Army called for the lead systems integrator to hold a competition for the next tier of contractors. The Army had veto power over these selections. In addition, the Army directed that the lead systems integrator employ integrators at lower levels in the program for high-cost items such as sensors and active protection systems, and the Army has been involved with these selections. These integrators were also to hold competitions to select suppliers for those systems. 
This strategy was designed to keep the first tier of contractors from bringing their own supplier chains and pushed competition and Army visibility down lower in the supplier chain. The fact that the decisions on the APS subcontractor selection and trade study lend themselves to after-the-fact examination is due in part to the Army’s focus on competition at lower supplier levels on FCS. The process followed by OFT to meet the urgent needs of the Central Command was characterized by a simpler evaluation of active protection systems with potential for near-term fielding, followed by actual physical testing of the APS candidate system that the OFT considered most technically mature, the Trophy. The Army’s Program Manager’s Office for Close Combat Systems was also involved in this evaluation. While the testing of Trophy had a high success rate, the Joint Rapid Acquisition Cell decided to defer fielding the Trophy based, at least in part, on the recommendations of the Army that the testing was not realistic and the Trophy’s integration on the platform would delay fielding of other useful capabilities. OFT officials did not agree with the Army’s position and thought the system’s success in testing indicated it should be further evaluated. To meet the Central Command’s need, OFT began an effort, the Full-Spectrum Effects Platform, to incorporate and test various improvements for potential application to existing military vehicles such as the Stryker. The platform itself is a modified Stryker vehicle. The program was divided into spirals: spiral 0 was to evaluate the synergy of the different systems, including the APS, on the vehicle and to compile lessons learned to aid in future concepts of operations, development and integration. Spiral 1 was intended to field a limited number of such systems to current forces in-theater in 2007, for purposes of an operational assessment of the various capabilities. 
The Full Spectrum Effects Platform is not part of or associated with FCS. OFT, in association with the Naval Surface Warfare Center, evaluated six candidate APS systems. Army representatives from the Program Manager, Close Combat Systems were also involved in this evaluation. The six candidate systems evaluated are shown in table 6. These systems were evaluated because the OFT and Navy and Army officials considered them to be the most promising APS solutions available within the required schedule. They evaluated each system based on such criteria as the feasibility of the operational concept, its cost and schedule factors, as well as its weight, size, and power requirements. Trophy was selected as the most promising system because it was the most technically mature system and was being developed by Israeli defense forces that had done initial work to integrate it on a light armored vehicle. OFT subsequently sponsored tests of the Trophy APS as part of the Full-Spectrum Effects Platform at Naval Surface Warfare Center in Dahlgren, Virginia. A representative from the Army’s Program Manager, Close Combat Systems, was part of the oversight team for these tests. In these test firings, the Trophy APS did well, destroying 35 of 38 incoming rocket-propelled grenades. However, the process for deciding how to proceed based on the test results was not agreed to in advance. A disagreement subsequently arose between OFT and the Army Close Combat System officials on how best to proceed from the testing. Although the tests were not designed to represent the Trophy’s capabilities in a realistic operational environment, OFT officials concluded that Trophy showed enough promise that they recommended continued testing to demonstrate its capabilities under various conditions. These officials estimated that an additional $13 million would cover the cost for this testing. 
They believed that Trophy could be integrated in the near term on existing light-armored vehicles and meet the urgent need for an immediate APS capability. The Army officials disagreed with OFT’s assessment that further testing of Trophy for inclusion on the Full Spectrum Effects Platform was justified. According to the Army officials, Trophy was not tested in a realistic environment for collateral damage or effectiveness. They believed that it would not be sufficiently tested for operational and safety issues within the time period required for the first spiral of the Full Spectrum Effects Platform. A delay in its integration on the Platform would delay, by at least 6 to 14 months, demonstration of other potentially useful capabilities that could be immediately incorporated. Further, the Army estimated that it would take 5 years to integrate and field Trophy on other current force manned ground vehicles. The Army recommended to the Joint Rapid Acquisition Cell that the Trophy APS be excluded from Spiral 1 of the Full-Spectrum Effects Platform. In lieu of putting this technology in the field, the Army recommended that slat armor be incorporated on Spiral 1, since it has been effective in defeating the current rocket-propelled grenade threat. OFT officials disagreed, reasoning that although the use of slat armor on the current force has seemed to mitigate the effects of the rocket-propelled grenades currently in use, improved munitions will soon be available, and the slat armor will no longer be effective against these threats. They believed that the Trophy should be tested further in order to answer the questions raised by the Army and to provide insight into its capabilities. OFT officials based their position on the Trophy’s success in these tests, its high level of technical maturity when compared to other active protection systems, and the criticality of the need. 
The Joint Rapid Acquisition Cell presented this information to Central Command and recommended slipping the active protection capability to a later platform spiral, once it was more mature. Currently, there are no plans for further evaluation of active protection for future platform spirals. Upon the removal of the Trophy APS system from the Full-Spectrum Effects Platform vehicle, the Joint Rapid Acquisition Cell discontinued funding for further testing and evaluation of the Trophy. The disagreement between Army and OFT officials notwithstanding, we did not find information that would challenge the decision to defer the introduction of the Trophy on light-armored vehicles. On the other hand, the 5 years the Army estimated would be needed to integrate the comparatively mature Trophy system on the existing Stryker vehicle does not appear consistent with its estimates that the less mature vertical launch system could be ready for prototype delivery on Strykers in 2 years and on the yet-to-be developed FCS prototypes in 3 years. The FCS lead systems integrator, with support from the Army, followed a consistent and disciplined process in both selecting Raytheon to develop the APS for FCS and in conducting the trade study and followed the lead systems integrator subcontract and FAR provisions for avoiding organizational conflicts of interest. While the role played by Raytheon in the trade study was in accordance with its contract and thus not improper, the rationale for having the trade study follow the source selection is not entirely clear. The purpose of the trade study was to select the best concept; yet, the source selection process that preceded it had, in fact, chosen Raytheon primarily on the technical merits of its vertical launch design concept. It was thus improbable that the trade study would reach a different conclusion. 
Both the Army and the lead systems integrator were closely involved throughout the source selection and trade study processes and concurred in the selection of Raytheon’s APS concept. The process for evaluating the Trophy system to meet the urgent needs of the Central Command was different. It centered more directly on the results of physical testing, followed a less-disciplined decision-making process, and was characterized by considerable disagreement between OFT and the Army. While the decision to defer the use of the Trophy on fielded vehicles appears prudent in light of the limited realism of the testing, the promising results of the testing likewise appeared to warrant additional testing of the Trophy system to either confirm or dispel potential risks in the use of APS capabilities. Discontinuing all testing of the Trophy systems may thus have been premature, particularly in light of the need to better understand tactics, techniques and procedures and concepts of operations for both near-term and long-term applications. Because of the likelihood that the Army will introduce APS into its forces, we recommend that the Secretary of Defense support additional testing and demonstration of near-term APS systems on the Full Spectrum Effects Platform or similar vehicles to, at a minimum, help develop tactics, techniques, procedures, and concepts of operations for both near-term and long-term active protection systems. DOD provided us with written comments on a draft of this report. The comments are reprinted in appendix II. DOD did not concur with our recommendation. DOD also provided technical comments, which we incorporated where appropriate. DOD did not concur with our recommendation that the Secretary of Defense support additional testing and demonstration of near-term active protection systems on the Full Spectrum Effects Platform that could respond to the Central Command’s need. 
It stated that the original decision in May 2006 that delayed delivering Full Spectrum Effects Platform capabilities due to technical development and performance risks remains true today. DOD added that there are no active protection systems mature enough at this time to integrate on a Full Spectrum Effects Platform regardless of any additional testing and demonstration efforts. This represents a much more decided opinion than was rendered at the time of the OFT tests. At that time, Army officials believed that the Trophy would not be sufficiently tested for operational and safety issues in time for the first spiral of the Full Spectrum Effects Platform. OFT officials believed that the Trophy should be tested further to answer the questions raised by the Army and to provide insight into its capabilities. Ultimately, the Joint Rapid Acquisition Cell recommended slipping the active protection capability to a later spiral of the Full Spectrum Effects Platform. This was the basis for our recommendation for additional testing of near-term active protection systems on the Full Spectrum Effects Platform. DOD stated that it continues to pursue active protection, citing the Army’s vertical launch system for FCS. As stated in our report, this system is technically immature and the Army’s estimates for testing it appear optimistic. According to the Institute of Defense Analysis, the vertical launch system is ambitious, with much enabling technology not yet demonstrated. Given the criticality of active protection for the FCS manned ground vehicles, additional testing of near-term active protection systems could provide valuable insights into operations and tactics that would benefit future applications, such as FCS. DOD noted that the Trophy system is being tested on the Wolf Pack Platoon Project, an OSD Rapid Reaction Technology Office (formerly OFT) effort. 
However, this project is not directed toward development of APS tactics, techniques, procedures, or concepts of operations. In addition, it will not include testing against live targets. Testing near-term active protection systems on the Full Spectrum Effects Platform or similar vehicles is valuable for answering remaining questions about such systems and to provide insights for the employment of future systems. This is particularly important given the likelihood that the Army will field some form of APS to its forces. We have broadened our recommendation to capture the value of continued testing of near-term APS for tactics, techniques and procedures and concepts of operations. Please contact me on (202) 512-4841 if you or your staff has any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. To develop the information on the U.S. Army’s decision to pursue a new APS system under the FCS program, we interviewed officials of the Office of the Assistant Secretary of the Army (Acquisition, Logistics and Technology); the Tank-Automotive and Armaments Command; the Joint Rapid Acquisition Cell; the Office of Force Transformation; the Naval Surface Warfare Center (Dahlgren Division); the Program Manager for the Future Combat System (Brigade Combat Team); and the Future Combat System Lead Systems Integrator. We reviewed the APS subcontractor selection documentation, including the APS request for proposal, current force and FCS operational requirements documents, subcontract proposals, criteria used to rate those proposals, and the APS development contract to determine if procedures for avoiding organizational conflicts of interest were followed and how the APS subcontractor was selected. In addition, we held discussions with key Army officials and lead systems integrator representatives regarding this process and their roles in it. 
To determine why the trade study was conducted after source selection, we reviewed the trade study process and results and Army guidelines for conducting trade studies. To identify the roles played by both the Army and lead systems integrator in the selection of an APS, we reviewed documentation concerning their roles in these processes. We also reviewed these materials to determine whether consideration was given to a separate APS solution for current forces and, in conjunction with this issue, we reviewed test reports and other documentation and discussed the testing of an alternative APS system, the Trophy, with the parties involved. In evaluating the APS subcontractor selection and trade study processes, we did not attempt to determine if the best technical solution was chosen, but only if these processes followed lead systems integrator provisions for organizational conflicts of interest and used a consistent methodology for the trade study. We conducted our work between October 2006 and June 2007 in accordance with generally accepted government auditing standards. Other contributors to this report were Assistant Director William R. Graveline, Marie P. Ahearn, Beverly Breen, Tana Davis, Letisha Jenkins, Kenneth E. Patton, and Robert Swierczek. | Active Protection Systems (APS) protect vehicles from attack by detecting and intercepting missiles or munitions. In 2005, the lead systems integrator for the Army's Future Combat Systems (FCS) program sought proposals for an APS developer and design and to deliver APS prototypes on vehicles by fiscal year 2009. Raytheon was chosen as the APS developer. At the same time, the Department of Defense's Office of Force Transformation (OFT) evaluated near-term APS for potential use in Iraq. 
GAO was asked to review the Army's actions on APS/FCS: (1) the process for selecting the subcontractor to develop an APS for FCS and if potential conflicts of interest were avoided; (2) the timing of the trade study and if it followed a consistent methodology to evaluate alternatives, and the results; (3) the role the Army and Boeing played in selecting the developer; and (4) the process followed to provide a near-term APS solution for current forces. In selecting the APS developer, the Army and Boeing--the FCS lead systems integrator--followed the provisions of the FCS lead systems integrator contract, as well as the Federal Acquisition Regulation, in addressing organizational conflicts of interest. No officials from the offering companies participated in the evaluation and all offerors were evaluated based on the same criteria. Four proposals were evaluated and three were determined to be comparable in terms of cost and schedule. The winner--Raytheon--was chosen on technical merit, as being more likely to meet APS requirements although its design had less mature technology. The APS development contract required the source selection winner to perform a trade study to assess alternatives and select the best design for development, and the Raytheon design was chosen. The trade study applied a consistent methodology to all alternatives before selecting Raytheon's vertical launch design. While the role played by Raytheon in the trade study was in accordance with its contract, the rationale for having the trade study follow the source selection is not entirely clear. The purpose of the trade study was to select the best concept; yet the source selection process that preceded it had, in fact, chosen Raytheon primarily on the technical merits of its vertical launch design concept. 
Although the vertical launch technology is not mature, the Army estimated that it could be available for prototype delivery to current force vehicles in fiscal year 2009 and tested on a FCS vehicle in 2011. This may be an optimistic estimate, as the FCS vehicle is yet to be fully developed. The Army and Boeing were extensively involved in APS source selection and the trade study. FCS officials actively participated and concurred in the final selection of the APS developer. FCS officials and technical experts from Army research centers took part in the trade study and helped choose the vertical launch design. Boeing officials took part in various ways and, with the Army's concurrence, selected Raytheon as the APS developer, participated in the trade study, and recommended the vertical launch approach. In its pursuit of a different APS concept, OFT was responding to an urgent need statement issued by the Central Command with potential for near-term fielding. This evaluation centered on the results of physical testing of the most technically mature candidate system, the Trophy. Decisions on how to proceed with Trophy involved disagreement between OFT and the Army. While the Trophy tests were successful, the Joint Rapid Acquisition Cell decided to defer fielding the APS system, based in part on the recommendation of Army officials, who believed that testing had not been realistic and integrating it on the platform would delay fielding other useful capabilities. OFT officials proposed additional testing of Trophy to answer these questions, but funding for further OFT testing of this system was discontinued after the Joint Rapid Acquisition Cell's decision. |
Iraq’s national government was established after a constitutional referendum in October 2005, followed by election of the first Council of Representatives (Parliament) in December 2005, and the selection of the first Prime Minister, Nuri Kamal al-Maliki, in May 2006. By mid-2006, the cabinet was approved; the government now has 34 ministries responsible for providing security and essential services—including electricity, water, and education—for the Iraqi people. The Ministry of Finance is responsible for tracking and reporting government expenditures. The Iraqi government uses single-year budgeting, which generally requires that funds be used by December 31, the end of Iraq’s fiscal year. In March 2003, the United States—along with the United Kingdom, Australia, and other members of the coalition—began combat operations in Iraq. The original “coalition of the willing” consisted of 49 countries (including the United States) that publicly committed to the war effort and also provided a variety of support, such as direct military participation, logistical and intelligence support, over-flight rights, or humanitarian and reconstruction aid. Many nations and various international organizations are supporting the efforts to rebuild Iraq through multilateral or bilateral assistance. U.N. Security Council Resolution 1511 of October 16, 2003, urged member states and international and regional organizations to support the Iraq reconstruction effort. On October 23-24, 2003, an international donors conference was held in Madrid, with 76 countries, 20 international organizations, and 13 nongovernmental organizations participating. Since GAO last reported on the status of the 18 Iraqi benchmarks in September 2007, the number of enemy attacks in Iraq has declined. While political reconciliation will take time, Iraq has not yet advanced key legislation on equitably sharing oil revenues and holding provincial elections. 
In addition, sectarian influences within the Iraqi ministries continue while militia influences divide the loyalties of Iraqi security forces. The January 2007 U.S. strategy, New Way Forward in Iraq, is designed to support Iraqi efforts to quell sectarian violence and foster conditions for national reconciliation by providing the Iraqi government with the time and space needed to help address differences among the various segments of Iraqi society. The number of enemy-initiated attacks on civilians, Iraqi Security Forces, and coalition forces increased dramatically after the February 2006 bombing of the Golden Mosque in Samarra. The increase in the number of monthly attacks generally continued through June 2007. To help quell the violence, the United States deployed about 30,000 additional troops to Iraq during the spring of 2007, bringing the total number of U.S. military personnel to about 164,700 as of September 2007. As depicted in figure 1, enemy-initiated attacks declined from a total of about 5,300 in June 2007 to about 3,000 in September 2007. However, the recent decrease in monthly attacks was primarily due to a decrease in the number of attacks against coalition forces. Attacks against Iraqi Security Forces and civilians have declined less than attacks against coalition forces. According to the Defense Intelligence Agency (DIA), the incidents captured in military reporting do not account for all violence throughout Iraq. For example, they may underreport incidents of Shi’a militias fighting each other and attacks against Iraqi security forces in southern Iraq and other areas with few or no coalition forces. In addition, according to a UN report released October 15, 2007, the Iraqi people and government continue to confront major challenges resulting from the devastating effects of violence. 
The UN reported that widespread insecurity continues to make national dialogue challenging, and increasing levels of displacement are adding to an alarming humanitarian crisis. The Iraqi government continues to make limited progress in meeting eight legislative benchmarks intended to promote national reconciliation. As of October 25, 2007, the Iraqi government had met one legislative benchmark and partially met another. Specifically, the rights of minority political parties in the Iraqi legislature were protected through existing provisions in the Iraqi Constitution and Council of Representatives’ by-laws. In addition, the Iraqi government partially met the benchmark to enact and implement legislation on the formation of regions; this law was enacted in October 2006 but will not be implemented until April 2008. The benchmark requiring a review of the Iraqi Constitution has not yet been met. Fundamental issues remain unresolved as part of the constitutional review process, such as expanded powers for the presidency, the resolution of disputed areas (such as Kirkuk), and power sharing between federal and regional governments over issues such as the distribution of oil revenue. In addition, five other legislative benchmarks requiring parliamentary action have not yet been met. Figure 2 highlights the status of the benchmarks requiring legislative enactment and implementation. Although State and Multinational Force-Iraq report progress in promoting reconciliation at local levels such as Anbar province, at the national level, sectarian factions within the Iraqi government ministries continue to undermine reconciliation efforts. For example, ministries within the Iraqi government continued to be controlled by sectarian factions and are used to maintain power and provide patronage to individuals and groups. According to an August 2007 U.S. 
interagency report, the withdrawal of members of the Iraqi cabinet ended the Shi’a-dominated coalition’s claim to be a government of national unity and further undermined Iraq’s already faltering program of national reconciliation. In late August 2007, Iraq’s senior Shi’a and Sunni Arab and Kurdish political leaders signed a unity accord signaling efforts to foster greater national reconciliation. The accord covered draft legislation on de-Ba’athification reform and provincial powers laws, and established a mechanism to release some Sunni detainees being held without charges. However, these laws have not been passed as of October 25, 2007. The Iraqi government has made limited progress in developing effective and non-sectarian forces. Since 2003, the United States has provided about $19.2 billion to train and equip about 360,000 Iraqi soldiers and police officers, in an effort to develop Iraqi security forces, transfer security responsibilities to them and to the Iraqi government, and ultimately withdraw U.S. troops from Iraq. Iraqi security forces have grown in size and are increasingly leading counterinsurgency operations. However, only about 10 of 140 Iraqi army, national police, and special operations force units were operating independently as of September 2007. Several factors have complicated the development of effective and loyal Iraqi security forces. First, the Iraqi security forces are not a single unified force with a primary mission of countering the insurgency in Iraq. Second, high rates of absenteeism and poor ministry reporting result in an overstatement of the number of Iraqi security forces present for duty. Third, sectarian and militia influences have divided the loyalties of Iraqi security forces. According to the Independent Commission on the Security Forces of Iraq, the Iraqi National Police is not viable and should be disbanded. 
Fourth, Iraqi units remain dependent upon the coalition for their logistical, command and control, and intelligence capabilities. Three GAO reports illustrate a recurring problem with U.S. efforts in Iraq—the lack of strategies with clear purpose, scope, roles and responsibilities, and performance measures. Our reports assessing (1) the National Strategy for Victory in Iraq (NSVI), (2) U.S. efforts to develop planning and budget capacity in Iraq’s ministries, and (3) U.S. and Iraqi efforts to rebuild Iraq’s energy sector show that clear strategies are needed to guide U.S. efforts, manage risk, and identify needed resources. The National Strategy for Victory in Iraq was intended to clarify the President’s strategy for achieving overall U.S. political, security, and economic goals in Iraq. In our 2006 report, we found that the strategy was incomplete. First, it only partially identified the agencies responsible for implementing key aspects of the strategy. Second, it did not fully address how the United States would integrate its goals with those of the Iraqis and the international community, and it did not detail Iraq’s anticipated contribution to its future needs. Third, it only partially identified the current and future costs of U.S. involvement in Iraq, including maintaining U.S. military operations, building Iraqi government capacity, and rebuilding critical infrastructure. Without a complete strategy, U.S. efforts are less likely to be effective. We recommended that the National Security Council (NSC), along with DOD and State, complete the strategy by addressing all six characteristics of an effective national strategy, including detailed information on costs and roles and responsibilities. NSC, State, and DOD did not comment on GAO’s recommendations. In commenting on the report, State asserted that GAO misrepresented the NSVI’s purpose—to provide the public a broad overview of the U.S. strategy in Iraq, not to set forth details readily available elsewhere. 
However, without detailed information on costs and roles and responsibilities, the strategy does not provide Congress with a clear road map for achieving victory in Iraq. In addition, we have provided the Congress classified reports and briefings on the Joint U.S. Embassy – Multinational Force-Iraq’s classified campaign plan for Iraq. The development of competent and loyal Iraqi ministries is critical to stabilizing and rebuilding Iraq. To help Iraq develop the capability of its ministries, the United States provided about $300 million between fiscal years 2005 and 2007. The Administration has requested an additional $255 million for fiscal year 2008 to continue these efforts. However, U.S. efforts lack an overall strategy, no lead agency provides overall direction, and U.S. priorities have been subject to numerous changes. U.S. efforts also face four challenges that pose risks to their success and long-term sustainability. First, Iraqi government institutions have significant shortages of personnel with the skills to perform the vital tasks necessary to provide security and deliver essential services to the Iraqi people. Second, Iraq’s government confronts significant challenges in staffing a nonpartisan civil service and addressing militia infiltration of key ministries. Third, widespread corruption undermines efforts to develop the government’s capacity by robbing it of needed resources. Fourth, violence in Iraq hinders U.S. advisors’ access to Iraqi ministries, increases absenteeism among ministry employees, and contributes to the growing number of professional Iraqis leaving the country. Without a unified U.S. strategy that clearly articulates agency roles and responsibilities and addresses the risks cited above, U.S. efforts are less likely to succeed. We recommended that the State Department complete an overall integrated strategy for U.S. capacity development efforts. 
Congress should also consider conditioning future appropriations on the completion of the strategy. State recognized the value of such a strategy but expressed concern about conditioning further capacity development investment on completion of such a strategy. The weaknesses in U.S. strategic planning are compounded by the Iraqis’ lack of strategic planning in its critical energy sector. As we reported in May 2007, it is difficult to identify the most pressing future funding needs, key rebuilding priorities, and existing vulnerabilities and risks given the absence of an overarching strategic plan that comprehensively assesses the requirements of the energy sector as a whole. While the Iraqi government has crafted a multiyear strategic plan for Iraq’s electricity sector, no such plan exists for the oil sector. Given the highly interdependent nature of the oil and electricity sectors, such a plan would help identify the most pressing needs for the entire energy sector and help overcome the daunting challenges affecting future development prospects. For fiscal years 2003 to 2006, the United States made available about $7.4 billion and spent about $5.1 billion to rebuild Iraq’s oil and electricity sectors. However, production in both sectors has consistently fallen below U.S. program goals of 3 million barrels per day and 6,000 megawatts of electrical peak generation capacity. Billions of dollars are still needed to rebuild, maintain, and secure Iraq’s oil and electricity infrastructure, underscoring the need for sound strategic planning. The Ministry of Electricity’s 2006-2015 Electricity Master Plan estimates that $27 billion will be needed to reach its goal of providing reliable electricity across Iraq by 2015. According to DOD, investment in Iraq’s oil sector is “woefully short” of the absolute minimum required to sustain current production, and additional foreign and private investment is needed. Moreover, U.S. 
officials and industry experts estimate that Iraq would need $20 billion to $30 billion over the next several years to reach and sustain a crude oil production capacity of 5 million barrels per day. We recommended that the Secretary of State, in conjunction with relevant U.S. agencies and international donors, work with Iraqi ministries to develop an integrated energy strategy. State commented that the Iraqi government, not the U.S. government, is responsible for taking action on GAO’s recommendations. We believe that the recommendations are still valid given the billions made available for Iraq’s energy sector and the U.S. government’s influence in overseeing Iraq’s rebuilding efforts. From the onset of the reconstruction and stabilization effort, the U.S. strategy assumed that the Iraqis and the international community would help finance Iraq’s development needs. However, the Iraqi government has a limited capacity to spend reconstruction funds, which hinders its ability to assume a more prominent role in rebuilding Iraq’s crumbling infrastructure. The international community has provided funds for Iraq’s reconstruction, but most of the funding offered has been in the form of loans that the Iraqis have not accessed. The government of Iraq allocated $10 billion of its 2007 revenues for capital projects and reconstruction, including capital funds for the provinces based on their populations. However, available data from the government of Iraq and analysis from U.S. and coalition officials show that, while 2007 spending has increased compared with 2006, a large portion of Iraq’s $10 billion in capital projects and reconstruction budget will likely go unspent through the end of this year. Iraq’s ministries, for example, spent only 24 percent of their 2007 capital budgets through mid-July 2007. U.S. government, coalition, and international agencies have identified a number of factors that affect the Iraqi government’s ability to spend capital budgets. 
In addition to the poor security environment and “brain drain” issues, U.S. and foreign officials also noted that weaknesses in Iraqi procurement and budgeting procedures impede completion of capital projects. For example, according to the State Department, Iraq’s Contracting Committee requires about a dozen signatures to approve projects exceeding $10 million, which slows the process. As a possible reflection of Iraq’s difficulty in spending its capital budgets, Iraq’s proposed 2008 capital budget declines substantially (57 percent) from 2007 (see table 1). As a percentage of its overall budget, Iraq’s capital expenditures will decline from 24 percent in 2007 to 12 percent in 2008. We are conducting a review of U.S. efforts to help Iraq spend its budget and will issue a separate report at a later date. As of April 2007, international donors have pledged about $14.9 billion in support of Iraq reconstruction. In addition, some countries exceeded their pledges by providing an additional $744 million for a total of about $15.6 billion, according to the State Department. Of this amount, about $11 billion is in the form of loans. As of April 2007, Iraq had accessed about $436 million in loans from the International Monetary Fund. The remaining $4.6 billion is in the form of grants, to be provided multilaterally or bilaterally; $3 billion of that amount has been disbursed to Iraq. See appendix I for pledges made at Madrid and thereafter for Iraq reconstruction. In addition to funds, 16 of the 41 countries that pledged funding for Iraq reconstruction also contributed troops to the U.S.-led multinational force in Iraq. As of September 2007, 26 countries were contributing 12,300 troops to multinational forces in Iraq. Compared with the 164,700 forces from the United States, other coalition countries represent about 7 percent of Multinational Forces in Iraq. 
From December 2003 through September 2007, the number of non-U.S. coalition troops decreased from 24,000 to 12,300 and the number of coalition nations contributing troops to military operations decreased from 33 to 26. See appendix II for a comparison of U.S. and coalition troops from December 2003 through September 2007. As this committee is called upon to provide more resources to help stabilize and rebuild Iraq, continued oversight is needed of the key issues highlighted in today’s testimony. While U.S. troops have performed courageously under difficult and dangerous circumstances, the continued violence and polarization of Iraqi society, as well as the Iraqi government’s continued difficulties in funding its reconstruction needs, diminish the prospects for achieving current U.S. security, political, and economic goals in Iraq. Of particular concern is the lack of strategic plans to guide U.S. and Iraqi efforts to rebuild and stabilize the country. Our assessment of the U.S. strategy for Iraq and recent efforts to build central ministry capacity show that U.S. planning efforts have been plagued by unclear goals and objectives, changing priorities, inadequate risk assessments, and uncertain costs. Weaknesses in U.S. strategic planning are compounded by the lack of strategic planning in Iraq’s energy sector, the sector that provides the most government revenues. Madam Chair, this concludes my statement. I would be pleased to answer any questions that you or other Members may have. For questions regarding this testimony, please contact me at (202) 512-8979 or [email protected]. Other key contributors to this statement were Stephen Lord, David Bruno, Thomas Costa, Lynn Cothern, Mattias Fenton, Muriel Forster, Lisa Helmer, Dorian Herring, Patrick Hickey, Bruce Kutnicky, Tetsuo Miyabara, Judith McCloskey, and Mary Moutsos. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Since 2003, the Congress has obligated nearly $400 billion for U.S. efforts in Iraq, of which about $40 billion has supported reconstruction and stabilization efforts. Congressional oversight of this substantial investment is crucial as the Administration requests additional military and economic funds for Iraq. This testimony summarizes the results of recent GAO audit work and proposes three areas for which continued oversight is needed: (1) progress in improving security and national reconciliation, (2) efforts to develop clear U.S. strategies, and (3) Iraqi and international contributions to economic development. We reviewed U.S. agency documents and interviewed agency officials, including the departments of State, Defense, and Treasury; the U.S. Agency for International Development; the UN; and the Iraqi government. We also made multiple trips to Iraq as part of this work. Since GAO last reported on the status of the 18 Iraqi benchmarks in September 2007, the number of enemy attacks in Iraq has declined. While political reconciliation will take time, Iraq has not yet advanced key legislation on equitably sharing oil revenues and holding provincial elections. In addition, sectarian influences within Iraqi ministries continue while militia influences divide the loyalties of Iraqi security forces. U.S. efforts lack strategies with clear purpose, scope, roles, and performance measures. The U.S. strategy for victory in Iraq partially identifies the agencies responsible for implementing key aspects of the strategy and does not fully address how the United States would integrate its goals with those of the Iraqis and the international community. U.S. 
efforts to develop Iraqi ministry capability lack an overall strategy, no lead agency provides overall direction, and U.S. priorities have been subject to numerous changes. The weaknesses in U.S. strategic planning are compounded by the Iraqi government's lack of integrated strategic planning in its critical energy sector. The U.S. strategy assumed that the Iraqis and international community would help finance Iraq's reconstruction. However, the Iraqi government has limited capacity to spend reconstruction funds. For example, Iraq allocated $10 billion of its revenues for capital projects and reconstruction in 2007. However, a large portion of this amount is unlikely to be spent, as ministries had spent only 24 percent of their capital budgets through mid-July 2007. Iraq has proposed spending only $4 billion for capital projects in 2008, a significant reduction from 2007. The international community has pledged $15.6 billion for reconstruction efforts in Iraq, but about $11 billion of this is in the form of loans. |
On February 17, 2002, pursuant to the Aviation and Transportation Security Act (ATSA), the Transportation Security Administration (TSA) assumed responsibility for the security of the nation’s civil aviation system from the Federal Aviation Administration (FAA), including FAA’s existing aviation security programs, plans, regulations, orders, and directives covering airports, air carriers, and other related entities. Among other things, ATSA directs TSA to improve the security of airport perimeters and the access controls leading to secured areas, and take measures to reduce the security risks posed by airport workers. (See app. II for more specific details on ATSA requirements and TSA’s actions to address these requirements.) TSA has 158 federal security directors (FSD) who oversee the implementation of, and adherence to, TSA requirements at the approximately 450 commercial airports nationwide. As part of TSA’s oversight role, it also conducts compliance inspections, covert testing, and vulnerability assessments to analyze and improve security. (See app. III for information on how TSA uses compliance inspections and covert testing to identify possible airport security vulnerabilities.) In general, TSA funds its perimeter and access control security–related activities out of its annual appropriation and in accordance with direction set forth in congressional committee reports. For example, the Explanatory Statement accompanying the DHS Appropriations Act, 2008, directed that TSA allocate $15 million of its appropriation to a worker screening pilot program. TSA does not track the amount of funds spent in total for perimeter and access controls because related efforts and activities can be part of broader security programs that also serve other aspects of aviation security. In addition, airports may receive federal funding for perimeter and access control security, such as through federal grant programs or TSA pilot programs. (For more information on such airport security costs and funding, see app. IV.) 
Airport operators have direct responsibility for day-to-day aviation operations, including, in general, the security of airport perimeters, access controls, and workers, as well as for implementing TSA security requirements. Airport operators implement security requirements in accordance with their TSA-approved security programs. Elements of a security program may include, among other things, procedures for performing background checks on airport workers, applicable training programs for these workers, and procedures and measures for controlling access to secured airport areas. Security programs may also be required to describe the secured areas of the airport, including a description and map detailing boundaries and pertinent features of the secured areas, and the measures used to control access to such areas. Commercial airports are generally divided into designated areas that have varying levels of security, known as secured areas, security identification display areas (SIDA), air operations areas (AOA), and sterile areas. Sterile areas, located within the terminal, are where passengers wait after screening to board departing aircraft. Access to sterile areas is controlled by TSA screeners at security checkpoints, where they conduct physical screening of passengers and their property. Airport workers may access the sterile area through the security checkpoint or through other access points secured by the airport operator in accordance with its security program. The SIDA and the AOA are not to be accessed by passengers, and typically encompass baggage loading areas, areas near terminal buildings, and other areas close to parked aircraft and airport facilities, as illustrated in figure 1. Securing access to the sterile area from other secured areas—such as the SIDA—and security within the area, is the responsibility of the airport operator, in accordance with its security program. 
Airport perimeter and access control security is intended to prevent unauthorized access into secured areas—either from outside the airport complex or from within the airport’s sterile area. Individual airport operators determine the boundaries for each of these areas on a case-by-case basis, depending on the physical layout of the airport and in accordance with TSA requirements. As a result, some of these areas may overlap. Within these areas, airport operators are responsible for safeguarding their airfield barriers, preventing and detecting unauthorized entry into secured areas, and conducting background checks of workers with unescorted access to secured areas. Methods used by airports to control access through perimeters or into secured areas vary because of differences in the design and layout of individual airports, but all access controls must meet minimum performance standards in accordance with TSA requirements. These methods typically involve the use of one or more of the following: pedestrian and vehicle gates, keypad access codes using personal identification numbers, magnetic stripe cards and readers, turnstiles, locks and keys, and security personnel. According to TSA officials, airport security breaches occur within and around secured areas at domestic airports (see fig. 2 for the number of security breaches reported by TSA from fiscal year 2004 through fiscal year 2008). While some breaches may represent dry runs by terrorists or others to test security or criminal incidents involving airport workers, most are accidental. TSA requires FSDs to report security breaches that occur both at the airports for which they are responsible and on board aircraft destined for their airports. TSA officials said that they review security breach data and report them to senior management as requested, and provide data on serious breaches to senior management on a daily basis, as applicable. 
According to a TSA official, the increase in known breaches from fiscal years 2004 through 2005 reflects a change in the requirements for reporting security breaches that TSA issued in December 2005. This change provided more specific instructions to FSDs on how to categorize different types of security incidents. Regarding increases in security breaches from fiscal years 2005 through 2008, TSA officials said that while they could not fully explain these increases, there could be several reasons to account for this growth. For example, according to TSA officials, changes in TSA management often trigger increases in specific types of breaches reported; since 2004, for instance, the priorities of the new Administrator have resulted in an increase in the reporting of restricted items. TSA officials also stated that a report of a security breach at a major U.S. airport is likely to cause security and law enforcement officials elsewhere to subsequently raise the overall awareness of security requirements for a period of time. In addition, TSA noted that certain inspections conducted by TSA officials tend to produce heightened awareness by federal and airport employees whose perimeter security and access control procedures are being inspected for compliance with regulations. Risk management is a tool for informing policymakers’ decisions about assessing risks, allocating resources, and taking actions under conditions of uncertainty. We have previously reported that a risk management approach can help to prioritize and focus the programs designed to combat terrorism. Risk management, as applied in the transportation security context, can help federal decision makers determine where and how to invest limited resources within and among the various modes of transportation. 
In accordance with Homeland Security Presidential Directive (HSPD) 7, the Secretary of Homeland Security designated TSA as the sector-specific agency for the transportation security sector, requiring TSA to identify, prioritize, and coordinate the protection of critical infrastructure and key resources within this sector and integrate risk management strategies into its protective activities. In June 2006, in accordance with HSPD-7 and the Homeland Security Act of 2002, DHS released the National Infrastructure Protection Plan (NIPP), which it later updated in 2009. The NIPP developed a risk management framework for homeland security. In accordance with the NIPP, TSA developed the Transportation Systems Sector-Specific Plan (TS-SSP) to govern its strategy for securing the transportation sector, as well as annexes for each mode of transportation, including aviation. The NIPP and TS-SSP set forth risk management principles, including a comprehensive risk assessment process for considering threat, vulnerability, and consequence assessments to determine the likelihood of terrorist attacks and the severity of the impacts. Figure 3 illustrates the interrelated activities of the NIPP’s risk management framework.
Set security goals: Define specific outcomes, conditions, end points, or performance targets that collectively constitute an effective protective posture.
Identify assets, systems, networks, and functions: Develop an inventory of the assets, systems, and networks that constitute the nation’s critical infrastructure, key resources, and critical functions. Collect information pertinent to risk management that takes into account the fundamental characteristics of each sector.
Assess risks: Determine risk by combining potential direct and indirect consequences of a terrorist attack or other hazards (including seasonal changes in consequences and dependencies and interdependencies associated with each identified asset, system, or network), known vulnerabilities to various potential attack vectors, and general or specific threat information. 
Prioritize: Aggregate and analyze risk assessment results to develop a comprehensive picture of asset, system, and network risk; establish priorities based on risk; assess the mitigation of risk for each proposed activity based on a specific investment; and determine protection and business continuity initiatives that provide the greatest mitigation of risk.
Implement protective programs: To reduce or manage identified risk, select sector-appropriate protective actions or programs that offer the greatest mitigation of risk for any given resource/expenditure/investment. Secure the resources needed to address priorities.
Measure effectiveness: Use metrics and other evaluation procedures at the national and sector levels to measure progress and assess the effectiveness of the national Critical Infrastructure and Key Resources Protection Program in improving protection, managing risk, and increasing resiliency.
Within the risk management framework, the NIPP also establishes core criteria for risk assessments. According to the NIPP, risk assessments are a qualitative determination, a quantitative determination, or both of the likelihood of an adverse event occurring and are a critical element of the NIPP risk management framework. Risk assessments also help decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the potential effects of the risks. The NIPP characterizes risk assessment as a function of three elements:
Threat: The likelihood that a particular asset, system, or network will suffer an attack or an incident. In the context of risk associated with a terrorist attack, the estimate of this is based on the analysis of the intent and the capability of an adversary; in the context of a natural disaster or accident, the likelihood is based on the probability of occurrence. 
Vulnerability: The likelihood that a characteristic of, or flaw in, an asset’s, system’s, or network’s design, location, security posture, process, or operation renders it susceptible to destruction, incapacitation, or exploitation by terrorist or other intentional acts, mechanical failures, and natural hazards.
Consequence: The negative effects on public health and safety, the economy, public confidence in institutions, and the functioning of government, both direct and indirect, that can be expected if an asset, system, or network is damaged, destroyed, or disrupted by a terrorist attack, natural disaster, or other incident.
Information from the three elements used in assessing risk—threat, vulnerability, and consequence—can lead to a risk characterization and provide input for prioritizing security goals. While TSA has taken steps to assess risk, it has not conducted a comprehensive risk assessment based on assessments of threats, vulnerabilities, and consequences. TSA officials reported that they have identified threats to airport security as part of an overall assessment of threats to the civil aviation system. While TSA has conducted vulnerability assessment activities at select airports, it has not analyzed whether the select assessments reflect the overall vulnerability of airport security nationwide. Further, TSA has not yet assessed the consequences of an attack against airport perimeter and access control security. According to the NIPP, risk assessments are to be documented, reproducible (so that others can verify the results), defensible (technically sound and free of significant errors), and complete. The NIPP maintains that these qualities are necessary to risk assessments so they can be used to support national-level, comparative risk assessment, planning, and resource prioritization. 
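The NIPP's three-element characterization of risk is often illustrated with a simple multiplicative scoring model, in which a risk estimate is the product of normalized threat, vulnerability, and consequence scores. The sketch below is a hypothetical illustration of that idea only; it is not TSA's or DHS's actual assessment methodology, and the asset names and scores are invented for the example.

```python
# Minimal sketch of a NIPP-style risk characterization, assuming the
# common multiplicative model R = T * V * C with each element scored
# on a 0-1 scale. Assets and scores are hypothetical illustrations,
# not TSA's actual assessments.

def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Combine the three NIPP risk elements into a single risk estimate."""
    for value in (threat, vulnerability, consequence):
        if not 0.0 <= value <= 1.0:
            raise ValueError("each element must be scored on a 0-1 scale")
    return threat * vulnerability * consequence

# Hypothetical airport assets scored as (threat, vulnerability, consequence).
assets = {
    "perimeter fence line": (0.3, 0.7, 0.5),
    "worker access point": (0.5, 0.6, 0.6),
    "sterile-area checkpoint": (0.4, 0.2, 0.8),
}

# Prioritize: rank assets by estimated risk, highest first, so that
# mitigation resources can be directed toward the highest-priority risks.
ranked = sorted(assets, key=lambda name: risk_score(*assets[name]), reverse=True)
for name in ranked:
    print(f"{name}: risk = {risk_score(*assets[name]):.3f}")
```

In practice the three inputs would come from threat, vulnerability, and consequence assessments rather than invented scores; the point of combining them, as the NIPP notes, is that the result is a comparable risk estimate across assets that can inform prioritization and resource allocation.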
For a risk assessment to be considered complete, the NIPP states that it must specifically assess threat, vulnerability, and consequence; after these three components have been assessed, they are to be combined to produce a risk estimate. According to the NIPP, comprehensive risk assessments are necessary for determining which assets or systems face the highest risk for prioritizing risk mitigation efforts and the allocation of resources and for effectively measuring how security programs reduce risks. In March 2009 we reported that a lack of information that fully depicts threats, vulnerabilities, and consequences limits an organization’s ability to establish priorities and make cost-effective security measure decisions. TSA officials told us that they have not completed a comprehensive risk assessment for airport security, although they said that they have prepared and are currently reviewing a draft of a comprehensive, scenario-based air domain risk assessment (ADRA), which officials said is to serve as a comprehensive risk assessment for airport security. According to officials, the ADRA is to address all three elements of risk for domestic commercial aviation, general aviation, and air cargo. However, TSA has not released it as originally planned for in February 2008. As of May 2009 TSA officials had not provided revised dates for when the agency expects to finalize the ADRA, and they could not provide documentation to demonstrate to what extent the ADRA will address all three components of risk for airport perimeter and access control security. As a result, it is not clear whether the ADRA will provide the risk analysis needed to inform TSA’s decisions and planning for airport perimeter and access control security. Standard practices in program management call for documenting the scope of the program and milestones (i.e., time frames) to ensure results are achieved. 
Conducting a comprehensive risk assessment for airport security and documenting milestones for its implementation would help ensure that TSA’s intended actions will be implemented, and would allow TSA to more confidently ensure that its investments in airport security are risk-informed and allocated toward the highest-priority risks. A threat assessment is the identification and evaluation of adverse events that can harm or damage an asset. TSA uses several products to identify and assess potential threats to airport security, such as daily intelligence briefings, weekly suspicious incident reports, and situational awareness reports, all of which are available to internal and external stakeholders. TSA also issues an annual threat assessment of the U.S. civil aviation system, which includes an assessment of threats to airport perimeter and access control security. According to TSA officials, these products collectively form TSA’s assessment of threats to airport perimeter and access control security. TSA’s 2008 Civil Aviation Threat Assessment cites four potential threats related to perimeter and access control security, one of which is the threat from insiders—airport workers with authorized access to secured areas. The 2008 assessment characterized the insider threat as “one of the greatest threats to aviation,” which TSA officials explained is meant to reflect both the opportunity insiders have to do damage and the vulnerability of commercial airports to an insider attack, which these officials described as very high. As of May 2009, TSA had no knowledge of a specific plot by terrorists or others to breach the security of any domestic commercial airport. However, TSA has also noted that airports are seen as more accessible targets than aircraft, and that airport perimeters may become more desirable targets as terrorists look for new ways to circumvent aviation security. Intelligence is necessary to inform threat assessments.
As we reported in March 2009, TSA has not clarified the levels of uncertainty—or varying levels of confidence—associated with the intelligence information it has used to identify threats to the transportation sector and guide its planning and investment decisions. Both Congress and the administration have recognized uncertainty inherent in intelligence analysis, and have required analytic products within the intelligence community to properly caveat and express uncertainties or confidence in resulting conclusions or judgments. As a result, the intelligence community and the Department of Defense have adopted this practice in reporting threat intelligence. Since TSA does not assign confidence levels to its analytic judgments, it is difficult for TSA to correctly prioritize its tactics and investments based on uncertain intelligence. In March 2009 we recommended that TSA work with the Director of National Intelligence to determine the best approach for assigning uncertainty or confidence levels to analytic intelligence products and apply this approach. TSA agreed with this recommendation and said that it has begun taking action to address it. The NIPP requires that a risk assessment include a comprehensive assessment of vulnerabilities in assets or systems, such as a physical design feature or type of location, that make them susceptible to a terrorist attack. As we reported in June 2004, these assessments are intended to facilitate airport operators’ efforts to comprehensively identify and effectively address perimeter and access control security weaknesses. TSA officials told us that their primary measures for assessing the vulnerability of commercial airports to attack are the collective results of joint vulnerability assessments (JVA) and professional judgment. TSA officials said that the agency plans to expand the number of JVAs conducted in the future but, as of May 2009, did not have a plan for doing so. 
According to TSA officials, JVAs are assessments that teams of TSA special agents and other officials conduct jointly with the Federal Bureau of Investigation (FBI) and, as required by law, are generally conducted every 3 years for airports identified as high risk. In response to our 2004 recommendation that TSA establish a schedule and analytical approach for completing vulnerability assessments for evaluating airport security, TSA developed criteria to select and prioritize airports as high-risk for assessment. TSA officials stated that in addition to assessing airports identified as high risk, the agency has also assessed the vulnerability of other airports at the request of FSDs. According to TSA’s TS-SSP, after focusing initially on airports deemed high risk, JVAs are to be conducted at all commercial airports. TSA officials stated that JVA teams assess all aspects of airport security and operations, including fuel, cargo, catering, general aviation, terminal area and law enforcement operations, and the controls that limit access to secured areas and the integrity of the airport perimeter. However, officials emphasized that a JVA is not intended to be a review of an airport’s compliance with security requirements and teams do not impose penalties for noncompliance. From fiscal years 2004 through 2008, TSA conducted 67 JVAs at a total of 57 airports—about 13 percent of the approximately 450 commercial airports nationwide. In 2007 TSA officials conducted a preliminary analysis of the results of JVAs conducted at 23 domestic airports during fiscal years 2004 and 2005, and found 6 areas in which 20 percent or more of the airports assessed were identified as vulnerable. Specific vulnerabilities included the absence of blast resistant glass in terminal windows, lack of bollards/barriers in front of terminals, lack of blast resistant trash receptacles, and insufficient electronic surveillance of perimeter lines and access points. 
As of May 2009 TSA officials said that the agency had not finalized this analysis and, as of that date, did not have plans to do so. TSA officials also told us that they have shared the results of JVA reports with TSA’s Office of Security Technology to prioritize the distribution of relevant technology to those airports with vulnerabilities that these technologies could strengthen. TSA characterizes U.S. airports as a system of interdependent hubs and links (spokes) in which the security of all is affected or disrupted by the security of the weakest one. The interdependent nature of the system necessitates that TSA protect the overall system as well as individual assets. TSA maintains that such a “systems-based approach” allows it to focus resources on reducing risks across the entire system while maintaining cost-effectiveness and efficiency. TSA officials could not explain to what extent the collective JVAs of specific airports constitute a reasonable systems-based assessment of vulnerability across airports nationwide or whether the agency has considered assessing vulnerabilities across all airports. Although TSA has conducted JVAs at each category of airport, 58 of the 67 were at the largest airports. According to TSA data, 87 percent of commercial airports—most of the smaller Category II, III, and IV airports—have not received a JVA. TSA officials said that because they have not conducted JVAs for these airports, they do not know how vulnerable they are to an intentional security breach. In 2004 we reported that TSA intended to compile baseline data on airport security vulnerabilities to enable it to conduct a systematic analysis of airport security vulnerabilities nationwide. At that time TSA officials told us that such analysis was essential since it would allow the agency to determine the adequacy of security policies and help TSA and airport operators better direct limited resources. 
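The coverage figures cited above can be reproduced directly from the report's counts; the airport totals are from the text, and only the rounding is ours.

```python
# Reproducing the JVA coverage arithmetic for fiscal years 2004-2008.
total_airports = 450     # approximate number of commercial airports nationwide
airports_with_jva = 57   # airports that received at least one of the 67 JVAs

covered_pct = round(100 * airports_with_jva / total_airports)
uncovered_pct = 100 - covered_pct

# About 13 percent of commercial airports received a JVA, leaving roughly
# 87 percent (mostly Category II, III, and IV airports) unassessed.
```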
According to TSA officials, conducting JVAs at all airports would allow them to compile national baseline data on perimeter and access control security vulnerabilities. As of May 2009, however, TSA officials had not yet completed a nationwide vulnerability assessment, evaluated whether the current approach to JVAs would provide the desired systems-based approach to assessing airport security vulnerabilities, or explained why a nationwide assessment or evaluation has not been conducted. In subsequent discussions, TSA officials told us that based on our review they intend to increase the number of JVAs conducted at airports that are not categorized as high risk—primarily Category II, III, and IV airports. According to officials, the resulting data are to assist TSA in prioritizing the allocation of limited resources. However, TSA officials could not tell us how many additional airports they plan to assess in total or within each category, the analytical approach and time frames for conducting these assessments, and to what extent these additional assessments, in combination with past JVAs, will constitute a reasonable systems-based assessment of vulnerability across airports nationwide. Standard practices for program management call for establishing a management plan and milestones to meet stated objectives and achieve results. It is also unclear to what extent the ADRA, when it is completed, will represent a systems-based vulnerability assessment, an assessment of airports nationwide, or both. Given that TSA officials believe that the vulnerability of airports to an insider attack is very high and the security of airports is interconnected, this vulnerability would extend throughout the nationwide system of airports. 
Evaluating the extent to which the agency’s current approach assesses systems-based vulnerabilities, including the vulnerabilities of smaller airports, would better position TSA to provide reasonable assurance that it is identifying and addressing the areas of greatest vulnerability and the spectrum of vulnerability across the entire airport system. Further, should TSA decide to conduct a nationwide assessment of airport vulnerability, developing a plan that includes milestones for completing the assessment would help TSA ensure that it takes the necessary actions to accomplish desired objectives within reasonable time frames. According to the NIPP, DHS and lead security agencies, such as TSA, are to seek to use information from the risk assessments of security partners, whenever possible, to contribute to an understanding of sector and national risks. Moreover, the NIPP states that DHS and lead agencies are to work together to assist security partners in providing vulnerability assessment tools that may be used as part of self-assessment processes, and provide recommendations regarding the frequency of assessments, particularly in light of emergent threats. According to the NIPP, stakeholder vulnerability assessments may serve as a basis for developing common vulnerability reports that can help identify strategic needs and more fully investigate interdependencies. However, TSA officials could not explain to what extent they make use of relevant vulnerability assessments conducted independently by airport operators to contribute to the agency’s understanding of airport security risks, or have worked with security partners to help ensure that tools are available for airports to conduct self-assessment processes of vulnerability. Officials from two prominent airport industry associations estimated that the majority of airports, particularly larger airports, have conducted vulnerability assessments, although they could not give us a specific number. 
In addition, officials from 8 of the 10 airports whom we interviewed on this issue told us that their airports had conducted vulnerability assessment activities. Some of these analyses could be useful to TSA in conducting a systematic analysis of airport security vulnerabilities nationwide. By taking advantage, to the extent possible, of existing vulnerability assessment activities conducted by airport operators, TSA could enrich its understanding of airport security vulnerabilities and therefore better inform federal actions for reducing airport vulnerabilities. According to TSA officials, the agency has not assessed the consequences of a successful attack against airport perimeters or a breach to secured areas within airports, even though the NIPP asserts that the potential consequence of an incident is the first factor to be considered in developing a risk assessment. According to the NIPP, risk assessments should include consequence assessments that evaluate negative effects to public health and safety, the economy, public confidence in national economic and political institutions, and the functioning of government that can be expected if an asset, system, or network is damaged, destroyed, or disrupted by a terrorist attack. Although TSA officials agree that a consequence assessment for airport security is needed, and have stated that the ADRA is intended to provide a comprehensive consequence assessment based on risk scenarios, the agency has not provided additional details as to what the assessment will include, the extent to which it will assess consequence for airport security, or when it will be completed. Standard management practices call for documenting milestones (i.e., time frames) to ensure that results are achieved. TSA officials have agreed that a consequence assessment for airport perimeter and access controls security is an important element in assessing risk to airport security. 
In addition, TSA officials commented that although the immediate consequences of a breach of airport security would likely be limited, such an event could be the first step in a more significant attack against an airport terminal or aircraft, or an attempt to use an aircraft as a weapon. Conducting a consequence assessment could help TSA in developing a comprehensive risk assessment and increase its assurance that the resulting steps it takes to strengthen airport security will more effectively reduce risk and mitigate the consequences of an attack on individual airports and the aviation system as a whole. TSA has implemented a variety of programs and protective actions to strengthen airport security, from additional worker screening to assessing different technologies. For example, consistent with the Explanatory Statement, TSA piloted several methods to screen workers accessing secured areas, but clear conclusions could not be drawn because of significant design limitations, and TSA did not develop or document an evaluation plan to guide design and implementation of the pilot. Further, while TSA has strengthened other worker security programs, assessed various technologies, and added to programs aimed at improving general airport security, certain issues, such as whether security technologies meet airport needs, have not been fully resolved. TSA has taken a variety of protective actions to improve and strengthen the security of commercial airports through the development of new programs or by enhancing existing efforts. Since we last reported on airport perimeter and access control security in June 2004, TSA has implemented efforts to strengthen worker screening and security programs, improve access control technology, and enhance general airport security by providing an additional security presence at airports. 
According to TSA, each of its security actions—or layers—is capable of stopping a terrorist attack, but when used in combination (what TSA calls a layered approach), a much stronger system results. To better address the risks posed by airport workers, TSA, in accordance with the Explanatory Statement accompanying the DHS Appropriations Act, 2008, initiated a worker screening pilot program to assess various types of screening methods for airport workers. TSA also implemented a random worker screening program and is currently working to apply its screening procedures consistently across airports. In addition, TSA has expanded its requirements for conducting worker background checks. TSA has also taken steps, such as implementing two pilot programs, to identify and assess technologies to strengthen the security of airport perimeters and access controls to secured areas. Further, TSA has taken steps to strengthen general airport security processes. For example, TSA has developed a program in which teams of TSA officials, law enforcement officers, and airport officials temporarily augment airport security through various actions such as randomly inspecting workers, property, and vehicles and patrolling secured areas. Table 1 lists the actions TSA has taken since 2004 to strengthen airport security. From May through July 2008 TSA piloted a program to screen 100 percent of workers at three airports and to test a variety of enhanced screening methods at four other airports. (See app. V for more detailed information on the pilot program, including locations and types of screening methods used.) According to TSA, the objective of the pilot was to compare 100 percent worker screening and enhanced random worker screening based on (1) screening effectiveness, (2) impact on airport operations, and (3) cost considerations. 
TSA officials hired a contractor—HSI, a federally funded research and development center—to assist with the design, implementation, and evaluation of the data collected. In July 2009 TSA released a report on the results of the pilot program, which included HSI’s findings. HSI concluded that random screening is a more cost-effective approach because it appears “roughly” as effective in identifying contraband items—or items of interest—at less cost than 100 percent worker screening. However, HSI also emphasized that the pilot program “was not a robust experiment” because of limitations in the design and evaluation, such as the limited number of participating airports, which led HSI to identify uncertainties in the results. Given the significance of these limitations, we believe that it is unclear whether random worker screening is more or less cost-effective than 100 percent worker screening. Specifically, HSI identified what we believe to be significant limitations related to the design of the pilot program and the estimation of costs and operational effects. Limitations related to program design include (1) a limited number of participating airports, (2) the short duration of screening operations (generally 90 days), (3) the variety of screening techniques applied, (4) the lack of a baseline, and (5) limited evaluation of enhanced methods. For example, HSI noted that while two of the seven pilot airports performed complete 100 percent worker screening, neither was a Category X airport; a third airport—a Category X—performed 100 percent screening at certain locations for limited durations. HSI also reported that the other four pilot airports used a range of tools and screening techniques—magnetometers, handheld metal detectors, pat-downs—which reduced its ability to assess in great detail any one screening process common to all the pilot airports. In addition, HSI cited issues regarding the use of baseline data for comparison of screening methods.
HSI attempted to use previous Aviation Direct Access Screening Program (ADASP) screening data for comparison, but these data were not always comparable in terms of how the screening was conducted. In addition, HSI identified a significant limitation in generalizing pilot program results across airports nationwide, given the limited number and diversity of the pilot airports. HSI noted that because these airports were chosen based on geographic diversity and size, other unique airport factors that might affect worker screening operations—such as workforce size and the number and location of access points—may not have been considered. HSI also recognized what we believe to be significant limitations in the development of estimates of the costs and operational effects of implementing 100 percent worker screening and random worker screening nationwide. HSI’s characterization of its cost estimates as “rough order of magnitude”—or imprecise—underscores the challenge of estimating costs for the entire airport system in the absence of detailed data on individual airports nationwide and in light of the limited amount of information gleaned from the pilot on operational effects and other costs. HSI noted that the cost estimates do not include costs associated with operational effects, such as longer wait times for workers, and potentially costly infrastructure modifications, such as construction of roads and shelters to accommodate vehicle screening. HSI developed high- and low-cost estimates based on current and optimal numbers of airport access points and the amount of resources (personnel, space, and equipment) needed to conduct 100 percent and random worker screening. 
According to these estimates, the direct cost—including personnel, equipment, and other operation needs—of implementing 100 percent worker screening would range from $5.7 billion to $14.9 billion for the first year, while the direct costs of implementing enhanced random worker screening would range from $1.8 billion to $6.6 billion. HSI noted that the random worker screening methods applied in the worker screening pilot program were a “significant step” beyond TSA’s ongoing worker screening program—ADASP—which the agency characterizes as a “random” worker screening program. For the four pilot airports that applied random screening methods, TSA and airport associations agreed to screen a targeted 20 percent of workers who entered secured areas each day. TSA officials also told us that this 20 percent threshold was significantly higher than that applied through ADASP, although officials said that they do not track the percentage of screening events processed through ADASP because they lack sufficient resources to do so. In addition to the limitations recognized by HSI, TSA and HSI did not document key aspects of the design and implementation of the pilot program. For example, while they did develop and document a data collection plan that outlined the data requirements, sources, and collection methods to be followed by the seven pilot airports in order to evaluate the program’s costs, benefits, and impacts, they did not document a plan for how such data would be analyzed to formulate results. Standards for Internal Control in the Federal Government states that significant events are to be clearly documented and the documentation should be readily available for examination to inform management decisions.
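The 20 percent random screening target described above can be sketched as an independent random draw at each access-point entry; the unpredictability of selection is what gives random screening its deterrent value. This is a minimal illustration under assumed parameters, not TSA's or ADASP's operating procedure.

```python
import random

def select_for_screening(rate, rng):
    """Return True if this access-point entry is selected for screening.

    An independent draw at each entry keeps selection unpredictable,
    which is the deterrent that random screening relies on. The 20
    percent rate is the pilot's stated target; the selection mechanism
    here is an illustrative assumption.
    """
    return rng.random() < rate

# Over many entries, roughly the target share of workers is screened.
rng = random.Random(42)  # fixed seed so the example is repeatable
entries = 10_000
screened = sum(select_for_screening(0.20, rng) for _ in range(entries))
```

With these parameters, `screened` comes out near 2,000 of the 10,000 entries, close to the 20 percent target on average, even though no individual worker can predict whether a given entry will be screened.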
In addition, in November 2008, based in part on our guide for designing evaluations, we reported that pilot programs can more effectively inform future program rollout when an evaluation plan is developed to guide consistent implementation of the pilot and analysis of the results. At minimum, a well-developed, sound evaluation plan contains several key elements, including measurable objectives, standards for pilot performance, a clearly articulated methodology, detailed data collection methods, and a detailed data analysis plan. Incorporating these elements can help ensure that the implementation of a pilot generates the performance information needed to make effective management decisions. While TSA and HSI completed a data collection plan and generally defined specific measurable objectives for the pilot program, they did not address other key elements that collectively could have strengthened the effectiveness of the pilot program and the usefulness of the results:
Performance standards. TSA and HSI did not develop and document criteria or standards for determining pilot program performance, which are necessary for determining to what extent the pilot program is effective.
Clearly articulated evaluation methodology. TSA and HSI did not fully articulate and document the methodology for evaluating the pilot program. Such a methodology is to include plans for sound sampling methods, appropriate sample sizes, and comparing the pilot results with ongoing efforts. TSA and HSI documented relevant elements, such as certain sampling methods and sample sizes, in both their overall data collection plan for the program and in individual pilot operations plans for each airport implementing the pilot.
However, while officials stated that the seven airports were selected to obtain a range of physical size, worker volume, and geographical dispersion information, they did not document the criteria they used in this process, and could not explain the rationale used to decide which screening methods would be piloted by the individual airports. Because the seven airports tested different screening methods, there were differences in the design of the individual pilots as well as in the type and frequency of the data collected. While design differences are to be expected given that the pilot program was testing disparate screening methods, there were discrepancies in the plans that limited HSI’s ability to compare methods across sites. For example, those airports that tested enhanced screening methods—as opposed to 100 percent worker screening—used different rationales to determine how many inspections would be conducted each day. TSA officials said that this issue and other discrepancies and points of confusion were addressed through oral briefings with the pilot airports, but said that they did not provide additional written instructions to the airports responsible for conducting the pilots. TSA and HSI officials also did not document how they would address deviations from the piloted methods, such as workers who avoided the piloted screening by accessing alternative entry points, or suspension of the pilot because of excessive wait times for workers or passengers (some workers were screened through passenger screening checkpoints). Further, TSA and HSI officials did not develop and document a plan for comparing the results of the piloted worker screening methods with TSA’s ongoing random worker screening program to determine whether the piloted methods had a greater impact on reducing insider risk than ongoing screening efforts.
Detailed data analysis.
Although the agreement between TSA and HSI also called for the development of a data analysis plan, neither HSI nor TSA developed an analysis plan to describe how the collected data would be used to track the program’s performance and evaluate the effectiveness of the piloted screening methods, including 100 percent worker screening. For example, HSI used the number of confiscated items as a means of comparing the relative effectiveness of each screening method. However, HSI reported that the number of items confiscated during pilot operations was “very low” at most pilot airports, and some did not detect any. Based on these data, HSI concluded that random worker screening appeared to be “roughly” as effective in identifying confiscated items as 100 percent worker screening. However, it is possible that there were few or no contraband items to detect, as workers at the pilot airports were warned in advance when the piloted screening methods would be in effect and disclosure signs were posted at access points. As a result, comparing the very low rate—and in some cases, nonexistence—of confiscated items across pilots, coupled with the short assessment period, may not fully indicate the effectiveness of different screening methods at different airports. If a data analysis plan had been developed during pilot design, it could have been used to explain how such data would be analyzed, including how HSI’s analysis of the pilots’ effectiveness accounted for the low confiscation rates. Because of the significance of the pilot program limitations reported by HSI, as well as the lack of documentation and detailed information regarding the evaluation of the program, the reliability of the resulting data and any subsequent conclusions about the potential impacts, costs, benefits, and effectiveness of 100 percent worker screening and other screening methods cannot be verified. 
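The inference problem flagged above—that very low confiscation counts cannot support a "roughly as effective" comparison—can be seen from the width of a simple binomial confidence interval. The counts and the Wald-interval method below are our illustrative assumptions, not HSI's analysis.

```python
import math

def detection_rate_interval(items_found, screenings, z=1.96):
    """Approximate 95 percent (Wald) confidence interval for a detection rate.

    With very few items found, the interval is wide relative to the rate
    itself, so two screening methods with similarly low counts cannot be
    statistically distinguished. Illustration only; not HSI's analysis.
    """
    p = items_found / screenings
    half_width = z * math.sqrt(p * (1 - p) / screenings)
    return max(0.0, p - half_width), p + half_width

# Hypothetical pilot-scale counts: 3 items in 5,000 screenings for one
# method versus 5 items in 5,000 for another. The intervals overlap
# heavily, so neither method can be called more effective on these data.
method_a = detection_rate_interval(3, 5000)
method_b = detection_rate_interval(5, 5000)
```

Because both intervals stretch from near zero to well above either observed rate, data of this kind cannot separate the screening methods, which is why the low confiscation counts (and the advance warnings that may have suppressed them) undercut the pilot's effectiveness comparison.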
For these reasons, it would not be prudent to base major policy decisions regarding worker screening solely on the results of the pilot program. HSI reported that the wide variation among U.S. commercial airports—such as in size, traffic flow, and design—makes it difficult to generalize results from the seven pilot airports to all commercial airports. While we agree it is difficult to generalize the results of such a small sample to an entire population, a well-documented and sound evaluation plan could have helped ensure that the pilot program generated the data and performance information needed to draw reasonable conclusions about the effectiveness of 100 percent worker screening and other methods to inform nationwide implementation. Incorporating these elements into an evaluation plan when designing future pilots could help ensure that TSA’s pilots generate the necessary data for making management decisions and that TSA can demonstrate that the results are reliable. According to TSA officials, FSDs and others in the aviation community have long recognized the potential for insiders to do harm from within an airport. TSA officials said that they developed ADASP—a random worker screening program—to counteract the potential vulnerability of airports to an insider attack. According to TSA officials, ADASP serves as an additional layer of security and as a deterrent to workers who seek to smuggle drugs or weapons or to do harm. According to senior TSA officials, FSDs decide when and how to implement ADASP, including the random screening of passengers at the boarding gate or workers at SIDA access points to the sterile area. TSA officials said that ADASP was initially developed as a pilot project at one airport in March 2005 to deter workers from breaching access controls and procedures for secured areas at that particular airport.
According to officials, after concluding that the pilot was successful in deterring airport workers from bringing restricted items into secured areas, TSA began implementing ADASP on a nationwide voluntary basis in August 2006 using existing resources. In March 2007, in response to several incidents of insider criminal activity, TSA directed that ADASP be conducted at all commercial airports nationwide. For example, on March 5, 2007, two airline employees smuggled 14 firearms and 8 pounds of marijuana on board a commercial airplane at Orlando International Airport (based on information received through an anonymous tip, the contraband was confiscated when the plane landed in San Juan, Puerto Rico). In its October 2008 report, the DHS Office of the Inspector General (OIG) found that ADASP was being implemented in a manner that allowed workers to avoid being screened, and that the program had been applied inconsistently across airports. For example, at most of the seven airports the DHS OIG visited, ADASP screening stations were set up in front of worker access points, which allowed workers to identify that ADASP was being implemented and potentially choose another entry and avoid being screened. However, at another airport, the screening location was set up behind the access point, which prevented workers from avoiding being screened. ADASP standard operating procedures allow ADASP screening locations to be set up in front of or behind direct access points as long as there is signage alerting workers that ADASP screening is taking place. However, the DHS OIG found that the location of the screening stations— either in front of or behind direct access points—affected whether posted signs were visible to workers. The DHS OIG recommended that TSA apply consistent ADASP policies and procedures at all airports, and establish an ADASP working group to consider policy and procedure changes based on an accumulation of best practices across the country. 
TSA agreed with the DHS OIG’s recommendations, and officials stated that they have begun to take action to address them. Since April 2004, and in response to our prior recommendation, TSA has taken steps to enhance airport worker background checks. TSA background checks are composed of security threat assessments (STA), which are name-based records checks against various terrorist watch lists, and criminal history record checks (CHRC), which are fingerprint-based criminal records checks. TSA requires airport workers to undergo both STAs and CHRCs before being granted unescorted access to secured areas in which they perform their duties. In July 2004 TSA expanded STA requirements by requiring workers in certain secured areas to submit current biographical information, such as date of birth. TSA further augmented STAs in 2005 to include a citizenship check to identify individuals who may be subject to coercion because of their immigration status or who may otherwise pose a threat to transportation security. In 2007 TSA expanded STA requirements beyond workers with sterile area or SIDA access to apply to all individuals seeking or holding airport-issued identification badges or credentials. Finally, in June 2009 TSA began requiring airport operators to renew all airport identification media every 2 years, deactivate expired media and require workers to resubmit biographical information in the event of certain changes, and expand the STA requirement to include individuals with unescorted access to the AOA, among other things. TSA has taken steps to strengthen its background check requirements and is considering additional actions to address certain statutory requirements and issues that we identified in 2004. For example, TSA is considering revising its regulation listing the offenses that, if a conviction occurred within the 10 years before a person applies for unescorted access, would disqualify that person from receiving unescorted access to secured areas.
TSA officials told us that TSA and industry stakeholders are considering whether some disqualifying offenses may warrant a lifelong ban. In addition, while TSA has not yet specifically addressed a statutory provision requiring TSA to require, by regulation, that individuals with regularly escorted access to secured airport areas undergo background checks, TSA officials told us that they believe the agency’s existing measures address the potential risk presented by such workers. They also said that it would be challenging to identify the population of workers who require regularly escorted access because such individuals—for example, construction workers—enter airports on an infrequent and unpredictable basis. Since 2004, TSA has taken some steps to develop biometric worker credentialing; however, it is unclear to what extent TSA plans to address statutory requirements regarding biometric technology, such as developing or requiring biometric access controls at commercial airports in consultation with industry stakeholders. For instance, in October 2008 the DHS OIG reported that TSA planned to mandate phased-in biometric upgrades for all airport access control systems to meet certain specifications. However, as of May 2009, according to TSA officials, the agency had not made a final decision on whether to require airports to implement biometric access controls, but it intends to pursue a combination of rule making and other measures to encourage airports to voluntarily implement biometric credentials and control systems. While TSA officials said that the agency issued a security directive in December 2008 that encourages airports to implement biometric access control systems that are aligned with existing federal identification standards, TSA officials also reported the need to ensure that airports incorporate up-to-date standards. 
These officials also said that TSA is considering establishing minimum requirements to ensure consistency in data collection, card information configuration, and biometric information. Airport operators and industry association officials have called for a consensus-based approach to developing biometric technology standards for airports, and have stressed the need for standards that allow for flexibility and consider the significant investment some airports have already made in biometric technology. Airport operators have also expressed a reluctance to move forward with individual biometric projects because of concerns that their enhancements will not conform to future federal standards. Although TSA has not decided whether it will mandate biometric credentials and access controls at airports, it has taken steps to assess and develop such technology in response to stakeholder concerns and statutory requirements. For example, TSA officials said the agency has assisted the aviation industry and RTCA, Inc., a federal aviation advisory committee, in developing recommended security standards for biometric access controls, which officials said provide guidelines for acquiring, designing, and implementing access control systems. TSA officials also noted that the agency has cooperated with the Biometric Airport Security Identification Consortium, or BASIC—a working group of airport operators and aviation association representatives—which has developed guidance on key principles that it believes should be part of any future biometric credential and access control system. In addition, TSA is in the early stages of developing the Aviation Credential Interoperability Solution (ACIS) program. ACIS is conceived as a credentialing system in which airports use biometrics to verify the identities and privileges of workers who have airport- or air carrier–issued identification badges before granting them entry to secured areas. 
According to TSA, ACIS would provide a trusted biometric credential based on smart card technology (about the size of a credit card, using circuit chips to store and process data) and specific industry standards, and establish standard airport processes for enrollment, card issuance, vetting, and the management of credentials. Although these processes would be standardized nationwide, airports would still be individually responsible for determining access authority. According to TSA officials, the agency is seeking to build ACIS on much of the airports’ existing infrastructure and systems and has asked industry stakeholders for input on key considerations, including the population of workers who would receive the credential, program policies, process, technology considerations, operational impacts, and concerns regarding ACIS. However, as of May 2009, TSA officials could not explain the status of ACIS or provide additional information on the possible implementation of the program since the agency released the specifications for industry comment in April 2008. As a result, it is unclear when and how the agency plans to address the requirements of the Intelligence Reform and Terrorism Prevention Act, including establishing minimum standards for biometric systems and determining the best way to incorporate these decisions into airports’ existing practices and systems. As of May 2009 TSA officials had not provided any further information, such as scheduled milestones, on TSA’s plans to implement biometric technology at airports. Standard practices in program management suggest that developing scheduled milestones can help define the scope of the project, achieve key deliverables, and communicate with key stakeholders. 
In addition, until TSA communicates its decision on whether it plans to mandate—such as through a rule making—or collaboratively implement biometric access controls at airports, and what approach is best—be it ACIS or another system—operators may be hesitant to upgrade airport security in this area. As we reported in 2004, airport operators do not want to run the risk of installing costly technology that may not comply with future TSA requirements and standards. Developing milestones for implementing a biometric system could help ensure that TSA addresses statutory requirements. In addition, such milestones will provide airports and the aviation industry with the scheduling information needed to plan future security improvements and expenditures. In addition to biometric technology efforts, TSA has also initiated efforts to assess other airport perimeter and access control technology. Pursuant to ATSA, TSA established two pilot programs to assess perimeter and access control security technology, the Airport Access Control Pilot Program (AACPP) in 2004 and the Airport Perimeter Security (APS) pilot program in 2006. AACPP piloted various new and emerging airport security technologies, including biometrics. TSA issued the final report on AACPP in December 2006, but did not recommend any of the piloted technologies for full-scale implementation. TSA officials said that a second round of pilot projects would be necessary to allow time for project evaluation and limited deployments, but as of May 2009 TSA officials said that details for this second round were still being finalized. The purpose of the APS pilot, according to TSA officials, is to identify and mitigate existing airport perimeter security vulnerabilities using commercially available technology. 
APS was originally scheduled to be completed in December 2007; according to TSA officials, five of the six pilot projects have been completed, but the remaining pilot has been delayed because of problems with the acquisition process. According to TSA officials, the final pilot project is to be completed by October 2009. TSA officials told us that the agency has also taken steps to provide some technical and financial support to small- and medium-sized airports through AACPP and the APS pilot program, as both tested technologies that could be suitable for airports of these sizes. TSA officials also stated that smaller airports could potentially benefit from the agency’s efforts to test the Virtual Perimeter Monitoring System, which was developed by the U.S. Navy and is being installed and evaluated at four small airports. Further, officials noted that TSA has also provided significant funding to support cooperative agreements for the deployment of law enforcement officers at airports—including Category II, III, and IV airports—to help defray security costs. However, according to TSA officials, as of May 2009 TSA had not yet developed a plan, or a time frame for developing a plan, to provide technical information and funding to small- and medium-sized airports, as required by ATSA. According to TSA officials, funds had not been appropriated or specifically directed to develop such a plan, and TSA’s resources and management attention have been focused on other statutory requirements for which it has more direct responsibility and deadlines, including passenger and baggage screening requirements. (For a summary of TSA actions to address certain statutory requirements for airport security technology, see app. II.) TSA has taken actions to improve general airport security by establishing programs and requirements. 
For example, TSA has augmented access control screening and general airport security by increasing the presence of transportation security officers and law enforcement officials through the Screening of Passengers by Observation Techniques (SPOT) program and the Law Enforcement Officer Reimbursement Program. In addition, it uses the Visible Intermodal Prevention and Response (VIPR) program, which is used across the transportation sector, to augment airport security efforts. (For more information on these TSA programs, see app. VI.) TSA uses a variety of regulatory mechanisms for imposing requirements within the transportation sector. In the aviation environment, TSA uses the security directive as one of its regulatory tools for imposing requirements to strengthen the security of civil aviation, including security at the nation’s commercial airports. Pursuant to TSA regulation, the agency may decide to use security directives to impose requirements on airport operators if, for example, it determines that additional security measures are needed to respond to general or specific threats against the civil aviation system. As of March 2009 TSA identified 25 security directives or emergency amendments in effect that related to various aspects of airport perimeter and access control security. As shown in table 2, TSA imposed requirements through security directives that address areas such as worker and vehicle screening, criminal history record checks, and law enforcement officer deployments. According to TSA officials, security directives enable the agency to respond rapidly to immediate or imminent threats and provide the agency with flexibility in how it imposes requirements on airport operators. This function is especially relevant given the adaptive, dynamic nature of the terrorist threat. 
Moreover, according to TSA, imposing requirements through security directives is less time consuming than other processes, such as the lengthier notice-and-comment rule making process, which generally provides opportunity for more stakeholder input, requires cost-benefit analysis, and provides the regulated entities with more notice before implementation and enforcement. Officials from two prominent aviation associations and eight of nine airports we visited identified concerns regarding requirements established through security directives. Officials from the two aviation associations noted inconsistencies between requirements established through separate security directives. For example, they noted that the requirements for airport-issued identification badges are different from those for badges issued by an air carrier. Workers employed by the airport, air carrier, or other entities who apply for an airport identification badge granting unescorted access to a secured area are required to undergo an immigration and citizenship status check, whereas workers who apply through an air carrier, which can grant similar unescorted access rights, are not. Both airport and air carrier workers can apply to an airport operator for airport-issued identification badges, but only air carrier workers can apply to their aircraft operator (employer) for an air carrier–issued identification badge. TSA officials told us that the agency plans to address this inconsistency—which has been in effect since December 2002—and is working on a time frame for doing so. Airport operator officials from eight of the nine airports we visited and officials from two industry associations expressed concern that requirements established through security directives related to airport security are often issued for an indefinite time period. Our review of 25 airport security directives and emergency amendments showed that all except one were issued with no expiration date. 
The two aviation industry associations have expressed concerns directly to TSA that security directive requirements should be temporary and include expiration dates so that they can be periodically reviewed for relevancy. According to senior officials, TSA does not have internal control procedures for monitoring and coordinating requirements established through security directives related to airport perimeter and access control security. In November 2008 TSA officials told us that the agency had drafted an operations directive that documents procedures for developing, coordinating, issuing, and monitoring civil aviation security directives. According to officials, this operations directive also is to identify procedures for conducting periodic reviews of requirements imposed through security directives. However, while TSA officials told us that they initially planned to issue the operations directive in April 2009, in May 2009 they said that they were in the process of adopting the recommendations of an internal team commissioned to review and identify improvements to TSA’s policy review process, including the proposed operations directive. In addition, as of May 2009, officials did not have an expected date for finalizing the directive. TSA officials explained that because the review team’s recommendations will require organizational changes and upgrades to TSA’s information technology infrastructure, it will take a significant amount of time before an approved directive can be issued. As a result, it is unclear to what extent the operations directive will address concerns expressed by aviation operators and industry stakeholders. Standard practices in program management call for documented milestones to ensure that results are achieved. 
Establishing milestones for implementing guidance to periodically review airport security requirements imposed through security directives would help TSA formalize review of these directives within a time frame authorized by management. In addition to the stakeholder issues previously discussed, representatives from two prominent aviation industry associations have expressed concern that TSA has not issued security directives in accordance with the law. Specifically, these representatives noted that the Transportation Security Oversight Board (TSOB) has not reviewed TSA’s airport perimeter and access control security directives in accordance with a provision set forth in ATSA. This provision, as amended, establishes emergency procedures by which TSA may immediately issue a regulation or security directive to protect transportation security, and provides that any such regulation or security directive is subject to review by the TSOB. The provision further states that any regulation or security directive issued pursuant to this authority may remain in effect for a period not to exceed 90 days unless ratified or disapproved by the TSOB. According to TSA officials, the agency has not issued security directives related to airport perimeter and access control security under this emergency authority. Rather, officials explained, the agency has issued such security directives (and all aviation-related security directives) in accordance with its aviation security regulations governing airport and aircraft operators, which predate ATSA and the establishment of TSA. FAA implemented regulations—promulgated through the notice-and-comment rule making process—establishing FAA’s authority to issue security directives to impose requirements on U.S. airport and aircraft operators. With the establishment of TSA, FAA’s authority to regulate civil aviation security, including its authority to issue security directives, transferred to the new agency. 
TSA does not consider ATSA to have altered this existing authority. Although TSA has developed a variety of individual protective actions to mitigate identified airport security risks, it has not developed a unified national strategy aimed at enhancing airport perimeter and access control security. Through our prior work on national security planning, we have identified characteristics of effective security strategies, several of which are relevant to TSA’s numerous efforts to enhance perimeter and access control security. For example, TSA has not developed goals and objectives for related programs and activities, prioritized protective security actions, or developed performance measures to assess the results of its perimeter and access control security efforts beyond tracking outputs (the level of activity provided over a period of time). Further, although TSA has identified some cost information that is used to inform programmatic decision making, it has not fully assessed the costs and resources necessary to implement its airport security efforts. Finally, TSA has not fully outlined how activities are to be coordinated among stakeholders, integrated with other aviation security priorities, or implemented within the agency. Developing a strategy to accomplish goals and desired outcomes helps organizations manage their programs more effectively and is an essential mechanism to guide progress in achieving desired results. Strategies are the starting point and foundation for defining what an agency seeks to accomplish, and we have reported that effective strategies provide an overarching framework for setting and communicating goals and priorities and allocating resources to inform decision making and help ensure accountability. 
Moreover, a strategy that outlines security goals, as well as mechanisms and measures to achieve such goals, and that is understood and available to all relevant stakeholders strengthens implementation of and accountability to common principles. A national strategy to guide and integrate the nation’s airport security activities could strengthen decision making and accountability for several reasons. First, TSA has identified airport perimeter and access control security—particularly the mitigation of risks posed by workers who have unescorted access to secured areas—as a top priority. Historically, TSA has recognized the importance of developing strategies for high-priority security programs involving high levels of perceived risk and resources, such as air cargo security and the SPOT program. Second, in security networks that rely on the cooperation of all security partners—in this case TSA, airport operators, and air carriers—strategies can provide a basis for communication and mutual understanding between security partners that is fundamental for such integrated protective programs and activities. In addition, because of the mutually dependent roles that TSA and its security partners have in airport security operations, TSA’s ability to achieve results depends on the ability of all security partners to operate under common procedures and achieve shared security goals. Finally, officials from two prominent industry organizations that represent the majority of the nation’s airport operators said that the industry would significantly benefit from a TSA-led strategy that identified long-term goals for airport perimeter and access control security. 
In addition to providing a unifying framework, a strategy that clearly identifies milestones, developed in cooperation with industry security partners, could make it easier for airport operators to plan, fund, and implement security enhancements that according to industry officials can require intensive capital improvements. While TSA has taken steps to assess threats and vulnerabilities related to airport security and has developed a variety of protective actions to mitigate risk, it has not developed a unifying strategy to guide the development, implementation, and assessment of these varied actions and those of its security partners. TSA officials cited three reasons why the agency has not developed a strategy to guide national efforts to enhance airport security. First, TSA officials cited a lack of congressional emphasis on airport perimeter and access control security relative to other high-risk areas, such as passenger and baggage screening. Second, these officials noted that airport operators, not TSA, have operational responsibility for airport security. Third, they cited a lack of resources and funding. While these issues may present challenges, they should be considered in light of other factors. First, Congress has long recognized the importance of airport security, and has contributed to the establishment of a variety of requirements pertaining to this issue. For example, the appropriations committees, through reports accompanying DHS’s annual appropriations acts, have directed TSA to focus its efforts on enhancing several aspects of airport perimeter and access control security. Moreover, developing a strategy that clearly articulates the risks to airport security and demonstrates how those risks can be addressed through protective actions could help inform decision making. 
Second, though we recognize that airport operators, not TSA, generally have operational responsibility for airport perimeter and access control security, TSA—as the regulatory authority for airport security and the designated lead agency for transportation security—is responsible for identifying, prioritizing, and coordinating protection efforts within aviation, including those related to airport security. TSA currently exercises this authority by ensuring compliance with TSA-approved airport operator security programs and, pursuant to them, by issuing and ensuring compliance with requirements imposed through security directives or other means. Finally, regarding resource and funding constraints, federal guidelines for strategies and planning include linking program activities and anticipated outcomes with expected program costs. In this regard, a strategy could strengthen decision making to help allocate limited resources to mitigate risk, which is a cornerstone of homeland security policy. Additionally, DHS’s risk management approach recognizes that resources are to be focused on the greatest risks, and on protective activities designed to achieve the biggest reduction in those risks given the limited resources at hand. The NIPP risk management framework provides guidance for agencies to develop strategies and prioritize activities to those ends. A strategy helps to link individual programs to specific performance goals and describe how the programs will contribute to the achievement of those goals. A national strategy could help TSA, airport operators, and industry stakeholders in aligning their activities, processes, and resources to support mission-related outcomes for airport perimeter and access control security, and, as a result, in determining whether their efforts are effective in meeting their goals for airport security. 
Our previous work has identified that an essential characteristic of effective strategies is the setting of goals, priorities, and performance measures. This characteristic addresses what a strategy is trying to achieve and the steps needed to achieve and measure those results. A strategy can provide a description of an ideal overall outcome, or “end-state,” and link individual programs and activities to specific performance goals, describing how they will contribute to the achievement of the end-state. The prioritization of programs and activities, and the identification of milestones and performance measures, can aid implementing parties in achieving results according to specific time frames, as well as enable effective oversight and accountability. The NIPP also calls for the development of goals, priorities, and performance measures to guide DHS components, including TSA, in achieving a desired end-state. Security goals allow stakeholders to identify the desired outcomes that a security program intends to achieve and that all security partners are to work to attain. Defining goals and desired outcomes, in turn, enables stakeholders to better guide their decision making to develop protective security programs and activities that mitigate risks. The NIPP also states that security goals should be used in the development of specific protective programs and considered for distinct assets and systems. However, according to TSA officials, the agency has not developed goals and objectives for airport security, including specific targets or measures related to the effectiveness of security programs and activities. TSA officials told us that the agency sets goals for aviation security as a whole but has not set goals and objectives for the airport perimeter and access control security area. 
Developing a baseline set of security goals and objectives that consider, if not reflect, the airport perimeter and access control security environment would help provide TSA and its security partners with the fundamental tools needed to define outcomes for airport perimeter and access control security. Furthermore, a defined outcome that all security partners can work toward will better position TSA to provide reasonable assurance that it is taking the most appropriate steps for ensuring airport security. Our past work has also shown that the identification of program priorities in a strategy aids implementing parties in achieving results, which enables more effective oversight and accountability. Although TSA has implemented protective programs and activities that address risks to airport security, according to TSA officials it has not prioritized these activities nor has it yet aligned them with specific goals and objectives. TSA officials told us that in keeping with legislative mandates, they have focused agency resources on aviation security programs and activities that were of higher priority, such as passenger and baggage screening and air cargo security. Identifying priorities related to airport perimeter and access control security could assist TSA in achieving results within specified time frames and limited resources because it would allow the agency to concentrate on areas of greatest importance. In addition to our past work on national strategies, the NIPP and other federal guidance require agencies to assess whether their efforts are effective in achieving key security goals and objectives so as to help drive future investment and resource decisions and adapt and adjust protective efforts as risks change. 
Decision makers use performance measurement information, including activity outputs and descriptive information regarding program operations, to identify problems or weaknesses in individual programs, identify factors causing the problems, and modify services or processes to try to address problems. Decision makers can also use performance information collectively, and, according to the NIPP, examine a variety of data to provide a holistic picture of the health and effectiveness of a security approach from which to make security improvements. If significant limitations on performance measures exist, the strategy might address plans to obtain better data or measurements, such as national standards or indicators of preparedness. TSA officials told us that TSA has not fully assessed the effectiveness of its protective activities for airport perimeters and secured areas, but they said that the agency has taken some steps to collect certain performance data for some airport security programs and activities to help inform programmatic decision making. For example, TSA officials told us that they require protective programs, such as ADASP and VIPR, to report certain output data and descriptive program information, which officials use to inform administrative or programmatic decisions. For ADASP, TSA requires FSDs to collect information on, among other things, the number of workers screened, vehicles inspected, and prohibited items surrendered. TSA officials said that they use these descriptive and output data to inform programmatic decisions, such as determining the number of staff days needed to support ADASP operations nationwide. However, TSA was not able to provide documentation on how such analysis has been conducted. For VIPR, officials said that they require team members to complete after-action reports that include data on the number of participants, locations, and types of activities conducted. 
TSA officials said that they are analyzing and categorizing this descriptive and output information to determine trends and identify areas of success and failure, which they will use to improve future operations, though they did not provide us with examples of how they have done this. TSA officials also told us that they require SPOT to report descriptive operations data and situational report information, which are to be used to assign necessary duties and correct problems with program implementation. However, TSA officials could not tell us how they use these descriptive and output data to inform program development and administrative decisions. While the use of descriptive and output data to inform program development and administration is both appropriate and valuable, leading management practices emphasize that successful performance measurement focuses on assessing the results of individual programs and activities. TSA officials also told us that while they recognize the importance of assessing the effectiveness of airport security programs and activities in reducing known threats, it is difficult to do so because the primary purpose of these activities is deterrence. Assessing the deterrent benefits of a program is inherently challenging because it involves determining what would have happened in the absence of an intervention, or protective action, and it is often difficult to isolate the impact of the individual program on behavior that may be affected by multiple other factors. Because of this difficulty, officials told us that they have instead focused their efforts on assessing the extent to which each airport security activity supports TSA’s overall layered approach to security. We recognize that assessing the effectiveness of deterrence-related activities is challenging and that it continues to be the focus of ongoing analytic effort and policy review. 
For example, a January 2007 report by the Department of Transportation addressed issues related to measuring deterrence in the maritime sector, and a February 2007 report by the RAND Corporation acknowledged the challenges associated with measuring the benefits of security programs aimed at reducing terrorist risk. However, as a feature of TSA’s layered security approach, many of its airport activities address other aspects of security in addition to deterrence. Like other homeland security efforts, TSA’s airport security activities also seek to limit the potential for attack, safeguard critical infrastructure and property, identify wrongdoing, and ensure an effective and efficient response in the event of an attack; the desired outcome of its efforts is to reduce the risk of an attack. Deterrence is an inherent benefit of any protective action, and methods designed to detect wrongdoing and measures taken to safeguard critical infrastructure and property, for example, also help deter terrorist attacks. There are a number of activities that TSA has implemented that seek to reduce this risk, such as requiring security threat assessments for all airport workers. Some of these activities serve principally to deter, such as ADASP, while others are more focused on safeguarding critical infrastructure and property, such as conducting compliance inspections of aviation security regulations or installing perimeter fencing. Some activities serve multiple purposes, such as VIPR, which seeks to provide a visual deterrent to terrorist or other criminal activity, but also seeks to safeguard critical infrastructure in various modes of transportation. Examining the extent to which its activities have effectively addressed these various purposes would enable TSA to more efficiently implement and manage its programs. 
There are several methods available that TSA could explore to gain insight on the extent to which its security activities have met their desired purpose and to ultimately improve program performance. For example, TSA could work with stakeholders, such as airport operators and other security partners, to identify and share lessons learned and best practices across airports to better tailor its efforts and resources and continuously improve security. TSA could also use information gathered through covert testing or compliance inspections—such as noncompliance or security breaches—to make adjustments to specific security activities and to identify which aspects require additional investigation. In addition, TSA could develop proxy measures—indirect measures or signs that approximate or represent the direct measure—to show how security efforts correlate to an improved security outcome. Appendix VII provides a complete discussion of these methods, as well as information on other alternatives TSA could explore. 

Our prior work shows that effective strategies address costs, resources, and resource allocation issues. Specifically, effective strategies address the costs of implementing the individual components of the strategy, the sources and types of resources needed (such as human capital or research and development), and where those resources should be targeted to better balance risk reductions with costs. Effective strategies may also address in greater detail how risk management will aid implementing parties in prioritizing and allocating resources based on expected benefits and costs. Our prior work found that strategies that provide guidance on costs and needed resources help implementing parties better allocate resources according to priorities, track costs and performance, and shift resources as appropriate. 
Statutory requirements and federal cost accounting standards also stress the benefits of developing and reporting on the cost of federal programs and activities, as well as using that information to more effectively allocate resources and inform program management decisions. TSA has identified the costs and resources it needs for some specific activities and programs that exclusively support airport security, such as JVAs of selected commercial airports. However, for programs that serve airport security as well as other aspects of aviation security, TSA has not identified the costs and resources devoted to airport security. For example, TSA has identified its expenditures for compliance inspections and other airport security–related programs and activities, which collectively totaled nearly $850 million from fiscal years 2004 through 2008. However, TSA has not identified what portion of these funds was directly allocated for airport security activities versus other aviation security activities, such as passenger screening. (For a more detailed discussion of airport security costs, see app. IV.) Further, TSA has not fully identified the resources it needs to mitigate risks to airport perimeter and access control security. According to TSA officials, identifying collective agency costs and resource needs for airport security activities is challenging because airport security is not a separately funded TSA program, and many airport security activities are part of broader security programs. However, without attempting to identify total agency costs, it will be difficult for TSA to identify costs associated with individual security activities, and therefore it will be hindered in determining the resources it needs to sustain desired activity levels and realize targeted results. 
While TSA officials told us that they are starting to identify costs for airport security activities and plan to complete this effort by the end of 2009, they could provide no additional information to illustrate their approach for doing so. As a result, it is unclear what costs the agency will identify, and to what extent TSA will be able to identify costs for specific security activities in order to identify the resources it needs to sustain desired activity levels and realize targeted results. TSA officials also told us that they have not yet identified or estimated costs to the aviation industry for implementing airport security requirements, such as background checks for their workers, or capital costs—such as construction and equipment—that airport operators incur to enhance the security of their facilities. According to these officials, the agency does not have the resources and funds to collect cost information from airport operators. However, TSA officials could not tell us how and to what extent they had assessed the resources and funds needed to collect this information or whether they had explored other options for collecting cost data, such as working with industry associations to survey airport operators. Estimating general cost information on the types and levels of resources needed for desired outcomes would provide TSA and other stakeholders with valuable information with which to make informed resource and investment decisions, including decisions about future allocation needs, to mitigate risks to airport security. According to our previous work on effective national strategies, as well as NIPP guidance, risk management focuses security efforts on those activities that bring about the greatest reduction in risk given the resources used. According to federal guidance, employing systematic cost-benefit analysis helps ensure that agencies choose the security priorities that most efficiently and effectively mitigate risk for the resources available. 
The Office of Management and Budget (OMB) cites cost-benefit analysis as one of the key principles to be considered when an agency allocates resources for capital expenditures because it provides decision makers with a clear indication of the most efficient alternative. DHS’s Cost-Benefit Analysis Guidebook also states that cost-benefit analysis identifies the superior financial solution among competing alternatives, and that it is a proven management tool to support planning and managing costs and risks. While TSA has made efforts to consider costs for some airport security programs, it has not used cost-benefit analysis to allocate or prioritize resources toward the most cost-effective alternative actions for mitigating risk. According to TSA officials, certain factors have limited TSA’s ability to conduct cost-benefit analysis, such as resource constraints and the need to take immediate action to address new and emerging security threats. However, officials could not demonstrate that they had attempted to conduct cost-benefit analysis for programs and activities related to airport security within the constraints of current resources, or explain how, or to what extent, they had assessed the resources that would be needed to conduct cost-benefit analysis. Further, TSA officials could not cite a situation in which the need to take immediate action—outside of issuing security directives—in response to a threat prevented them from conducting cost-benefit analysis. TSA officials agreed that conducting cost-benefit analysis is beneficial, but also said that it is not always practical because of the difficulty in quantifying the benefits of deterrence-based activities. Because of this challenge, officials said that they have used professional judgment, past experience, law enforcement principles, and intelligence information to evaluate alternative airport security activities to mitigate risks. 
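To illustrate the kind of comparison such an analysis supports, the following is a minimal, hypothetical sketch that ranks alternative security measures by estimated risk reduction per dollar. The alternative names, costs, and risk-reduction scores are invented for illustration and are not TSA figures.

```python
# Hypothetical sketch of a cost-effectiveness comparison among
# alternative security measures. All figures are illustrative
# assumptions, not actual TSA program data.

alternatives = {
    # name: (annual_cost_usd, estimated_risk_reduction_score)
    "perimeter_fencing_upgrade": (2_000_000, 15),
    "random_worker_screening":   (5_000_000, 30),
    "additional_jvas":           (1_000_000, 10),
}

def cost_effectiveness(cost, reduction):
    """Estimated risk-reduction points achieved per $1 million spent."""
    return reduction / (cost / 1_000_000)

# Rank the alternatives from most to least cost-effective.
ranked = sorted(
    alternatives.items(),
    key=lambda item: cost_effectiveness(*item[1]),
    reverse=True,
)
```

A real analysis would also have to account for the qualitative, deterrence-related benefits discussed above, which resist this kind of direct scoring and, per OMB guidance, may need to be assessed in qualitative terms.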
While TSA’s approach to identifying security actions includes accepted risk reduction decision-making tools, such as professional judgment, it does not provide a means to fully weigh the benefits versus the costs of implementing alternative actions. However, despite the challenges TSA cited to developing cost-benefit analysis, TSA officials told us that as of January 2009, the agency was in the early stages of investigating costs and benefits related to airport perimeter and access control. According to these officials, TSA plans to initially focus on developing cost estimates associated with improving access control, a process the agency expects to complete by the end of 2009. However, because TSA officials did not explain how they expect to identify and estimate these costs and how, in the future, they plan to identify and estimate benefits for alternative actions, especially those actions that focus on deterrence, it is not yet clear to what extent TSA’s efforts will constitute cost-benefit analysis. The use of systematic cost-benefit analysis when considering future airport security measures would help TSA to choose the most cost-effective security options for mitigating risk. We recognize the difficulties in quantifying the benefits of deterrence-based activities, but there are alternatives that TSA could pursue to assess benefits, such as examining the extent to which its activities address other purposes besides deterrence. Moreover, OMB recognizes that in some circumstances—such as when data are insufficient—costs and benefits cannot be quantified, in which case costs and benefits are to be assessed in qualitative terms. By exploring ways to identify expected costs associated with alternatives, and balancing these with estimated security benefits, TSA can more fully ensure that it is efficiently allocating and prioritizing its limited resources, as well as those of individual airports, in a way that maximizes the effectiveness of its airport security efforts. 
Our prior work shows that effective national strategies address how to coordinate efforts and resolve conflicts among stakeholders, address ways in which each strategy relates to the goals of other strategies, and devise plans for implementing the strategies. Because the responsibility for airport perimeter and access control security involves multiple stakeholders, including federal entities, individual airport operators, air carriers, and industry organizations, coordination among stakeholders is critical. In such an environment, the implementation of security activities is strengthened when a strategy addresses how federal efforts will coordinate and integrate with other federal and private sector initiatives, relate to the goals and objectives of other strategies and plans, and be implemented and coordinated by relevant parties. Representatives from industry associations told us that while TSA has collaborated with industry stakeholders on the development of multiple airport security activities and initiatives, the agency has not always fully coordinated the development and implementation of specific security activities and initiatives. For example, although TSA has worked with the industry in the development of some aspects of airport security technology, such as biometrics, industry association officials told us that the agency has not yet recommended specific technology based on the results of technology-based pilot programs it completed over 2 years ago in 2007. These officials also noted that TSA did not fully coordinate with the industry in its decision to impose stronger requirements on worker credentialing practices in the wake of security incidents at individual airports. TSA officials said that they have worked closely with industry stakeholders in addressing airport security issues, and have established working groups to continue to coordinate on issues such as biometric access control security. 
Our prior work found that a strategy should provide both direction and guidance to government and private entities so that missions and contributions can be more appropriately coordinated. TSA has not demonstrated how it relates the activities of airport security to the goals, objectives, and activities of TSA’s other aviation security strategies, such as passenger screening, air cargo screening, and baggage screening. In addition, TSA has not identified how these various security areas are coordinated at the national level. For example, TSA officials told us that some security efforts, such as the random worker screening program and roving security response teams, are used to address multiple security needs, such as both passenger and worker screening, but could not identify the extent to which program resources are planned for and applied between competing security needs. TSA officials said that decisions to allocate random worker screening resources between passenger and worker screening are made at the local airport level by FSDs. However, a clear understanding of how TSA’s needs and goals for airport security align with those of its other security responsibilities would enable the agency to better coordinate its programs, gauge the effectiveness of its actions, and allocate resources to its highest-priority needs. Finally, it is not clear to what extent TSA has coordinated airport security activities within the agency, the responsibilities for which are spread among multiple offices. TSA officials explained that agency efforts to enhance and oversee airport perimeter and access control security are spread across multiple programs within five TSA component offices. No one office or program has responsibility for coordinating and integrating actions that affect the numerous aspects of perimeter and access control security, including operations, technology, intelligence, program policy, credentialing, and threat assessments. 
TSA officials agreed that the diffusion of responsibilities across offices can present coordination challenges. Developing an overarching, integrated framework for coordinating actions among implementing parties could better position TSA to avoid unnecessary duplication, overlap, and conflict in the implementation of these actions. According to our past work, strategies that provide guidance to clarify and link the roles, responsibilities, and capabilities of the implementing parties can foster more effective implementation and accountability. Commercial airports facilitate the movement of millions of passengers and tons of goods each week and are an essential link in the nation’s transportation network. Given TSA’s position that the interconnected commercial airport network is only as strong as its weakest asset, determining vulnerability across this network is fundamental to determining the actions and resources that are necessary to reasonably protect it. Evaluating whether existing, select vulnerability assessments reflect the network of airports will help TSA ensure that its actions strengthen the whole airport system. If TSA finds that additional assessments are needed to identify the extent of vulnerabilities nationwide, then developing a plan with milestones for conducting those assessments, and leveraging existing available assessment information from stakeholders, would help ensure the completion of these assessments and that intended results are achieved. In addition, although the consequences of a successful terrorist breach in airport security have not been assessed, based on past events, the potential impact on U.S. assets, safety, and public morale could be profound. For this reason, assessing the likely consequences of an attack is an essential step in assessing risks to the nation’s airports. 
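The risk framework referenced throughout this report combines threat, vulnerability, and consequence; one common simplification in risk-analysis practice treats risk as a product of the three. The sketch below uses invented scenario names and scores, not actual assessment results.

```python
# Hypothetical sketch of a NIPP-style risk combination, in which
# risk is approximated as threat x vulnerability x consequence.
# Scenario names and 0-1 scores are illustrative assumptions only.

def risk_score(threat, vulnerability, consequence):
    """A common simplification: risk = threat * vulnerability * consequence."""
    return threat * vulnerability * consequence

# Two hypothetical attack scenarios at a single airport.
scenarios = {
    "perimeter_breach":    risk_score(threat=0.3, vulnerability=0.6, consequence=0.9),
    "access_badge_misuse": risk_score(threat=0.5, vulnerability=0.4, consequence=0.7),
}

# The scenario with the highest combined score would be the
# candidate for priority mitigation under this simplification.
highest = max(scenarios, key=scenarios.get)
```

The point of combining all three elements, as the report argues, is that a scenario with moderate threat but high vulnerability and consequence can outrank one with higher threat alone.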
Further, a comprehensive risk assessment that combines threat, vulnerability, and consequence would help TSA determine which risks should be addressed—and to what degree—and would help guide the agency in identifying the necessary resources for addressing these risks. Moreover, documenting milestones for completing the risk assessment would help ensure its timely completion. Implementing and evaluating a pilot program can be challenging, especially given the individual characteristics of the sites involved in the worker screening pilot, such as the variation in airport size, traffic flows, and layouts. However, a well-developed and documented evaluation plan, with well-defined and measurable objectives and standards as well as a clearly articulated methodology and data analysis plan, can help ensure that a pilot program is implemented and evaluated in ways that generate reliable information to inform future program development decisions. By making such a plan a cornerstone of future pilot programs, TSA will be better able to ensure that the results of those pilot programs will produce the reliable data necessary for making the best program and policy decisions. Integrating biometric technology into existing airport access control systems will not be easy given the range of technologies available, the number of stakeholders involved, and potential differences in the biometric controls already in use at airports. Yet Congress, the administration, and the aviation industry have emphasized the need to move forward in implementing such technology to better control access to sensitive airport areas. But until TSA decides whether, when, and how it will mandate biometric access controls at airports, individual airport operators will likely continue to delay investing in potentially costly technology in case it does not comply with future federal standards. 
Establishing milestones for addressing requirements would not only provide airports with the necessary information to appropriately plan future security upgrades, but also give all stakeholders a road map by which they can anticipate future developments. TSA uses security directives as a means for establishing additional security measures in response to general or specific threats against the civil aviation system, including the security of airport perimeters and the controls that limit access to secured airport areas. Just as it is important that federal agencies have flexible mechanisms for responding to the adaptive, dynamic nature of the terrorist threat, it is also important that requirements remain consistent with current threat information. Establishing milestones for periodically reviewing airport perimeter and access control requirements imposed through security directives would help provide TSA and stakeholders with reasonable assurance that TSA’s personnel will review these directives within a time frame authorized by management. TSA, along with industry partners, has taken a variety of steps to implement protective measures to strengthen airport security, and many of these efforts have required numerous stakeholders to implement a range of activities to achieve desired results. These various actions, however, have not been fully integrated and unified toward achieving common outcomes and effectively using resources. A national risk-informed strategy—that establishes measurable goals, priorities, and performance measures; identifies needed resources; and is aligned and integrated with related security efforts—would help guide decision making and hold all public and private security partners accountable for achieving key shared outcomes within available resources. Moreover, a strategy that identifies these key elements would allow TSA to better articulate its needs—and the challenge of meeting those needs—to industry stakeholders and to Congress. 
Furthermore, balancing estimated costs against expected security benefits, and developing measures to assess the effectiveness of security activities, would help TSA provide reasonable assurance that it is properly allocating and prioritizing its limited resources, or those of airports, in a way that maximizes the effectiveness of its airport security efforts. To help ensure that TSA’s actions in enhancing airport security are guided by a systematic risk management approach that appropriately assesses risk and evaluates alternatives, and that it takes a more strategic role in ensuring that government and stakeholder actions and resources are effectively and efficiently applied across the nationwide network of airports, we recommend that the Assistant Secretary of TSA work with aviation stakeholders to implement the following five actions: 

- Develop a comprehensive risk assessment for airport perimeter and access control security, along with milestones (i.e., time frames) for completing the assessment, that (1) uses existing threat and vulnerability assessment activities, (2) includes consequence analysis, and (3) integrates all three elements of risk—threat, vulnerability, and consequence. As part of this effort, evaluate whether the current approach to conducting JVAs appropriately and reasonably assesses systems vulnerabilities, and whether an assessment of security vulnerabilities at airports nationwide should be conducted. If the evaluation demonstrates that a nationwide assessment should be conducted, develop a plan that includes milestones for completing the nationwide assessment. As part of this effort, leverage existing assessment information from industry stakeholders, to the extent feasible and appropriate, to inform its assessment. 

- Ensure that future airport security pilot program evaluation and implementation efforts include a well-developed and well-documented evaluation plan that includes criteria or standards for determining program performance, a clearly articulated methodology, a detailed data collection plan, and a detailed data analysis plan. 

- Develop milestones for meeting statutory requirements, in consultation with appropriate aviation industry stakeholders, for establishing system requirements and performance standards for the use of biometric airport access control systems. 

- Develop milestones for establishing agency procedures for reviewing airport perimeter and access control requirements imposed through security directives. 

- To better ensure a unified approach among airport security stakeholders for developing, implementing, and assessing actions for securing airport perimeters and access to controlled areas, develop a national strategy for airport security that incorporates key characteristics of effective security strategies, including the following: measurable goals, priorities, and performance measures (TSA should also consider using information from other methods, such as covert testing and proxy measures, to gauge progress toward achieving goals); program cost information and the sources and types of resources needed (TSA should also identify where those resources would be most effectively applied by exploring ways to develop and implement cost-benefit analysis to identify the most cost-effective alternatives for reducing risk); and plans for coordinating activities among stakeholders, integrating airport security goals and activities with those of other aviation security priorities, and implementing security activities within the agency. 

We provided a draft of our report to DHS and TSA on August 3, 2009, for review and comment. On September 24, 2009, DHS provided written comments, which are reprinted in appendix VIII. 
In commenting on our report, DHS stated that it concurred with all five recommendations and identified actions planned or under way to implement them. In its comments to our draft report, DHS stated that the Highlights page of our report includes a statement that is inaccurate. We disagree. Specifically, DHS contends that it is not accurate to state that TSA “has not conducted vulnerability assessments for 87 percent of the nation’s 450 commercial airports” because this statement does not recognize that TSA uses other activities to assess airport vulnerabilities, and that these activities are conducted for every commercial airport. For example, DHS stated that (1) every commercial airport must have a TSA-approved ASP, which is to cover personnel, physical, and operational security measures; (2) each ASP is reviewed on a regular basis by an FSD; and (3) such FSD reviews “include a review of security measures applied at the perimeter.” As we noted in our report, TSA identified JVAs, along with professional judgment, as the agency’s primary mechanism for assessing airport security vulnerabilities in accordance with NIPP requirements. Moreover, it is not clear to what extent the FSD reviews and other activities TSA cites in its comments address airport perimeter and access control vulnerabilities or to what extent such reviews have been applied consistently on a nationwide basis, since TSA has not provided us with any documentary evidence regarding these or other reviews. Finally, in meeting with TSA, its officials acknowledged that because they have not conducted a joint vulnerability assessment for 87 percent of commercial airports, they do not know how vulnerable these airports are to an intentional breach in security or an attack. Thus, we consider the statement on our Highlights page to be accurate. 
TSA also stated that “as provided in our draft report” the foundation of TSA’s national strategy is its individual layers—or actions—of security, which, when combined, generate an exponential increase in deterrence and detection capability. However, we did not evaluate TSA’s layered approach to security or the extent to which this approach provides increased deterrence and detection capabilities. Regarding our first recommendation that TSA develop a comprehensive risk assessment for airport perimeter and access control security, DHS stated that TSA will develop such an assessment through its ongoing efforts to conduct a comprehensive risk assessment for the transportation sector. TSA intends to provide the results of the assessment to Congress by January 2010. According to DHS, the aviation domain portion of the sector risk assessment is to address, at the national level, nine airport perimeter and access control security scenarios. It also stated that the assessment is to integrate all three elements of risk—threat, vulnerability, and consequence—and will rely on existing assessment activities, including JVAs. In developing this assessment, it will be important that TSA evaluate whether its current approach to conducting JVAs, which it identifies as one element of its risk assessment efforts, appropriately assesses vulnerabilities across the commercial airport system, and whether additional steps are needed. Since TSA has repeatedly stated the need to develop baseline data on airport security vulnerabilities to enable it to conduct systematic analysis of vulnerabilities on a nationwide basis, TSA could also benefit from exploring the feasibility of leveraging existing assessment information from industry stakeholders to inform this assessment. DHS also agreed with our second recommendation that a well-developed and well-documented evaluation plan should be part of TSA’s efforts to evaluate and implement future airport security pilot programs. 
In addition, DHS concurred with our third recommendation that TSA develop milestones for meeting statutory requirements for establishing system requirements and performance standards for the use of biometric airport access control systems. DHS noted that while mandatory use of such systems is not required by statute, TSA is still considering whether it will mandate the use of biometric access control systems at airports, and in the meantime it will continue to encourage airport operators to voluntarily utilize biometrics in their access control systems. We agree that mandatory use of biometric access control systems is not required by statute, but establishing milestones would help guide TSA’s continued work with the airport industry to develop and refine existing biometric access control standards. In regard to our fourth recommendation that TSA develop milestones for establishing agency procedures for reviewing airport security requirements imposed through security directives, DHS concurred that milestones are necessary. Finally, in regard to our fifth recommendation that TSA develop a national strategy for airport security that incorporates key characteristics of effective security strategies, DHS concurred and stated that TSA will develop a national strategy by updating the TS-SSP. DHS stated that TSA intends to solicit input on the plan from its Sector Coordinating Council, which represents key private sector stakeholders from the transportation sector, before releasing the updated TS-SSP in the summer of 2010. However, given that the TS-SSP is to focus on detailing how the NIPP framework will apply to the entire transportation sector, it may not be the most appropriate vehicle for developing a national strategy that addresses the various management issues specific to airport security that we identified in our report. 
A more effective approach might be to issue the strategy as a stand-alone plan, in keeping with the format TSA has used for its air cargo, passenger checkpoint screening, and SPOT strategies. A stand-alone strategy might better facilitate key stakeholder involvement, focus attention on airport security needs, and allow TSA to more thoroughly address relevant challenges and goals. But irrespective of the format, it will be important that TSA fully address the key characteristics of an effective strategy, as identified in our report. The intent of a national strategy is to provide a unifying framework that guides and integrates stakeholder activities toward desired results, which may be best achieved when planned efforts are clear and sustainable, and transparent enough to ensure accountability. Thus, it is important that the strategy fully incorporate the following characteristics: (1) measurable goals, priorities, and performance measures; (2) program cost information, including the sources and types of resources needed; and (3) plans for coordinating activities among stakeholders, integrating airport security goals and activities with those of other aviation security priorities, and implementing security activities within the agency. TSA also provided us with technical comments, which we considered and incorporated in the report where appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Secretary of Transportation, the Assistant Secretary of the Transportation Security Administration, appropriate congressional committees, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any further questions about this report or wish to discuss these matters further, please contact me at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix IX. This report evaluates to what extent the Transportation Security Administration (TSA) has assessed the risk to airport security consistent with the National Infrastructure Protection Plan’s (NIPP) risk management framework; implemented protective programs to strengthen airport security, and evaluated its worker screening pilot program; and established a national strategy to guide airport security decision making. To evaluate the extent to which TSA has assessed risks for airport perimeter and access control security efforts, we relied on TSA to identify risk assessment activities for these areas, and we then examined documentation for these activities, such as TSA’s 2008 Civil Aviation Threat Assessment, and interviewed TSA officials responsible for conducting assessment efforts. We examined the extent to which TSA generally conducted activities intended to assess threats, vulnerabilities, and consequences to the nation’s approximately 450 airports. We also reviewed the extent to which TSA’s use of these three types of assessments met the NIPP criteria for completing a comprehensive risk assessment. However, while we assessed the extent to which the individual threat and vulnerability assessment activities that TSA identified addressed the area of airport perimeter and access controls, the scope of our work did not include individual evaluations of these activities to determine whether they were consistent with the NIPP criteria for conducting threat and vulnerability assessments. In addition, we reviewed and summarized critical infrastructure and aviation security requirements set out by Homeland Security Presidential Directives 7 and 16, the Aviation and Transportation Security Act (ATSA), and other statutes and related materials. 
We also examined the individual threat and vulnerability assessment activities and discussed them with senior TSA and program officials, to evaluate how TSA uses this information to set goals and inform its decision making. We compared this information with the NIPP, TSA’s Transportation Security Sector-Specific Plan, and our past guidance and reports on recommended risk management practices. In addition, we obtained and analyzed data from TSA regarding joint vulnerability assessments, which are conducted with the Federal Bureau of Investigation (FBI), to determine the extent to which TSA has used this information to assess risk to airport perimeter and access control security. We also obtained information on the processes used to schedule and track these activities to determine the reliability with which these data were collected and managed, and we determined that the data were sufficiently reliable for the purposes of this report. We interviewed TSA and FBI officials responsible for conducting joint vulnerability assessments to discuss the number conducted by TSA since 2004, the scope of these assessments, and how they are conducted. In addition, we interviewed selected TSA officials responsible for risk management and security programs related to airport perimeter and access control to clarify the extent to which TSA has assessed risk in these areas. We selected these officials based upon their relevant expertise with TSA’s risk management efforts and its airport perimeter and access control efforts. We also analyzed TSA data on security breaches by calculating the total number of security breaches from fiscal years 2004 through 2008. 
To determine that the data were sufficiently reliable to present contextual information regarding all breaches to secured areas (including airport perimeters) in this report, we obtained information on the processes used to collect, tabulate, and assess these data, and discussed data quality control procedures with appropriate officials and found that the data were sufficiently reliable for this purpose. Because the data include security breaches that occurred within any type of secured areas, including passenger-related breaches, they are not specific to perimeter and access control security. In addition, the data have not been adjusted to reflect potential issues that could also influence or skew the number of overall breaches, such as annual increases in the number of passengers or specific incidents occurring within individual airports that account for more breaches than others. Furthermore, because TSA does not require its inspectors to enter a description of the breach when documenting an incident, and general reports on breach data do not show much variation between incidents unless a report includes a description of the breach, we did not ask TSA for descriptive information on breaches that occurred. To evaluate the extent to which TSA has implemented protective programs to strengthen airport security consistent with the NIPP risk management framework, we asked TSA to identify agency-led activities and programs for strengthening airport security. For the purposes of this report, we categorized TSA’s responses into four main areas of effort: (1) worker screening pilot program, (2) worker security programs, (3) technology, and (4) general airport security.
To determine the extent to which TSA evaluated its worker screening pilot program, we analyzed TSA’s final report on its worker screening pilot program, including conclusions and limitations cited by the contractor—the Homeland Security Institute (HSI)—that TSA hired to assist with the pilot’s design, implementation, and evaluation. We also reviewed standards for internal control in the federal government and our previous work on pilot program development and evaluation to identify accepted practices for ensuring reliable results, including key features of a sound evaluation plan. Further, we analyzed TSA and HSI’s documentation of the worker screening pilot program methodology to determine whether TSA and HSI had documented their plans for conducting the program, whether each pilot was carried out in a consistent manner, and whether participating airports were provided with written requirements or guidance for conducting the pilots. To evaluate TSA’s efforts for its worker security programs, we assessed and summarized relevant program information, operations directives, and standard operating procedures for the Aviation Direct Access Screening Program (ADASP) and enhanced background checks. This assessment was also informed by recent work by the Department of Homeland Security’s (DHS) Office of the Inspector General (OIG) regarding worker screening. We reviewed the DHS OIG’s methodology and analysis to determine whether its findings were reliable for use in our report. We analyzed TSA’s documentation of its background checks to determine whether TSA sufficiently addressed relevant ATSA requirements and recommendations from our 2004 report on airport security. We also interviewed TSA officials responsible for worker background checks to determine the agency’s efforts to develop a plan to meet outstanding ATSA requirements.
With respect to perimeter and access control technology, we reviewed and summarized TSA documentation and evaluations of the Airport Access Control Pilot Program (AACPP), documentation related to the Airport Perimeter Security (APS) pilot program, and the dissemination of information regarding technology to airports. We interviewed officials with the DHS Directorate for Science and Technology, the National Safe Skies Alliance, and RTCA, Inc., regarding research, development, and testing efforts, and challenges and potential limitations of applicable technologies to airport perimeter and access control security. We selected these entities because of their role in the development of such technology. We also interviewed TSA Headquarters officials to obtain views on the nature and scope of technology-related efforts and other relevant considerations, such as how they addressed relevant ATSA requirements and recommendations from our 2004 report, or how they plan to do so. With regard to TSA’s efforts for general airport security, we examined TSA’s procedures for developing and issuing airport perimeter and access control requirements through security directives and other methods, and analyzed the extent to which TSA disseminated security requirements to airports through security directives. At our request, TSA identified 25 security directives and emergency amendments that imposed requirements related to airport perimeter and access control security, which we examined to identify specific areas of regulation. In addition, we assessed and summarized relevant program information and documentation, such as operations directives, for other programs identified by TSA, such as the Visible Intermodal Prevention and Response (VIPR) program, Screening of Passengers by Observation Techniques (SPOT) program, and the Law Enforcement Officer Reimbursement Program. 
To evaluate the extent to which TSA established a national strategy to guide airport security decision making, we considered guidance on effective characteristics for security strategies and planning that we previously reported, Government Performance and Results Act (GPRA) requirements, and generally accepted strategic planning practices for government agencies. In order to evaluate TSA’s approach to airport security, we reviewed TSA documents to identify major security goals and subordinate objectives for airport perimeter and access control security, and relevant priorities, goals, objectives, and performance measures. We also analyzed relevant program documentation, including budget, cost, and performance information, as well as relevant information TSA developed and maintains for the Office of Management and Budget’s Performance Assessment Rating Tool. We compared TSA’s approach with criteria identified in the NIPP, other DHS guidance, GPRA, and other leading practices in strategies and planning. We also interviewed relevant TSA program and budget officials, Federal Aviation Administration (FAA) officials, and selected aviation industry officials regarding the cost of airport perimeter and access control security for fiscal years 2004 through 2008. To determine the extent to which TSA collaborated with stakeholders on airport security activities, and to obtain their insights on airport security operations, costs, and regulation, we interviewed industry officials from the Airports Council International-North America—whose commercial airport members represent 95 percent of domestic airline passenger and air cargo traffic in North America—and from the American Association of Airport Executives—whose members represent 850 domestic airports. We selected these industry associations based on input from TSA and from industry stakeholders, who identified the two associations representing commercial airport operators.
We also attended aviation association conferences at which industry officials presented information on national aviation security policy and operations, and we conducted a group discussion with 17 officials representing various airport and aircraft operators and aviation associations to obtain their views regarding key issues affecting airport security. While the views expressed by these industry, airport, and aircraft operator officials cannot be generalized to all airport industry associations and operators, these interviews provided us with additional perspectives on airport security and an understanding of the extent to which TSA has worked and collaborated with airport stakeholders. We also conducted site visits at nine U.S. commercial airports—Orange County John Wayne Airport, Washington-Dulles International Airport, Miami International Airport, Orlando International Airport, John F. Kennedy International Airport, Westchester County Airport, Logan International Airport, Barnstable Municipal Airport, and Salisbury/Wicomico County Regional Airport. During these visits we observed airport security operations and discussed issues related to perimeter and access control security with airport officials and on-site TSA officials, including federal security directors (FSD). We selected these airports based on several factors, including airport category, size, and geographical dispersion; whether they faced problems with perimeter and access control security; and the types of technological initiatives tested or implemented. Because we selected a nonprobability sample of airports to visit, those results cannot be generalized to other U.S. commercial airports; however, the information gathered provides insight into TSA and airport programs and procedures. In addition, at Miami International Airport and John F. 
Kennedy International Airport we conducted separate interviews with airport officials to discuss their ongoing, or anticipated, efforts to implement additional worker screening methods at their respective airports. We also conducted telephone interviews with airport officials and FSDs from four airports that had implemented, or planned to implement, various forms of 100 percent screening of airport workers to discuss their efforts. These were Cincinnati/Northern Kentucky International Airport, Dallas/Fort Worth International Airport, Denver International Airport, and Phoenix Sky Harbor International Airport. While the views of the officials we spoke with regarding additional worker screening methods cannot be generalized to all airport security officials, they provided insight into how airport security programs were chosen and developed. We also conducted an additional site visit at Logan International Airport to observe TSA’s implementation of various worker screening methods as part of the agency’s worker screening pilot program. While the experiences of this pilot location cannot be generalized to all airports participating in the pilot, we chose this airport based on airport category and the variety of worker screening methods piloted at this location. We conducted this performance audit from May 2007 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. TSA has taken steps since 2004 to address some of the requirements related to airport perimeter and access control security prescribed by ATSA. 
The related ATSA requirements, and TSA’s actions as of May 2009 to address these requirements, are summarized in table 3. TSA officials told us that they use the results of compliance inspections and covert testing to augment their assessment of potential vulnerabilities in airport security. Compliance inspections examine a regulated entity’s (such as an airport operator’s or air carrier’s) adherence to federal regulations, which TSA officials say they use to determine if airports adequately address known threats and vulnerabilities. According to TSA, while regulatory compliance is just one dimension of airport security, compliance with federal requirements allows TSA to determine the general level of security within an airport. As a result, according to TSA, compliance with regulations suggests less vulnerability within an airport and, conversely, failure to meet critical compliance rates suggests the likelihood of a larger problem within an airport and helps the agency identify and assess vulnerabilities. TSA allows its inspectors to conduct compliance inspections based on observations of various activities, such as ADASP, VIPR, and local covert testing, and to conduct additional inspections based on vulnerabilities identified through assessments or the results of regular inspections. Covert tests are tests of security systems, personnel, equipment, or procedures that provide a snapshot of a given security measure’s effectiveness, and they are used to improve airport performance, safety, and security. TSA officials stated that covert testing assists the agency in identifying airport vulnerabilities because such tests are designed based on threat assessments and intelligence to approximate techniques that terrorists may use to exploit gaps in airport security.
TSA conducts four types of covert tests for airport access controls:
Access to security identification display areas (SIDA): TSA inspectors not wearing appropriate identification attempt to penetrate SIDA access points, such as boarding gates, employee doors, and other entrances.
Access to air operations areas (AOA): TSA inspectors not wearing appropriate identification attempt to penetrate AOA via access points from public areas, such as perimeter gates and cargo areas.
Access to aircraft: TSA inspectors not wearing appropriate identification (or not carrying valid boarding passes) attempt to penetrate passenger access points that lead to aircraft from sterile areas, such as boarding gates, employee doors, and jetways.
SIDA challenges: Once inside a SIDA, TSA inspectors attempt to walk around these areas, such as the tarmac and baggage loading areas, without displaying appropriate identification.
TSA also requires FSDs to conduct similar, locally controlled tests of access controls to ensure compliance and identify possible vulnerabilities with airport security. These tests are selected by the FSDs, are based on locally identified risks, and can include challenging procedures in the secure area, piggybacking (following authorized airport workers into secured areas), and attempting to access an aircraft from the sterile area. According to TSA officials, the agency uses the results of its covert tests to inform decision making for airport security, but officials could not provide examples of how this information has specifically informed past decisions. Various TSA offices and programs contribute to the overall operations and costs of airport perimeter and access control security. According to TSA officials, the agency does not develop a cost estimate specific to perimeter and access control security because such efforts are often part of broader security activities or related programs—for example, VIPR and SPOT are also used for passenger screening.
As a result, it is difficult to identify what percentage of program costs has been expended on airport perimeter and access control security activities. At our request, TSA officials identified the estimated spending related to perimeter and access control security programs from fiscal years 2004 through 2008 (see table 4). Airports can receive funding for purposes related to perimeter and access control security via grants awarded through FAA’s Airport Improvement Program. TSA officials also told us that the agency generally does not collect or track cost information for airport security efforts funded through the Airport Improvement Program. This program is one of the principal sources of funding for airport capital improvements in the United States, providing approximately $3 billion in grants annually to enhance airport capacity, safety, and environmental protection, as well as perimeter security. According to FAA officials, many factors are considered when awarding grants to airports for perimeter security enhancements, although security projects required by statute or regulation receive the highest priority. Projects that receive funding have included computerized access controls for ramps, infrastructure improvements to house central computers, surveillance systems, and perimeter fencing. According to FAA, more than $365 million in airport perimeter and access control–related grants were provided through the Airport Improvement Program for fiscal years 2004 through 2008. TSA officials also told us that the agency does not track funds spent by individual airport operators to enhance or maintain perimeter and access control security. In 2009 the Airports Council International-North America—an aviation industry association—surveyed commercial airports regarding the funding needed for airport capital projects from 2009 to 2013. 
As part of this effort, the association surveyed airport operators on the amount of funds they planned to expend on airport security as a percentage of their overall budgets. The association reported that planned airport operator spending on airport security, as a percentage of total spending, ranged from 3.8 percent (about $2 billion) for large hub airports to 3.9 percent (about $230 million) for small hub airports. The association surveys did not include information on the types of security projects undertaken by airports. However, during our site visits we obtained data from selected airport operators on the costs of perimeter and access control security projects they had recently concluded or estimated costs for projects in progress. Examples of airport spending on perimeter and access control security include $30 million to install a full biometric access system; $6.5 million to install an over 8,000-foot-long blast/crash resistant wall along the airport perimeter; $8 million to install over 680 bollards in front of passenger terminals and vehicle access points; and $3 million to develop and install an infrared intrusion detection system. From May through July 2008 TSA implemented worker screening pilots at seven airports in accordance with the Explanatory Statement accompanying the DHS Appropriations Act, 2008 (see table 5 for a summary of text directing the worker screening pilot program). At three airports, TSA conducted 100 percent worker screening—inspections of all airport workers and vehicles entering secure areas; at four others TSA randomly screened 20 percent of workers and tested other enhanced security measures. Screening of airport workers was to be done at either the airport perimeter or the passenger screening checkpoints. 
TSA was directed to collect data on the methods it utilized, and to evaluate the benefits, costs, and impacts of 100 percent worker screening to determine the most effective and cost-efficient method of addressing and deterring potential security risks posed by airport workers. The enhanced measures that TSA tested at the four airports not implementing 100 percent screening are summarized below:
Employee training: TSA provided a security awareness training video, which all SIDA badgeholders were required to complete. According to TSA, the training was intended to reduce security breaches by increasing workers’ understanding of their security responsibilities and awareness of threats and abnormal behaviors.
Behavioral recognition training: TSA provided funding to participating airports to teach select law enforcement officers and airport personnel to identify potentially high-risk individuals based on their behavior. A condensed version of the SPOT course, this training was intended to equip personnel with skills to enhance existing duties, according to TSA officials.
Targeted physical inspections: TSA conducted random inspections of vehicles and individuals entering the secured areas of airports to increase the coverage of ADASP. Inspections consisted of bag, vehicle, and identification checks; scanning bottled liquids; and random security sweeps of specific airport areas.
Deployment of technology: TSA employed additional technology at selected airports to assist with the screening of employees, such as walk-through and handheld metal detectors, bottled liquid scanners, and explosive detection systems. TSA also tested biometric access control systems at selected airports.
According to TSA, VIPR operations augment existing airport security activities, such as ADASP, and provide a visual deterrent to terrorist or other criminal activity.
VIPR was first implemented in 2005, and according to TSA officials, VIPR operations are deployed through a risk-based approach and in response to specific intelligence information or known threats. In a VIPR operation, TSA officials, including transportation security officers and inspectors, behavioral detection officers, bomb appraisal officers, and federal air marshals work with local law enforcement and airport officials to temporarily enhance aviation security. According to TSA officials, VIPR operations for perimeter and access control security can include random inspections of individuals, property, and vehicles, as well as patrols of secured areas and random checks to ensure that employees have the proper credentials. TSA officials told us that although they do not know how many VIPR deployments have specifically addressed airport perimeter and access control security, from March 2008 through April 2009 TSA performed 1,042 commercial and general aviation airport or cargo VIPR operations. According to TSA officials, the majority of these operations involved the observation and patrolling of secured airport areas and airport perimeters. As of May 2009 TSA officials also said that the agency is in the process of enhancing its VIPR database to more accurately capture and track specific operational objectives, such as enhancing the security of airport perimeters and access controls, and developing an estimated time frame for completing this effort. Since 2004 TSA has used SPOT—a passenger screening program in which behavior detection officers observe and analyze passenger behavior to identify potentially high-risk individuals—to determine if an individual or individuals may pose a risk to aircraft or airports. 
Although SPOT was originally designed for passenger screening, TSA officials stated that FSDs can also use behavior detection officers to assess worker behavior as they pass through the passenger checkpoint, as part of random worker screening operations or as part of VIPR teams deployed at an airport. However, TSA officials could not determine how often behavior detection officers have participated in random worker screening or VIPR operations, or identify which airports have used behavior detection officers for random worker screening. According to TSA officials, the agency is in the process of redesigning its data collection efforts and anticipates that it will be able to more accurately track this information in the future, though officials did not provide a time frame for doing so. TSA officials also told us that when participating in random worker screening, behavior detection officers observe workers for suspicious behavior as they are being screened and may engage workers in casual conversation to assess potential threats. According to TSA officials, the agency has provided behavior detection training to law enforcement personnel as part of its worker screening pilot program, as well as to selected airport security and operations personnel at more than 20 airports. We currently have ongoing work assessing SPOT, and will issue a report on this program at a later date. TSA undertakes efforts to facilitate the deployment of law enforcement personnel authorized to carry firearms at airport security checkpoints, and in April 2002, the Law Enforcement Officer Reimbursement Program was established to provide partial reimbursement for enhanced, on-site law enforcement presence in support of the passenger screening checkpoints. Since 2004, the program has expanded to include law enforcement support along the perimeter and to assist with worker screening. 
According to TSA, the program is implemented through a cooperative agreement process that emphasizes the ability of both parties to identify and agree as to how law enforcement officers will support the specific security requirements at an airport. For example, the FSD, in consultation with the airport operator and local law enforcement, may determine that rather than implementing fixed-post stationing of law enforcement officers, it may be more appropriate to implement flexible stationing of law enforcement officers. TSA may also provide training or briefings on an as-needed basis on relevant security topics, including improvised explosive device recognition, federal criminal statutes pertinent to aviation security, and procedures and processes for armed law enforcement officers. Awards made under the reimbursement program are subject to the availability of appropriated funds, among other things, and are to supplement, not supplant, state and local funding. According to TSA officials, however, no applicant has been denied funds based on lack of appropriated funds. Program evaluation methods exist whereby TSA could attempt to assess whether its activities are meeting intended objectives. These methods center on reducing the risk of both external and internal threats to the security of airport perimeters and access controls, and seek to use information and resources available to help capture pertinent information. First, recognizing that there are challenges associated with measuring the effectiveness of deterrence-related activities, the NIPP’s Risk Management Framework provides mechanisms for qualitative feedback that, although not considered a metric, could be applied to augment and improve the effectiveness and efficiency of protective programs and activities.
For example, working with stakeholders—such as airport operators and other security partners—to identify and share lessons learned and best practices across airports could assist TSA in better tailoring its efforts and resources and continuously improving security. Identifying a range of qualitative program information—such as information gathered through vulnerability assessment activities or compliance inspections—could also allow TSA to determine whether activities are effective. As discussed in appendix III, compliance inspections and covert tests could be used to identify noncompliance with regulations or security breaches within designated secured areas. For example, TSA could use covert tests to determine if transportation security officers are following TSA procedures when screening airport workers or whether certain worker screening procedures detect prohibited items. However, in order to improve the usefulness of this technique, we previously recommended to TSA that the agency develop a systematic process for gathering and analyzing specific causes of all covert testing failures, record information on processes that may not be working properly during covert tests, and identify effective practices used at airports that perform well on covert tests. Second, as TSA has already begun to do with some activities, it could use data it already collects to identify trends and establish baseline data for a future comparison of effectiveness. For example, a cross-sectional analysis of the number of workers caught possessing prohibited items at specific worker screening locations over time, while controlling for variables such as increased law enforcement presence or airport size, could provide insights into what type of security activities help to reduce the possession of prohibited items. 
Similarly, an examination of airport workers apprehended, fired, or referred to law enforcement while on the job could provide insights into the quality of worker background checks and security threat assessments. Essentially, these types of analyses provide a useful context for drawing conclusions about whether certain security practices are reasonable and appropriate given certain conditions and, gradually, with the accumulation of relevant data, should allow TSA to start identifying cause-and-effect relationships. Third, according to the Office of Management and Budget (OMB), the use of proxy measures may also allow TSA to determine how well its activities are functioning. Proxy measures are indirect measures or indicators that approximate or represent the direct measure. TSA could use proxy measures to address deterrence, other security goals as identified above, or a combination of both. According to OMB, proxy measures are to be correlated to an improved security outcome, and the program should be able to demonstrate—for example, through the use of modeling—how the proxies tie to the eventual outcome. The Department of Transportation has also highlighted the need for proxy measures when assessing maritime security efforts pertaining to deterrence. For example, according to the Department of Transportation, while a direct measure of access to seaports might be the number of unauthorized intruders detected, proxy measures for seaport access may include related information on gates and guards—combined with crime statistics relating to unauthorized entry in the area of the port—to support a broader view of port security. In terms of aviation security, because failure to prevent a worker from placing a bomb on a plane could be catastrophic, proxy measures may include information on access controls, worker background checks, and confiscated items. Proxy measures could also include information on aircraft operators’ efforts to secure the aircraft.
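The kind of baseline comparison described above, detections of prohibited items compared over time while controlling for factors such as airport size, can be pictured with a small sketch. This is illustrative only: the data, field names, and stratification scheme below are hypothetical and are not TSA's actual data or methodology; the sketch simply shows one simple way to control for airport size by comparing detection rates only within airport categories.

```python
# Illustrative sketch only: hypothetical screening records, not TSA data.
# "Controlling" for airport size is done here by stratifying detection
# rates by airport category, so airports are compared against peers.
from collections import defaultdict

# Hypothetical records: (airport, category, items_found, workers_screened)
records = [
    ("A", "large_hub", 12, 4000),
    ("B", "large_hub", 30, 5000),
    ("C", "small_hub", 4, 800),
    ("D", "small_hub", 1, 900),
]

def detection_rates_by_category(rows):
    """Aggregate prohibited-item detections per 1,000 screened workers
    within each airport category, giving a size-adjusted baseline that
    later periods could be compared against."""
    totals = defaultdict(lambda: [0, 0])  # category -> [items, screened]
    for _airport, category, items, screened in rows:
        totals[category][0] += items
        totals[category][1] += screened
    return {c: 1000 * items / screened
            for c, (items, screened) in totals.items()}

rates = detection_rates_by_category(records)
for category, rate in sorted(rates.items()):
    print(f"{category}: {rate:.2f} prohibited items per 1,000 workers screened")
```

Accumulating such stratified rates over successive periods would give the baseline against which a trend, or the effect of an added security measure, could later be judged.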
In using a variety of proxy measures, failure in any one of the identified measures could provide an indication of the overall risk to security. Lastly, the use of likelihood, or “what-if,” scenarios, which are used to describe a series of steps leading to an outcome, could allow TSA to assess whether potential activities and efforts effectively work together to hypothetically achieve a positive outcome. For example, the development of such scenarios could help TSA to consider whether an activity’s procedures could be modified in response to identified or projected changes in terrorist behaviors, or if an activity’s ability to reduce or combat a threat is greater if used in combination with other activities. In addition to the contact named above, Steve Morris, Assistant Director, and Barbara Guffy, Analyst-in-Charge, managed this assignment. Scott Behen, Valerie Colaiaco, Dorian Dunbar, Christopher Keisling, Matthew Lee, Sara Margraf, Spencer Tacktill, Fatema Wachob, and Sally Williamson made significant contributions to the work. Chuck Bausell, Jr. provided expertise on risk management and cost-benefit analysis. Virginia Chanley and Michele Fejfar assisted with design, methodology, and data analysis. Thomas Lombardi provided legal support; Elizabeth Curda and Anne Inserra provided expertise on performance measurement; and Pille Anvelt developed the report’s graphics.

Incidents of airport workers using access privileges to smuggle weapons through secured airport areas and onto planes have heightened concerns regarding commercial airport security. The Transportation Security Administration (TSA), along with airports, is responsible for security at TSA-regulated airports. To guide risk assessment and protection of critical infrastructure, including airports, the Department of Homeland Security (DHS) developed the National Infrastructure Protection Plan (NIPP).
GAO was asked to examine the extent to which, for airport perimeters and access controls, TSA (1) assessed risk consistent with the NIPP; (2) implemented protective programs, and evaluated its worker screening pilots; and (3) established a strategy to guide decision making. GAO examined TSA documents related to risk assessment activities, airport security programs, and worker screening pilots; visited nine airports of varying size; and interviewed TSA, airport, and association officials. Although TSA has implemented activities to assess risks to airport perimeters and access controls, such as a commercial aviation threat assessment, it has not conducted vulnerability assessments for 87 percent of the nation's approximately 450 commercial airports or any consequence assessments. As a result, TSA has not completed a comprehensive risk assessment combining threat, vulnerability, and consequence assessments as required by the NIPP. While TSA officials said they intend to conduct a consequence assessment and additional vulnerability assessments, TSA could not provide further details, such as milestones for their completion. Conducting a comprehensive risk assessment and establishing milestones for its completion would provide additional assurance that intended actions will be implemented, provide critical information to enhance TSA's understanding of risks to airports, and help ensure resources are allocated to the highest security priorities. Since 2004, TSA has taken steps to strengthen airport security and implement new programs; however, while TSA conducted a pilot program to test worker screening methods, clear conclusions could not be drawn because of significant design limitations and TSA did not document key aspects of the pilot. TSA has taken steps to enhance airport security by, among other things, expanding its requirements for conducting worker background checks and implementing a worker screening program. 
In fiscal year 2008 TSA pilot tested various methods to screen airport workers to compare the benefits, costs, and impacts of 100 percent worker screening and random worker screening. TSA designed and implemented the pilot in coordination with the Homeland Security Institute (HSI), a federally funded research and development center. However, because of significant limitations in the design and evaluation of the pilot, such as the limited number of participating airports--7 out of about 450--it is unclear which method is more cost-effective. TSA and HSI also did not document key aspects of the pilot's design, methodology, and evaluation, such as a data analysis plan, limiting the usefulness of these efforts. A well-developed and well-documented evaluation plan can help ensure that pilots generate needed performance information to make effective decisions. While TSA has completed these pilots, developing an evaluation plan for future pilots could help ensure that they are designed and implemented to provide management and Congress with necessary information for decision making. TSA's efforts to enhance the security of the nation's airports have not been guided by a unifying national strategy that identifies key elements, such as goals, priorities, performance measures, and required resources. For example, while TSA's various airport security efforts are implemented by federal and local airport officials, TSA officials said that they have not identified or estimated costs to airport operators for implementing security requirements. GAO has found that national strategies that identify these key elements strengthen decision making and accountability; in addition, developing a strategy with these elements could help ensure that TSA prioritizes its activities and uses resources efficiently to achieve intended outcomes. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The credibility of USDA’s efforts to correct long-standing problems in resolving customer and employee discrimination complaints has been undermined by faulty reporting of complaint data, including disparities we found when comparing various ASCR sources of data. When ASCR was created in 2003, there was an existing backlog of complaints that had not been adjudicated. In response, the Assistant Secretary for Civil Rights at that time called for a concerted 12-month effort to reduce this backlog and to put lasting improvements in place to prevent future complaint backlogs. In July 2007, ASCR reported that it had reduced its backlog of 690 complaints and held the complaint inventory to manageable levels through fiscal year 2005. However, the data ASCR reported lack credibility because they were inconsistent with other complaint data it reported a month earlier to a congressional subcommittee. The backlog later surged to 885 complaints, according to ASCR data. Furthermore, the Assistant Secretary’s letter transmitting these data stated that while they were the best available, they were incomplete and unreliable. In addition, GAO and USDA’s OIG have identified other problems with ASCR’s data, including the need for better management controls over the entry and validation of these data. In addition, some steps that ASCR took to speed up its investigations and decisions on complaints in 2004 may have adversely affected the quality of its work. ASCR’s plan called for USDA’s investigators and adjudicators, who prepare agency decisions, to nearly double their normal pace of casework for about 12 months. ASCR’s former Director, Office of Adjudication and Compliance, stated that this increased pace led to many “summary” decisions on employees’ complaints that did not resolve questions of fact, with the result that many decisions were appealed to the Equal Employment Opportunity Commission. 
This official also said these summary decisions “could call into question the integrity of the process because important issues were being overlooked.” In addition, inadequate working relationships and communications within ASCR, as well as fear of retaliation for reporting management-related problems, complicated ASCR’s efforts to produce quality work products. In August 2008, ASCR officials stated they would develop standard operating procedures for the Office of Adjudication and Compliance and had provided USDA staff training on communication and conflict management, among other things. While these are positive steps, they do not directly respond to whether USDA is adequately investigating complaints, developing thorough complaint decisions, and addressing the problems that gave rise to discrimination complaints within ASCR. The Food, Conservation, and Energy Act of 2008 (2008 Farm Bill), enacted in June 2008, states that it is the sense of Congress that all pending claims and class actions brought against USDA by socially disadvantaged farmers and ranchers should be resolved in an expeditious and just manner. In addition, the 2008 Farm Bill requires USDA to report annually on, among other things, the number of customer and employee discrimination complaints filed against each USDA agency, and the length of time the agency took to process each complaint. In October 2008, we recommended that the Secretary of Agriculture take the following actions related to resolving discrimination complaints: Prepare and implement an improvement plan for resolving discrimination complaints that sets time frame goals and provides management controls for resolving complaints from beginning to end. Develop and implement a plan to ensure the accuracy, completeness and reliability of ASCR’s databases on customer and employee complaints, and that provides for independent validation of ASCR’s data quality. 
Obtain an expert, independent, and objective legal examination of the basis, quality, and adequacy of a sample of USDA’s prior investigations and decisions on civil rights complaints, along with suggestions for improvement. USDA agreed with the first two recommendations, but initially disagreed with the third, asserting that its internal system of legal sufficiency addresses our concerns, works well, and is timely and effective. Given the substantial evidence of civil rights case delays and questions about the integrity of USDA’s civil rights casework, we believe this recommendation remains valid and necessary to restore confidence in USDA’s civil rights decisions. In April 2009, ASCR officials said that USDA now agrees with all three of the recommendations and that the department is taking steps to implement them. These steps include hiring a consultant to assist ASCR with setting timeframe goals and establishing proper management controls; a contractor to help move data from ASCR’s three complaint databases into one; and a firm to provide ASCR with independent legal advice on developing standards on what constitutes a program complaint and actions needed to adjudicate those complaints. As required by the 2002 farm bill, ASCR has published three annual reports on the participation rate of socially disadvantaged farmers and ranchers in USDA programs. The reports are to provide statistical data on program participants by race and ethnicity, among other things. However, much of these data are unreliable because USDA lacks a uniform method of reporting and tabulating race and ethnicity data among its component agencies. According to USDA, to collect standardized demographic data directly from participants in many of its programs, it must first obtain OMB’s approval. In the meantime, most of USDA’s demographic data are gathered by visual observation of program applicants, a method that is inherently unreliable and subjective, especially for determining ethnicity. 
To address this problem, ASCR published a notice in the Federal Register in 2004 seeking public comment on its plan to collect standardized data on race, ethnicity, gender, national origin, and age for all its programs. However, while it received some comments, ASCR has not moved forward to finalize this rulemaking and obtain OMB’s approval to collect these data. The 2008 Farm Bill contains several provisions related to reporting on minority farmers’ participation in USDA programs. First, it requires USDA to annually compile program application and participation rate data for each program serving those farmers. These reports are to include the raw numbers and participation rates for the entire United States and for each state and county. Second, it requires USDA to ensure, to the maximum extent practicable, that the Census of Agriculture and studies by USDA’s Economic Research Service accurately document the number, location, and economic contributions of minority farmers in agricultural production. In October 2008, to address underlying data reliability issues, as discussed, and potential steps USDA could take to facilitate data analysis by users, we recommended that the Secretary of Agriculture work expeditiously to obtain OMB’s approval to collect the demographic data necessary for reliable reporting on race and ethnicity by USDA program. USDA agreed with the recommendation. In April 2009, ASCR officials indicated that a draft Federal Register notice requesting OMB’s approval to collect these data for Farm Service Agency, Natural Resources Conservation Service, and Rural Development programs is being reviewed within USDA. These officials said they hoped this notice, which they considered an initial step toward implementing our recommendation, would be published and implemented in time for USDA’s field offices to begin collecting these data by October 1, 2009. 
According to these officials, USDA also plans to seek, at a later time, authority to collect such data on participants in all USDA programs. In light of USDA’s history of civil rights problems, better strategic planning is vital. Results-oriented strategic planning provides a road map that clearly describes what an organization is attempting to achieve and, over time, it can serve as a focal point for communication with Congress and the public about what has been accomplished. Results-oriented organizations follow three key steps in their strategic planning: (1) they define a clear mission and desired outcomes, (2) they measure performance to gauge progress, and (3) they use performance information for identifying performance gaps and making program improvements. ASCR has started to develop a results-oriented approach as illustrated in its first strategic plan, Assistant Secretary for Civil Rights: Strategic Plan, Fiscal Years 2005-2010, and its ASCR Priorities for Fiscal Years 2007 and 2008. However, ASCR’s plans do not include fundamental elements required for effective strategic planning. In particular, we found that the interests of ASCR’s stakeholders—including representatives of community-based organizations and minority interest groups—are not explicitly reflected in its strategic plan. For example, we found that ASCR’s stakeholders are interested in improvements in (1) USDA’s methods of delivering farm programs to facilitate access by underserved producers; (2) the county committee system, so that stakeholders are better represented in local decisions; and (3) the diversity of USDA employees who work with minority producers. A more complete list of these interests is included in the appendix. In addition, ASCR’s strategic plan does not link to the plans of other USDA agencies or the department and does not discuss the potential for linkages to be developed. 
ASCR could also better measure performance to gauge progress, and it has not yet started to use performance information for identifying USDA performance gaps. For example, ASCR measures USDA efforts to ensure USDA customers have equal and timely access to programs by reporting on the numbers of participants at USDA workshops rather than measuring the results of its outreach efforts on access to benefits and services. Moreover, the strategic plan does not make linkages between levels of funding and ASCR’s anticipated results; without such a discussion, it is not possible to determine whether ASCR has the resources needed to achieve its strategic goal of, for example, strengthening partnerships with historically black land-grant universities through scholarships provided by USDA. To help ensure access to and equitable participation in USDA’s programs and services, the 2008 Farm Bill provided for establishing the Office of Advocacy and Outreach and charged it with, among other things, establishing and monitoring USDA’s goals and objectives to increase participation in USDA programs by small, beginning, and socially disadvantaged farmers and ranchers. As of April 2009, ASCR officials indicated that the Secretary of Agriculture plans to establish this office, but has not yet done so. In October 2008, we recommended that USDA develop a results-oriented department-level strategic plan for civil rights that unifies USDA’s departmental approach with that of ASCR and the newly created Office of Advocacy and Outreach and that is transparent about USDA’s efforts to address stakeholder concerns. USDA agreed with this recommendation. In April 2009, ASCR officials said they plan to implement this recommendation during the next department-wide strategic planning process, which occurs every 5 years. Noting that the current plan runs through 2010, these officials speculated that work on the new plan will start in the next few months. 
Our past work in addressing the problems of high-risk, underperforming federal agencies, as well as our reporting on results-oriented management, suggests three options that could benefit USDA’s civil rights performance. These options were selected based on our judgment that they (1) can help address recognized and long-standing problems in USDA’s performance, (2) have been used previously by Congress to improve aspects of agency performance, (3) have contributed to improved agency performance, and (4) will result in greater transparency over USDA’s civil rights performance. These options include (1) making USDA’s Assistant Secretary for Civil Rights subject to a statutory performance agreement, (2) establishing an agriculture civil rights oversight board, and (3) creating an ombudsman for agriculture civil rights matters. Our prior assessment of performance agreements used at several agencies has shown that these agreements have potential benefits that could help improve the performance of ASCR. Potential benefits that performance agreements could provide USDA include (1) helping to define accountability for specific goals and align daily operations with results- oriented programmatic goals, (2) fostering collaboration across organizational boundaries, (3) enhancing use of performance information to make program improvements, (4) providing a results-oriented basis for individual accountability, and (5) helping to maintain continuity of program goals during leadership transitions. Congress has required performance agreements in other federal offices and the results have been positive. For example, in 1998, Congress established the Department of Education’s Office of Federal Student Aid as the government’s first performance-based organization. This office had experienced long-standing financial and management weaknesses and we had listed the Student Aid program as high-risk since 1990. 
Congress required the office’s Chief Operating Officer to have a performance agreement with the Secretary of Education that was transmitted to congressional committees and made publicly available. In addition, the office was required to report to Congress annually on its performance, including the extent to which it met its performance goals. In 2005, because of the sustained improvements made by the office in its financial management and internal controls, we removed this program from our high-risk list. More recently, Congress has required statutory performance agreements for other federal executives, including for the Commissioners of the U.S. Patent and Trademark Office and the Under Secretary for Management of the Department of Homeland Security. A statutory performance agreement could benefit ASCR. The responsibilities assigned to USDA’s Assistant Secretary for Civil Rights were stated in general terms in both the 2002 Farm Bill and the Secretary’s memorandum establishing this position within USDA. The Secretary’s memorandum stated that the Assistant Secretary reports directly to the Secretary and is responsible for (1) ensuring USDA’s compliance with all civil rights laws and related laws, (2) coordinating administration of civil rights laws within USDA, and (3) ensuring that civil rights components are incorporated in USDA strategic planning initiatives. This set of responsibilities is broad in scope, and it does not identify specific performance expectations for the Assistant Secretary. A statutory performance agreement could assist in achieving specific expectations by providing additional incentives and mandatory public reporting. In October 2008, we suggested that Congress consider the option of making USDA’s Assistant Secretary for Civil Rights subject to a statutory performance agreement. USDA initially disagreed with this suggestion, in part stating that the Assistant Secretary’s responsibilities are spelled out in the 2002 and 2008 farm bills. 
In response, we noted, in part, that a statutory performance agreement would go beyond the existing legislation by requiring measurable organizational and individual goals in key performance areas. In April 2009, ASCR officials indicated that the department no longer disagrees with this suggestion. However, these officials expressed the hope that the actions they are taking or planning to improve the management of civil rights at USDA, such as obtaining an independent external analysis of program delivery, will preclude the need for this mechanism. Congress could also authorize a USDA civil rights oversight board to independently monitor, evaluate, approve, and report on USDA’s administration of civil rights activities, as it has for other federal activities. Oversight boards have often been used by the federal government—such as for oversight of public accounting, intelligence matters, civil liberties, and drug safety—to provide assurance that important activities are well done, to identify weaknesses that may need to be addressed, and to provide for transparency. For example, Congress established the Internal Revenue Service (IRS) Oversight Board in 1998 to oversee IRS’s administration of internal revenue laws and ensure that its organization and operation allow it to carry out its mission. At that time, IRS was considered to be an agency that was not effectively serving the public or meeting taxpayer needs. The board operates much like a corporate board of directors, tailored to fit the public sector. The board provides independent oversight of IRS administration, management, conduct, and the direction and supervision of the application of the internal revenue code. We have previously noted the work of the Internal Revenue Service Oversight Board—including, for example, the board’s independent analysis of IRS business systems modernization. Currently, there is no comparable independent oversight of USDA civil rights activities. 
In October 2008, we suggested that Congress consider the option of establishing a USDA civil rights oversight board to independently monitor, evaluate, approve, and report on USDA’s administration of civil rights activities. Such a board could provide additional assurance that ASCR management functions effectively and efficiently. USDA initially disagreed with this suggestion, stating that it would be unnecessarily bureaucratic and delay progress. In response, we noted that a well-operated oversight board could be the source of timely and wise counsel to help raise USDA’s civil rights performance. In April 2009, ASCR officials said that the department no longer disagrees with this suggestion. However, these officials expressed the hope that the actions they are taking or planning to address our recommendations to improve the management of civil rights at USDA will preclude the need for this mechanism. An ombudsman for USDA civil rights matters could be created to address the concerns of USDA customers and employees. Many other agencies have created ombudsman offices for addressing employees’ concerns, as authorized by the Administrative Dispute Resolution Act. However, an ombudsman is not merely an alternative means of resolving employees’ disputes; rather, the ombudsman is a neutral party who uses a variety of procedures, including alternative dispute resolution techniques, to deal with complaints, concerns, and questions. Ombudsmen who handle concerns and inquiries from the public—external ombudsmen—help agencies be more responsive to the public through impartial and independent investigation of citizens’ complaints, including those of people who believe their concerns have not been dealt with fairly and fully through normal channels. For example, we reported that ombudsmen at the Environmental Protection Agency serve as points of contact for members of the public who have concerns about certain hazardous waste cleanup activities. 
We also identified the Transportation Security Administration ombudsman as one who serves external customers and is responsible for recommending and influencing systemic change where necessary to improve administration operations and customer service. Within the federal workplace, ombudsmen provide an informal alternative to existing and more formal processes to deal with employees’ workplace conflicts and other organizational climate issues. USDA faces concerns of fairness and equity from both customers and employees—a range of issues that an ombudsman could potentially assist in addressing. A USDA ombudsman who is independent, impartial, fully capable of conducting meaningful investigations and who can maintain confidentiality could assist in resolving these civil rights concerns. As of April 2007, 12 federal departments and 9 independent agencies reported having 43 ombudsmen. In October 2008, we recommended that USDA explore the potential for an ombudsman office to contribute to addressing the civil rights concerns of USDA customers and employees, including seeking legislative authority, as appropriate, to establish such an office and to ensure its effectiveness, and advise USDA’s congressional oversight committees of the results. USDA agreed with this recommendation. In April 2009, ASCR officials indicated that the Assistant Secretary for Civil Rights has convened a team to study the ombudsman concept and to make recommendations by September 30, 2009, to the Secretary of Agriculture for establishing an ombudsman office. USDA has been addressing allegations of discrimination for decades and receiving recommendations for improving its civil rights functions without achieving fundamental improvements. One lawsuit has cost taxpayers about a billion dollars in payouts to date, and several other groups are seeking redress for similar alleged discrimination. 
While ASCR’s established policy is to fairly and efficiently respond to complaints of discrimination, its efforts to establish the management system necessary to implement the policy have fallen short, and significant deficiencies remain. Unless USDA addresses several fundamental concerns about resolving discrimination complaints—including the lack of credible data on the numbers, status, and management of complaints; the lack of specified time frames and management controls for resolving complaints; questions about the quality of complaint investigations; and concerns about the integrity of final decision preparation—the credibility of USDA efforts to resolve discrimination complaints will be in doubt. In addition, unless USDA obtains accurate data on minority participation in USDA programs, its reports on improving minority participation in USDA programs will not be reliable or useful. Furthermore, without better strategic planning and meaningful performance measures, it appears unlikely that USDA management will be fully effective in achieving its civil rights mission. Given the new Administration’s commitment to giving priority attention to USDA’s civil rights problems, various options may provide a road map to correcting long-standing management deficiencies that have given rise to these problems. Specifically, raising the public profile for transparency and accountability through means such as a statutory performance agreement between the Secretary of Agriculture and the Assistant Secretary for Civil Rights, a civil rights oversight board, and an ombudsman for addressing customers’ and employees’ civil rights concerns would appear to be helpful steps because they have proven to be effective in raising the performance of other federal agencies. These options could lay a foundation for clarity about the expectations USDA must meet to restore confidence in its civil rights performance. Mr. Chairman, this concludes my prepared statement. 
I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Lisa Shames, Director, Natural Resources and Environment, (202) 512-2649 or [email protected]. Key contributors to this statement were James R. Jones, Jr., Assistant Director; Kevin S. Bray; Nancy Crothers; Nico Sloss; and Alex M. Winograd. USDA outreach programs for underserved producers could be much better. Systematic data on minority participation in USDA programs are not available. The 10708 Report and Minority Farm Register have been ineffective. Partnerships with community-based organizations could be better used. Methods of USDA program delivery need to better facilitate the participation of underserved producers and address their needs. USDA could do more to provide assistance in accessing markets and programs. USDA could better address cultural and language differences for providing services. Some USDA program rules and features hinder participation by underserved producers. Some USDA employees have little incentive to work with small and minority producers. County offices working with underserved producers continue to lack diversity, and some have poor customer service or display discriminatory behaviors toward underserved producers. USDA lacks a program that addresses farmworker needs. There continue to be reports of cases where USDA has not processed loans for underserved producers. Some Hmong poultry farmers with guaranteed loans facilitated by USDA are experiencing foreclosures. The county committee system does not represent minority producers well. Minority advisers are ineffective because they have no voting power. 
USDA has not done enough to make underserved producers fully aware of county committee elections, and underserved producers have difficulties winning elections. There is a lack of USDA investment in research and extension services that would determine the extent of minority needs. The Census of Agriculture needs to better count minority producers. USDA may continue to be foreclosing on farms belonging to producers who are awaiting decisions on discrimination complaints. ASCR needs authority to exercise leadership for making changes at USDA. USDA and ASCR need additional resources to carry out civil rights functions. Greater diversity among USDA employees would facilitate USDA’s work with minority producers. Producers must still access services through some USDA employees who discriminated against them. The Office of Adjudication and Compliance needs better management structure and function. Backlogs of discrimination complaints need to be addressed. Alternative dispute resolution techniques to resolve informal employee complaints should be used consistently and documented. Civil rights compliance reviews of USDA agencies are behind schedule and should be conducted. USDA’s Office of General Counsel continues to be involved in complaint cases. U.S. Department of Agriculture: Recommendations and Options to Address Management Deficiencies in the Office of the Assistant Secretary for Civil Rights. GAO-09-62. Washington, D.C.: October 22, 2008. U.S. Department of Agriculture: Management of Civil Rights Efforts Continues to Be Deficient Despite Years of Attention. GAO-08-755T. Washington, D.C.: May 14, 2008. Pigford Settlement: The Role of the Court-Appointed Monitor. GAO-06-469R. Washington, D.C.: March 17, 2006. Department of Agriculture: Hispanic and Other Minority Farmers Would Benefit from Improvements in the Operations of the Civil Rights Program. GAO-02-1124T. Washington, D.C.: September 25, 2002. 
Department of Agriculture: Improvements in the Operations of the Civil Rights Program Would Benefit Hispanic and Other Minority Farmers. GAO-02-942. Washington, D.C.: September 20, 2002. U.S. Department of Agriculture: Resolution of Discrimination Complaints Involving Farm Credit and Payment Programs. GAO-01-521R. Washington, D.C.: April 12, 2001. U.S. Department of Agriculture: Problems in Processing Discrimination Complaints. T-RCED-00-286. Washington, D.C.: September 12, 2000. | For decades, there have been allegations of discrimination in the U.S. Department of Agriculture (USDA) programs and workforce. Reports and congressional testimony by the U.S. Commission on Civil Rights, the U.S. Equal Employment Opportunity Commission, a former Secretary of Agriculture, USDA's Office of Inspector General, GAO, and others have described weaknesses in USDA's programs--in particular, in resolving complaints of discrimination and in providing minorities access to programs. The Farm Security and Rural Investment Act of 2002 authorized the creation of the position of Assistant Secretary for Civil Rights (ASCR), giving USDA an executive that could provide leadership for resolving these long-standing problems. This testimony focuses on USDA's efforts to (1) resolve discrimination complaints, (2) report on minority participation in USDA programs, and (3) strategically plan its efforts. This testimony is based on new and prior work, including analysis of ASCR's strategic plan; discrimination complaint management; and about 120 interviews with officials of USDA and other federal agencies, as well as 20 USDA stakeholder groups. USDA officials reviewed the facts upon which this statement is based, and we incorporated their additions and clarifications as appropriate. GAO plans a future report with recommendations. ASCR's difficulties in resolving discrimination complaints persist--ASCR has not achieved its goal of preventing future backlogs of complaints. 
At a basic level, the credibility of USDA's efforts has been and continues to be undermined by ASCR's faulty reporting of data on discrimination complaints and disparities in ASCR's data. Even such basic information as the number of complaints is subject to wide variation in ASCR's reports to the public and the Congress. Moreover, ASCR's public claim in July 2007 that it had successfully reduced a backlog of about 690 discrimination complaints in fiscal year 2004 and held its caseload to manageable levels drew a questionable portrait of progress. By July 2007, ASCR officials were well aware they had not succeeded in preventing future backlogs--they had another backlog on hand, and this time the backlog had surged to an even higher level of 885 complaints. In fact, ASCR officials were in the midst of planning to hire additional attorneys to address that backlog, which included some unresolved complaints ASCR had been holding since the early 2000s. In addition, some steps ASCR had taken may have actually been counter-productive and affected the quality of its work. For example, an ASCR official stated that some employees' complaints had been addressed without resolving basic questions of fact, raising concerns about the integrity of the practice. Importantly, ASCR does not have a plan to correct these many problems. USDA has published three annual reports--for fiscal years 2003, 2004, and 2005--on the participation of minority farmers and ranchers in USDA programs, as required by law. USDA's reports are intended to reveal the gains or losses that these farmers have experienced in their participation in USDA programs. However, USDA considers the data it has reported to be unreliable because they are based on USDA employees' visual observations about participants' race and ethnicity, which may or may not be correct, especially for ethnicity. USDA needs the approval of the Office of Management and Budget (OMB) to collect more reliable data.
ASCR started to seek OMB's approval in 2004, but as of May 2008 had not followed through to obtain approval. ASCR staff will meet again on this matter in May 2008. GAO found that ASCR's strategic planning is limited and does not address key steps needed to achieve the Office's mission of ensuring USDA provides fair and equitable services to all customers and upholds the civil rights of its employees. For example, a key step in strategic planning is to discuss the perspectives of stakeholders. ASCR's strategic planning does not address the diversity of USDA's field staff even though ASCR's stakeholders told GAO that such diversity would facilitate interaction with minority and underserved farmers. Also, ASCR could better measure performance to gauge its progress in achieving its mission. For example, it counts the number of participants in training workshops as part of its outreach efforts rather than measuring producers' access to farm program benefits and services. Finally, ASCR's strategic planning does not link levels of funding with anticipated results or discuss the potential for using performance information for identifying USDA's performance gaps. |
DOD is a massive and complex organization. To illustrate, the department reported that its fiscal year 2006 operations involved approximately $1.4 trillion in assets and $2.0 trillion in liabilities, more than 2.9 million military and civilian personnel, and $581 billion in net cost of operations. Organizationally, the department includes the Office of the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, the military departments, numerous defense agencies and field activities, and various unified combatant commands that are responsible for either specific geographic regions or specific functions. Figure 1 provides a simplified depiction of DOD’s organizational structure. In support of its military operations, DOD performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. As we have previously reported, the systems environment that supports these business functions is overly complex and error prone, and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be entered manually into multiple systems. Air Force is a major component of DOD. Its mission is to deliver options for the defense of the United States and its global interests in air, space, and cyberspace. Air Force relies extensively on IT to carry out these responsibilities effectively and to meet its organizational mission. It has 909 business systems; of these systems, 832 (91 percent) are in operations and maintenance. In fiscal year 2006, Air Force was allocated approximately $651 million for its business systems, of which about $406 million (62 percent) was allocated to systems in operations and maintenance and $245 million (38 percent) was allocated to systems in development and/or modernization.
Air Force has created the Office of Warfighting Integration and Chief Information Office to provide the IT and supporting infrastructure to fulfill its mission. Among the goals of this organization are to deliver the ability to direct forces while anticipating situations, capabilities, and limitations; develop adaptive, trained airmen; provide policy, standards, oversight, and training to enable airmen to share and exploit accurate information anyplace and anytime; and transform the communications and information career field to lead Air Force in leveraging information for its competitive advantage. The Office of Warfighting Integration and Chief Information Office consists of several organizations, as depicted in figure 2. Successful public and private organizations use a corporate approach to IT investment management. Recognizing this, Congress enacted the Clinger-Cohen Act of 1996, which requires the Office of Management and Budget (OMB) to establish processes to analyze, track, and evaluate the risks and results of major capital investments in IT systems made by executive agencies. In response to the Clinger-Cohen Act and other statutes, OMB has developed policy and issued guidance for the planning, budgeting, acquisition, and management of federal capital assets. We have also issued guidance in this area that defines institutional structures, such as investment review boards; processes for developing information on investments (such as costs and benefits); and practices to inform management decisions (such as whether a given investment is aligned with an enterprise architecture). IT investment management is a process for linking IT investment decisions to an organization’s strategic objectives and business plans. Consistent with this, the federal approach to IT investment management focuses on selecting, controlling, and evaluating investments in a manner that minimizes risk while maximizing the return on investment.
During the selection phase, the organization (1) identifies and analyzes each project’s risks and returns before committing significant funds to any project and (2) selects those IT projects that will best support its mission needs. During the control phase, the organization ensures that projects, as they develop and investment expenditures continue, meet mission needs at the expected levels of cost and risk. If the project is not meeting expectations or if problems arise, steps are quickly taken to address the deficiencies. During the evaluation phase, expected results are compared with actual results after a project has been fully implemented. This comparison is done to (1) assess the project’s impact on mission performance, (2) identify any changes or modifications to the project that may be needed, and (3) revise the investment management process based on lessons learned. Our ITIM framework consists of five progressive stages of maturity for any given agency relative to selecting, controlling, and evaluating its investment management capabilities. (See fig. 3 for the five ITIM stages of maturity.) This framework is grounded in our research of IT investment management practices of leading private and public sector organizations. The framework can be used to assess the maturity of an agency’s investment management processes and as a tool for organizational improvement. The overriding purpose of the framework is to encourage investment selection and control and to evaluate processes that promote business value and mission performance, reduce risk, and increase accountability and transparency. We have used the framework in several of our evaluations, and a number of agencies have adopted it. ITIM’s five maturity stages represent the steps toward achieving stable and mature processes for managing IT investments. Each stage builds on the lower stages; the successful attainment of each stage leads to improvement in the organization’s ability to manage its investments. 
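The control-phase idea described above (comparing a project's actual cost and schedule against expectations and quickly flagging deficiencies) can be illustrated with a small sketch. This is not DOD or GAO logic; the class, the field names, and the 10 percent tolerance are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    expected_cost: float          # baseline cost commitment
    actual_cost: float            # spending to date at this checkpoint
    expected_pct_complete: float  # planned progress, 0.0-1.0
    actual_pct_complete: float    # observed progress, 0.0-1.0

def control_phase_flags(p: Project, tolerance: float = 0.10) -> list[str]:
    """Flag a project whose cost or schedule deviates beyond a tolerance,
    so steps can be taken to address deficiencies early."""
    flags = []
    if p.actual_cost > p.expected_cost * (1 + tolerance):
        flags.append("cost variance")
    if p.actual_pct_complete < p.expected_pct_complete * (1 - tolerance):
        flags.append("schedule variance")
    return flags
```

A project well over budget and behind schedule would be flagged on both counts, prompting the kind of corrective review the control phase calls for.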
With the exception of the first stage, each maturity stage is composed of “critical processes” that must be implemented and institutionalized in order for the organization to achieve that stage. These critical processes are further broken down into key practices that describe the types of activities that an organization should be performing to successfully implement each critical process. It is not unusual for an organization to be performing key practices from more than one maturity stage at the same time. However, our research has shown that agency efforts to improve investment management capabilities should focus on implementing all lower stage practices before addressing the higher stage practices. In the ITIM framework, Stage 2 critical processes lay the foundation for sound IT investment management by helping the agency to attain successful, predictable, and repeatable investment management processes at the project level. Specifically, Stage 2 encompasses building a sound investment management foundation by establishing basic capabilities for selecting new IT projects. This stage also involves developing the capability to control projects so that they finish predictably within established cost and schedule expectations and developing the capability to identify potential exposures to risk and put in place strategies to mitigate that risk. Further, it involves evaluating completed projects to ensure they meet business needs and collecting lessons learned to improve the IT investment management process. The basic management processes established in Stage 2 lay the foundation for more mature management capabilities in Stage 3, which represents a major step forward in maturity, in which the agency moves from project-centric processes to a portfolio approach, evaluating potential investments by how well they support the agency’s missions, strategies, and goals. 
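The staging rule just described, in which key practices may exist at several stages at once but maturity advances only when all lower-stage practices are in place, can be sketched as follows. This is an illustration of the rule, not part of the ITIM framework itself.

```python
def attained_stage(implemented: dict[int, bool]) -> int:
    """Return the highest ITIM stage (2-5) whose critical processes, and
    those of every lower stage, are all implemented. Stage 1 is the
    default starting point with no required critical processes."""
    stage = 1
    for s in (2, 3, 4, 5):
        if implemented.get(s, False):
            stage = s
        else:
            # Practices performed at higher stages do not advance maturity
            # until the gap at this lower stage is closed.
            break
    return stage
```

For example, an agency with Stage 2 and Stage 4 processes in place but a gap at Stage 3 would still be assessed at Stage 2.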
Stage 3 requires that an organization continually assess both proposed and ongoing projects as parts of a complete investment portfolio—an integrated and competing set of investment options. It focuses on establishing a consistent, well-defined perspective on the IT investment portfolio and maintaining mature, integrated selection (and reselection), control, and post-implementation evaluation processes. This portfolio perspective allows decision makers to consider the interaction among investments and the contributions to organizational mission goals and strategies that could be made by alternative portfolio selections, rather than focusing exclusively on the balance between the costs and benefits of individual investments. Organizations that have implemented Stages 2 and 3 practices have capabilities in place that assist in establishing selection, control, and evaluation structures, policies, procedures, and practices that are required by the investment management provisions of the Clinger-Cohen Act. Stages 4 and 5 require the use of evaluation techniques to continuously improve both the investment portfolio and the investment processes in order to better achieve strategic outcomes. At Stage 4, an organization has the capacity to conduct IT succession activities and, therefore, can plan and implement the deselection of obsolete, high-risk, or low-value IT investments. An organization with Stage 5 maturity conducts proactive monitoring for breakthrough information technologies that will enable it to change and improve its business performance. DOD’s major system investments (i.e., weapons and business systems) are governed by three management systems that focus on defining needs, budgeting for, and acquiring investments to support the mission—the Joint Capabilities Integration and Development System (JCIDS); the Planning, Programming, Budgeting, and Execution (PPBE) system; and the Defense Acquisition System (DAS).
In addition, DOD’s business systems are subject to a fourth management system, which, for purposes of this report, we refer to as the Business Investment Management System. For each of these systems, DOD relies on its component agencies to execute the underlying policies and procedures. According to DOD, the four management systems, collectively, are the means by which the department—and its components—selects, controls, and evaluates its business systems investments. JCIDS is a needs-driven, capabilities-based approach to identify mission needs and meet future joint forces challenges. It is intended to identify future capabilities for DOD; address capability gaps and mission needs recognized by the Joint Chiefs of Staff or derived from strategic guidance, such as the National Security Strategy Report or Quadrennial Defense Review; and identify alternative solutions by considering a range of doctrine, organization, training, materiel, leadership and education, personnel, and facilities solutions. According to DOD, the Joint Chiefs of Staff—through the Joint Requirements Oversight Council—has primary responsibility for defining and implementing JCIDS. All JCIDS documents are submitted to the Joint Chiefs of Staff, which determines whether the proposed system has joint implications or is component-unique. If it is designated as joint interest, then the Joint Requirements Oversight Council is responsible for approving and validating the documents. If it is not designated as having joint interests, the sponsoring component is responsible for validation and approval. PPBE is a calendar-driven approach that is composed of four phases that occur over a moving 2-year cycle. The four phases—planning, programming, budgeting, and executing—define how budgets for each DOD component and the department as a whole are created, vetted, and executed. 
As recently reported, the components start programming and budgeting for addressing a JCIDS-identified capability gap or mission need several years before actual product development begins and before the Office of the Secretary of Defense formally reviews the components’ programming and budgeting proposals (i.e., Program Objective Memorandums). Once reviewed and approved, the financial details in the Program Objective Memorandums become part of the President’s budget request to Congress. During budget execution, components may submit program change proposals or budget change proposals, or both (e.g., program cost increases or schedule delays). According to DOD, the Under Secretary of Defense (Policy), the Director for Program Analysis and Evaluation, and the Under Secretary of Defense (Comptroller) have primary responsibility for defining and implementing the PPBE system. DAS is a framework-based approach that is intended to translate mission needs and requirements into stable, affordable, and well-managed acquisition programs. It consists of five key program life-cycle phases. These five phases are as follows: Concept Refinement: Intended to refine the initial JCIDS-validated system solution (concept) and create a strategy for acquiring the investment solution. A decision is made at the end of this phase (Milestone A decision) regarding whether to move to the next phase (Technology Development). Technology Development: Intended to determine the appropriate set of technologies to be integrated into the investment solution by iteratively assessing the viability of various technologies while simultaneously refining user requirements. Once the technology has been demonstrated in a relevant environment, a decision is made (Milestone B decision) regarding whether to move to the next phase (System Development and Demonstration). 
System Development and Demonstration: Intended to develop a system or a system increment and demonstrate through developer testing that the system or system increment can function in its target environment. A decision is made at the end of this phase (Milestone C decision) regarding whether to move to the next phase (Production and Deployment). Production and Deployment: Intended to achieve an operational capability that satisfies the mission needs, as verified through independent operational test and evaluation, and ensures that the system is implemented at all applicable locations. Operations and Support: Intended to operationally sustain the system in the most effective manner over its life cycle. A key principle of DAS is that investments are assigned a category, where programs of increasing dollar value and management interest are subject to more stringent oversight. For example, Major Defense Acquisition Programs and Major Automated Information Systems are large, expensive programs subject to the most extensive statutory and regulatory reporting requirements and, unless delegated, are reviewed by acquisition boards at the DOD corporate level. Smaller and less risky acquisitions are generally reviewed at the component executive or lower levels. Another key principle is that DAS requires acquisition management under the direction of a milestone decision authority. The milestone decision authority—with support from the program manager and advisory boards, such as the Defense Acquisition Board and the IT Acquisition Board—determines the project’s baseline cost, schedule, and performance commitments. The Under Secretary of Defense for Acquisition, Technology, and Logistics has primary responsibility for defining and implementing DAS. DOD relies on its components to execute these investment management policies and procedures.
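The five DAS phases and their milestone gates can be sketched as a simple gated sequence. This is only an illustration of the flow described above, not official DOD acquisition logic; real milestone decisions involve far more than a yes/no flag.

```python
# The five DAS life-cycle phases, in order. The first three end with
# Milestone A, B, and C decisions, respectively.
DAS_PHASES = [
    "Concept Refinement",
    "Technology Development",
    "System Development and Demonstration",
    "Production and Deployment",
    "Operations and Support",
]

def next_phase(current: str, milestone_approved: bool) -> str:
    """Advance a program to the next phase only on an approved
    milestone decision; otherwise it remains in its current phase."""
    i = DAS_PHASES.index(current)
    if milestone_approved and i < len(DAS_PHASES) - 1:
        return DAS_PHASES[i + 1]
    return current
```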
To implement DOD’s JCIDS process, the Air Force has designated the Joint Staff Functional Capabilities Board to review and approve operational capabilities. The Joint Staff Functional Capabilities Board seeks to establish a common understanding of how a capability will be used, who will use it, when it is needed, and why it is needed to achieve a desired effect. Each capability is assessed based on the effects it seeks to generate and the associated operational risk of not having it. In addition, the Capabilities Review and Risk Assessment process is being used to analyze concepts of operation and assess their associated capabilities. This process uses a phased approach to produce a prioritized list of capabilities, capability gaps or shortfalls, and possible capability solutions. To implement the PPBE process, Air Force officials stated that they use their Annual Planning and Programming Guidance manual. Finally, to implement DAS, Air Force has developed guidance that outlines a systematic acquisition framework that mirrors the framework defined by DOD and includes the same three event-based milestones and associated five program life-cycle phases. The Business Investment Management System is a calendar-driven approach that is described in terms of governance entities, tiered accountability, and certification reviews and approvals. This system was initiated in 2005, when DOD reassigned responsibility for providing executive leadership for the direction, oversight, and execution of its business systems modernization efforts to several entities. These entities and their responsibilities include the following: The Defense Business Systems Management Committee serves as the highest-ranking governance body for business systems modernization activities. The Principal Staff Assistants serve as the certification authorities for business system modernizations in their respective core business missions. 
The Investment Review Boards are chartered by the principal staff assistants and are the review and decision-making bodies for business system investments in their respective areas of responsibility. The boards are also responsible for recommending certification for all business system investments costing more than $1 million. The component precertification authority is accountable for the component’s business system investments and acts as the component’s principal point of contact for communication with the Investment Review Boards. The Air Force has designated its CIO to be the precertification authority. The Business Transformation Agency is responsible for leading and coordinating business transformation efforts across the department. The agency is organized into seven directorates, one of which is the Defense Business Systems Acquisition Executive—the component acquisition executive for DOD enterprise-level (DOD-wide) business systems and initiatives. This directorate is responsible for developing, coordinating, and integrating enterprise-level projects, programs, systems, and initiatives—including managing resources such as fiscal, personnel, and contracts for assigned systems and programs. Figure 4 provides a simplified illustration of the relationships among these entities. According to DOD, in 2005 it also adopted a tiered accountability approach to business transformation. 
Under this approach, responsibility and accountability for business system investment management is allocated among DOD (i.e., Office of the Secretary of Defense) and the component agencies, based on the amount of development/modernization funding involved and the investment’s “tier.” DOD is responsible for ensuring that all business systems with a development/modernization investment in excess of $1 million are reviewed by the Investment Review Boards for compliance with the business enterprise architecture, certified by the principal staff assistants, and approved by the Defense Business Systems Management Committee. Components are responsible for certifying development/modernization investments with total costs of $1 million or less. All DOD development and modernization efforts are assigned a tier on the basis of the acquisition category or the size of the financial investment, or both. According to DOD, a system is given a tier designation when it passes through the certification process. Table 1 describes the five investment tiers and identifies the associated reviewing and approving entities for DOD and the Air Force. DOD’s business investment management system includes two types of reviews for business systems: certification and annual reviews. Certification reviews apply to new modernization projects with total costs over $1 million. These reviews focus on program alignment with the business enterprise architecture and must be completed before components obligate funds for programs. The annual reviews apply to all business programs and are undertaken to determine whether the system development effort is meeting its milestones and addressing its Investment Review Board certification conditions. Certification reviews and approvals: Tier 1 through 3 business system investments in development and modernization are certified at two levels—component-level precertification and DOD-level certification and approval. 
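The dollar-threshold routing described above can be sketched as follows. The $1 million figure comes from the report; the function name and return labels are illustrative, and real tier assignment also weighs the acquisition category, which this sketch omits.

```python
def certification_level(dev_mod_cost_dollars: float) -> str:
    """Route a business system investment to a certification level
    based on its development/modernization cost."""
    if dev_mod_cost_dollars > 1_000_000:
        # Investment Review Board review, principal staff assistant
        # certification, and Defense Business Systems Management
        # Committee approval at the DOD level.
        return "DOD-level certification"
    # Investments of $1 million or less are certified by the component.
    return "Component-level certification"
```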
At the component level, program managers prepare, enter, maintain, and update information about their investments in the Air Force data repository. The component precertification authority validates that the system information is complete and accessible on the Air Force data repository, reviews system compliance with the business enterprise architecture and enterprise transition plan, and verifies the economic viability analysis. This information is then transferred to DOD’s IT Portfolio Repository. The precertification authority asserts the status and validity of the investment information by submitting a component precertification letter to the appropriate Investment Review Board for its review. Annual reviews: Tier 1 through 4 business system investments are reviewed annually at the component and DOD-levels. At the component level, program managers annually review and update information on all tiers of system investments that are identified in their data repository. For Tier 1 through 3 systems that are in development or being modernized, information is updated on cost, milestone, and risk variances and actions or issues related to certification conditions. The precertification authority then verifies and submits the information for these business system investments for DOD’s Investment Review Board review in an annual review assertion letter. The letter addresses system compliance with the DOD business enterprise architecture and the enterprise transition plan and includes investment cost, schedule, and performance information. At the DOD level, the Investment Review Boards annually review investments for certified Tier 1 through 3 business systems that are in development or are being modernized. These reviews focus on program compliance with the business enterprise architecture, program cost and performance milestones, and progress in meeting certification conditions. 
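As a rough illustration of the component-level validation just described, a precertification letter should follow only after the repository record, the architecture-compliance review, and the economic viability analysis have all been verified. The data structure and field names below are assumptions for the sketch, not an actual DOD schema.

```python
from dataclasses import dataclass

@dataclass
class CertificationPackage:
    repository_record_complete: bool   # data entered and maintained in the AF repository
    bea_compliance_reviewed: bool      # business enterprise architecture check done
    economic_viability_verified: bool  # economic viability analysis verified

def ready_for_precertification(pkg: CertificationPackage) -> bool:
    """All three validations must pass before the precertification
    authority submits the letter to the Investment Review Board."""
    return (pkg.repository_record_complete
            and pkg.bea_compliance_reviewed
            and pkg.economic_viability_verified)
```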
The Investment Review Boards can revoke an investment’s certification when the system has significantly failed to achieve performance commitments (i.e., capabilities and costs). When this occurs, the component must address the Investment Review Board’s concerns and resubmit the investment for certification. As stated earlier, DOD relies on its components to execute investment management policies and procedures. Air Force has developed a precertification process, which is intended to ensure that new or existing systems undergo proper scrutiny prior to being precertified by the Air Force CIO. First, the certification package is to be prepared by the Program Manager and reviewed by the Major Command (MAJCOM) and functional portfolio managers. The package is then to be provided to the Air Force Certification Process Manager, who is to review the package for completeness based on certification requirements and transmit the package to the Certification Review Team—which is composed of subject matter experts—to assess compliance with their relevant areas of expertise, such as the business enterprise architecture and information assurance. Once the certification review is complete, the package is to be sent to the Senior Working Group. This group is responsible for reviewing the package and approving or disapproving the package. Finally, the Precertification Authority is to precertify the system for submission to the relevant Investment Review Board. Table 2 lists decision-making personnel involved in Air Force’s investment management process and provides a description of their key responsibilities. Figure 5 shows the relationship among the key players in Air Force’s precertification review and approval process. Among the key responsibilities described in table 2 are: reviewing and validating certification packages; precertifying Tier 1-3 systems and approving Tier 4 systems; technically reviewing and endorsing program information and certification packages for Tiers 1-4; reviewing and validating certification packages for Tiers 1-4 and coordinating the review with other subject matter experts; functionally reviewing and validating program information and certification packages for Tiers 1-4; and entering and maintaining business system investment information in the Air Force data repository for Tiers 1-4 while completing certification package requirements for Tiers 1-3.
DOD relies on its components to execute investment management policies and procedures. However, while Air Force has established the basic management structures needed to effectively manage its IT projects as investments, it has not fully implemented many of the related policies and procedures outlined in our ITIM framework. Relative to its business system investments, Air Force has implemented three of nine practices that call for project-level structures, policies, and procedures, and has not defined any of the five practices that call for portfolio-level policies and procedures. Air Force officials stated that they are aware of the absence of documented policies and procedures, and they are currently working on guidance to address these areas. For example, these officials stated that they have drafted policies and procedures to establish portfolio-level practices and are currently obtaining the necessary approvals. Air Force plans to complete and approve these policies and procedures by December 2007. According to our framework, adequately documenting both the policies and the associated procedures that govern how an organization manages its IT investment portfolio is important because doing so provides the basis for having rigor, discipline, and repeatability in how investments are selected and controlled across the entire organization.
Until Air Force has fully defined policies and procedures for both individual projects and the portfolio of projects, it risks selecting and controlling these business system investments in an inconsistent, incomplete, and ad hoc manner, which in turn could reduce the chances that these investments will meet mission needs in the most effective manner. At ITIM Stage 2, an organization has attained repeatable and successful IT project-level investment control processes and basic selection processes. Through these processes, the organization can identify project expectation gaps early and take the appropriate steps to address them. ITIM Stage 2 critical processes include (1) defining investment board operations, (2) identifying the business needs for each investment, (3) developing a basic process for selecting new proposals and reselecting ongoing investments, (4) developing project-level investment control processes, and (5) collecting information about existing investments to inform investment management decisions. Table 3 describes the purpose of each of these Stage 2 critical processes. Within these five critical processes are nine key practices required for effective project-level management. Air Force has fully defined the policies and procedures for three of these nine practices. Specifically, Air Force has established a management structure by instituting a business system Investment Review Board, called the Senior Working Group. This group is composed of senior executives from the functional business units, including the Office of the Air Force Chief Information Officer, and the members are responsible for establishing and implementing investment policies. In addition, Air Force has established policies and procedures for capturing information about its IT projects and systems and submitting, updating, and maintaining this information in its data repository. 
Finally, it has assigned the Certification Process Manager the responsibility of ensuring that specific investment information contained in the Air Force data repository is accurate and complete. However, the Air Force’s policies and procedures associated with the remaining six project-level management practices are missing critical elements needed to effectively carry out essential investment management activities. For example: Policies and procedures for directing the Investment Review Board’s operations do not define how investments that are in operations and maintenance are to be governed by the Investment Review Board. In addition, procedures do not specify how the business investment management process is coordinated with other DOD management systems. Without clearly defined guidance and visibility into all investments with an understanding of decisions reached through other management systems, Air Force cannot be assured that consistent investment management decisions are being made. Policies and procedures do not define how systems in operations and maintenance will support ongoing and future business needs. This increases the risk that Air Force will continue to maintain legacy investments that no longer support current organizational objectives. Policies and procedures for selecting new systems do not specify how the full range of cost, schedule, and performance data are being considered in making selection (i.e., precertification) decisions. Without documenting how factors such as cost, schedule, and performance are considered when making precertification decisions, Air Force cannot ensure that it consistently and objectively selects system investments that best meet the department’s needs and priorities. Policies and procedures do not include a structured method that defines how the criteria will be evaluated when the precertification authority makes reselection decisions. 
In addition, policies and procedures do not define an approach to annually reviewing systems in operations and maintenance. Given that Air Force spends millions of dollars annually in operating and maintaining business systems, this is significant. Without an understanding of how the precertification authority is to consider these investments when making reselection decisions, Air Force’s ability to make informed and consistent reselection and termination decisions is limited. Policies and procedures do not specify how funding decisions are integrated with the process of selecting an investment. Without considering budget constraints and opportunities, Air Force risks making investment decisions that do not effectively consider the relative merits of various projects and systems when funding limitations exist. Policies and procedures do not provide for sufficient oversight and visibility into investment management activities. Air Force has predefined criteria for adherence to cost, schedule, and performance milestones, but does not have policies and procedures that guide the implementation of corrective actions when program expectations are not met. Without such policies and procedures, the agency risks investing in systems that are duplicative, stovepiped, nonintegrated, and unnecessarily costly to manage, maintain, and operate. Table 4 summarizes our findings relative to Air Force’s execution of the nine key practices that call for the policies and procedures needed to manage IT investments at the project level. Air Force officials stated that they are aware of the absence of documented procedures in certain areas of project-level management, and plan to issue new policies and procedures addressing these areas by December 2007. 
However, until Air Force fully documents IT investment management policies and procedures for Stage 2 activities, specifies the linkages between the various related processes, and describes how system investments in operations and maintenance are to be governed, it risks not being able to carry out investment management activities in a consistent and disciplined manner. Moreover, the Air Force risks selecting investments that will not effectively meet its mission needs. At Stage 3, an organization has defined critical processes for managing its investments as a portfolio or a set of portfolios. Portfolio management is a conscious, continuous, and proactive approach to allocating limited resources among competing initiatives in light of the investments’ relative benefits. Taking a departmentwide perspective enables an organization to consider its investments comprehensively, so that collectively the investments optimally address the organization’s missions, strategic goals, and objectives. Managing IT investments as portfolios also allows an organization to determine its priorities and make decisions about which projects to fund on the basis of analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. Although investments may initially be organized into subordinate portfolios—on the basis of, for example, business lines or life-cycle stages—and managed by subordinate investment boards, they should ultimately be aggregated into enterprise-level portfolios. According to ITIM, Stage 3 involves four critical processes: (1) defining the portfolio criteria; (2) creating the portfolio; (3) evaluating (i.e., overseeing) the portfolio; and (4) conducting post-implementation reviews. Within these critical processes are five key practices that call for policies and procedures to ensure effective portfolio management. Table 5 summarizes the purpose of each of the critical processes. 
Air Force has begun to establish a governance structure for portfolio-level management, but it has not executed any of the five practices within the Stage 3 critical processes that call for policies and procedures associated with effective portfolio-level management. Specifically, Air Force has assigned the Senior Working Group the responsibility of establishing a governance forum to oversee business system portfolio activities. However, the Senior Working Group has not developed and approved charters outlining the roles and responsibilities to be assigned to subordinate Investment Review Boards that are intended to establish and manage the portfolios. In addition, Air Force has not fully defined policies and procedures needed to effectively execute portfolio management practices. Specifically, Air Force does not have policies and procedures for defining the portfolio criteria or creating and evaluating the portfolio. In addition, while DOD has policies and procedures for conducting post-implementation reviews for Tier 1 systems as part of the Defense Acquisition System, Air Force has not established policies or procedures for conducting post-implementation reviews for systems in the remaining tiers. Finally, Air Force has not established procedures detailing how lessons learned from these reviews are to be used during investment reviews as the basis for management and process improvements. Table 6 summarizes the rating for each critical process required to manage investments as a portfolio and summarizes the evidence that supports these ratings. Air Force officials are aware that they need to develop the appropriate portfolio management processes, and in this regard, have drafted some portfolio management guidance, such as the Air Force Operational Support Portfolio Investment Review Process. According to Air Force officials, this guidance is expected to be completed and approved by December 2007. 
Until policies and procedures for managing business systems investment portfolios are defined and implemented, Air Force is at risk of not consistently selecting the mix of investments that best supports the department’s mission needs and ensuring that investment-related lessons learned are shared and applied departmentwide. Given the importance of business systems modernization to Air Force’s mission, performance, and outcomes, it is vital for the department to adopt and employ an effective institutional approach to managing business system investments. However, while the department acknowledges these shortcomings and the importance of addressing them and has established aspects of such an approach, it is lacking important elements, such as policies and procedures needed for project-level and portfolio-level investment management. This means that Air Force lacks an institutional capability to ensure that it is investing in business systems that best support its strategic needs and that ongoing projects meet cost, schedule, and performance expectations. Until Air Force develops this capability, the department will be impaired in its ability to optimize business mission area performance and accountability. To strengthen Air Force’s business system investment management capability and address the weaknesses discussed in this report, we recommend that the Secretary of Defense direct the Secretary of the Air Force to ensure that well-defined and disciplined business system investment management policies and procedures are developed and issued. At a minimum, this should include project-level management policies and procedures that address the six key practice areas: Specifying how systems that are in operations and maintenance will be reviewed and specifying how Air Force’s business investment management system is coordinated with the Joint Capabilities Integration and Development System (JCIDS), the Planning, Programming, Budgeting, and Execution (PPBE) process, and the Defense Acquisition System (DAS). Ensuring that systems in operations and maintenance are aligned with ongoing and future business needs. 
Selecting investments, including specifying how factors such as cost, schedule, and performance data are to be used in making certification decisions. Reselecting ongoing investments, including specifying how factors such as cost, schedule, and performance data are to be used in making reselection decisions during the annual review process and providing for the reselection of investments that are in operations and maintenance. Integrating funding with the process of selecting an investment, including specifying how the precertification authority is using funding information in carrying out decisions on system certification and approvals. Overseeing IT projects and systems, including specifying policies and procedures that guide the implementation of corrective actions when program expectations are not met. These well-defined and disciplined business system investment management policies and procedures should also include portfolio-level management policies and procedures that address the following five areas: Creating and modifying IT portfolio selection criteria for business system investments. Defining the roles and responsibilities for the development and modification of the IT portfolio selection criteria. Analyzing, selecting, and maintaining business system investment portfolios. Reviewing, evaluating, and improving the performance of the portfolio(s) by using project indicators, such as cost, schedule, and risk. Conducting post-implementation reviews for all investment tiers and directing the investment boards to consider the information gathered to develop lessons learned from these reviews. In written comments on a draft of this report, signed by the Deputy Under Secretary of Defense (Business Transformation) and reprinted in appendix II, DOD partially concurred with the report’s recommendations. The department further stated that our recommendations and feedback were helpful in guiding its business transformation and related improvement efforts. 
Regarding our recommendation that DOD direct Air Force to ensure that well-defined and disciplined business system investment management policies and procedures are developed and issued, DOD stated that Air Force has a well-defined business system investment management process that, while not codified into formal policy, has been documented in investment review guides. Further, it stated that Air Force is committed to formalizing its policies as the processes expand and mature. Our report recognizes that Air Force has drafted a business system investment management process. However, as our report states, the draft review guide lacks critical elements needed to effectively carry out essential investment management activities. Until Air Force completes and issues IT investment management policies and procedures that fully address these elements, it risks not being able to carry out investment management activities in a consistent and disciplined manner. Regarding our recommendation that DOD direct Air Force to ensure that the well-defined and disciplined business system investment management policies and procedures also include portfolio-level management policies and procedures, the department stated that DOD Instruction 8115.02 defines the DOD IT portfolio management process, which Air Force is applying in its decisionmaking. However, as our report notes, Air Force has not implemented a process for managing its business system portfolios. Until Air Force defines and implements such a process, it is at risk of not consistently selecting the mix of investments that best supports the Air Force’s mission needs and ensuring that investment-related lessons learned are shared and applied departmentwide. 
We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; the Deputy Secretary of Defense; the Secretary of the Air Force; the Air Force Chief Information Officer; and the Under Secretary of Defense for Acquisition, Technology, and Logistics. Copies of this report will be made available to other interested parties on request. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objective was to determine whether the investment management approach of the Department of the Air Force (a major Department of Defense (DOD) component) is consistent with leading investment management best practices. Our analysis was based on the best practices contained in GAO’s Information Technology Investment Management (ITIM) framework and the framework’s associated evaluation methodology, and focused on Air Force’s establishment of policies and procedures for business system investments needed to assist organizations in complying with the investment management provisions of the Clinger-Cohen Act of 1996 (Stages 2 and 3). To address our objective, we asked Air Force to complete a self-assessment of its investment management process and provide the supporting documentation. We then reviewed the results of the department’s self-assessment of Stages 2 and 3 organizational commitment practices—meaning those practices related to structures, policies, and procedures—and compared them with our ITIM framework. 
We focused on Stages 2 and 3 because these stages represent the processes needed to meet the standards of the Clinger-Cohen Act, and they establish the foundation for effective acquisition management. We also validated and updated the results of the self-assessment through document reviews and interviews with officials, such as the Air Force Chief Information Officer and Certification Process Manager. In doing so, we reviewed written policies, procedures, and guidance and other documentation providing evidence of executed practices, including the Air Force Information Technology Investment Review Guide, Air Force Operations of Capabilities Based Acquisition System Instruction, Air Force IT Investment Architecture Compliance Guide, Senior Working Group charter and meeting minutes, the Investment Review Board Concept of Operations and Guidance, and the Business Transformation Guidance. We compared the evidence collected from our document reviews and interviews with the key practices in ITIM. We rated the key practices as “executed” on the basis of whether the department demonstrated (by providing evidence of performance) that it had met all of the criteria of the key practice. A key practice was rated as “not executed” when we found insufficient evidence of all elements of a practice being fully performed or when we determined that there were significant weaknesses in Air Force’s execution of the key practice. In addition, we provided Air Force with the opportunity to produce evidence for the key practices rated as “not executed.” We conducted our work at the Air Force offices in Arlington, Virginia, from February 2007 through September 2007 in accordance with generally accepted government auditing standards. In addition to the contact person named above, key contributors to this report were Tonia Johnson, Assistant Director; Idris Adjerid; Sharhonda Deloach; Nancy Glover; and Freda Paintsil. 
| In 1995, GAO first designated the Department of Defense's (DOD) business systems modernization program as "high-risk" and continues to do so today. In 2004, Congress passed legislation reflecting prior GAO recommendations that DOD adopt a corporate approach to information technology (IT) business systems investment management including tiered accountability for business systems at the department and component levels. To support GAO's legislative mandate to review DOD's efforts, GAO assessed whether the investment management approach of one of DOD's components--the Department of the Air Force (Air Force)--is consistent with leading investment management best practices. In doing so, GAO applied its IT Investment Management (ITIM) framework and associated methodology, focusing on the stages related to the investment management provisions of the Clinger-Cohen Act of 1996. The Air Force has established the basic management structures needed to effectively manage its IT projects as investments, but has not fully implemented many of the related policies and procedures outlined in GAO's ITIM framework. Air Force has fully implemented three of the nine key practices that call for project-level management structures, policies, and procedures, and has not implemented any of the five practices that call for portfolio-level policies and procedures. Regarding project-level practices, it has established an IT investment board that is responsible for defining and implementing the department's business systems investment governance process, has developed procedures for identifying and collecting information about its business systems to support investment selection and control, and has assigned responsibility for ensuring that the information collected during project identification meets the needs of the investment management process. 
However, Air Force has not fully documented business systems investment policies and procedures for directing investment board operations, selecting new investments, reselecting ongoing investments, or integrating the investment funding and investment selection processes. In addition, it has not implemented any of the policies and procedures for developing and maintaining a complete business system investment portfolio. Air Force officials stated that they are aware of the absence of documented policies and procedures in certain areas of project-level and portfolio-level management and that they are currently working on guidance to address these areas. For example, officials stated that they had begun drafting portfolio-level policies and procedures. According to Air Force officials, the policies and procedures are expected to be completed and approved by December 2007. Until Air Force fully defines policies and procedures for both individual projects and portfolios of projects, it risks not being able to select and control these business system investments in a way that is consistent and complete, which in turn increases the chances that these investments will not meet mission needs in the most effective manner. |
Exchanges are online marketplaces where eligible individuals and small businesses can purchase health insurance. PPACA prescribes a seamless, streamlined eligibility process for consumers to submit a single application and receive an eligibility determination for enrollment in a qualified health plan through the exchange, advance payments of the premium tax credit, cost sharing reductions, Medicaid, CHIP, and the Basic Health Program (BHP), if applicable. Under PPACA, an exchange must be operational in each state by January 1, 2014. States have some flexibility with respect to exchanges, by choosing to establish and operate an exchange themselves (referred to as a state-based exchange) or by ceding this authority to HHS (referred to as a federally facilitated exchange). States choosing to establish a state-based exchange were required to submit an application “blueprint” to HHS by December 14, 2012. Subject to HHS review and approval, the blueprint detailed how the states planned to implement various functions and activities that HHS deemed essential to operating this type of exchange. HHS identified a third type of exchange states could choose, referred to as a partnership exchange. According to HHS, a partnership exchange is a variation of a federally facilitated exchange, whereby HHS establishes and generally operates the exchange and the state assists HHS with operating various functions of the exchange. States opting for a partnership exchange were required to submit an application blueprint to HHS by February 15, 2013, detailing how the state planned to implement various functions and activities. According to HHS, as of March 14, 2013, 18 states have opted to establish a state-based exchange. In another 7 states, HHS will establish and operate a partnership exchange, with states assisting in certain functions (see figure 1). 
HHS’s approval of these exchanges is conditional on the states’ addressing a list of activities highlighted in the state’s application blueprint. HHS will establish a federally facilitated exchange in the remaining 26 states. Regardless of the type of exchange states plan to establish, open enrollment in the exchange is to begin on October 1, 2013. See figure 2 for a timeline of key milestones under PPACA. To help states establish an exchange, federal grants are available for planning and implementation activities, as well as for the first year of an exchange’s operation. As shown in figure 2, beginning in September 2010, states could apply for up to $1 million in planning grants to conduct initial research and exchange planning activities. Establishment grants became available to eligible states to set up their own exchanges or to support activities related to the establishment of partnership exchanges or federally facilitated exchanges in the state. States could also apply for “early innovator” grants to help them develop and adapt technology systems to determine eligibility and enrollment. These grants were awarded in 2011 to states that demonstrated an ability to develop IT systems on a fast-track schedule and a willingness to share design and implementation solutions with other states. Between September 2010 and March 2013, HHS awarded exchange grants totaling nearly $3.7 billion to 50 states. Of that amount, states returned over $98 million in grant awards. HHS awarded over $1 billion to the 7 states in our review—New York and Oregon were awarded the largest amounts. Figure 3 shows the range of exchange grant funding by state as of March 27, 2013. PPACA and HHS implementing regulations and guidance require states and exchanges to carry out a number of key functions, for which state responsibilities vary by exchange type. 
A state that chooses to run its own exchange is responsible for: establishing an operating and governance structure, ensuring QHPs are certified and available to qualified individuals, streamlining eligibility and enrollment systems, conducting consumer outreach and assistance, and ensuring the financial sustainability of the exchange. A state that has created a partnership exchange may assist HHS in some of these functions, such as making QHP certification recommendations and conducting aspects of consumer outreach and assistance. A state choosing to operate a state-based exchange must establish the operating and governance structure through which the exchange will be run and managed. Specifically, the state must determine whether the exchange will be run as a governmental agency or a nonprofit organization. Regardless of whether the exchange will be run as a governmental agency or a nonprofit, the state has the authority to allow an exchange to contract with other entities to carry out one or more responsibilities of the exchange. Further, a state operating an exchange as an independent state agency or nonprofit entity established by the state must establish a governance board that meets certain requirements. For example, the board must be administered under a publicly adopted operating charter or by-laws, ensure the board’s membership includes at least one voting member who is a consumer representative and is not made up of a majority of voting representatives with conflicts of interest (for example, representatives of health insurance issuers), and ensure that a majority of the voting members have relevant health care experience (for example, health benefits administration or public health). States choosing to operate their own exchange must ensure the exchange will be capable of certifying qualified health plans (QHP) and making them available to qualified individuals. A state opting for a partnership exchange may choose to engage in this function. 
In a partnership exchange, health insurance issuers will work directly with the state to submit all QHP issuer application information in accordance with state guidance. An exchange may only offer health plans that are certified as a QHP. To be certified, a health plan must meet two categories of requirements: (1) the health insurance issuer must be in compliance with minimum certification requirements as defined by HHS; and (2) the availability of the health plan through an exchange must be in the interest of qualified individuals and employers. To meet the minimum certification requirements, health insurance issuers must, for example, (1) be licensed and in good standing in each state in which the insurance coverage is offered, (2) comply with quality improvement standards, and (3) ensure their plan networks are adequate and include essential community health providers, where available, to provide timely access to services for predominantly low-income, medically underserved individuals. How an exchange determines whether a plan is in the interest of qualified individuals and employers may depend on how the state organizes its market. The state may choose to organize its market as an “active purchaser” or as a “passive purchaser.” As an active purchaser, the state will decide which health plans can be offered in the exchange on the basis of such factors as select criteria, quality, and price. As a passive purchaser, the state may permit all QHPs to participate in the exchange. In order to be certified as a QHP, plans will also need to meet certain coverage requirements. Specifically, PPACA requires that QHPs provide essential health benefits (EHB), which include coverage within 10 categories: 1. Ambulatory patient services, 2. Emergency services, 3. Hospitalization, 4. Maternity and newborn care, 5. Mental health benefits and substance abuse disorder services, including behavioral health treatment, 6. Prescription drugs, 7. Rehabilitative and habilitative services and devices, 8. 
Laboratory services, 9. Preventive and wellness services and chronic disease management, 10. Pediatric services including oral and vision care. In addition, within an exchange, health insurance issuers may offer QHPs at one of four levels of coverage that reflect out-of-pocket expenses for an enrollee. The four levels of coverage correspond to a percentage paid by a health plan of the total allowed costs of benefit designated by metal tiers: 60 percent (bronze), 70 percent (silver), 80 percent (gold), and 90 percent (platinum). At a minimum, however, a health insurance issuer must offer QHPs at both the silver and gold levels of coverage. States may choose to identify a benchmark plan for their state that, at a minimum, covers the EHB. According to HHS, the benchmark plan reflects the scope of services and limits offered by a “typical employer” plan in the state. HHS identified four plans that a state could choose: (1) one of the three largest plans in the state’s small group market health insurance plans; (2) one of the three largest state employee health benefit plans; (3) one of the three largest national plans offered through the Federal Employees Health Benefits Program; or (4) the largest commercial non-Medicaid health maintenance organization operating in the state. If the state does not select a benchmark plan, the state will default to the largest plan by enrollment in the largest product by enrollment in the state’s small group market. States also have the option of requiring QHPs to offer benefits in addition to EHB. If they choose to do so, states must identify which specific state- required benefits are in excess of the EHB. Under HHS regulations, if a state required QHPs to cover benefits beyond EHB on or after January 1, 2012, the state would be responsible for defraying the cost of these services. 
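The four metal-tier percentages described above can be expressed as a simple mapping from a plan's actuarial value (the share of total allowed benefit costs the plan pays) to its coverage level. The sketch below is illustrative only: the function name and the ±2-point matching tolerance are assumptions for this example, not drawn from the report or from any exchange's actual certification system.

```python
# Metal tiers from the text: the percentage of total allowed costs paid
# by the plan, designated bronze (60%), silver (70%), gold (80%), and
# platinum (90%).
METAL_TIERS = {
    0.60: "bronze",
    0.70: "silver",
    0.80: "gold",
    0.90: "platinum",
}

def metal_tier(actuarial_value, tolerance=0.02):
    """Return the tier whose target actuarial value is within `tolerance`
    of the plan's actuarial value, or None if the plan fits no tier.
    The tolerance is a hypothetical parameter for this sketch."""
    for target, tier in METAL_TIERS.items():
        if abs(actuarial_value - target) <= tolerance:
            return tier
    return None
```

For example, a plan paying 70 percent of allowed costs would classify as silver, while a plan paying only 50 percent would fit no tier and could not be offered at any metal level.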
States operating their own exchanges generally must ensure that the exchanges will be able to determine an applicant’s eligibility for QHPs, as well as for Medicaid and CHIP. Specifically, under PPACA and implementing regulations, states must establish an electronic, streamlined, and coordinated system through which an individual may apply for and receive a determination of eligibility for enrollment in a QHP, Medicaid, CHIP, or Basic Health Program, if applicable. Exchanges must be able to use a single application that can be completed online, by mail, over the telephone, or in person. This means that no matter how an individual submits an application or which program receives the application, an individual will use the same application and receive an eligibility determination, without the need to submit information to multiple programs. Thus, state IT systems must be interoperable and integrated with an exchange, Medicaid, and CHIP to allow consumers to easily switch from private insurance to Medicaid and CHIP as their circumstances change. Exchanges must also be able to transmit certain data to HHS to be verified before determining applicants’ eligibility. HHS, through a “federal data services hub,” will coordinate with the Department of Homeland Security, the Internal Revenue Service, and other federal agencies to verify applicant information, such as citizenship and household income. With the amount of data that states must share with HHS in order to verify eligibility, developing streamlined eligibility and enrollment systems is a vast undertaking requiring states to develop sophisticated IT systems. As part of the enrollment and eligibility process, HHS directs exchanges to rely on existing electronic sources of data to the maximum extent possible to verify relevant information, with high levels of privacy and security protection for consumers. 
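The "single application, single determination" flow described above can be sketched as a toy routing function. This is not any state's actual system: the income thresholds used here (Medicaid eligibility up to 138 percent of the federal poverty level in expansion states, and premium tax credits up to 400 percent) are the commonly cited PPACA figures, not taken from this report, and real determinations also verify citizenship, residency, and household data through the federal data services hub.

```python
def determine_eligibility(household_income, federal_poverty_level):
    """Route an applicant to a program based on income alone.
    Thresholds (138% and 400% FPL) are the commonly cited PPACA
    figures and are assumptions in this sketch."""
    pct_fpl = household_income / federal_poverty_level * 100
    if pct_fpl <= 138:
        return "Medicaid/CHIP referral"
    elif pct_fpl <= 400:
        return "QHP with premium tax credit"
    else:
        return "QHP, unsubsidized"
```

The point of the streamlined design is that the applicant submits one application and this routing happens behind the scenes, rather than the applicant applying separately to Medicaid, CHIP, and the exchange.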
For the majority of applicants, an automated electronic data matching process should eliminate the need for paper documentation. States that operate their own exchange are required to conduct consumer assistance and outreach through a number of activities. States that partner with HHS may assume some aspects of this function. Specifically, exchanges must have consumer assistance functions that are available to consumers to provide help in using the exchange. Such functions are required to be accessible to individuals with disabilities and individuals with limited English proficiency. Exchanges are also required to operate a toll-free call center and maintain a website that, among other things, allows consumers to compare qualified health plan benefits, costs, and quality ratings, and select and enroll in a plan. Further, exchanges must assist consumers with accessing and obtaining coverage, including providing tools to help consumers access the exchange, determine which plan or program to enroll in, and determine their eligibility for premium tax credits and cost sharing reductions. As part of states’ consumer outreach and assistance activities, each exchange is also required to operate a navigator program, which will provide eligible organizations with grants so they can raise awareness of QHPs’ availability and facilitate consumers’ selection of QHPs. Navigators may include organizations such as trade associations, community and consumer-focused non-profit groups and chambers of commerce. Navigators must maintain expertise in eligibility, enrollment, and program specifications. The entity serving as a navigator must deliver information to the public in a fair, accurate, and impartial manner that is culturally and linguistically appropriate to the needs of the population they serve. 
HHS afforded state-based exchanges the opportunity to use in-person assisters in certain circumstances to ensure that the full range of services that the navigator program will provide in subsequent years is provided during the exchanges’ initial year of operation. State partnership exchanges in which states will assist with consumer assistance functions will be required to establish and operate an in-person assistance program. While in-person assisters may receive the same training as navigators, they are part of a separate and distinct program and can use establishment grants to fund their operation. PPACA requires that exchanges regularly consult with certain groups of stakeholders for all activities, including establishing and operating consumer assistance programs. These stakeholders include educated health care consumers enrolled in QHPs, representatives of small businesses and self-employed individuals, advocates for enrolling hard-to-reach populations, and individuals and entities with experience in facilitating enrollment in health insurance coverage. Further, HHS provided supplemental guidance on activities states may want to consider as part of their outreach and education, including: performing market analysis or an environmental scan to assess outreach and education needs to determine geographic and demographic-based target areas and vulnerable populations for outreach efforts; developing a “toolkit” for outreach to include educational materials and designing a media strategy and other information dissemination tools; and submitting a final outreach and education plan to HHS. 
PPACA allows these exchanges to generate funding for exchange operations in certain ways, such as charging user fees or other assessment fees to exchange-participating health insurance issuers. Under HHS guidance, states are to submit a plan to HHS to demonstrate how their exchanges will be financially sustainable by January 1, 2015. Six of the seven states in our study were conditionally approved by HHS to create a state-based exchange. State exchange officials we interviewed said that, among the reasons that states chose to establish this type of exchange are that it allows the state to (1) maintain consistency between the insurance market inside and outside the exchange, (2) better control its insurance market, and (3) have opportunities to better meet the unique needs of the state’s population. In contrast, Iowa officials said the state opted to partner with HHS due to the high cost of building and maintaining a state-based exchange—which the state estimated to be $15.9 million annually. Iowa officials also reported that, by assuming responsibility over certain exchange activities, such as overseeing and certifying qualified health plans, partnering with HHS allows the state to maintain regulatory control over its insurance market. Iowa officials told us that the state plans to transition to a state-based exchange sometime in the future. To begin building an exchange, six of the seven states have established an operating structure through state legislation or by executive order. As a partnership state, Iowa is not establishing an operating structure at this time because HHS will initially establish and operate the exchange. As Iowa switches to a state-based exchange, it will need to establish an operating structure. As shown in table 1, states varied in how they established their exchange operating structures. For example, three states—New York, Nevada, and Rhode Island—plan to run their exchange as entities within an existing state agency. 
Exchange officials in New York told us that basing the exchange within an existing state agency—New York’s Department of Health—allows the state to leverage established administrative systems and procedures, thereby relieving the exchange from some of the administrative burdens common to start-up organizations. Table 1 also shows that five out of the six states that have established an exchange have also created a governance board that ranges in member composition and expertise. Consistent with HHS regulation, all five governance boards include members that represent consumer interests. All seven states in our review reported taking steps toward certifying QHPs. Two states have decided that their exchanges will have the authority to actively select which QHPs may participate in the exchange. As active purchasers, exchanges can select QHPs by applying additional criteria and negotiating with health insurance issuers, or by a combination of these actions. As table 2 shows, two states decided to organize their exchanges as active purchasers, while the remaining five states will organize their exchanges as passive purchasers, allowing all plans that meet the minimum requirements for QHPs to participate in the exchange. To identify benchmark plans, all selected states analyzed the plans and considered various factors, including whether the plans offered state-required benefits in addition to the essential health benefits (EHB) required under PPACA. In choosing their benchmark plans, all seven states identified plans that included state-mandated benefits that did not exceed PPACA’s EHB requirements. Table 2 shows that five of the seven states recommended benchmark plans to HHS, while two states chose not to identify a benchmark plan and will default to the largest small group plan in their state. All seven states included in our review have taken steps to invite health insurers to participate in their exchanges. 
For example, in January 2013, New York released an invitation to participate and began accepting applications from licensed insurers in the state (and those expected to be licensed by October 2013) for certain QHPs to be offered through the New York exchange. The exchange governing board will review the applications of individual health plans to make sure they meet all federal minimum participation standards and other requirements to be certified as QHPs. Officials reported that the exchange anticipates certifying plans by mid-July 2013, and will be ready for enrollment on October 1, 2013. Minnesota and Oregon requested applications in October 2012 from insurers who wanted to offer QHPs in their states’ exchanges, while the District began accepting applications in April 2013. Insurers certified through the exchange must demonstrate the ability to meet minimum certification requirements, including providing adequate networks, care coordination, and quality measures, among other things. Oregon officials told us the state plans to certify QHPs by the summer of 2013 and begin enrolling consumers in October 2013. All seven states in our review are in various stages of developing an IT infrastructure that can support a streamlined and integrated eligibility and enrollment system. A major focus of the states’ integration activities is redesigning their current Medicaid and CHIP eligibility and enrollment systems. State officials described this as the most significant and onerous aspect of developing an IT infrastructure to support the exchange, given the age and limited functionality of current state systems. All seven states in our review use outdated systems, which lack the capacity to support web-based streamlined processes. 
Further, the majority of states operate multiple eligibility and enrollment systems that serve individuals enrolled not only in Medicaid and CHIP but in other public assistance programs, such as Temporary Assistance for Needy Families (TANF) and the Supplemental Nutrition Assistance Program (SNAP). These separate systems, which may be managed by multiple entities across the state, have limited interface capabilities. For example, similar to other states in our review, Oregon operates multiple enrollment and eligibility systems, whereby only a limited amount of enrollee information is accessible and reusable across multiple programs. In addition, Oregon has multiple interfaces between these programs to support integrated business processes, making systems complex, inflexible, and expensive to maintain. To address these kinds of issues, states are using enhanced federal funding, referred to as the 90 percent match, to either upgrade or rebuild their outdated Medicaid and CHIP eligibility and enrollment systems to meet the requirements under PPACA. As states upgrade their Medicaid and CHIP systems, many are also taking the opportunity to integrate enrollment and eligibility processes for other public assistance programs, such as TANF and SNAP, in order to provide shared services across programs. In addition to upgrading eligibility and enrollment systems, six of the seven states are in various stages of building the exchange IT infrastructure needed to integrate these systems and allow consumers to navigate among health programs and purchase QHPs through a variety of access points, using a single streamlined application. The integrated systems will enable states to collect information needed for eligibility determination and verification, not only from their own state systems, but from federal systems as well. 
These systems are to utilize a federal data services hub provided by CMS, which will serve as a single source of the federal data that are needed to determine eligibility. To use this system, state systems are to transmit requests for data through the federal data services hub to multiple federal agencies, such as the Department of Homeland Security and the Internal Revenue Service. The federal data services hub is to return the data in near real-time back to the state systems where it can be used to verify the information the states collected for determining applicants’ eligibility. Two states—New York and Oregon—are further along in this work than the other states in our review, as they were awarded early innovator grants to develop an IT infrastructure that will integrate Medicaid, CHIP, and other programs. To develop its state integrated systems, Oregon will use a commercial framework that can be easily adopted and used by other states. As part of its approach and consistent with the intent of the early innovator grant, Oregon has begun working with multiple states to share this framework, including their analyses, design, and other components. CCIIO officials indicated that readiness testing of states’ eligibility and enrollment systems for the exchange will begin in March 2013 and continue through August 2013. To date, three of the states in our review—Nevada, New York and Oregon—have begun testing various aspects of their eligibility, enrollment, and federal data services hub functionality with CCIIO. According to CCIIO officials, the remaining states in our review are expected to begin testing over the next few months. Most state officials told us that because of the complexities of developing an integrated and streamlined eligibility and enrollment system, they plan to use a phased approach to implementation to ensure that key system changes are in place before 2014. 
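The hub's role as a single routing point for eligibility data can be illustrated with a minimal sketch. Everything in it is hypothetical: the source names ("irs", "dhs"), the applicant fields, and the checks are invented for illustration, and the real hub's interfaces are defined by CMS guidance rather than shown here.

```python
# Hypothetical sketch: the hub is a single routing point that fans one
# state request out to multiple federal data sources and returns the
# combined result. Source names, fields, and checks are invented here.

def verify_applicant(applicant, federal_sources):
    """Route one eligibility request through a single 'hub' of sources.

    `federal_sources` maps a source name to a callable that verifies
    one aspect of the application and returns True or False.
    """
    return {name: check(applicant) for name, check in federal_sources.items()}

# Illustrative checks standing in for real agency responses.
sources = {
    "irs": lambda a: a["stated_income"] == a["reported_income"],
    "dhs": lambda a: a.get("citizenship_documented", False),
}

result = verify_applicant(
    {"stated_income": 40_000, "reported_income": 40_000,
     "citizenship_documented": True},
    sources,
)
```

The point of the design, as described in the text, is that a state system makes one request and receives near-real-time answers from all federal sources, rather than integrating with each agency separately.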
Specifically, they will focus first on ensuring that new systems are capable of determining eligibility for enrollment in QHPs, Medicaid, CHIP, and the exchange, and will integrate other assistance programs—such as SNAP and TANF— during later stages. While state officials reported they expect to be ready to enroll individuals by October 1, 2013 and are moving forward with IT-related efforts, officials in six states identified challenges they faced with developing aspects of their systems, given compressed timeframes and a lack of clear federal requirements related to the federal data services hub. For example, exchange officials expressed concerns about the timeframes for implementation, because of the complexities and large undertaking of integrating and modernizing these systems. Further, most officials reported that transitioning multiple programs into a streamlined and coordinated eligibility and enrollment system could take years to fully implement. Officials in six states told us that developing business rules for the eligibility and enrollment system was challenging because they did not have complete information on the requirements of the federal data services hub. Because of implementation timelines, however, these officials said they needed to begin IT-related activities before receiving complete federal guidance. Most officials reported they were concerned that this could lead to changes late in the development process. To address this uncertainty, a few states built in flexibility in their requests for proposals when making procurement decisions. Officials in one state also reported that, in order to meet timeframes, modifications to the IT systems will be completed in 2014 (after enrollment begins), based on guidance issued late in the development process. 
CMS has indicated that, while the federal data services hub is still under development, it has released guidance to the states on how to access or verify data through the hub via such sources as webinars, conferences, and other forums. Despite the challenges associated with developing the IT systems, officials in six states reported their systems will be ready for enrollment by October 1, 2013. Six of the seven states included in our review are in various stages of developing a consumer outreach and assistance program to reach out to potential consumers and help them enroll. As a partnership state, Iowa has not yet decided whether and to what extent it will assist HHS with aspects of this function. Most states have contracted with or plan to contract with vendors to design a program. The vendors will assist with the exchanges’ branding, translate materials into multiple languages, and take into account the needs of individuals with disabilities. The vendors will also design and implement communications and marketing plans (for example, radio and television ads) with the goal of enrolling the maximum number of eligible individuals into the exchange. As part of the consumer outreach and assistance programs, states will use a range of tools to provide potential consumers with information and assist them in enrolling in an exchange. These include: Navigators and in-person assistors. Six of the seven states in our review plan to use navigators and assistors to provide in-person enrollment assistance to individuals applying for health insurance, such as assisting individuals with selecting QHPs or providing information to individuals in a way that is culturally and linguistically appropriate. HHS plans to assume responsibility for operating the navigator program in Iowa, since it is a partnership state. Nearly all states told us that assistance will need to be tailored to the unique needs of their populations. 
For example, Nevada officials told us that their program must be able to accommodate individuals who live in Nevada’s remote frontier region, where population density can be as low as two people per square mile and which may lack infrastructure such as Internet access. New York officials told us they will address linguistic and cultural challenges reaching individuals in some of New York City’s more diverse communities. Four states—the District, New York, Oregon, and Rhode Island—plan to leverage state resources within existing health and human services programs to support navigators and assistors. For example, Oregon plans to model its navigator program after a state Medicaid program that provides uninsured individuals with premium assistance and access to health care information and resources. Similarly, New York, which issued a request for application in February 2013 for in-person assistors and navigators, will model its approach after its community assistance programs and will provide assistance through a variety of access points in other local areas across the state. New York officials told us that the state plans to sign contracts with navigators and in-person assistors in the summer of 2013 and begin training them in August or September 2013. Web portals and call centers. Six of the seven states in our review are designing web portals and contact centers as part of their consumer assistance and outreach initiatives. The seventh state, Iowa, is a partnership state and is deferring this responsibility to HHS. State planning documents in the remaining six states indicated that the web portals and the contact centers will be central to assisting residents. State officials told us that web portals, in particular, will ease comparisons among health plans by providing standardized information about each health plan’s premium, benefit structure, and cost-sharing provisions. 
For example, District officials told us that a web portal, which is being developed in conjunction with the IT infrastructure, will be the key access point for consumers to interface with the exchange. Similarly, Minnesota is designing a contact center that will offer multiple modes of assistance through such means as Internet access, telephone, mail, and in-person assistance. State officials told us they expect the customer service functions will be ready to operate on October 1, 2013. Officials in six states in our review reported they are considering a number of revenue options for financially sustaining their exchange. For example, as part of the planning efforts to develop these options, three states—Nevada, Minnesota, and the District—created work groups to recommend options for achieving long-term sustainability. In particular, both Minnesota and Nevada created working groups intended to review and propose financing options to enable the exchange to be self- sustaining by January 1, 2015. While states reported they are considering options to fund ongoing exchange costs, such as salaries and benefits, consulting services, outreach and marketing, and information technology, three states will charge fees to insurance carriers participating in the exchange. Specifically: Oregon will charge an administrative fee to insurance carriers participating in the exchange. In particular, carriers will be required to pay a percentage of the premiums (up to 5 percent) based on the number of enrollees in the exchange. The fee is designed to decrease as enrollment in the exchange increases. For example, if more than 300,000 individuals enroll in the exchange, the state exchange will charge carriers up to a 3 percent fee. If enrollment is at or below 175,000, the state exchange will charge carriers up to a 5 percent fee. Between 100,000 and 120,000 enrollees would be required for the exchange to be self-sustaining using the maximum administrative fee of 5 percent. 
Further, any excess revenues generated above the cost of operating the exchange may be placed in a reserve fund of up to 6 months of operating expenses or returned to insurance carriers. Nevada plans to charge insurance carriers a per member per month fee based on enrollment. In its financial sustainability plan, the state estimated the fee will amount to between $7.13 and $7.78 per member per month, which the state anticipates insurance carriers will build into their QHP premiums. In addition, based on the state’s estimates, the state expects the fee will be paid by the advance premium tax credit. Nevada is also considering other potential sources of supplementary revenue, such as fees charged for stand-alone vision and dental plans. Minnesota plans to charge an administrative fee to insurance carriers participating in the exchange. Specifically, insurers will be required to pay a percentage (about 3.5 percent) of the premiums sold through the exchange. The fee will be based on the volume of insurance premiums for plans sold through the exchange. While the states in our review have developed financing options, some state officials identified challenges with developing these options, given uncertainties related to exchange enrollment. Specifically, financial sustainability will be highly dependent on the size of enrollment and the take-up rate, which is the percent of individuals that are estimated to enroll in coverage out of the entire eligible population. Some state officials reported that estimating enrollment patterns without the benefit of historical data from the exchange could impact revenue projections. Further, according to one state, uptake estimates among various groups are “drastically different,” so that estimating enrollment could result in significantly different per member per month carrier fees required to fund the exchange. 
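The carrier-fee approaches described above can be reduced to two small functions. This is a minimal sketch under the figures stated in the text: Oregon's tier rates for enrollment between 175,000 and 300,000 are not given in the source, so only the stated endpoints are encoded, and Nevada's fee uses the upper bound of the state's estimated range as a default.

```python
def oregon_max_fee_rate(enrollment: int) -> float:
    """Maximum administrative fee rate (as a fraction of premiums) Oregon
    may charge carriers. Only the tier endpoints stated in the text are
    encoded: up to 3 percent above 300,000 enrollees and up to 5 percent
    at lower enrollment; intermediate tiers are not given in the source."""
    return 0.03 if enrollment > 300_000 else 0.05

def nevada_monthly_fee(members: int, per_member_fee: float = 7.78) -> float:
    """Nevada's estimated per-member-per-month carrier fee; the state's
    estimate ranges from $7.13 to $7.78, with the upper bound as default."""
    return members * per_member_fee
```

Both approaches tie revenue directly to enrollment, which is why the enrollment uncertainty discussed above translates straight into revenue uncertainty.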
Officials from two states reported that given these uncertainties, they expect to make adjustments to these estimates over time. We provided a draft of this report to the Secretary of HHS for review and comment. In response, HHS provided technical comments, which we incorporated as appropriate. Additionally, we provided excerpts of the draft report to exchange officials, such as the executive director and chief policy research and evaluation officer, in the seven states we interviewed for this study. We incorporated their technical comments as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of HHS and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions concerning this report, please contact Stanley J. Czerwinski at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. This report addresses the following objectives: (1) identify states’ responsibilities for establishing health benefit exchanges; and (2) describe the actions selected states have taken to establish exchanges and the challenges they have encountered. To identify states’ responsibilities for establishing exchanges and the challenges they encountered, we reviewed selected Patient Protection and Affordable Care Act (PPACA) provisions and Department of Health and Human Services (HHS) implementing regulations and guidance related to the following categories: establishing a governance and operating structure; ensuring exchanges will be capable of certifying qualified health plans; simplifying and streamlining eligibility and enrollment systems; conducting consumer assistance and outreach; and ensuring the financial sustainability of the exchange. 
Our review of HHS’s guidance included HHS’s blueprint for approval of state-based and partnership exchanges, information bulletins, questions and answers, and webinars. We also reviewed reports that have summarized state responsibilities with regard to the categories we included in our study, including those completed by federal agencies monitoring the implementation process and national associations that play a role in assisting states with implementation. Specifically, we reviewed reports from the Congressional Budget Office, the Congressional Research Service, and relevant state associations, such as the National Association of Insurance Commissioners, the National Conference of State Legislatures, the National Association of State Budget Officers, and the National Academy for State Health Policy. To identify actions selected states have taken to create exchanges and the challenges they encountered, we conducted semistructured interviews with officials in seven states: the District of Columbia, Iowa, Minnesota, Nevada, New York, Oregon, and Rhode Island. We selected these states on the basis of: 1. The percentage of the uninsured population in states based on a 3-year average (2008 to 2010); 2. The percentage of the uninsured population in states in 2011; 3. The amount of exchange grants awarded to states on a per capita basis; 4. Geographic dispersion; and 5. The type of exchange states intended to establish, based on data publicly available as of September 27, 2012. Table 3 shows the characteristics of the states selected for our review. We initially selected two states that intended to operate as federally facilitated exchanges—Florida and Maine. However, exchange officials in both states declined to be interviewed. Therefore, this review focused on states’ responsibilities to establish state-based and partnership exchanges. 
We conducted initial interviews in person and by telephone between October and November 2012 and follow-up interviews between February and March 2013. The interview questions focused on states’ actions regarding establishing an exchange and the challenges they encountered in the following areas: establishing an operating and governance structure, developing information technology systems and infrastructure to support a streamlined eligibility and enrollment system, ensuring exchanges will be capable of certifying qualified health plans, creating consumer outreach and assistance, and ensuring the exchange’s financial sustainability. We also met with budget officials in some of the states to discuss the fiscal aspects of establishing exchanges, including how states will ensure exchanges are financially sustainable. The responses to the interviews are not intended to be representative of all state exchange and budget officials. To supplement our interviews, we reviewed state planning, budget, and implementation documents, such as state blueprint applications, business plans, exchange grant applications, and contracting documents. In addition, we conducted interviews with officials from the Centers for Medicare & Medicaid Services (CMS) and CMS’s Center for Consumer Information and Insurance Oversight and relevant state associations, including the National Association of State Budget Officers, the National Conference of State Legislatures, and the National Association of Insurance Commissioners. We conducted our work from September 2011 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, Brenda Rabinowitz, Assistant Director; Kisha Clark, Analyst-in-Charge; Sandra Beattie, Amy Bowser, Robert Gebhart, Sherrice Kerns, Cynthia Saunders, Stacy Ann Spence, and Hemi Tewarson made key contributions to this report. | A central provision of PPACA requires the establishment of exchanges in each state--online marketplaces through which eligible individuals and small business employers can compare and select health insurance coverage from participating health plans. Exchanges are to begin enrollment by October 1, 2013, with coverage to commence January 1, 2014. States have some flexibility with respect to exchanges by choosing to establish and operate an exchange themselves (i.e., state-based), or by ceding this authority to HHS (i.e., federally facilitated). States may also choose to enter into a partnership with HHS whereby HHS establishes the exchange and the state assists with operating various functions. According to HHS, 18 states will establish a state-based exchange, while 26 will have a federally facilitated exchange. Seven states will partner with HHS. GAO was asked to report on (1) states' responsibilities for establishing exchanges, and (2) actions selected states have taken to establish exchanges and challenges they have encountered. To do this work, GAO reviewed PPACA provisions and HHS implementing regulations and guidance. GAO also conducted semistructured interviews with state officials in the District of Columbia, Iowa, Minnesota, Nevada, New York, Oregon, and Rhode Island. For this review, GAO refers to the District of Columbia as a state. GAO selected these states based on several criteria, such as a 3-year average of states' uninsured population and geographic dispersion. HHS and the seven states in our review provided technical comments on this report, which GAO incorporated as appropriate. 
The Patient Protection and Affordable Care Act (PPACA) and the Department of Health and Human Services (HHS) regulations, supplemented by HHS guidance, require states and American Health Benefit Exchanges (exchanges) to carry out a number of key functions, for which state responsibilities vary by exchange type. A state that chooses to operate its exchange is responsible for: (1) establishing an operating and governance structure, (2) ensuring exchanges are capable of certifying qualified health plans and making them available to qualified individuals, (3) developing electronic, streamlined, and coordinated eligibility and enrollment systems, (4) conducting consumer outreach and assistance, and (5) ensuring the financial sustainability of the exchange. A state that partners with HHS may assist HHS with certain functions, such as making qualified health plan recommendations and conducting aspects of consumer outreach and assistance. Despite some challenges, the seven selected states in GAO's review reported they have taken actions to create exchanges, which they expect will be ready for enrollment by the deadline of October 1, 2013. For example: Six states will operate as a state-based exchange, with most choosing this option as a way to maintain control of their insurance markets and better meet the needs of their state's residents. The seventh state--Iowa--will partner with HHS. All seven states have taken steps toward deciding which qualified health plans would be included in the exchange. Two states have decided that their exchanges will have the authority to actively select which qualified health plans may participate in the exchange, while the remaining five states will allow all qualified health plans to participate in the exchange. 
All states are in various stages of developing an information technology (IT) infrastructure, including redesigning, upgrading, or replacing their outdated Medicaid and Children's Health Insurance Program eligibility and enrollment systems. Six states are also building the exchange IT infrastructure needed to integrate systems and allow consumers to navigate among health programs, but identified challenges with the complexity and magnitude of the IT projects, time constraints, and guidance for developing their systems. Six of the seven states included in our review are in various stages of developing a consumer outreach and assistance program to reach out to and help enroll potential consumers. As a partnership state, Iowa has not yet decided whether and to what extent it will assume responsibility for aspects of this function. Officials in the six state-based exchanges reported they are considering revenue options for financially sustaining their exchange. For example, three states plan to charge fees to insurance carriers participating in the exchange. However, some states reported challenges with developing these options, given uncertainties related to exchange enrollment, on which the fees are based. |
DOT is working with the automobile industry, state and local transportation agencies, researchers, private sector stakeholders, and others to lead and fund research on connected vehicle technologies to enable safe wireless communications among vehicles, infrastructure, and travelers’ personal communications devices. Connected vehicle technologies include vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technologies: V2V technologies transmit data between vehicles to enable applications that can warn drivers about potential collisions. Specifically, V2V-equipped cars would emit data on their speed, position, heading, acceleration, size, brake status, and other attributes (referred to as the “basic safety message”) 10 times per second to the on-board equipment of surrounding vehicles, which would interpret the data and provide warnings to the driver as needed. For example, drivers may receive a forward collision warning when their vehicle is close to colliding with the vehicle in front of them. V2V technologies have a greater range of detection than existing sensor-based crash avoidance technologies available in some new vehicles. NHTSA is pursuing actions to require that vehicle manufacturers install the underlying V2V technologies that would enable V2V applications in new passenger cars and light truck vehicles, and requested comment on this issue in an August 2014 Advance Notice of Proposed Rulemaking. We reported on V2V technologies in November 2013. Thus, we are not focusing on these technologies in this report. Vehicle-to-infrastructure (V2I) technologies transmit data between vehicles and the road infrastructure to enable a variety of safety, mobility, and environmental applications. V2I applications are designed to avoid or mitigate vehicle crashes, particularly those crash scenarios not addressed by V2V alone, as well as provide mobility and environmental benefits. Unlike with V2V, DOT is not considering mandating the deployment of V2I technologies. 
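The basic safety message fields and the forward collision warning described above can be sketched in simplified form. The field names, the 3-second warning threshold, and the warning logic are illustrative assumptions; the deployed message set and warning algorithms are defined in SAE standards and by vehicle manufacturers, not here.

```python
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    """Core fields of the V2V 'basic safety message' named in the text.

    Field names are illustrative; the actual SAE J2735 encoding differs.
    """
    vehicle_id: str
    latitude: float
    longitude: float
    speed_ms: float      # meters per second
    heading_deg: float
    accel_ms2: float
    length_m: float
    brake_active: bool

BROADCAST_HZ = 10  # messages emitted per second, as stated in the text

def forward_collision_warning(gap_m: float,
                              own_speed_ms: float,
                              lead_speed_ms: float,
                              warn_ttc_s: float = 3.0) -> bool:
    """Warn when time-to-collision with the lead vehicle drops below a
    threshold. The 3-second threshold is an illustrative assumption."""
    closing_speed = own_speed_ms - lead_speed_ms
    if closing_speed <= 0:
        return False  # not closing on the lead vehicle
    return gap_m / closing_speed < warn_ttc_s
```

Because each surrounding vehicle broadcasts such a message ten times per second, a receiving vehicle can keep a fresh estimate of every neighbor's gap and closing speed and evaluate a check like this continuously.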
V2I applications rely on data sent between vehicles and infrastructure to provide alerts and advice to drivers. For example, the Spot Weather Impact Warning application is designed to detect unsafe weather conditions, such as ice or fog, and notify the driver if reduced speed or an alternative route is recommended (see left side of figure 1). DOT is also investigating the development of V2I mobility and environmental applications. For example, the Eco-Approach and Departure at Signalized Intersections application alerts drivers of the most eco-friendly speed for approaching and departing signalized intersections to minimize stop-and-go traffic and idling (see right side of figure 1), and eco-lanes, combined with eco-speed harmonization, would provide speed limit advice to minimize congestion and maintain consistent speeds among vehicles in dedicated lanes. DOT is also pursuing the development of V2I mobility applications that are designed to provide traffic signal priority to certain types of vehicles, such as emergency responders or transit vehicles. In addition, other types of V2I mobility applications could capture data from vehicles and infrastructure (for example, data on current traffic volumes and speed) and relay real-time traffic data to transportation system managers and drivers. For example, after receiving data indicating vehicles on a particular roadway were not moving, transportation system managers could adjust traffic signals in response to the conditions, or alert drivers of alternative routes via dynamic message signs located along the roadway. In addition to receiving alerts via message signs, these applications could also allow drivers to receive warnings through on-board systems or personal devices. 
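As a rough illustration of the Eco-Approach logic described above, an application could compute an advisory speed that lets the vehicle reach the intersection just as the signal turns green, avoiding a full stop. The formula and the speed bounds are illustrative assumptions, not values from DOT's application.

```python
from typing import Optional

def eco_advisory_speed(distance_m: float,
                       seconds_until_green: float,
                       min_speed_ms: float = 5.0,
                       max_speed_ms: float = 15.0) -> Optional[float]:
    """Speed (m/s) at which a vehicle would arrive as the light turns green.

    Returns None when no speed within the allowed band avoids stopping,
    in which case the driver would simply be advised to slow down.
    The speed bounds are illustrative placeholders, not DOT values.
    """
    if seconds_until_green <= 0:
        return max_speed_ms  # signal already green: proceed at the limit
    target = distance_m / seconds_until_green
    if target < min_speed_ms:
        return None  # would require crawling, so a stop is unavoidable
    return min(target, max_speed_ms)
```

For instance, a vehicle 100 meters from a signal that turns green in 10 seconds would be advised to hold 10 m/s rather than speed up and then idle at the red.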
Japan has pursued this approach through its ITS Spot V2I initiative, which uses roadside devices located along expressways to simultaneously collect data from vehicles to allow traffic managers to identify congestion, while also providing information to drivers regarding upcoming congestion and alternative routes. To communicate in a connected vehicle environment, vehicles and infrastructure must be equipped with dedicated short-range communications (DSRC), a wireless technology that enables vehicles and infrastructure to transmit and receive messages over a range of about 300 meters (nearly 1,000 feet). As previously noted, V2V-equipped cars emit data on their speed, position, heading, acceleration, size, brake status, and other data (referred to as the “basic safety message”) 10 times per second to the surrounding vehicles and infrastructure. V2I-equipped infrastructure can also transmit data to vehicles, which can be used by on-board applications to issue appropriate warnings to the driver when needed. According to DOT, DSRC is considered critical for safety applications due to its low latency, high reliability, and consistent availability. In addition, DSRC transmits in a broadcast mode, providing data to all potential users at the same time. Stakeholders and federal agencies have noted that DSRC’s ability to reliably transfer messages between infrastructure and rapidly moving vehicles is an essential component to detecting and preventing potential collisions. DSRC technology uses radiofrequency spectrum to wirelessly send and receive data. The Federal Communications Commission (FCC), which manages spectrum for nonfederal users, including commercial, private, and state and local government users, allocated 75 megahertz (MHz) of spectrum—the 5.850 to 5.925 gigahertz (GHz) band (5.9 GHz band)—for the primary purpose of improving transportation safety and adopted basic technical rules for DSRC operations.
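The roughly 300-meter DSRC range noted above implies a simple geometric check: given reported positions, is a vehicle close enough to a roadside unit to exchange messages? The sketch below uses a standard great-circle distance calculation; the 300-meter figure is the nominal range from the text, not a guaranteed radio property.

```python
import math

DSRC_RANGE_M = 300  # nominal DSRC range described above (~1,000 feet)

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two (lat, lon) points, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_dsrc_range(vehicle, roadside_unit):
    """True if a vehicle's reported (lat, lon) is within nominal DSRC range
    of a roadside unit's (lat, lon)."""
    return distance_m(*vehicle, *roadside_unit) <= DSRC_RANGE_M
```

Actual radio range varies with terrain, antennas, and interference, which is part of why the spectrum-sharing questions discussed later matter.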
However, in response to increased demands for spectrum, FCC has requested comment on allowing other devices to “share” the 5.9 GHz band with DSRC technologies. V2I equipment may vary depending on the location and the type of application being used, although in general, V2I components in the connected vehicle environment include an array of roadside equipment (RSE) that transmits and receives messages with vehicles for the purpose of supporting V2I applications (see figure 2). For example, a V2I-equipped intersection would include:
Roadside units (RSU)—a device that operates from a fixed position and transmits data to vehicles. This typically refers to a DSRC radio, which is used for safety-critical applications that cannot tolerate interruption, although DOT has noted that other technologies may be used for non-safety-critical applications.
A traffic signal controller that generates the Signal Phase and Timing (SPaT) message, which includes the signal phase (green, yellow, and red) and the minimum and maximum allowable time remaining for the phase for each approach lane to an intersection. The controller transfers that information to the RSU, which broadcasts the message to vehicles.
A local or state back office, private operator, or traffic management center that collects and processes aggregated data from the roads and vehicles. As previously noted, these traffic management centers may use aggregated data that is collected from vehicles (speed, location, and trajectory) and stripped of identifying information to gain insights into congestion and road conditions as well.
Communications links (such as fiber optic cables or wireless technologies) between roadside equipment and the local or state back office, private operator, or traffic management center. This is typically referred to as the “backhaul network.”
Support functions, such as underlying technologies and processes to ensure that the data being transmitted are secure.
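The SPaT content described above—per approach lane, the current signal phase plus the minimum and maximum allowable time remaining in it—can be sketched as a data structure. The structure and field names below are illustrative assumptions, not the standardized SPaT encoding.

```python
from dataclasses import dataclass
from typing import Dict

# Illustrative sketch of SPaT content as described above: for each approach
# lane, the current phase and the min/max allowable time remaining in it.
@dataclass
class PhaseState:
    phase: str              # "green", "yellow", or "red"
    min_remaining_s: float  # minimum allowable time left in this phase
    max_remaining_s: float  # maximum allowable time left in this phase

@dataclass
class SpatMessage:
    intersection_id: str
    lanes: Dict[str, PhaseState]  # keyed by approach lane identifier

def build_spat(intersection_id: str, controller_state: Dict[str, tuple]) -> SpatMessage:
    """Package traffic signal controller output into the message that the
    roadside unit (RSU) would broadcast to approaching vehicles."""
    lanes = {lane: PhaseState(*state) for lane, state in controller_state.items()}
    return SpatMessage(intersection_id, lanes)
```

In the flow the report describes, the controller produces `controller_state`, `build_spat` packages it, and the RSU broadcasts the result over DSRC.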
DOT, state and local transportation agencies, academic researchers, and private sector stakeholders are engaged in a number of efforts to develop and test V2I technologies and applications, as well as to develop the technology and systems that enable V2I applications. DOT’s V2I work is funded through its connected vehicle research program. DOT’s initial connected vehicle research focused on V2I technologies; however, it shifted its focus to V2V technologies because they are projected to produce the majority of connected vehicle safety benefits and they do not require the same level of infrastructure investment as V2I technologies. After conducting much of the research needed to inform its advanced notice of proposed rulemaking to require that vehicle manufacturers install V2V technologies in new passenger cars and light truck vehicles, DOT is now shifting its focus back to V2I technologies, and some of the technical work needed to develop V2V applications has also informed the development of V2I. A number of DOT agencies are involved with the development and deployment of V2I technologies. In addition, private companies have received contracts from DOT to develop the underlying concept of operations and technologies to support V2I applications, and auto manufacturers are collaborating with DOT in its efforts to develop and pilot certain V2I applications and the underlying technologies to support them. State and local transportation agencies, which will ultimately be deploying V2I technologies on their roads, have also pursued efforts to test V2I technologies in real-world settings. However, to date, only small research deployments (such as those described below) have occurred to test V2I technologies:
The Safety Pilot Model Deployment: DOT partnered with the University of Michigan Transportation Research Institute to collect data to help estimate the effectiveness of connected vehicle technologies and their benefits in real-world situations.
The pilot was conducted in Ann Arbor, Michigan, from August 2012 to February 2014, and included roughly 2,800 V2V-equipped cars, trucks, and buses, as well as roadside V2I equipment placed at 21 intersections, three curve-warning areas, and five freeway sites. While the primary focus was on V2V technologies, the pilot also evaluated V2I technology, such as Signal Phase and Timing (SPaT) technologies. DOT officials stated that the department would release six reports with findings from the Safety Pilot in mid to late 2015, although these reports will primarily focus on V2V applications. As of July 2015, DOT has released one report that included an evaluation of how transit bus drivers responded to V2V and V2I warnings, and of how well the test applications performed in providing accurate warnings. The two V2I applications included were a curve speed warning and a warning that alerts the bus driver if pedestrians are in the intended path of the bus when it is turning at an intersection.
Connected Vehicle Pooled Fund Study: A group of state transportation agencies, with support from the FHWA, established the Connected Vehicle Pooled Fund Study. The study aims to aid transportation agencies in justifying and promoting the large-scale deployment of a connected vehicle environment and applications through modeling, development, engineering, and planning activities. To achieve this goal, the study funds projects that facilitate the field demonstration, deployment, and evaluation of connected vehicle infrastructure and applications. For example, the University of Arizona and the University of California at Berkeley are collaborating on a project to develop and test an intelligent traffic-signal system that could, among other things, provide traffic signal priority for emergency and transit vehicles, and allow pedestrians to request more time to cross the street.
Crash Avoidance Metrics Partners, LLC (CAMP): CAMP—a partnership of auto manufacturers that works to accelerate the development and implementation of crash avoidance countermeasures—established a V2I Consortium that focuses on addressing the technical issues related to V2I. In 2013, DOT awarded a cooperative agreement to CAMP, with a total potential federal share of $45 million, to develop and test V2I safety, mobility, and environmental applications, as well as the underlying technology needed to support the applications, such as security and GPS-positioning technologies. According to an FHWA official, CAMP’s current efforts include developing, testing, and validating up to five V2I safety applications, as well as a prototype for Cooperative Adaptive Cruise Control, an application that uses V2V and V2I technology to automatically maintain the speed of and space between vehicles. In addition to CAMP, automakers have established the Vehicle Infrastructure Integration Consortium, which coordinates with DOT on connected vehicle policy issues, such as interoperability of V2I technologies.
Test Beds: DOT, state and local agencies, and universities have established connected vehicle test beds. Test beds provide environments (with equipped vehicles and V2I roadside equipment) that allow stakeholders to create, test, and refine connected vehicle technologies and applications. This includes DOT’s Southeast Michigan Test Bed, which has been in operation since 2007 to provide a real-world setting for developers to test V2I and V2V concepts, applications, technology, and security systems. In addition, state agencies and universities have established their own test beds. For example, the University Transportation Center in Virginia, in collaboration with the Virginia Department of Transportation, established the Northern Virginia Test Bed to develop and test V2I applications, some of which target specific problems—like congestion—along the I-66 corridor.
DOT offers guidance on how research efforts can become DOT-affiliated test beds, with the goal of enabling test beds to share design information and lessons learned, as well as to create a common technical platform. According to DOT, there are over 70 affiliated test bed members. The deployment of connected vehicle infrastructure to date has been conducted in test beds in locations such as Arizona, California, Florida, Michigan, New York, and Virginia. Additionally, officials from some of these test beds told us they may apply to the Connected Vehicle Pilot Deployment Program later this year (see below).
The Connected Vehicle Pilot Deployment Program: Over the next 5 years, DOT plans to provide up to $100 million in funding for a number of pilot projects that are to design and deploy connected vehicle environments (composed of various V2I and V2V technologies and applications) to address specific local needs related to safety, mobility, and the environment. As envisioned, there are to be multiple pilot sites, with each site having different needs, purposes, and applications. The program solicitation notes that successful elements of the pilot deployments are expected to become permanent operational fixtures in the real-world setting (rather than limited to particular testing facilities), with the goal of creating a foundation for expanded and enhanced connected vehicle deployments. FHWA solicited applications for the pilot program from January through March 2015. According to DOT, the initial set of pilot deployments (Wave 1 award) is expected to begin in Fall 2015, with a second set (Wave 2 award) scheduled to begin in 2017. Pilot deployments are expected to conclude in September 2020. DOT and other stakeholders have worked to provide guidance to help state and local agencies pursue V2I deployments, since it will be up to state and local transportation agencies to voluntarily deploy V2I technologies.
In September 2014, FHWA issued and requested comment on draft V2I deployment guidance intended to help transportation agencies make appropriate V2I investment and implementation decisions. For example, the guidance includes information on planning deployments, federal funding that can be used for V2I equipment and operations, technical requirements for equipment and systems, and applicable regulations, among other things. FHWA is updating the guidance and creating complementary guides, best practices, and toolkits, and officials told us they expect the revised guidance to be released by September 2015. In addition, the American Association of State Highway and Transportation Officials (AASHTO), in collaboration with a number of other groups, developed the National Connected Vehicle Field Infrastructure Footprint Analysis. This report provides a variety of information and guidance for state and local agencies interested in V2I implementation, including a description of benefits; various state/local based scenarios for V2I deployments; underlying infrastructure and communications needs; timelines and activities for deployment; estimated costs and workforce requirements; and an identification of challenges that need to be addressed. AASHTO, with support from the Institute of Transportation Engineers and the Intelligent Transportation Society of America, is also leading a V2I Deployment Coalition. The Coalition has several proposed objectives: support implementation of FHWA V2I deployment guidance; establish connected vehicle deployment strategies; and support standards development. According to information from the coalition and DOT, the V2I Deployment Coalition will be supported by technical teams drawn from DOT, trade associations, transportation system owners/operators, and auto manufacturers. While early pilot-project deployment of V2I technologies is occurring, V2I technologies are not likely to be extensively deployed in the United States for the next few decades.
According to DOT, V2I technologies will likely be slowly deployed in the United States over a 20-year period as existing infrastructure systems are replaced or upgraded. DOT has developed a connected vehicle path to deployment that includes steps such as releasing the final version of FHWA’s V2I deployment guidance for state and local transportation agencies (September 2015), and awarding and evaluating the Connected Vehicle Pilot Deployment Program projects in two phases, with the first phase of awards occurring in September 2015 and evaluation occurring in 2019, and the second phase of awards occurring in September 2017 and evaluation occurring in 2021. In addition, DOT officials noted that V2I will capitalize on V2V, and its deployment will lag behind the V2V rulemaking. NHTSA will issue a final rule specifying whether and when manufacturers will be required to install V2V technologies in new passenger cars and light trucks. In addition, FCC has not made a decision about whether spectrum used by DSRC can be shared with unlicensed devices, which could affect the time frames for V2I deployment. Even after V2I technologies and applications have been developed and evaluated through activities such as the pilot program, it will take time for state and local transportation agencies to deploy the infrastructure needed to provide V2I messages, and for drivers to purchase vehicles or equipment that can receive V2I messages. AASHTO estimated that 20 percent of signalized intersections will be V2I-capable by 2025, and 80 percent of signalized intersections will be V2I-capable by 2040. Similarly, AASHTO estimated that 90 percent of light vehicles will be V2V-equipped by 2040. However, DOT officials noted that environmental and mobility benefits can occur even without widespread market penetration and that other research has indicated certain intersections may be targeted for deployment.
Similarly, in its National Connected Vehicle Field Infrastructure Footprint Analysis, AASHTO noted that early deployment of V2I technologies will likely occur at the highest-volume signalized intersections, which could potentially address 50 percent of intersection crashes. See figure 3 for a list of planned events and milestones related to DOT’s path to deployment of connected vehicle technologies. According to experts and industry stakeholders we interviewed, there are a variety of challenges that may affect the deployment of V2I technologies including: (1) ensuring that possible sharing with other wireless users of the radiofrequency spectrum used by V2I communications will not adversely affect V2I technologies’ performance; (2) addressing states’ lack of resources to deploy and maintain V2I technologies; (3) developing technical standards to ensure interoperability between devices and infrastructure; (4) developing and managing a data security system and addressing public perceptions related to privacy; (5) ensuring that drivers respond appropriately to V2I warnings; and (6) addressing the uncertainties related to potential liability issues posed by V2I. DOT is collaborating with the automotive industry and state transportation officials, among others, to identify potential solutions to these challenges. As previously noted, V2I technologies depend on radiofrequency spectrum, which is a limited resource in high demand due in part to the increase in mobile broadband use. To address this issue, the current and past administrations, Congress, FCC, and others have proposed a variety of policy, economic, and technological solutions to support the growing needs of businesses and consumers for fixed and mobile broadband communications by providing access to additional spectrum. 
One proposed solution, introduced in response to requirements in the Middle Class Tax Relief and Job Creation Act of 2012, would allow unlicensed devices to share the 5.9 GHz band radiofrequency spectrum that had been previously set aside for the use of DSRC-based ITS applications such as V2I and V2V technologies. FCC issued a Notice of Proposed Rulemaking in February 2013 that requested comments on this proposed solution. DOT officials and 17 out of 21 experts we interviewed considered the proposed spectrum sharing a significant challenge to deploying V2I technologies. DSRC systems support safety applications that require the immediate transfer of data between entities (vehicle, infrastructure, or other platforms). According to DOT officials, delays in the transfer of such data due to harmful interference from unlicensed devices may jeopardize crash avoidance capabilities. Experts cited similar concerns, with one state official saying that if they deploy applications and they do not work due to harmful interference, potential users may not accept V2I. Seven experts we interviewed agreed that further testing was needed to determine if sharing would result in harmful interference to DSRC. In addition, DOT officials noted that changing to a shared 5.9 GHz band could impact current V2I research, which is based on the assumption that DSRC systems will have reliable access to the 5.9 GHz wireless spectrum. According to Japanese government officials we interviewed, Japan also considered whether to share its dedicated spectrum with unlicensed devices and decided not to allow sharing of the spectrum used for V2I in the 700 MHz band. According to officials we interviewed, Japan’s Ministry of Internal Affairs and Communications conducted a study to test interference with V2I technologies and mobile phones to determine the impact on reliability and latency in delivering safety messages. 
Based on these tests, the Japanese government decided not to allow sharing of the spectrum band used for V2I, because sharing could lead to delays or harmful interference with V2I messages. Japanese auto manufacturers we interviewed supported the decision of the Japanese government to keep the 700 MHz band dedicated to transportation safety uses. According to officials, if latency problems affect the receipt of safety messages, this could degrade the public’s trust, consequently slowing down acceptance of the V2I system in Japan. Since the Notice of Proposed Rulemaking was announced, various organizations have begun efforts to evaluate potential spectrum sharing in the 5.9 GHz band, and some have expressed concerns. For example, some are concerned that harmful interference from unlicensed devices sharing the same band could affect the speed at which a V2I message is delivered to a driver. NTIA, which has conducted a study on the subject, identified risks associated with allowing unlicensed devices to operate in the 5.9 GHz band, and concluded that further work was needed to determine whether and how the risks identified can be mitigated. DOT also plans to evaluate the potential for unlicensed device interference with DSRC, as discussed below. Given the pending FCC rulemaking decision, DOT, technology firms, and car manufacturers have taken an active role in pursuing solutions to spectrum sharing. Specifically, DOT’s fiscal year 2016 budget request included funds for technical analysis to determine whether DSRC can co-exist with the operation of unlicensed wireless services in the same radiofrequency band without undermining safety applications. According to DOT officials, since industry has not yet developed an unlicensed device capable of sharing the spectrum, the agency does not have a specific date for completion of this testing at this time. DOT officials noted, however, that they would work with NTIA in any spectrum-related matter to inform FCC of its testing results.
According to FCC officials we spoke with, FCC is currently collecting comments and data from government agencies, industry, and other interested parties and will use this information to inform its decision. For example, since 2013, representatives from Toyota, Denso, CSR Technology, and other firms worked together as part of the Institute of Electrical and Electronics Engineers (IEEE) DSRC Tiger Team to evaluate potential options and technologies that would allow unlicensed devices to use the 5.9 GHz band without causing harmful interference to licensed devices. However, the representatives did not reach an agreement on a unified spectrum-sharing approach. In another ongoing effort, Cisco Systems, the Alliance of Automobile Manufacturers, and the Association of Global Automakers are preparing to test whether unlicensed devices using the “listen, detect and avoid” protocol would be able to share spectrum without causing harmful interference to incumbent DSRC operations. As of September 2015, FCC has not announced a date by which it will make a decision. Because the deployment of V2I technologies will not be mandatory, the decision to invest in these technologies will be up to the states and localities that choose to use them as part of their broader traffic-management efforts. However, many states and localities may lack resources for funding both V2I equipment and the personnel to install, operate, and maintain the technologies. In its report on the costs, benefits, and challenges of V2I deployment by local transportation agencies, the National Cooperative Highway Research Program (NCHRP) noted that many of the states it interviewed said that their current state budgets are the leanest they have been in years. Furthermore, states are affected because traditional funding sources, such as the Highway Trust Fund, are eroding, and funding is further complicated by the federal government’s current financial condition and fiscal outlook.
Consequently, there can be less money for state highway programs that support construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes, as well as for other authorized purposes. According to one stakeholder we interviewed, there have been widespread funding cuts for state DOTs, and many state DOTs must first focus on maintaining the infrastructure and equipment they already have before investing in advanced technologies. Ten experts we interviewed, including six experts from state and local transportation agencies, agreed that the lack of state and local resources will be a significant challenge to deploying V2I technologies. According to one report, without additional federal funding, deploying V2I systems would be difficult. Even if states decide to invest in V2I deployment, states and localities may face difficulties finding the resources necessary to operate and maintain V2I technologies. We have previously found that effectively using intelligent transportation systems, like V2I, depends on agencies’ having the staff and funding resources needed to maintain and operate the technologies. However, a recently released DOT report noted that staffing and information technology resources for maintaining V2I technologies were lacking in most agencies due to low and uncompetitive wage rates and funding constraints at the state and local government levels. Similarly, 12 experts we interviewed stated that states and localities generally lack the resources to hire and train personnel with the technical skills needed to operate and maintain V2I systems. According to FHWA’s draft guidance on V2I deployment, funds are available for the purchase and installation of V2I technologies under various Federal-aid highway programs. 
In addition, costs that support V2I systems, including maintenance of roadside equipment and related hardware, are eligible in the same way that other Intelligent Transportation System (ITS) equipment and programs are eligible. According to DOT, states have the authority and responsibility to determine the priority for funding V2I systems along with other competing transportation programs. Japan’s V2I systems, which were also voluntarily deployed, were funded in large part by the national government. According to Japan’s National Police Agency, half of the costs for traffic signals were provided by the national government. In addition, according to the National Police Agency, the Japanese government has invested an estimated $97 million (2014 dollars) in research and development for these systems. Two of the Japanese automakers we interviewed attributed the success of the Japanese V2I system in part to the significant government involvement and financial investment. Furthermore, according to a study on international connected vehicle technologies, Japan’s nationally deployed and funded infrastructure devices allowed industry partners to test and release connected vehicle technologies. Nineteen of the 21 experts we spoke with reported that establishing technical standards is essential for all connected vehicle programs, including V2I, and will be challenging for a number of reasons. According to DOT, such standards define how systems, products, and components perform, how they can connect, and how they can exchange data to interoperate. DOT further noted that these standards are necessary for connected vehicle technologies to work on different types of vehicles and devices to ensure the integrity and security of their data transmission. In addition, current standardization efforts have focused on standardizing the data elements and message sets that are transmitted between vehicles and the infrastructure.
According to DOT officials, DOT and various organizations have worked with the Society of Automotive Engineers (SAE) International to standardize the message sets and associated performance requirements for DSRC (SAE J2735 and J2945), which support a wide variety of V2V and V2I applications. DOT, SAE International, and engineers from auto manufacturers, V2I suppliers, technology firms, and other firms meet to develop high-quality, safe, and cost-effective standards for connected vehicle devices and technologies, according to an expert from a leading industry organization specializing in setting connected vehicle technical standards. This expert also noted that developing consensus around what standards should be instituted could be difficult given the different interests (political, economic, or industry-related) of the many stakeholders involved in developing and deploying V2I technologies. For example, the expert said that developing effective security standards required for these technologies that are also cost-effective for auto manufacturers and government organizations to implement may be difficult. Without common standards, V2I technologies may not be interoperable. DOT has noted that consistent, widely applicable standards and protocols are needed to ensure V2I interoperability across devices and applications. However, ensuring interoperability with a standard set of V2I applications in each state may be particularly challenging because, unlike V2V, deployment of V2I technologies will remain voluntary. Consequently, states and localities may choose to deploy a variety of different V2I technologies—or no technologies at all—based on what they deem appropriate for their transportation needs.
DOT officials we interviewed recognized that a complete national deployment of V2I technologies may never occur, resulting in a patchwork deployment of different applications in localities and states, although these applications will be required to be interoperable with one another. As a result, V2I deployment may be challenged by the following limitations:
Benefits may not be optimized: Four experts we interviewed said that having a standard set of V2I applications in each state would be beneficial for drivers because a consistent deployment of applications could potentially increase benefits.
Development of applications may be more limited: AASHTO’s National Connected Vehicle Field Infrastructure Footprint Analysis argues that the more connected vehicle infrastructure is deployed nationwide using common standards, the more likely applications will be developed to take advantage of new safety, mobility, and environmental opportunities.
Drivers may not find the system valuable: One expert from a state agency said that without a standard set of V2I applications that allows drivers to use V2I applications seamlessly as they travel from state to state, travelers may lose confidence in the usefulness of the system and choose not to use it.
DOT and standardization organizations, such as SAE International, are working to develop standards to support DSRC and other V2I communications technologies. The data elements and message sets specified in the SAE standards are suitable not only for use with DSRC but also with other communications technologies such as cellular. According to DOT officials, the department is providing funding support, expert participation, and leadership in multiple standards development organizations to promote consensus on the key standards required to support nationally interoperable V2I and V2V technology deployments.
Furthermore, the V2I Deployment Coalition—which includes AASHTO, the Institute of Electrical and Electronics Engineers, and the Institute of Transportation Engineers—intends to lead the effort to develop and support the publication of V2I standards, guidelines, and test specifications to support interoperability. To facilitate standardization among potential state users of V2I technologies, FHWA is currently developing deployment guidance as discussed previously. According to DOT, that guidance will include specifications to ensure interoperability and to assist state and local agencies in making appropriate investment and implementation decisions for those agencies that will deploy, operate, and maintain V2I systems. In addition to developing V2I standards across the United States, five experts we interviewed mentioned the importance of international harmonization for V2I technologies. Auto manufacturer experts recognized the importance of developing standards at both a domestic and international level as cars are manufactured globally. However, this is a challenge because international standardization organizations, including those in Europe and Japan, have different verification and validation processes than the United States, according to an auto manufacturer expert. Furthermore, another expert noted that harmonization of standards is dependent on the country’s or regional government’s regulations, and since there are different views on the role of these regulations in Europe, Japan, and the United States, achieving global standards will be complex. According to DOT, the joint standardization of connected vehicle systems (V2V and V2I) is a core objective of European Union-U.S. cooperation on ITS, and U.S.-Japan staff exchanges have been invaluable in building relationships and facilitating technical exchange, thus creating a strong foundation for ongoing collaboration and research.
According to DOT officials, even when identical standards are not viable across multiple countries or regions due to technical or legal differences, maximizing similarities can increase the likelihood that common hardware and software can be used in multiple markets, reducing costs and accelerating deployment. According to officials from one Japanese auto manufacturer we interviewed, developing a standard message set for V2I communications in Japan was a long and challenging process that took over 5 years of discussion among auto manufacturers. According to DOT, for connected vehicle technologies to function safely, security and communications infrastructure need to enable and ensure the trustworthiness of messages between vehicles and infrastructure. The source of each message needs to be trusted, and message content needs to be protected from outside interference or attacks on the system's integrity. A DOT study we reviewed and the majority of the experts we interviewed noted data security challenges ranging from securing messages delivered to and from vehicle devices and infrastructure to managing security credentials and the associated policies for accessing data and the system. Fourteen of 21 experts we interviewed cited securing data as a significant challenge to the deployment of V2I technologies. For example, experts from 5 states and one local agency that operated V2I test beds told us they were uncertain how vehicle and infrastructure data would be stored and secured for a larger deployment of V2I technologies because they have only tested V2I applications in limited, small-scale deployments. Most of these experts were also unsure whether current data security efforts would scale to a larger deployment. According to DOT officials, they are currently researching this area. DOT and industry have taken steps to develop a security framework for all connected vehicle technologies, including V2I.
DOT, along with automakers from Crash Avoidance Metrics Partners, LLC (CAMP), is testing and developing the Security Credential Management System (SCMS) to ensure that basic safety messages are secure and come from an authorized device. More than half of the experts we interviewed expressed a variety of concerns about (1) the SCMS itself, including whether it can ensure a trusted and secure data exchange, and (2) who will ultimately manage the system. To solicit input on these issues, DOT launched a Request for Information in October 2014 to obtain feedback in developing the organizational and operating structure for the SCMS. In our previous work on V2V, we found that as a part of its research on the security system, DOT had identified three potential models—federal, public-private, and private. We previously found that if a federal model were pursued, according to DOT, the federal government would likely pursue a service contract that would include specific provisions to ensure adequate market access, privacy and security controls, and reporting and continuity of services. We also reported that under a public-private partnership, the security system would be jointly owned and managed by the federal government and private entities. At the time of our prior report, DOT officials stated that the agency's legal authority and resources had led NHTSA to focus primarily on working with stakeholders to develop a viable private model, involving a privately owned and operated security-management provider. According to DOT officials, the agency is expanding the scope of its planned policy research to enable the Department to play a more active leadership role in working with V2V and V2I stakeholders to develop and prototype a private, multi-stakeholder organizational model for a V2V SCMS. Officials said that such a model would ensure organizational transparency and fair representation of stakeholders and would permit the federal government to play an ongoing advisory role.
A central component of the Department’s planned policy research is the development of policies and procedures that could govern an operational SCMS, including minimum standards to ensure security and appropriately protect consumer privacy. Currently, NHTSA is reviewing comments on the management and organization for SCMS to inform its V2V Notice of Proposed Rulemaking, expected to be submitted for Office of Management and Budget review by the end of 2015. In addition, according to DOT’s Connected Vehicle Pilot Deployment Program request for proposals, participating state and local agencies will utilize SCMS as a tool to support deployment security, which will allow states, local agencies, and private sector firms an opportunity to test capabilities in a real-world setting. Ultimately, when asked about the sufficiency of SCMS, almost half of the experts we interviewed (10 of 21) indicated they were confident that a secure system for V2I could be developed. According to FHWA, a secure system is essential to appropriately protect the privacy of V2I users. Nine of the experts identified privacy as a significant challenge for the deployment of V2I technologies. For example, the public may perceive that their personal information could be exposed or their vehicle could be tracked using connected vehicle technologies. In a connected vehicle environment, various organizations—federal, state, and local agencies; academic organizations; and private sector firms—potentially may have access to data generated by V2I technologies in order to, for example, manage traffic and conduct research. DOT has taken some steps to mitigate security and privacy concerns related to V2V and V2I technologies. 
According to DOT officials, the safety message will be broadcast over a very limited range (approximately 300 meters) and will not contain any information that identifies a specific driver, owner, or vehicle (through vehicle identification numbers or license plate or registration information). The messages transmitted by DSRC devices (such as roadside units) in support of V2V and V2I technologies also will be signed by security credentials that change on a periodic basis (currently expected to be every 5 minutes) to minimize the risk that a third party could use the messages as a basis for tracking the location or path of a specific individual or vehicle. Additionally, according to DOT officials, car manufacturers and V2I suppliers plan to incorporate privacy by design into V2I technologies. Under this approach, according to DOT, V2I data will be aggregated and anonymized. NHTSA is also conducting a V2V privacy risk assessment and intends to publish a Privacy Impact Assessment in connection with its V2V Notice of Proposed Rulemaking, which is expected to include an analysis of data collected, transmitted, stored, and disclosed by the V2V system components and other entities in relation to privacy concerns. The Department expects the V2V privacy risk research and the Privacy Impact Assessment to influence the development of policies, including security and privacy policies with regard to V2I. Furthermore, according to DOT, the V2I Deployment Coalition also plans to identify privacy and data issues at the state and county level. According to Japanese officials we interviewed from the Ministry of Land, Infrastructure, Transport, and Tourism (MLIT), Japan took a number of steps to address the security and privacy of its V2I system.
First, Japan’s Intelligent Transportation Systems Technology Enhancement Association is responsible for managing the security of Japan’s V2I systems and developed a system that uses encryption to maintain security and ensure privacy. More specifically, each vehicle participating in V2I is assigned a changing, random identification number each time the vehicle is started, thus making it difficult to track the vehicle over time. MLIT officials also noted that data generated from each vehicle are not stored permanently but rather are saved for distinct time frames depending on their use. Further, MLIT officials stated that security is ensured because V2I information is protected, anonymous, non-identifiable, and not shared with outside organizations; rather, it is used solely for public safety purposes. According to National Police Agency officials, no significant security issue had occurred with V2I technologies as of July 2015. Because V2I data will initially provide alerts and warning messages to drivers, the ultimate effectiveness of these technologies, especially as it relates to safety, depends on how well drivers respond to the warning messages. In a November 2013 report on V2V technologies, we found that addressing the human factors that affect how drivers respond includes (1) minimizing the risk that drivers could become too familiar with or overly reliant upon warnings over time and fail to exercise due diligence in responding to them, (2) assessing the risk that warnings could distract drivers and present new safety issues, and (3) determining what types of warnings will maximize driver response. Seven of the 21 experts we interviewed identified human factors issues as significant to V2I deployment. To address these concerns, DOT is participating in a number of research efforts to determine the effects of new technologies on driver distraction.
To further examine the effects on drivers using V2I applications, NHTSA has a research program in place to develop human factors principles that may be used by automobile manufacturers and suppliers as they design and deploy V2I technology and other driver-vehicle interfaces that provide warnings to drivers. In addition, DOT’s ITS-JPO is funding NHTSA and FHWA research to investigate human factors implications for V2I technologies. Furthermore, according to DOT, the Connected Vehicle Pilot Program will allow additional opportunities to review drivers’ reactions to V2I messages using cameras and driver vehicle data on speed, braking, and other metrics. Eleven of the 21 experts we interviewed identified uncertainty related to potential liability in the event of a collision involving vehicles equipped with V2I technologies as a challenge. In our November 2013 report on V2V, an auto manufacturer expert said that it could be harder to determine whether fault for a collision between vehicles equipped with connected vehicle technologies lies with one of the drivers, an automobile manufacturer, the manufacturer of a device, or another party. According to DOT officials, it is unlikely that either V2I or V2V technologies will create significant liability exposure for the automotive industry, as DOT expects auto manufacturers will contractually limit their potential liability for integrated V2I and V2V applications and third-party services. However, according to DOT, V2I applications using data received from public infrastructure may create potential new liability risks to various infrastructure owners and operators—state and local governments, railroads, bridge owners, and roadway owners—because such cases often are brought against public or quasi-public entities and not against vehicle manufacturers. According to DOT, this liability will likely be the same as existing liability for traffic signals and variable message signs. 
DOT officials, stakeholders representing state officials and private sector entities, and experts we interviewed stated that the deployment of V2I technologies and applications is expected to result in a variety of benefits to users. Experts identified safety, mobility, operational, and environmental benefits as the potential benefits of V2I. Safety: Eleven of 21 experts identified safety as one of the primary benefits of V2I technologies, including 6 of the 8 state and local agencies we interviewed. According to Japanese officials we interviewed, Japan has realized safety benefits from its deployment of V2I infrastructure. For example, in an effort to prevent rear-end collisions, Japan installed V2I infrastructure that detected and warned motorists of upcoming congestion on an accident-prone curve on an expressway in Tokyo. According to Japanese officials, this, combined with other measures such as road marking, led to a 60-percent reduction in rear-end collisions on this curve. Mobility: In interviews, 8 of 21 experts identified mobility as one of the primary benefits of V2I, including 6 of the 8 state and local agencies we interviewed. Officials in three states we interviewed noted that they are focusing on V2I applications that have the potential to increase mobility. These applications could allow transportation system managers to identify and address congestion in real time, as well as provide traffic signal priority to certain types of vehicles, such as emergency responders or transit. For example, Japanese officials estimated that as the use of electronic tolling rose to nearly 90 percent of vehicles on expressways, tollgate congestion was nearly eliminated on certain expressways. Operations: In interviews, 7 of 21 experts, including 4 of 8 state and local agencies, identified the potential for V2I applications to provide operational benefits or cost savings.
For example, one state agency noted that using data collected from vehicles could allow transportation managers to more easily monitor pavement conditions and identify potholes (typically a costly and resource-intensive activity). DOT and the National Cooperative Highway Research Program have also noted that the visibility and enhanced data on current traffic and road conditions provided by V2I applications would provide operational benefits to state and local transportation managers. This, in turn, could provide safety or other benefits to drivers. For example, officials in Japan told us that by using data collected from vehicles through the ITS infrastructure, they were able to identify 160 locations where drivers were braking suddenly. After investigating the causes, officials took steps to address safety issues at these sites (such as trimming trees that created visual obstructions); incidents of sudden braking subsequently decreased by 70 percent, and accidents involving injuries or fatalities decreased by 20 percent. In addition, the Japanese government partnered with private industry to collect and analyze vehicle probe data to help the public determine which roads were passable following an earthquake. Environment: Of the experts we interviewed, 4 of 21 identified environmental benefits as a primary benefit of V2I technologies, with some noting interconnections among safety, mobility, and environmental benefits. For example, officials from two state agencies we interviewed stated that improving safety and mobility will lead to environmental benefits because there will be less stop-and-go traffic. Indeed, Japanese officials estimated that decreased tollgate congestion reduced CO2 emissions by approximately 210,000 tons each year.
Although V2I applications are being developed for the purpose of providing safety, mobility, operational, and environmental benefits, the extent to which V2I benefits will be realized is currently unclear because of the limited data available and the limited deployment of V2I technologies. To date, only small research deployments have occurred to test connected vehicle technologies. However, DOT has commissioned or conducted some studies to estimate potential V2I benefits, particularly with respect to safety and the environment. NHTSA used existing crash data and estimated that, in combination, V2V and V2I could address up to 81 percent of crashes involving unimpaired drivers. Similarly, in 2012, a study commissioned by FHWA used existing crash data and estimated the number, type, and costs of crashes that could be prevented by 12 different V2I applications. This study estimated that the 12 V2I applications would prevent 2.3 million crashes annually (representing 59 percent of single-vehicle crashes and 29 percent of multi-vehicle crashes and comprising $202 billion in annual costs). With respect to the environment, DOT contracted with Booz Allen Hamilton to develop an initial benefit-cost analysis for its environmental applications, with the goal of informing DOT’s future work and prioritization of certain applications. As part of the next phase of this work, Booz Allen Hamilton used models to estimate potential benefits of individual applications, as well as their benefits when used in combination with other applications. The National Cooperative Highway Research Program (NCHRP) estimated operational and financial benefits that V2I applications may provide to state and local governments, such as reduced crash response and cleanup costs; reduced need for traveler information infrastructure; less infrastructure required to monitor traffic; and lower costs for pavement condition detection.
However, one of the study’s major conclusions was that the data required to quantify benefits are generally not available. DOT is taking some steps to evaluate the benefits of V2I applications. For example, as part of its upcoming Connected Vehicle Pilot Deployment Program, pilot projects are expected to develop a performance-monitoring system, establish performance measures, and collect relevant data. Projects will also receive an independent evaluation of their costs and benefits, user acceptance and satisfaction, and lessons learned. In addition, organizations researching the benefits of V2I have noted that the benefits of V2I deployments may depend on a variety of factors, including the size and location of the deployment, the number of roadside units deployed, the number of vehicles equipped, and the types of applications that are deployed. A study sponsored by the University of Michigan Transportation Research Institute noted that some V2I safety applications require a majority of vehicles to be equipped before reaching optimum effectiveness, in contrast to mobility, road weather, and operations applications, which require only a small percentage of equipped vehicles before realizing benefits. Japanese government officials, as well as representatives from a private company we interviewed in Japan, noted that in some cases they have found it difficult to quantify benefits. However, DOT and Japan’s MLIT established an Intelligent Transportation Systems (ITS) Task Force to exchange information and identify areas for collaborative research to foster the development and deployment of ITS in both the United States and Japan. According to DOT, evaluation tools and methods are high-priority areas for the task force, and DOT has stated that a report detailing the task force’s collaborative research on evaluation tools and methods will be published in 2015.
In addition, 8 of the 21 experts we interviewed noted that it can be difficult to identify benefits that are solely attributable to V2I, due to the interconnected nature of V2V and V2I technologies. However, some experts we spoke with provided examples of how connected vehicle benefits could be measured, including crash avoidance, reduced fatalities, reduced congestion, and reduced travel times. The costs for the deployment of a national V2I system are unclear because current cost data for V2I technology are limited due to the small number of test deployments thus far. According to DOT officials, experts, and other industry stakeholders we spoke to, there are two primary resources for estimating V2I deployment costs: AASHTO’s National Connected Vehicle Footprint Analysis (2014) and the National Cooperative Highway Research Program’s (NCHRP) 03-101 Costs and Benefits of Public-Sector Deployment of Vehicle-to-Infrastructure Technologies (2013). However, the cost estimates in both reports are based on limited available data from small research test beds. As a result, neither report contains an estimate for the total cost if V2I were to be deployed at a national level. Despite these limitations, the cost estimates in these two studies are cited by several experts and industry stakeholders, including DOT. According to DOT, these cost figures may be useful to agencies considering early deployments. According to AASHTO and NCHRP, the costs of V2I deployment will likely comprise two types of costs. First, V2I will require non-recurring costs—the upfront, initial costs required to deploy the infrastructure. According to AASHTO, there are two primary non-recurring cost categories associated with V2I deployments: Infrastructure deployment costs include the costs for planning, acquiring, and installing the V2I roadside equipment.
State and local agencies will need to evaluate the costs for planning and design, which may include mapping intersections and deciding where to deploy the DSRC radios based on traffic and safety analyses, according to AASHTO. Deployment costs will include the cost of acquiring the equipment, including the roadside unit. AASHTO estimates that total equipment costs would average $7,450 per site, with $3,000 attributed to each roadside unit. However, 4 of the experts we interviewed stated that the cost estimates for the hardware are likely to decrease over time, as the technology matures and the market becomes more competitive. The total average cost for installation of the equipment per site includes the costs of labor and inspection. In addition, deployment costs may include the cost of upgrading traffic signal controllers. AASHTO estimates that approximately two-thirds of all controllers in the United States will need to be upgraded to support connected vehicle activities. Backhaul costs refer to the costs for establishing connectivity for communication between roadside units and back offices or traffic management centers (TMCs). As discussed, backhaul includes the fiber-optic cables connecting traffic signals to the back office, as well as any sensors or relays that link to or serve these components. According to NCHRP, backhaul will be one of the biggest cost components. In fact, three state agencies and one supplier we spoke with referred to backhaul as a factor that will affect costs for V2I deployment. Backhaul costs are also uncertain because states vary in the extent to which they have existing backhaul. According to AASHTO, some sites may only require an upgrade to their current backhaul system to support expected bandwidth requirements for connected vehicle communications. However, 40 percent of all traffic signals have either no backhaul or will require new systems, according to AASHTO.
The difference in cost between tying into an existing fiber-optic backhaul and installing a new fiber-optic backhaul for the sites is significant, according to DOT. The average national cost to upgrade backhaul to a DSRC roadside site is estimated to vary from $3,000, if a site has sufficient backhaul and will only need an upgrade, to $40,000, if the V2I site requires a completely new backhaul system, according to AASHTO estimates. The total potential average non-recurring cost of deploying connected vehicle infrastructure per site, according to DOT and AASHTO, is $51,650 (see table 1). Second, V2I will also require recurring costs—the costs required to operate and maintain the infrastructure. According to AASHTO, there are several types of recurring costs associated with V2I deployments, including equipment maintenance and replacement, security, and personnel costs. The amount of maintenance needed to keep roadside units running is unclear, according to 3 of the experts we interviewed, because the test bed deployments have generally not operated long enough to warrant maintenance of the equipment. However, NCHRP estimates that routine maintenance costs for roadside units would likely vary from 2 to 5 percent of the original hardware and labor costs. This includes such maintenance as realigning antennas and rebooting hardware. AASHTO also estimates that roadside units would need to be replaced every 5 to 10 years. In addition, states and localities may also need to hire new personnel or train existing staff to operate these systems. According to AASHTO, personnel costs will also depend on the size of the deployment, as smaller deployments may not need dedicated personnel to complete maintenance, while large deployments may require staff dedicated to system monitoring on site or on call.
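The per-site figures above can be combined into a rough range. The following Python sketch is our own illustration, not a DOT or AASHTO tool; the `other_costs` parameter is a hypothetical placeholder for the planning, installation, and signal-controller-upgrade components that are not broken out individually above but that, per AASHTO, bring the average total to $51,650.

```python
def nonrecurring_cost_per_site(equipment=7450, other_costs=0, backhaul=3000):
    """Rough non-recurring V2I deployment cost per site, in dollars.

    equipment:   AASHTO's average equipment cost per site ($7,450, of
                 which $3,000 is the roadside unit).
    other_costs: hypothetical placeholder for planning, installation,
                 and controller upgrades (not itemized in this report).
    backhaul:    $3,000 (upgrade to existing backhaul) up to $40,000
                 (completely new backhaul system), per AASHTO.
    """
    return equipment + other_costs + backhaul

# The backhaul scenario alone drives most of the per-site spread:
best_case = nonrecurring_cost_per_site(backhaul=3000)    # upgrade only
worst_case = nonrecurring_cost_per_site(backhaul=40000)  # new backhaul
```

Under these assumptions, the backhaul scenario alone swings the per-site estimate by $37,000, consistent with NCHRP's observation that backhaul is one of the biggest cost components.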
Furthermore, security costs will be a recurring cost and include the costs of keeping the security credentials of the SCMS up to date and the costs to manage the security system, according to AASHTO. Given that the SCMS is still being developed, these cost estimates are unknown. One car manufacturer we interviewed explained that because the management of the security system is unknown, it is extremely challenging to estimate future costs. In addition, one county agency official said security costs could greatly affect the total costs for V2I deployment because the requirements and funding responsibility are not clearly defined. As part of its advance notice of proposed rulemaking (ANPRM), NHTSA conducted an assessment of preliminary V2V costs, including costs for the SCMS. NHTSA estimated that SCMS costs per vehicle range from $1 to $6, with an average of $3.14. SCMS costs will increase over time due to the need to support an increasing number of vehicles with V2V technologies, according to NHTSA. While AASHTO and NCHRP have estimated the above potential average costs for various components associated with a V2I deployment, 10 of 21 experts stated that it is difficult to determine the actual costs for a V2I deployment in a particular state or locality due to a number of factors. First, the scope of the deployment will affect the total costs of a region’s V2I deployment, according to NCHRP, because it will determine the amount of equipment needed for the system to function, including the number of roadside units. Previous test bed deployments have varied in size, ranging from 1 to 2,680 DSRC roadside units. Further, the number of devices needed will depend on how many are required to enable the chosen applications.
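As a back-of-the-envelope illustration of the recurring-cost figures above (our own sketch: it assumes the NCHRP maintenance percentage is applied annually, and the function names and the example hardware-cost input are illustrative, not from the studies):

```python
def annual_maintenance_range(hardware_and_labor_cost):
    """NCHRP: routine roadside-unit maintenance runs roughly 2 to 5
    percent of the original hardware and labor costs (assumed annual)."""
    return (0.02 * hardware_and_labor_cost, 0.05 * hardware_and_labor_cost)

def fleet_scms_cost(num_vehicles, per_vehicle=3.14):
    """NHTSA: SCMS costs of $1 to $6 per vehicle, averaging $3.14."""
    return num_vehicles * per_vehicle

# For a hypothetical site with $10,000 in original hardware/labor costs,
# routine maintenance would run a few hundred dollars per year:
low, high = annual_maintenance_range(10_000)
```

Such a sketch only scales the published percentages and averages; actual recurring costs would also depend on replacement cycles (every 5 to 10 years, per AASHTO) and on the still-undefined SCMS management structure.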
For example, while a curve-speed-warning application may require installing equipment at a specific location, applications that aim to mitigate congestion by advising drivers of the best speed to approach an intersection may need to be installed at several intersections throughout an urban corridor. One state agency said that one factor that could affect costs is how often roadside equipment needs to be replaced in order to enable certain V2I applications. In addition, as previously mentioned, the size of the deployment will contribute to personnel costs. Second, the state or locality’s deployment environment will affect its deployment costs. One state agency pointed out that each agency’s costs will be different because agencies will be deploying in environments with differing levels of existing infrastructure. For example, as previously noted, a region’s existing backhaul infrastructure will determine the extent of the cost for installing or upgrading the region’s system, including whether a city or state has fiber optics already installed or signal controllers that need upgrading. Lastly, the maturity of the technology will also affect cost estimates for equipment such as a DSRC radio. Estimating equipment costs is difficult at this time because the technology is still developing, according to NCHRP. Ten of the 21 experts we interviewed, including all of the state agencies, also mentioned that estimating costs is challenging because the technology is still immature. Furthermore, the reports and 4 experts we interviewed agree that the cost estimates for the hardware are likely to decrease over time, as the technology matures and the market becomes more competitive. As part of the upcoming Connected Vehicle Pilot Deployment Program, DOT developed the Cost Overview for Planning Ideas and Logical Organization Tool (CO-PILOT). This tool generates high-level cost estimates for 56 V2I applications based on AASHTO’s estimates.
In addition, according to DOT, the agency will work with AASHTO to develop a life-cycle cost tool that agencies can use to support V2I deployment beyond the Connected Vehicle Pilot Deployment Program. DOT officials also indicated that they plan to update the tool over time as more data are collected from the Connected Vehicle Pilot Deployment Program, and they expect the tool to be available for use by 2016. Also, as previously mentioned, FHWA is developing deployment guidance that will outline potential sources of funding for states and localities, among other things. We provided a draft of this product to the Secretary of Transportation, the Secretary of Commerce, and the Chairman of the FCC for review and comment. DOT and Commerce’s NTIA both provided comments via email that were technical in nature. We incorporated these comments as appropriate. FCC did not provide comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will send copies of this report to the Secretary of Transportation, the Chairman of the Federal Communications Commission, the Administrator of the National Telecommunications and Information Administration, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
To address all of our objectives, we reviewed documentation relevant to vehicle-to-infrastructure (V2I) technology research efforts of the Department of Transportation (DOT), state and local governments, and the automobile industry, such as DOT’s 2015 Federal Highway Administration V2I Draft Deployment Guidance and Products and AASHTO’s National Connected Vehicle Field Infrastructure Footprint Analysis, as well as documentation on completed and ongoing research. We interviewed officials from DOT’s Office of the Assistant Secretary for Research and Technology, Intelligent Transportation Systems-Joint Program Office (ITS-JPO), Federal Highway Administration (FHWA), National Highway Traffic Safety Administration (NHTSA), and the Volpe National Transportation Systems Center about these efforts. For all objectives, we developed a structured set of questions for our interviews with 21 experts who represented domestic automobile manufacturers, V2I device suppliers, state and local governments, privacy experts, standardization organizations, and academic researchers with relevant expertise. The identified experts have varying degrees of expertise in the following areas related to V2I technology: the production of passenger vehicles; technology development; technology deployment; data privacy; security; state agency deployment; and legal and policy issues. The starting point for our expert selection was a list of experts originally created in January 2013 by the National Academy of Sciences for GAO’s vehicle-to-vehicle (V2V) report. We used this list for our initial selection because V2V and V2I technologies are both connected vehicle technologies with many similarities, and many V2V stakeholders are also working on V2I. In addition to nine experts we selected from the National Academy of Sciences list, we selected 12 additional experts based on the following factors: 1. their personal involvement in the deployment of V2I technologies; 2.
recommendations from federal agencies (DOT and the Federal Communications Commission (FCC)) and associations (such as the American Association of State Highway and Transportation Officials (AASHTO)); and 3. experts’ involvement in professional affiliations, such as a V2I consortium or groups dedicated to these technologies or to a specific challenge affecting V2I (e.g., privacy). Table 2 lists the experts we selected. In conducting our structured interviews, we used a standardized interview to ensure that we asked all of the experts the same questions. During these interviews, we asked, among other things, for expert views on the state of development and deployment of V2I technologies (including DOT’s role in this process), the potential benefits of V2I technologies, and their potential costs. We also asked for each expert’s views on a number of defined potential challenges facing the deployment of V2I technologies and asked the experts to rate the significance of each challenge using a three-point scale (significant challenge, moderate challenge, or slight challenge). We determined this list of potential challenges after initial interviews with DOT, industry associations, and other interest groups knowledgeable about V2I technologies. Prior to conducting the interviews, we tested the structured interview with one association to ensure our questions were worded appropriately. After conducting these structured interviews, we summarized expert responses relevant to each objective. The viewpoints gathered through our expert interviews represent the viewpoints of the individuals interviewed and cannot be generalized to a broader population.
For the purpose of this review, state and local agency officials were considered experts because of their experience in deploying and testing V2I technologies and their experience working with the required technologies (DSRC equipment and software); the decision process (funding and scheduling); the personnel requirements and skill sets needed for deployment; and operations and maintenance. We specifically included in our pool of expert interviews six officials who deployed V2I test beds in their respective states. We also included two officials who had studied V2I for several years, had taken part in AASHTO's Connected Vehicle group, and had applied to DOT's prior Connected Vehicles Pilot Program (V2I test bed). We also interviewed additional officials who have contributed to U.S. efforts to develop and deploy connected vehicle technologies, officials who we refer to as "stakeholders." Specifically, we used these stakeholders to help us understand issues that informed our structured set of questions, but we did not administer the structured question set during these stakeholder interviews. We primarily selected stakeholders based on recommendations from DOT and industry associations. However, we also included DOT as a stakeholder in the deployment of V2I technologies because it is leading federal V2I efforts. We interviewed officials from 17 V2I stakeholder organizations, including:
1. DOT, NHTSA
2. DOT, Office of the Assistant Secretary for Research and Technology
3. DOT, FHWA
4. DOT, Volpe National Transportation Systems Center
5. DOT, Chief Privacy Officer
6. National Telecommunications and Information Administration (NTIA)
9. Intelligent Transportation Society of America (ITS America)
10. Crash Avoidance Metrics Partners, LLC (CAMP)
11. Institute of Electrical and Electronics Engineers (IEEE)
12. National Cooperative Highway Research Program (NCHRP)
13. Leidos, previously known as Science Applications International Corporation (SAIC)
14. Virginia Tech Transportation Institute
15. Virginia Department of Transportation
16. Minnesota Department of Transportation
17. Road Commission for Oakland County, Michigan
To determine the status of development and deployment of V2I technology, we interviewed officials from DOT, including the Office of the Assistant Secretary for Research and Technology, ITS-JPO, FHWA, the Volpe National Transportation Systems Center, and NHTSA. We also interviewed officials at all seven V2I test beds, located in Virginia, Michigan, Florida, Arizona, California, and New York. We conducted site visits to three test beds—the Safety Pilot in Ann Arbor, Michigan, and the test beds in Southeast Michigan and Northern Virginia. We selected the three site visit locations based on which had the most advanced technology according to DOT and state officials. At these site visits, we conducted interviews with officials from state and local transportation agencies and academic researchers to collect information on developing and deploying V2I technology. We visited FHWA's Turner Fairbank Highway Research Center in Virginia to understand the agency's connected vehicle research efforts. We reviewed documentation of the efforts of DOT and automobile manufacturers related to V2I technologies, such as FHWA's 2015 V2I Draft Deployment Guidance and Products and documentation on completed and ongoing research. We identified materials published in the past 4 years that were related to the terms "vehicle-to-infrastructure" and "V2I" through searches of bibliographic databases, including Transportation Research International Documentation and WorldCat. While a variety of V2I technologies exist for transit and commercial vehicles, for the purpose of this report we limited our scope to passenger vehicles, since much of DOT's connected vehicle work is focused on passenger vehicles.
To determine the challenges affecting the deployment of V2I technology and DOT's existing or planned actions to address potential challenges, we reviewed FHWA's V2I draft guidance intended to assist in planning for future investments in and deployment of V2I systems. In addition, we interviewed officials from FCC and NTIA about challenges related to the potential for spectrum sharing in the 5.9 GHz band. We interviewed DOT's Privacy Officer, two privacy experts, and several stakeholders to understand privacy concerns regarding the deployment of V2I technologies. We collected information on anticipated benefits of these technologies through interviews with officials from DOT, automobile manufacturers, industry associations, experts identified by the National Academy of Sciences, and other stakeholders, and through reviews of studies they provided. To specifically address the potential costs associated with V2I technologies, we analyzed two reports, AASHTO's National Connected Vehicle Field Infrastructure Footprint Analysis and NCHRP's project 03-101 report, Cost and Benefits of Public-Sector Deployment of Vehicle-to-Infrastructure Technologies, both of which addressed acquisition, installation, backhaul, operations, and maintenance costs. According to DOT officials and other stakeholders we interviewed, those two reports were the primary sources of information for V2I potential deployment cost estimates and actual costs. We used V2I cost estimates from the AASHTO Footprint Analysis to give examples of potential costs for deployment. To further assess the reliability of the cost estimates, in addition to our own review of the two reports, our internal economist also independently reviewed both reports, and we subsequently interviewed representatives from AASHTO and NCHRP to verify the scope and methodology of the cost analyses performed in both reports.
In addition, we discussed estimated costs and factors that affected costs for V2I investments with experts and stakeholders from federal, state, and local government, academia, car manufacturers, industry associations, and V2I suppliers. We determined that the actual cost figures were reliable and suitable for the purpose of our report. In addition to the above work, we selected Japan for a site visit because of its nationwide deployment and years of experience with deployment and maintenance of V2I technologies. Japan has led efforts in V2I technology development and deployment for over two decades. The country serves as an illustrative example from which to draw information on potential benefits, costs, and challenges of deploying V2I technologies in the United States. During our site visit, we interviewed Japanese government officials and auto manufacturers on topics similar to those we discussed with U.S. experts, including V2I deployment efforts, benefits, costs, and challenges. Specifically, we met with officials from:
Cabinet Secretariat (IT Strategy)
Cabinet Office (Council for Science and Technology Policy)
Ministry of Land, Infrastructure, Transport and Tourism (MLIT)
o Road Bureau
o Road Transport Bureau
Ministry of Internal Affairs and Communications (MIC)
National Police Agency (NPA)
Ministry of Economy, Trade and Industry (METI)
We conducted this performance audit from July 2014 through September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
As part of our review, we conducted structured interviews with 21 individuals identified as experts on vehicle-to-infrastructure (V2I) technologies by the National Academy of Sciences or selected based on other factors discussed in our scope and methodology (see table 2 in app. I for a list of the experts interviewed). During these interviews we asked, among other things, for each expert's views on a number of already defined potential challenges facing the deployment of V2I technologies. The ratings provided by the experts for each of the potential challenges discussed are shown in table 3 below. To inform our discussion of the challenges facing the deployment of V2I technologies, we considered these ratings as well as experts' responses to open-ended questions. In addition to the contact named above, Susan Zimmerman, Assistant Director; Nelsie Alcoser; David Hooper; Crystal Huggins; Amber Keyser; Nancy Santucci; Terence Lam; Josh Ormond; Amy Rosewarne; and Elizabeth Wood made key contributions to this report.

Over the past two decades, automobile crash-related fatality and injury rates have declined over 34 and 40 percent, respectively, due in part to improvements in automobile safety. To further improve traffic safety and provide other transportation benefits, DOT is promoting the development of V2I technologies. Among other things, V2I technologies would allow roadside devices and vehicles to communicate and alert drivers of potential safety issues, such as if they are about to run a red light. GAO was asked to review V2I deployment. This report addresses: (1) the status of V2I technologies; (2) challenges that could affect the deployment of V2I technologies, and DOT efforts to address these challenges; and (3) what is known about the potential benefits and costs of V2I technologies. GAO reviewed documentation on V2I from DOT, automobile manufacturers, industry associations, and state and local agencies.
In addition, GAO interviewed DOT, Federal Communications Commission (FCC), and National Telecommunications and Information Administration (NTIA) officials. GAO also conducted structured interviews with 21 experts from a variety of subject areas related to V2I. The experts were chosen based on recommendations from the National Academy of Sciences and other factors. DOT, NTIA, and the FCC reviewed a draft of this report. DOT and NTIA provided technical comments, which were incorporated as appropriate. FCC did not provide comments. Vehicle-to-infrastructure (V2I) technologies allow roadside devices to communicate with vehicles and warn drivers of safety issues; however, these technologies are still developing. According to the Department of Transportation (DOT), extensive deployment may occur over the next few decades. DOT; state and local transportation agencies; researchers; and private-sector stakeholders are developing and testing V2I technologies through test beds and pilot deployments. Over the next 5 years, DOT plans to provide up to $100 million through its Connected Vehicle pilot program for projects that will deploy V2I technologies in real-world settings. DOT and other stakeholders have also provided guidance to help state and local agencies pursue V2I deployments, since it will be up to these agencies to voluntarily deploy V2I technologies.
According to experts and industry stakeholders GAO interviewed, a variety of challenges may affect the deployment of V2I technologies, including: (1) ensuring that possible sharing of the radio-frequency spectrum used by V2I communications with other wireless users will not adversely affect V2I technologies' performance; (2) addressing state and local agencies' lack of resources to deploy and maintain V2I technologies; (3) developing technical standards to ensure interoperability; (4) developing and managing data security and addressing public perceptions related to privacy; (5) ensuring that drivers respond appropriately to V2I warnings; and (6) addressing the uncertainties related to potential liability issues posed by V2I. DOT is collaborating with the automotive industry and state transportation officials, among others, to identify potential solutions to these challenges. The full extent of V2I technologies' benefits and costs is unclear because test deployments have been limited thus far; however, DOT has supported initial research into the potential benefits and costs. Experts GAO spoke to and research GAO reviewed indicate that V2I technologies could provide safety, mobility, environmental, and operational benefits, for example by: (1) alerting drivers to potential dangers, (2) allowing agencies to monitor and address congestion, and (3) providing driving and route advice. V2I costs will include the initial non-recurring costs to deploy the infrastructure and the recurring costs to operate and maintain the infrastructure. While some organizations have estimated the potential average costs for V2I deployments, actual costs will depend on a variety of factors, including where the technology is installed and how much additional infrastructure is needed to support the V2I equipment.
Pursuant to Executive Order 13327, the administration has taken several key actions to strategically manage real property. The Federal Real Property Council (FRPC) was established in 2004 and subsequently created interagency committees to work toward developing and implementing a strategy to accomplish the executive order. FRPC developed a sample asset management plan and published Guidance for Improved Asset Management in December 2004. In addition, FRPC established asset management principles that form the basis for the strategic objectives and goals in the agencies' asset management programs and also worked with the General Services Administration (GSA) to develop and enhance an inventory system known as the Federal Real Property Profile (FRPP). FRPP was designed to meet the executive order's requirement for a single database that includes all real property under the control of executive branch agencies. FRPC, with the assistance of the GSA Office of Government-wide Policy, developed 23 mandatory data elements, which include four performance measures: utilization, condition index, mission dependency, and annual operating and maintenance costs. In addition, a performance assessment tool has been developed, which agencies are to use to analyze the inventory's performance measurement data in order to identify properties for disposal or rehabilitation. In June 2006, FRPC added a data element for disposition that included six major types of disposition, including sale, demolition, and public benefit conveyance. Finally, to assist agencies in their data submissions for the FRPP database, FRPC provided standards and definitions for the data elements and performance measures through guidance issued on December 22, 2004, and a data dictionary issued by GSA in October 2005. The first governmentwide reporting of inventory data for FRPP took place in December 2005, and selected data were included in the fiscal year 2005 FRPP report published by GSA, on behalf of FRPC, in June 2006.
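To make the role of the four performance measures concrete, the following Python sketch shows how an assessment tool might screen inventory records for disposal or rehabilitation candidates. The field set, thresholds, and screening rules here are illustrative assumptions, not the actual criteria of FRPC's performance assessment tool:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    # A handful of the FRPP's 23 data elements; the last four fields
    # correspond to the four performance measures named in the guidance.
    name: str
    utilization: float        # share of capacity in use, 0.0-1.0
    condition_index: float    # 0-100, higher means better condition
    mission_dependent: bool
    annual_om_cost: float     # annual operating and maintenance cost, dollars

def flag_for_review(asset: Asset, min_util: float = 0.5,
                    min_condition: float = 60.0) -> str:
    # Hypothetical screening rule: low-utilization assets that are not
    # mission dependent are disposal candidates; deteriorated but needed
    # assets are rehabilitation candidates.
    if not asset.mission_dependent and asset.utilization < min_util:
        return "disposal candidate"
    if asset.mission_dependent and asset.condition_index < min_condition:
        return "rehabilitation candidate"
    return "retain"

warehouse = Asset("surplus warehouse", 0.20, 75.0, False, 400_000)
laboratory = Asset("aging laboratory", 0.90, 40.0, True, 1_200_000)
print(flag_for_review(warehouse))   # disposal candidate
print(flag_for_review(laboratory))  # rehabilitation candidate
```

The point of standardized data elements is exactly this kind of portfolio-wide screening: the same rule can be run against every agency's records because the measures are defined identically across the government.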
Data on the four performance measures were not included in the FRPP report. Adding real property asset management to the President's Management Agenda (PMA) has increased its visibility as a key management challenge and focused greater attention on real property issues across the government. The Office of Management and Budget (OMB) has identified goals related to the four performance measures in the inventory for agencies to achieve in right-sizing their real property portfolios, and it is the administration's goal to reduce the size of the federal real property inventory by 5 percent, or $15 billion, by disposing of unneeded assets by 2015. In October 2006, the administration reported that $3.5 billion in unneeded federal real property had been disposed of since 2004. To achieve these goals and gauge an agency's success in accurately accounting for, maintaining, and managing its real property assets so as to efficiently meet its goals and objectives, the administration established the real property scorecard in the third quarter of fiscal year 2004. The scorecard consists of 13 standards that agencies must meet to achieve green status, which is the highest status. These 13 standards include 8 standards needed to achieve yellow status, plus 5 additional standards. An agency reaches "green" or "yellow" status if it meets all of the standards for success listed in the corresponding column in figure 1 and red status if it has any of the shortcomings listed in the "red" column. OMB evaluates agencies quarterly on progress, and agencies then have an opportunity to update OMB on their status toward achieving green. According to PMA real property scorecards for the second quarter of fiscal year 2007, the Department of Labor is the only real property-holding agency included in the real property initiative that failed to meet the standards for yellow status, as shown in figure 2. All of the other agencies have, at a minimum, met the standards for yellow status.
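The scorecard's status rules reduce to a simple ordered test: all 13 standards for green, the 8 core standards for yellow, anything less is red. A minimal Python sketch (the standard names are placeholders, since the standards themselves are listed in figure 1 rather than reproduced here):

```python
YELLOW_STANDARDS = {f"yellow standard {i}" for i in range(1, 9)}      # 8 core standards
GREEN_ONLY_STANDARDS = {f"green standard {i}" for i in range(1, 6)}   # 5 additional standards

def scorecard_status(standards_met: set) -> str:
    # Green requires all 13 standards (the 8 yellow standards plus 5 more);
    # yellow requires the 8 core standards; anything less is red.
    if (YELLOW_STANDARDS | GREEN_ONLY_STANDARDS) <= standards_met:
        return "green"
    if YELLOW_STANDARDS <= standards_met:
        return "yellow"
    return "red"

print(scorecard_status(YELLOW_STANDARDS | GREEN_ONLY_STANDARDS))  # green
print(scorecard_status(YELLOW_STANDARDS))                         # yellow
print(scorecard_status({"yellow standard 1"}))                    # red
```

Because the green criteria are a strict superset of the yellow criteria, the two checks can be applied in order, which mirrors how an agency progresses from red through yellow to green.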
Among the 15 agencies under the real property initiative, 5 agencies—GSA, NASA, Energy, State, and VA—have achieved green status. According to OMB, the agencies achieving green status have established 3-year timelines for meeting the goals identified in their asset management plans; provided evidence that they are implementing their asset management plans; used real property inventory information and performance measures in decision making; and managed their real property in accordance with their strategic plans, asset management plans, and performance measures. Once an agency has achieved green status, OMB continues to monitor its progress and results through PMA using deliverables identified in its 3-year timeline and quarterly scorecards. Each quarter, OMB also provides formal feedback to agencies through the scorecard process, along with informal feedback, and clarifies expectations. Yellow status agencies still have various standards to meet before achieving green. In addition to addressing their real property initiative requirements, some agencies have taken steps toward addressing some of their long-standing problems, including excess and underutilized property and deteriorating facilities. Some agencies are implementing various tools to prioritize reinvestment and disposal decisions on the basis of agency needs, utilization, and costs. For example, GSA officials reported that GSA's Portfolio Restructuring Strategy sets priorities for disposal and reinvestment based on agency missions and anticipated future need for holdings. In addition, GSA developed a methodology to analyze its leased inventory in fiscal year 2005. This approach values leases over their life, not just at the point of award; considers financial performance and the impact of market rental rates on current and future leasing actions; and categorizes leases by their risk and value.
Additionally, some agencies are taking steps to make the condition of core assets a priority and address maintenance backlog challenges. For example, Energy officials reported establishing budget targets to align maintenance funding with industry standards, as well as programs to reduce the maintenance backlogs associated with specific programs. In addition, Interior officials reported that the department had conducted condition assessments for 72,233 assets as of the fourth quarter of fiscal year 2006. As mentioned previously, Executive Order 13327 requires that OMB, along with landholding agencies, develop legislative initiatives to improve federal real property management and establish accountability for implementing effective and efficient real property management practices. Some individual agencies have obtained legislative authority in recent years to use certain real property management tools, but no comprehensive legislation has been enacted. Some agencies have received special real property management authorities, such as the authority to enter into enhanced use lease (EUL) agreements. These agencies are also authorized to retain the proceeds of the lease and to use them for items specified by law, such as improvement of their real property assets. DOD, Energy, Interior, NASA, USPS, and VA are authorized to enter into EUL agreements and have authority to retain proceeds from the leases. These authorities vary from agency to agency, and in some cases, they are limited. For example, NASA is authorized to enter into EUL agreements at two of its centers, and VA's authority to enter into EUL agreements expires in 2011. In addition, VA was authorized in 2004 to transfer real property under its jurisdiction or control and to retain the proceeds from the transfer in a capital asset fund for property transfer costs, including demolition, environmental remediation, and maintenance and repair costs.
VA officials noted that although VA is authorized to transfer real property under its jurisdiction or control and to retain the proceeds from such transfers, this authority places significant limitations on the use of any funds generated by such disposals. Additionally, GSA was given the authority to retain proceeds from disposal of its real property and to use the proceeds for its real property needs. Agencies with enhanced authorities believe that these authorities have greatly improved their ability to manage their real property portfolios and operate in a more businesslike manner. Overall, the administration's efforts to raise the level of attention to real property as a key management challenge and to establish guidelines for improvement are noteworthy. The administrative tools, including asset management plans, inventories, and performance measures, were not in place to strategically manage real property before we updated our high-risk list in January 2005. The actions taken by major real property-holding agencies and the administration to establish such tools are clearly positive steps. However, these administrative tools and the real property initiative have not been fully implemented, and it is too early to determine if they will have a lasting impact. Implementation of these tools has the potential to produce results such as reductions in excess property, reduced maintenance and repair backlogs, less reliance on leasing, and an inventory that is shown to be reliable and valid. Although clear progress has been made toward strategically managing federal real property and addressing some long-standing problems, real property remains a high-risk area because the problems persist and obstacles remain.
Agencies continue to face long-standing problems in the federal real property area, including excess and underutilized property, deteriorating facilities and maintenance and repair backlogs, reliance on costly leasing, and unreliable real property data. Federal agencies also continue to face many challenges securing real property. These problems are still pervasive at many of the major real property-holding agencies, despite agencies' individual attempts to address them. Although the changes being made to strategically manage real property are positive and some realignment has taken place, the size of agencies' real property portfolios remains generally outmoded. As we have reported, this trend largely reflects a business model and the technological and transportation environment of the 1950s. Many of these assets and organizational structures are no longer needed; others are not effectively aligned with, or responsive to, agencies' changing missions. While some major real property-holding agencies have had some success in attempting to realign their infrastructures in accordance with their changing missions, others still maintain a significant amount of excess and underutilized property. For example, officials with Energy, DHS, and NASA—which are three of the largest real property-holding agencies—reported that over 10 percent of the facilities in their inventories were excess or underutilized. The magnitude of the problem with underutilized or excess federal real property continues to put the government at risk for lost dollars and missed opportunities. Table 1 describes the status of excess and underutilized real property challenges at the nine major real property-holding agencies. Addressing the needs of aging and deteriorating federal facilities remains a problem for major real property-holding agencies. According to recent estimates, tens of billions of dollars will be needed to repair or restore these assets so that they are fully functional.
Furthermore, much of the federal portfolio was constructed over 50 years ago, and these assets are reaching the end of their useful lives. Energy, NASA, GSA, Interior, State, and VA reported repair and maintenance backlogs for buildings and structures that total over $16 billion. In addition, DOD reported a $57 billion restoration and modernization backlog. We found that there was variation in how agencies reported data on their backlogs. Some agencies reported deferred maintenance figures consistent with the definition used for data on deferred maintenance included in their financial statements. Others provided data that included major renovation or restoration needs. More specifically:
For DOD, facilities restoration and modernization requirements total over $57 billion. Officials noted that the backlog does not reflect the impact of the 2005 Base Realignment and Closure (BRAC) round or related strategic rebasing decisions that will be implemented over the next several years.
For Energy, the backlog in fiscal year 2005 for a portfolio valued at $85.2 billion was $3.6 billion.
For Interior, officials reported an estimated maintenance backlog of over $3 billion for buildings and other structures.
For GSA, the current maintenance backlog is estimated at $6.6 billion.
For State, the maintenance backlog is estimated at $132 million, which includes all of the deferred/unfunded maintenance and repair needs for prior fiscal years.
For NASA, the restoration and repair backlog is estimated at over $2.05 billion as of the end of fiscal year 2006.
For VA, the maintenance backlog for facilities with major repair needs is estimated at $5 billion, and according to VA officials, VA must address this aged infrastructure while patient loads are changing.
Many of the major real property-holding agencies continue to rely on leased space to meet new space needs.
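Totaling the itemized figures above is straightforward arithmetic, sketched in Python below. Interior's "over $3 billion" is taken at exactly $3.0 billion, so the civilian-agency sum of roughly $20.4 billion is a lower bound; it is consistent with the "over $16 billion" characterization, with the caveat (noted above) that differing backlog definitions make the figures only roughly comparable:

```python
# Backlogs as reported above, in billions of dollars; State's $132 million
# is expressed as 0.132.
civilian_backlogs = {
    "Energy": 3.6,
    "Interior": 3.0,   # "over $3 billion", taken at exactly 3.0
    "GSA": 6.6,
    "State": 0.132,
    "NASA": 2.05,
    "VA": 5.0,
}
dod_restoration = 57.0  # DOD restoration and modernization backlog

civilian_total = sum(civilian_backlogs.values())
print(f"Six civilian agencies: ${civilian_total:.1f} billion")
print(f"Including DOD restoration and modernization: "
      f"${civilian_total + dod_restoration:.1f} billion")
```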
As a general rule, building ownership options through construction or purchase are often the least expensive ways to meet agencies' long-term requirements. Lease purchases—under which payments are spread out over time and ownership of the asset is eventually transferred to the government—are often more expensive than purchase or construction but are generally less costly than using ordinary operating leases to meet long-term space needs. For example, we testified in October 2005 that for the Patent and Trademark Office's long-term requirements in northern Virginia, the cost of an operating lease was estimated to be $48 million more than construction and $38 million more than lease purchase. However, over the last decade we have reported that GSA—as the central leasing agent for most agencies—relies heavily on operating leases to meet new long-term needs because it lacks funds to pursue ownership. Operating leases have become an attractive option, in part because they generally "look cheaper" in any given year, even though they are often more costly over time. Under current budget scorekeeping rules, the budget generally should record the full cost of the government's commitment. Operating leases were intended for short-term needs and thus, under the scorekeeping rules, for self-insuring entities, only the amount needed to cover the first year lease payments plus cancellation costs needs to be recorded. However, the rules have been stretched to allow budget authority for some long-term needs being met with operating leases to be spread out over the term of the lease, thereby disguising the fact that over time, leasing will cost more than ownership.
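The "looks cheaper in any given year" dynamic can be illustrated with entirely hypothetical, undiscounted figures (these are not the Patent and Trademark Office numbers; they are invented solely to show the shape of the comparison):

```python
# Hypothetical figures, in millions of dollars, ignoring discounting for
# simplicity: a large one-time construction cost versus a modest annual
# lease payment that accumulates past it over a long term.
construction_cost = 150.0   # one-time cost of ownership
owner_annual_om = 4.0       # owner-borne operations and maintenance, per year
annual_lease = 14.0         # full-service operating lease payment, per year
term_years = 20

lease_total = annual_lease * term_years
ownership_total = construction_cost + owner_annual_om * term_years

print(f"First-year outlay: lease ${annual_lease:.0f}M "
      f"vs. ownership ${construction_cost + owner_annual_om:.0f}M")
print(f"{term_years}-year total: lease ${lease_total:.0f}M "
      f"vs. ownership ${ownership_total:.0f}M")
print(f"Lease premium over the term: ${lease_total - ownership_total:.0f}M")
```

In the first year the lease looks far cheaper ($14M versus $154M), which is what the scorekeeping rules reward; over the full term the lease costs $50M more, which is the kind of gap the PTO example above reported.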
Resolving this problem has been difficult; however, change is needed because the current practice of relying on costly leasing to meet long-term space needs results in excessive costs to taxpayers and does not reflect a sensible or economically rational approach to capital asset management when ownership would be more cost-effective. Five of the nine largest real property-holding agencies—Energy, Interior, GSA, State, and VA—reported an increased reliance on operating leases to meet new space needs over the past 5 years. According to DHS officials, per their review of GSA's fiscal year 2005 and 2006 lease acquisition data for DHS, there has been no significant increase in GSA-acquired leased space for DHS. In addition, officials from NASA and USPS reported that their agencies' use of operating leases has remained at about the same level over the past 5 years. We did not analyze whether the leasing activity at these agencies, either in the aggregate or for individual leases, resulted in longer-term costs than if these agencies had pursued ownership. For short-term needs, leasing likely makes economic sense for the government in many cases. However, our past work has shown that, generally speaking, for long-term space needs, leasing is often more costly over time than direct ownership of these assets. While the administration and agencies have made progress in collecting standardized data elements needed to strategically manage real property, the long-term benefits of the new real property inventory have not yet been realized, and this effort is still in the early stages. The federal government has made progress in revamping its governmentwide real property inventory since our 2003 high-risk designation. The first governmentwide reporting of inventory data for FRPP took place in December 2005, and GSA published the data, on behalf of FRPC, in June 2006.
According to the 2005 FRPP report, the goals of the centralized database are to improve decision making with accurate and reliable data, provide the ability to benchmark federal real property assets, and consolidate governmentwide real property data collection into one system. According to FRPC, these improvements in real property and agency performance data will result in reduced operating costs, improved asset utilization, recovered asset values, and improved facility conditions, among others. It is important to note that real property data contained in the financial statements of the U.S. government have also been problematic. The Chief Financial Officers (CFO) Act, as expanded by the Government Management Reform Act, requires the annual preparation and audit of individual financial statements for the federal government's 24 major agencies. The Department of the Treasury is also required to compile consolidated financial statements for the U.S. government annually, which we audit. In March 2007, we reported that—for the tenth consecutive year—certain material weaknesses in internal controls and in selected accounting and financial reporting practices resulted in conditions that continued to prevent us from being able to provide the Congress and the American people with an opinion as to whether the consolidated financial statements of the U.S. government were fairly stated in conformity with U.S. generally accepted accounting principles. Further, we also reported that the federal government did not maintain effective internal control over financial reporting (including safeguarding assets) and compliance with significant laws and regulations as of September 30, 2006. While agencies have made significant progress in collecting the data elements from their real property inventory databases for the FRPP, data reliability is still a problem at some of the major real property-holding agencies, and agencies lack a standard framework for assessing the validity of data used to populate the FRPP.
Quality governmentwide and agency-specific data are critical for addressing the wide range of problems facing the government in the real property area, including excess and unneeded property, deterioration, and security concerns. Despite the progress made by the administration and individual agencies in recent years, decision makers historically have not had access to complete, accurate, and timely data on what real property assets the government owns; their value; whether the assets are being used efficiently; and what overall costs are involved in preserving, protecting, and investing in them. Also, real property-holding agencies have not been able to easily identify excess or unneeded properties at other agencies that may suit their needs. For example, in April 2006, the DOD Inspector General (IG) reported weaknesses in the control environment and control activities that led to deficiencies in the areas of human capital assets, knowledge management, and compliance with policies and procedures related to real property management. As a result, the military departments' real property databases were inaccurate, jeopardizing internal control over transactions reported in the financial statements. Compounding these issues is the difficulty each agency has in validating its real property inventory data that are submitted to FRPP. Validation of individual agencies' data is important because the data are used to populate the FRPP. Because a reliable FRPP is needed to advance the administration's real property initiative, ensuring the validity of data that agencies provide is critical. In general, we found that agencies' efforts to validate the data for the FRPP are at the very early stages of development. For example, according to Interior officials, the department had designed and was to begin implementing a program of validating, monitoring, and improving the quality of data reported into FRPP in the last quarter of fiscal year 2006.
Furthermore, according to OMB staff, there is no comprehensive review or validation of data once agencies submit their real property profile data to OMB. OMB staff reported that both OMB and GSA review agency data submissions for variances from the prior reporting period. However, agencies are required to validate their data prior to submission to the GSA-managed database. OMB staff reported that some agencies, as part of the PMA initiative, have provided OMB with plans for ensuring the quality of their inventory and performance data. OMB staff reported that OMB has not, to date, requested these plans of all agencies. OMB staff reported that agencies provide OMB with information that includes the frequency of data updates and any methods used for data validation. In addition, according to OMB staff, OMB relies on the quality assurance and quality control processes performed by individual agencies. Also, OMB staff noted that they rely on agency IGs, agency financial statements, and our reviews to establish the validity of the data. Furthermore, OMB staff indicated that a one-size-fits-all approach to data validation would be difficult to implement. Nonetheless, a general framework for data validation that could guide agencies in this area would be helpful, as agencies continue their efforts to populate the FRPP with data from their existing data systems. A framework for FRPP data validation approaches could be used in conjunction with the more ad hoc validation efforts OMB mentioned to, at a minimum, suggest standards for frequency of validation, validation methods, error tolerance, and reporting on reliability. Such a framework would promote a more comprehensive approach to FRPP data validation. In our recent report, we recommended that OMB, in conjunction with the FRPC, develop a framework that agencies can use to better ensure the validity and usefulness of key real property data in the FRPP.
The threat of terrorism has increased the emphasis on physical security for federal real property assets. All of the nine agencies reported using risk-based approaches to some degree to prioritize facility security needs, as we have suggested; but some agencies cited challenges, including a lack of resources for security enhancements and issues associated with securing leased space. For example, DHS officials reported that the department is working to further develop a risk management approach that balances security requirements and the acquisition of real property and leverages limited resources for all its components. In many instances, available real property requires security enhancements before government agencies can occupy the space. Officials reported that these security upgrades require funding that is beyond the cost of acquiring the property, and, therefore, their acquisition is largely dependent on the availability of sufficient resources. While some agencies have indicated that they have made progress in using risk-based approaches, some officials told us that they still face considerable challenges in balancing their security needs and other real property management needs with their limited resources. According to GSA officials, obtaining funding for security countermeasures, both security fixtures and equipment, is a challenge, not only within GSA, but for GSA’s tenant agencies as well. In addition, Interior and NASA officials reported that their agencies face budget and resource constraints in securing real property. Interior officials further noted that despite these limitations, incremental progress is made each year in security. Given their competing priorities and limited security resources, some of the major real property-holding agencies face considerable challenges in balancing their security and real property management needs. 
We have reported that agencies could benefit from specific performance measurement guidance and standards for facility protection to help them address the challenges they face and help ensure that their physical security efforts are achieving the desired results. Without a means of comparing the effectiveness of security measures across facilities, particularly program outcomes, the U.S. government is open to the risk of either spending more money for less effective physical security measures or investing in the wrong areas. Furthermore, performance measurement helps ensure accountability, since it enables decision makers to isolate certain activities that are hindering an agency’s ability to achieve its strategic goals. Performance measurement can also be used to prioritize security needs and justify investment decisions so that an agency can maximize available resources. Despite the magnitude of the security problem, we noted that this area is largely unaddressed in the real property initiative. Without formally addressing security, there is a risk that this challenge could continue to impede progress in other areas. The security problem has an impact on the other problems that have been discussed. For example, to the extent that funding will be needed for a sustained investment in security, the funding available for repair and restoration, preparing excess property for disposal, and improving real property data systems may be further constrained. Furthermore, security requires significant staff time and other human capital resources and thus real property managers may have less time to manage other problems. In past high-risk reports, we called for a transformation strategy to address long-standing real property problems. While the administration’s current approach is generally consistent with what we envisioned and the administration’s central focus on real property management is a positive step, certain areas warrant further attention. 
Specifically, problems are exacerbated by underlying obstacles that include competing stakeholder interests and legal and budgetary limitations. For example, some agencies cited local interests as barriers to disposing of excess property. In addition, agencies' limited ability to pursue ownership often leads them to lease property that they could more cost-effectively own over time. Another obstacle—the need for improved long-term capital planning—remains despite OMB efforts to enhance related guidance. Some major real property-holding agencies reported that competing local, state, and political interests often impede their ability to make real property management decisions, such as decisions about disposing of unneeded property and acquiring real property. For example, VA officials reported that disposal is often not an option for most properties because of political stakeholders and constituencies, including historic building advocates or local communities that want to maintain their relationship with VA. In addition, VA officials said that attaining the funding to follow through on Capital Asset Realignment for Enhanced Services (CARES) decisions is a challenge because of competing priorities. Also, Interior officials reported that the department faces significant challenges in balancing the needs and concerns of local and state governments, historical preservation offices, political interests, and others, particularly when coupled with budget constraints. Other agencies cited similar challenges related to competing stakeholder interests. If the interests of competing stakeholders are not appropriately addressed early in the planning stage, they can adversely affect the cost, schedule, and scope of a project. Despite its significance, the obstacle of competing stakeholder interests has gone unaddressed in the real property initiative. It is important to note that there is precedent for lessening the impact of competing stakeholder interests.
BRAC decisions, by design, are intended to be removed from the political process, and Congress approves BRAC decisions as a whole. OMB staff said they recognize the significance of the obstacle and told us that FRPC would begin to address the issue after the inventory is established and other reforms are initiated. Without addressing this issue, however, less than optimal decisions that are not based on what is best for the government as a whole may continue. As discussed earlier, budgetary limitations that hinder agencies' ability to fund ownership lead agencies to rely on costly leased space to meet new space needs. Furthermore, the administrative complexity and costs of disposing of federal property continue to hamper some agencies' efforts to address their excess and underutilized real property problems. Federal agencies are required by law to assess and pay for any environmental cleanup that may be needed before disposing of a property—a process that may require years of study and result in significant costs. As valuable as these legal requirements are, their administrative complexity and the associated costs of complying with them create disincentives to the disposal of excess property. For example, we reported that VA, like all federal agencies, must comply with federal laws and regulations governing property disposal that are intended, for example, to protect subsequent users of the property from environmental hazards and to preserve historically significant sites. We have reported that some VA managers have retained excess property because the administrative complexity and costs of complying with these requirements were disincentives to disposal. Additionally, some agencies reported that the costs of cleanup and demolition sometimes exceed the costs of continuing to maintain a property that has been shut down. In such cases, in the short run, it can be more beneficial economically to retain the asset in a shut-down status.
Given that agencies are required to fund the costs of preparing property for disposal, the inability to retain any of the proceeds acts as an additional disincentive. It seems reasonable to allow agencies to retain enough of the proceeds to recoup the costs of disposal, and it may make sense to permit agencies to retain additional proceeds for reinvestment in real property where a need exists. However, in considering whether to allow federal agencies to retain proceeds from real property transactions, it is important for Congress to ensure that it maintains appropriate control and oversight over these funds, including the ability to redistribute the funds to accommodate changing needs. In our recent report, we recommended that OMB, in conjunction with the FRPC, develop an action plan for how the FRPC will address key problems, including the continued reliance on costly leasing in cases where ownership is more cost effective over the long term, the challenges of securing real property assets, and reducing the effect of competing stakeholder interests on businesslike outcomes in real property decisions. Over the years, we have reported that prudent capital planning can help agencies to make the most of limited resources, and failure to make timely and effective capital acquisitions can result in acquisitions that cost more than anticipated, fall behind schedule, and fail to meet mission needs and goals. In addition, Congress and OMB have acknowledged the need to improve federal decision making regarding capital investment. A number of laws enacted in the 1990s placed increased emphasis on improving capital decision-making practices and OMB’s Capital Programming Guide and its revisions to Circular A-11 have attempted to address the government’s shortcomings in this area. 
Our prior work assessing agencies' implementation of the planning phase principles in OMB's Capital Programming Guide and our Executive Guide found that some agencies' practices did not fully conform to the OMB principles, and agencies' implementation of capital planning principles was mixed. Specifically, while agencies' capital planning processes generally linked to their strategic goals and objectives and most of the agencies we reviewed had formal processes for ranking and selecting proposed capital investments, the agencies have had limited success with using agencywide asset inventory systems and data on asset condition to identify performance gaps. In addition, we found that none of the agencies had developed a comprehensive, agencywide, long-term capital investment plan. The agency capital investment plan is intended to explain the background for capital decisions and should include a baseline assessment of agency needs that examines existing assets, identifies gaps, and helps define an agency's long-term investment decisions. In January 2004, we recommended that OMB begin to require that agencies submit long-term capital plans to OMB. Since that report was issued, VA—which was one of our initial case study agencies—issued its first 5-year capital plan. However, the results of follow-up work in this area showed that although OMB now encourages such plans, it does not collect them, and the agencies that were included in our follow-up review do not have agencywide long-term capital investment plans. OMB agreed that there are benefits from OMB review of agency long-term capital plans but stated that these plans should be shared with OMB on an as-needed basis depending on the specific issue being addressed and the need to view supporting materials. Shortcomings in the capital planning and decision-making area have clear implications for the administration's real property initiative. Real property is one of the major types of capital assets that agencies acquire.
Other capital assets include information technology, major equipment, and intellectual property. OMB staff said that agency asset management plans are supposed to align with the capital plans but that OMB does not assess whether the plans are in alignment. We found that guidance for the asset management plans does not discuss how these plans should be linked with agencies’ broader capital planning efforts outlined in the Capital Programming Guide. In fact, OMB’s asset management plan sample, referred to as the “shelf document,” which agencies use to develop the asset management plans, makes no reference to the guide. Without a clear linkage or crosswalk between the guidance for the two documents, there is less assurance that agencies will link them. Furthermore, there could be uncertainty with regard to how real property goals specified in the asset management plans relate to longer term capital plans. The executive order on real property management and the addition of real property to the PMA have provided a good foundation for strategically managing federal real property and addressing long-standing problems. These efforts directly address the concerns we raised in past high-risk reports about the lack of a governmentwide focus on real property management problems and generally constitute what we envisioned as a transformation strategy for this area. However, these efforts are in the early stages of implementation, and the problems that led to the high-risk designation—excess property, repair backlogs, data issues, reliance on costly leasing, and security challenges—still exist. As a result, this area remains high risk until agencies show significant results in eliminating the problems by, for example, reducing inventories of excess facilities and making headway in addressing the repair backlog. 
Furthermore, the current efforts lack an overall framework for helping agencies ensure the validity of real property data in FRPP and do not adequately address the costliness of long-term leases and security challenges. While the administration has taken several steps to overcome some obstacles in the real property area, the obstacle posed by competing stakeholder interests has gone largely unaddressed, and the linkage between the real property initiative and broader agency capital planning efforts is not clear. Focusing on these additional areas could help ensure that the problems and obstacles are addressed. We made three recommendations to OMB's Deputy Director for Management in our April 2007 report on real property high-risk issues. OMB agreed with the report and concurred with its recommendations. We recommended that the Deputy Director, in conjunction with FRPC, develop a framework that agencies can use to better ensure the validity and usefulness of key real property data in the FRPP. At a minimum, the framework would suggest standards for frequency of validation, validation methods, error tolerance, and reporting on reliability. OMB agreed with our recommendation and reported that it will work with the FRPC to take steps to establish and implement a framework. For our second recommendation to develop an action plan for how the FRPC will address key problems, OMB said that the FRPC is currently drafting a strategic plan for addressing long-standing issues such as the continued reliance on costly leasing in cases where ownership is more cost effective over the long term, the challenge of securing real property assets, and reducing the effect of competing stakeholder interests on businesslike outcomes in real property decisions. OMB agreed that it is important to build upon the substantial progress that has been realized by both the FRPC and the federal real property community in addressing the identified areas for improvement.
OMB said that it will share the strategic plan with us once it is in place and will discuss strategies for ensuring successful implementation. For our third recommendation to establish a clearer link or crosswalk between agencies' efforts under the real property initiative and broader capital planning guidance, OMB stated that as agencies update their asset management plans and incorporate updated guidance on capital planning, progressive improvement in this area will be realized. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information on this testimony, please contact Mark Goldstein on (202) 512-2834 or at [email protected]. Key contributions to this testimony were made by Anne Izod, Susan Michal-Smith, and David Sausville. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Summary: In January 2003, GAO designated federal real property as a high-risk area due to long-standing problems with excess and underutilized property, deteriorating facilities, unreliable real property data, and costly space challenges. Federal agencies were also facing many challenges protecting their facilities due to the threat of terrorism. This testimony is based largely on GAO's April 2007 report on real property high-risk issues (GAO-07-349). The objectives of that report were to determine (1) what progress the administration and major real property-holding agencies had made in strategically managing real property and addressing long-standing problems and (2) what problems and obstacles, if any, remained to be addressed.
The administration and real property-holding agencies have made progress toward strategically managing federal real property and addressing long-standing problems. In response to the President's Management Agenda real property initiative and a related executive order, agencies have, among other things, established asset management plans; standardized data reporting; and adopted performance measures. Also, the administration has created a Federal Real Property Council (FRPC) and plans to work with Congress to provide agencies with tools to better manage real property. These are positive steps, but underlying problems still exist. For example, the Departments of Energy (Energy) and Homeland Security (DHS) and the National Aeronautics and Space Administration (NASA) reported that over 10 percent of their facilities are excess or underutilized. Also, Energy, NASA, the General Services Administration (GSA), and the Departments of the Interior (Interior), State (State), and Veterans Affairs (VA) reported repair and maintenance backlogs for buildings and structures that total over $16 billion. The Department of Defense (DOD) reported a $57 billion restoration and modernization backlog. Also, Energy, Interior, GSA, State, and VA reported an increased reliance on leasing to meet space needs. While agencies have made progress in collecting and reporting standardized real property data, data reliability is still a challenge at DOD and other agencies, and agencies lack a standard framework for data validation. Finally, agencies reported using risk-based approaches to prioritize security needs, which GAO has suggested, but some cited obstacles such as a lack of resources for security enhancements. In past high-risk updates, GAO called for a transformation strategy to address the long-standing problems in this area. While the administration's approach is generally consistent with what GAO envisioned, certain areas warrant further attention. 
Specifically, problems are exacerbated by underlying obstacles that include competing stakeholder interests, legal and budgetary limitations, and the need for improved capital planning. For example, agencies cited local interests as barriers to disposing of excess property, and agencies' limited ability to pursue ownership leads them to lease property that may be more cost-effective to own over time. |
Congress established the HBCU Capital Financing Program in 1992 under Title III, Part D, of the Higher Education Act of 1965, as amended, to provide HBCUs with access to low-cost capital to help them to continue and expand upon their educational missions. (See app. II for locations of HBCUs eligible to participate in the program.) Program funds, raised through bonds issued by the DBA and purchased by the FFB, are lent to eligible schools with qualified capital projects. Loan proceeds may be used for—among other things—repairing, renovating, or in exceptional circumstances, constructing and acquiring new instructional or residential facilities, equipment, or research instrumentation. Additionally, schools are able to refinance prior capital loans. Education guarantees loan repayment. Although Education administers the program, the DBA is responsible for many of the program’s operations and is subject to departmental oversight. Specifically, the DBA works with prospective borrowers to develop loan applications and monitors and enforces loan agreements. The loan process consists of multiple steps. HBCUs interested in obtaining funds through the program must first complete a preliminary application that includes information such as enrollment, some financial data— including a description of existing debt—and proposed capital projects. On the basis of this information, the DBA determines whether the school should formally complete an application, which includes more detailed financial information, such as audited financial statements and various campus plans and assessments. To be approved for the loan, an HBCU must satisfy certain credit criteria and have qualified projects. Once the DBA determines a school’s eligibility status, a memorandum is sent to Education for final approval. When approved, the loan goes through a closing process during which certain terms and conditions may be negotiated. 
Table 1 describes key loan terms and conditions to which schools are subject. The Federal Credit Reform Act of 1990, along with guidance issued by OMB and accounting standards, provides the framework agencies are to use in calculating the federal budget costs of federal credit programs, such as the HBCU Capital Financing Program. The two principles of credit reform are defining subsidy cost and requiring that budget authority to cover these costs be provided in advance before new loan obligations are incurred. OMB is responsible for coordinating the estimation of subsidy costs. Subsidy costs are determined by calculating the net present value of estimated cash flows to and from the government that result from providing loans and loan guarantees to borrowers. (Guaranteed loans that are financed by the FFB are treated as direct loans for budgetary purposes, in accordance with FCRA.) Cash flows for direct loans include, for example, loan disbursements to borrowers and borrower repayments of principal and payments of interest to the government. Estimated cash flows are adjusted to reflect the risks associated with potential borrower delinquencies and defaults, and estimates of amounts collected on defaulted loans. Subsidy costs can be positive or negative. If the net present value of cash outflows exceeds the net present value of cash inflows, the government incurs a positive subsidy cost. On the other hand, the government realizes a gain in revenue if there is a negative subsidy. Since the program was established, appropriations legislation has, in general, limited the subsidy costs of the program to be no greater than zero. In addition, the legislation authorizing the program established a credit authority limit of $375 million; of this amount, private HBCUs are collectively limited to borrowing $250 million, and public HBCUs are collectively limited to borrowing $125 million. 
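The subsidy-cost concept described above can be illustrated with a simplified net-present-value sketch. The loan amount, rates, and flat default assumption below are hypothetical illustrations, and the level annual-payment model is a simplification of OMB's actual cash-flow estimates:

```python
# Sketch of a FCRA-style subsidy-cost calculation: the subsidy is the
# net present value of cash outflows to borrowers minus the net present
# value of expected cash inflows back to the government.
# All figures, rates, and the default model are hypothetical.

def npv(cash_flows, discount_rate):
    """Discount a list of (year, amount) cash flows to present value."""
    return sum(amount / (1 + discount_rate) ** year for year, amount in cash_flows)

def subsidy_cost(principal, loan_rate, years, discount_rate, default_rate=0.0):
    # Outflow: the loan disbursement to the borrower in year 0.
    outflows = [(0, principal)]
    # Inflows: level annual payments, thinned by an expected default rate.
    payment = principal * loan_rate / (1 - (1 + loan_rate) ** -years)
    inflows = [(t, payment * (1 - default_rate)) for t in range(1, years + 1)]
    return npv(outflows, discount_rate) - npv(inflows, discount_rate)

# A loan priced at the government's own discount rate with no expected
# defaults carries roughly zero subsidy cost; expected defaults push the
# cost positive, while a loan rate above the discount rate can make it
# negative (a net gain in revenue to the government).
baseline = subsidy_cost(1_000_000, 0.05, 30, 0.05)
with_defaults = subsidy_cost(1_000_000, 0.05, 30, 0.05, default_rate=0.05)
```

This mirrors the sign convention in the text: a positive result means the government bears a cost, and a negative result means it realizes a gain.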
Over a period of 2 months in 2005, three hurricanes struck the Gulf Coast region of the United States, resulting in more than $118 billion in estimated property damages. Two of these hurricanes, Katrina and Rita, struck New Orleans and surrounding areas within a month of each other, causing significant damage to several institutions of higher education in the region, including the campuses of several HBCUs: Dillard University, Southern University at New Orleans, and Xavier University, in Louisiana, and Tougaloo College, in Mississippi. (See app. III for locations of the 8 hurricane-affected HBCUs.) In June 2006, Congress passed the Emergency Act, which, among other things, amends the HBCU Capital Financing Program to assist hurricane-affected HBCUs in their recovery efforts. To be eligible, a school must be located in an area affected by a Gulf Coast hurricane disaster and demonstrate that it (1) incurred physical damage resulting from Hurricane Katrina or Rita; (2) has pursued other sources of compensation from insurance, the Federal Emergency Management Agency (FEMA), or the Small Business Administration, as appropriate; and (3) has not been able to fully reopen in existing facilities or to the levels that existed before the hurricanes because of physical damage to the institution. Key provisions include a lowered interest rate and cost of issuance (both set at 1 percent or less), elimination of the escrow, and deferment of principal and interest payments from program participants for a 3-year period. The Emergency Act also provides the Secretary of Education with authority to waive or modify any statutory or regulatory provisions related to the program in connection with a Gulf Coast hurricane disaster. FEMA assists states and local governments with the costs associated with disaster response and recovery efforts that exceed a state or locale's capabilities.
Grants are also provided to eligible postsecondary educational institutions to help them recover from the disaster. Some institutions of higher education are subsequently provided with referrals to SBA when seeking assistance from FEMA. For private, nonprofit institutions, SBA’s disaster loans are designed to be a primary form of federal assistance. Unlike their public counterparts, private colleges must apply for low-interest, long-term disaster loans prior to seeking assistance from FEMA. Schools may apply for SBA loans, and the aggregate loan amount cannot exceed $1.5 million. In general, the loan terms for each loan include a maximum of 30 years for repayment with interest rates of at least 4 percent. HBCU officials we interviewed reported extensive and diverse capital project needs, including construction and renovation of facilities and addressing deferred maintenance, yet just over half of the available program loan capital has been borrowed. While HBCU capital project needs are not well documented by national studies, the schools themselves have individually identified and documented them. Despite reported needs, only about a quarter of HBCUs have taken steps to participate in the program, and about half of these HBCUs became borrowers. Education has collected and reported limited information on the program’s utilization and has not established performance measures or goals to gauge program effectiveness, though Education officials noted that they are currently working on developing such measures and goals. There are few national studies that document the capital project needs of HBCUs, and they do not provide a current and comprehensive national picture. The four that we identified and reviewed are more than several years old, narrowly scoped, or had limited participation. 
Specifically, the studies are between 6 and 17 years old, and two studies focused only on specific types of need—renovation of historic properties and campus wiring for computer networks. One study that addressed a broader range of needs and was among the most recent had a low response rate—37 percent. Despite the lack of national studies, schools that we interviewed reported extensive, diverse, and ongoing capital project needs. School officials reported that they routinely conduct facility assessments as part of their ongoing strategic planning and that these assessments help determine the institutions' short- and long-term capital needs. They said that capital projects, including the construction of new dormitories, renovation of aging or historic facilities, repair of infrastructure, and addressing long-standing deferred maintenance, are needed for a variety of reasons. New facilities such as dormitories and student centers are often needed as a result of enrollment growth, for example, while modernization of existing facilities is needed to accommodate technological advances. For example, Tuskegee University renovated an existing facility to house its hospitality management program, creating modern meeting facilities along with a full-service hotel, which provides students with a real-world laboratory in which they gain immediate hands-on experiences (see fig. 1). In addition, many of the school officials who we interviewed reported that their schools had particularly old facilities, many of which are listed in the National Register of Historic Places. Some school officials cited their need to repair or replace campus infrastructure. For example, some schools reported needing to replace leaking underground water pipes, while others reported the need to replace 100-year-old water and gas pipes.
Many of the school officials we interviewed reported having deferred maintenance projects, some for over 15 years, and officials from 3 schools estimated their schools’ deferred maintenance to be over $50 million. For some schools, the deferred maintenance is substantial in light of existing resources, according to HBCU officials. These types of capital projects are essential to ensuring student safety and preserving assets that directly affect their ability to attract, educate, and retain students. Over the life of the program, approximately 14 percent of HBCUs have borrowed just over half of the available funds despite the substantial needs reported by schools. Specifically, 23 HBCUs, according to Education, have taken steps to participate in the program, and 14 became borrowers, with loans totaling just over $200 million—below the program’s $375 million total limit. About 20 percent of the eligible private institutions have borrowed a little more than half of the $250 million allotted for private schools, and less than 8 percent of public institutions have borrowed less than two-thirds of the $125 million allotted for public schools. To date, loan participants have all been 4-year institutions. Taking into account loan repayments, the total amount of outstanding loans was about $168 million as of August 2006, leaving about $207 million available for loans (about $66 million for public schools and about $141 million for private schools). Table 2 shows the participants and the amounts of their loans. Regarding other schools that took steps to participate in the program but did not become borrowers, 6 schools were reported to have withdrawn their applications, and 6 others had applications pending. To date, only one school has been denied a loan. Education has collected and reported limited information concerning HBCUs’ capital financing needs and the schools’ utilization of the program. 
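The remaining-capacity figures reported above follow directly from the statutory limits and the outstanding balance, as a quick arithmetic check shows (dollar amounts in millions, rounded as in the text):

```python
# Statutory credit limits under the HBCU Capital Financing Program,
# in millions of dollars, as described in the text.
TOTAL_LIMIT = 375
PRIVATE_LIMIT = 250
PUBLIC_LIMIT = 125

# Amounts reported as still available for loans as of August 2006
# (rounded figures from the text).
remaining_private = 141
remaining_public = 66

remaining_total = remaining_private + remaining_public  # about $207 million available
outstanding = TOTAL_LIMIT - remaining_total             # about $168 million outstanding
```

The private and public allotments sum to the overall $375 million credit authority, and the roughly $207 million still available is consistent with the reported $168 million in outstanding loans.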
Education officials said that, beginning in 2005, to understand schools’ financing needs and whether the program could assist schools, the DBA engaged in an outreach effort through which it identified 15 schools that might be candidates for the program. Over the history of the program, Education has collected some information to track program utilization, including the number of inquiries and applications received and the loan volume requested, approved, and awarded. However, Education has not widely reported such data. Education has provided certain elements of its program utilization data to Congress’ appropriations committees via its annual justifications of budget estimates documents. Table 3 shows the data collected by Education to track program utilization. Education officials noted that while the data they collect are useful to indicate the extent to which the schools have used or accessed the program, they are inadequate to address questions concerning whether the program is under- or overutilized or to demonstrate program effectiveness. These officials noted that they believe program performance measures would be useful but that developing such measures is particularly challenging for a credit program like the HBCU Capital Financing Program. This is so in part because participation in a loan program is dependent on complex factors, such as schools’ funding needs, the availability of other sources of financing, and schools’ desire and capacity to assume debt. Program officials cautioned against setting firm program participation goals, for example, because they would not want Education to be perceived as “pushing” debt onto schools that either do not want to, or should not, assume loan obligations before their circumstances warrant doing so. Another complicating factor program officials cited was the small number of potential program beneficiaries. 
One Education official noted that Education has established performance goals and measures for its student grant and loan aid programs, which are based on sophisticated survey mechanisms designed to measure customer (students, parents, and schools) satisfaction with the department’s aid application, receipt, and accounting processes. Because the scope of the student aid programs is large, encompassing millions of students and parents and thousands of schools, it is reasonable to develop and use such measures, the official noted. In contrast, such measures may not be meaningful given the small number of HBCUs and the frequency with which loans are made under the Capital Financing Program. Nevertheless, these officials told us that they believe program performance measures would be useful to gauge program effectiveness. They have established a working group to develop performance measures for the program and were consulting with OMB and other federal officials with expertise on federal credit programs to guide their efforts. The officials noted that they do not have any firm schedule with respect to completing their development of program performance measures. The HBCU loan program provides access to low-cost capital financing and flexibilities not always available elsewhere, but some loan terms and conditions discourage participation, though school officials said they remain interested in the program. The low interest rate and long repayment period were regarded favorably by participants and nonparticipants alike, and the program makes funds available for a broader range of needs than some federal grant programs. However, the pooled escrow arrangement, monthly repayment terms, and the extent to which some loans have been collateralized could discourage participation. 
The HBCU Capital Financing Program provides lower-cost financing and longer loan maturities and may be used for a broader range of capital projects by a greater number of schools than other funding sources, according to HBCU officials. Some officials noted that the program offers loans with lower interest rates than traditional bank loans. Moreover, the program’s interest rates are typically less than the interest rates schools would be required to pay investors if they issued their own bonds to raise funds. According to school officials and bond industry experts, some HBCUs could obtain, and some have obtained, lower interest rates than those offered under the program by issuing their own tax-exempt bonds. However, this is predicated on a school’s ability to obtain a strong credit rating from a credit rating agency. Schools with weaker or noninvestment grade credit ratings would likely have to pay investors higher interest rates. In addition, schools issuing taxable bonds would likely pay higher interest rates to investors, compared to the program’s interest rates, regardless of the schools’ credit ratings. While schools can lower interest rates paid to bond investors by purchasing bond insurance, the cost to do so may be prohibitive. For these reasons, officials at Education and HBCUs, as well as bond industry experts, told us that the HBCU Capital Financing Program may be ideally suited for schools that have or would receive a noninvestment grade rating. Participation in the program may also benefit schools by enhancing their ability to issue their own bonds in the future. An official at one HBCU, for example, told us that obtaining and repaying a loan under the program had allowed the school to demonstrate its fiscal stability and to subsequently issue its own bond with a lower interest rate than was then being offered under the program. 
In addition to citing lower interest rates, a large majority of the HBCU officials we spoke to said that the program’s 30-year loan repayment period was attractive, and some noted that private funding sources would likely offer 20 years or less. Some school officials noted that the longer repayment period allowed schools to borrow more or reduce the amount of monthly payments. Borrowing larger amounts, officials reported, allowed them to finance larger or more capital projects. Another school we spoke with that once considered using the program said that even though it was able to issue a tax-exempt bond and obtain a more favorable interest rate, it could only obtain a 20-year maturity period for the bond. Some HBCU officials told us they preferred grants to loans but noted that, in general, compared to other federal grant programs, more HBCUs are eligible for the HBCU loan program, and that it also funds a wider variety of projects. Grants are available for most HBCUs under the Higher Education Act’s strengthening institutions programs, also administered by Education, which fund capital projects as well as other activities, such as faculty and academic program development. However, fewer HBCUs are eligible for other federal grant programs that provide funding for capital projects. For example, the Department of Agriculture’s 1890 Facilities Grant Program is only for those 18 HBCUs that are land grant institutions. Similarly, the Department of Health and Human Services’ facilities improvement program provides only for those HBCUs with a biomedical and behavioral research program. While there are a variety of other assistance programs offered by charitable foundations, and state and local governments, available funding is limited. 
HBCU officials we spoke with—participants and nonparticipants alike—reported that a disincentive to participation in the program was the pooled escrow; additionally, other terms and conditions, such as the monthly repayment schedule and the extent to which loans are collateralized, were also viewed by some as deterrents. Over half of HBCU respondents we spoke with—both participants and nonparticipants—agreed that the pooled escrow was a drawback, and over one-fifth said that it actually deters participation. The escrow funds, which reduce the federal budget cost of the program by offsetting the estimated costs associated with delinquent loan repayments and borrower defaults, net of collections, are returned to program participants if no such losses occur. However, a recent default by one borrower—the first to occur in the program’s history—has heightened awareness among program participants of the financial risk to them inherent in the pooled escrow arrangement. Since the default, Education has withdrawn funds from participating schools’ escrow accounts twice and will continue doing so until the default is resolved, leaving other schools uncertain as to how much of their own escrow accounts will remain or be replenished. The pooled escrow feature also presents a problem for state institutions because they are prohibited from assuming the liability of another institution. One program official said that this issue was common for state schools because state law prohibits the lending of public funds to nonstate entities—considered to be the case when state funds in escrow are used to hedge against the delinquency of another institution. One participating public HBCU reported that it had to resolve this problem by accounting for its escrow payments as a fee that would not be returned to the school rather than a sum that could be recovered, as the program intends. 
Because the escrow feature is mandated by law, any changes to this arrangement would require congressional authorization. Additionally, in order to maintain the federal subsidy cost of the program at or below zero, other alternatives—such as assessing additional fees on borrowers, or requiring contributions to an alternative form of a contingency reserve—would be necessary in the absence of the pooled escrow arrangement. While frequency of payments is not as prevalent a concern as the pooled escrow, some schools objected to the program’s requirement that repayments be made monthly as opposed to semiannually, as is common in the private market. Schools participating in the HBCU Program have been required by the DBA to make payments monthly, although FFB lending policy is to require repayments only on a semiannual basis. Despite the fact that participants have met the terms of an extensive credit evaluation process, DBA officials expressed the view that the monthly repayment requirement promotes good financial stewardship on the part of the schools. However, some HBCU officials said that they incur opportunity costs in making payments on a monthly versus a semiannual basis. They also noted that it would be more practical if payments were to coincide with the beginning of their semesters, when their cash flows are typically more robust. Additionally, almost half of the participating schools expressed concern about the amount of collateral they had to pledge in order to obtain a loan. In most cases, program participants have pledged certain real property as collateral, though endowment funds and anticipated tuition revenue are also allowed as collateral. Some HBCU officials said their loans were overcollateralized in that the value of the real estate pledged as security exceeded the value of the loan. 
They noted that such circumstances can present a problem for those schools trying to obtain additional capital financing without sufficient assets remaining available as collateral. One nonparticipant cited the collateral required of other institutions as a reason for its decision not to participate. When asked about the amount of collateral required, Education and DBA officials reported that the extent and amount of collateral required to obtain a loan under the program varies depending on the individual circumstances of an institution. The amount of collateral required may be less for institutions that have maintained relatively large endowments and stable tuition revenue and more for institutions that have few or no physical properties to use as collateral, for example. Education officials further noted that requiring the value of collateral to be greater than the value of the loan was not an uncommon business practice. Overall, more than two-thirds of the participant schools and more than a third of the nonparticipants said they are interested in using the program but some said that their continued or future interest in the program would depend on its being modified. Several schools suggested the types of projects eligible for funding could be broadened, which might allow them to undertake capital projects that would, in turn, assist them in attracting and retaining additional students. Campus beautification projects and multipurpose community centers were cited as examples. In addition, they regarded new construction—for which program loans are available only under exceptional circumstances—as particularly important because new construction attracts more students and because renovations often incur unexpected costs. Nevertheless, many public HBCU school officials we spoke with said that in view of their states’ continuing fiscal constraints, they expect to consider the loan program as a future funding resource. 
While Education has taken limited steps to improve the program, we found significant weaknesses in management controls that compromise the extent to which Education can ensure program objectives are being achieved effectively and efficiently. Education has recently provided schools the choice of fixed or variable interest rates, allowed for larger loan amounts, and afforded more opportunity for schools to negotiate loan terms, which appealed to schools. In addition, Education has attempted to increase awareness of the program among HBCU officials through increased marketing of the program by the DBA. While Education has taken steps to improve the program, we found significant weaknesses in its management control with respect to its communications with HBCUs, compliance with program and financial reporting laws and guidance, and monitoring of its DBA. Since 2001, Education has taken some steps to improve the program— in some cases by allowing greater negotiation of certain loan terms and conditions. Department officials said that changes to the program were necessary to remain competitive with other programs and the private market. These flexible terms included a variable interest rate option and the opportunity to negotiate the amount of additional debt that a school can subsequently assume through other financing arrangements. In fact, since 2003, 4 of the 7 schools that have received loans have taken advantage of the variable interest rate. Regarding the department’s monitoring of their debt, officials at another school said that they were able to negotiate with the DBA the amount of additional debt they could assume—from $500,000 to $1 million—before they would have to notify the department. School officials said this change was important because it not only reduced their administrative burden but it also gave them additional leeway to pursue other capital financing. 
Since 2003, the program has made greater use of loans for the sole purpose of refinancing existing debt. Two participants reported estimated savings of at least $3.7 million from refinancing under the program. According to department officials, Education has also made greater use of the Secretary’s authority to originate loans exceeding $10 million and to make multiple loans to an institution, providing schools with more purchasing power. Program officials said that while the limit on the amount and number of loans that could be made was intended to prevent disproportionate use of the loan fund by larger and more affluent schools, it no longer reflected the reality of current costs for construction and renovation or the budgetary constraints facing many states. Additionally, program officials we spoke with said they had enhanced program marketing. For example, the DBA has developed a Web site describing the program and offering answers to frequently asked questions. In addition, officials reported attending the national and regional conferences for college executives shown in table 4, completing over 60 campus site visits, and contacting other school officials by telephone. Program officials also reported that most schools received written correspondence or an e-mail to inform them of the program. Through these efforts, according to DBA officials, all HBCUs had been contacted as of 2005. They also said they timed these outreach efforts to correspond with schools’ annual budgetary and enrollment processes in order to prompt schools to think about potential capital projects that could fit the program. DBA officials said that their marketing approach for fiscal year 2006 would be the same as in the previous year. 
While Education has taken some steps to improve the HBCU loan program, we found significant weaknesses in its management control of the program with respect to its (1) communications with HBCUs, (2) compliance with program and financial reporting laws and guidance, and (3) monitoring of its DBA, as described below. Many HBCU officials we interviewed reported a lack of clear, timely, and useful information from Education and the DBA at various stages of the loan process, and said the need to pursue such information themselves had sometimes led to delays. While program materials represent the loan application as a 2- to 3-month process, about two-thirds of the loans made since January 2001 were not closed until 7 to 18 months after application. Officials from one school said that it had taken 6 to 7 months for the DBA to relay from Education a clarification as to whether its proposed project was eligible. Other schools reported that Education had not provided timely or clear information about the status of their loans. In some cases, schools reported that the lengthy loan process resulted in project delays and cost increases over the intervening time period. An official from one school told us that it remained unclear to him why his school was denied a loan. Education officials acknowledged that the loan process was lengthy for some borrowers and said its DBA had attempted to work with these borrowers to address problems with applications. School officials told us that in some cases the loan process could have been expedited had Education and the DBA made use of previous borrowers’ experiences to apprise them of problems that could affect their own applications—such as the fact that title searches can be especially time consuming and problematic for private HBCUs, some of which did not receive all property deeds from their founders when they were established in the 1800s. 
With regard to making loan payments, several officials we interviewed said that DBA officials had not provided sufficiently detailed information. In one situation, officials from one school reported that school auditors had questioned the accuracy of the loan payment amount for which the school was billed by the DBA because the billing statements omitted information concerning the extent to which the amount billed included escrow payments. Other officials noted that they had not received written notification from the DBA concerning the full amount of their potential liability after funds had been withdrawn from the schools’ escrow accounts to cover payments on behalf of another borrower that had recently defaulted on a loan. Education has not complied with certain statutory requirements relating to the program’s operations and how federal agencies are to account for the government’s cost of federal loan programs. In creating the program, Congress established within the Department of Education an HBCU Capital Financing Advisory Board composed of the (1) Secretary of Education or the Secretary’s designee, (2) three members who are presidents of private HBCUs, (3) two members who are presidents of public HBCUs, (4) the President of the United Negro College Fund or his/her designee, (5) the President of the National Association for Equal Opportunity in Higher Education or his/her designee, and (6) the Executive Director of the White House Initiative on HBCUs. By law, the Advisory Board is to provide advice and counsel to the Secretary of Education and the DBA concerning the capital financing needs of HBCUs, how these needs can be met through the program, and what additional steps might be taken to improve the program. To carry out its mission, the law requires that the board meet with the Secretary of Education at least twice each year. 
Despite this requirement, the board has met only three times in the past 12 years, the most recent meeting occurring in May 2005. According to Education officials, the Advisory Board did not routinely meet because of turnover among Education staff as well as HBCU presidents designated to serve on the board. Education officials told us that there could have been other reasons why the Advisory Board did not meet in earlier years, but none that they had knowledge of. Although Education officials told us that they had believed another Advisory Board meeting would be convened soon after the May 2005 meeting, no such meeting has yet been scheduled. We also found that Education has not fully complied with requirements of the Federal Credit Reform Act of 1990, which, along with guidance issued by OMB and accounting standards, provide the framework that Education is to use in calculating the federal budget costs of the program. In particular, Education has excluded certain fees paid by HBCUs from its calculations of program costs. The interest payments made by HBCUs on program loans include a surcharge of 1/8th of 1 percent assessed by FFB in accordance with its policy and as permitted by statutory provisions governing its transactions. Under the Federal Credit Reform Act of 1990, these fees—i.e., the surcharge—are to be recognized as cash flows to the government and included in agencies’ estimated costs of the credit programs they administer. In addition, these fees are to be credited to the program’s financing account. OMB officials responsible for coordinating agencies’ subsidy cost estimates acknowledged that Education should include the fees in its budgetary cost estimates and noted that other agencies with similar programs do so. 
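The effect of the FFB surcharge can be illustrated with a brief calculation. The sketch below is hypothetical: the report establishes only that the surcharge is 1/8th of 1 percent (0.125 percentage points); the loan amount and base interest rate shown are assumed for illustration and are not drawn from the report.

```python
# The surcharge rate comes from the report; 1/8th of 1 percent = 0.00125.
SURCHARGE = 0.00125

def annual_interest(principal, base_rate):
    """Split a year's interest into the base portion and the FFB surcharge."""
    base = principal * base_rate   # interest at the base rate
    fee = principal * SURCHARGE    # FFB surcharge portion
    return base, fee

# Hypothetical $10 million loan at an assumed 5 percent base rate.
base, fee = annual_interest(10_000_000, 0.05)
print(base)  # 500000.0
print(fee)   # 12500.0 -- the fee portion excluded from Education's cost estimates
```

On this assumed loan, the surcharge would generate $12,500 per year in fees that, under the Federal Credit Reform Act, should be recognized as cash flows to the government.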
Further, the written agreement among Education, the FFB, and the DBA that governs the issuance of bonds by the DBA for purchase by the FFB for the purpose of funding loans under the program also stipulates that these fees are to be credited to Education. Despite these provisions, Education has not included the fees in its calculations of the federal cost of the program, thereby overestimating the program’s costs; nor has Education accounted for the fees on its financial statements. Instead, the DBA has collected and held these fees in trust. Although the contract between Education and the DBA generally describes how the DBA is to manage the proceeds from and the payment of bonds issued to fund loans made to HBCUs, it does not specifically address how the DBA is to manage the payments that reflect the 1/8th of 1 percent paid by borrowers. In general, the DBA collects borrower repayments and remits the proceeds to the FFB to pay amounts due on the program’s outstanding bonds. However, the amounts paid to the FFB do not include the fees paid by borrowers. As a result, it is unclear how these funds, retained by the DBA, are to be eventually returned to the federal government. Moreover, Education has not monitored the DBA’s handling of these funds and is unaware of the accumulated balance. Although the current DBA has been under contract with Education for over 5 years, Education has not yet assessed its performance with respect to key program activities and contractual obligations, although Education officials said that they have been pleased with the DBA’s performance. One of these major activities is “marketing” the capital financing program among HBCUs in order to raise awareness and help ensure that the program is fully utilized. Although the DBA is required by its contract with Education to submit annual reports and audited financial statements to Education, it has not done so. 
Although DBA officials told us that the department has offered some informal assessments, Education has not provided guidance for the DBA’s marketing efforts. Still, we found indications that the DBA’s marketing strategy has likely suffered from a lack of guidance and monitoring by Education. Officials we spoke with at 4 schools did not know of the program, and another 8 told us they had learned about it from peers or advocacy organizations. Others were aware of the DBA’s marketing activities but offered a number of suggestions for improvement, citing a need for more specific information as to the extent to which collateral would be needed, how the program meets the needs of both private and public schools, or examples and testimonials about funded projects. Several school officials said DBA outreach through conferences was not necessarily well targeted—because the selected conferences covered a full range of topics for a variety of schools and not only HBCUs, because they focused on issues relating to either public or private HBCUs, or because they drew school officials not involved in facilities planning. Additionally, the DBA has reserved its direct contact marketing largely for 4-year schools. DBA officials justified this decision on grounds that smaller schools tended to have more difficulty borrowing and that they had targeted larger schools that they believed would be most likely to benefit from the program. However, as prescribed by law, loans are to be fairly allocated among as many eligible institutions as possible. Because the DBA’s compensation is determined as a percentage of the amount borrowed, and the costs it incurs may not vary significantly from loan to loan, it is important to monitor its activities to ensure it is not making loans exclusively to schools that are likely to borrow larger amounts and for which its potential for profit is highest. 
With regard to the DBA’s basic responsibility for keeping records, we found several cases in which critical documents were missing from loan agreement files. Moreover, the DBA was unable to provide us with complete files for any of the 14 institutions that had participated or were participating in the program. For example, documents that included loan applications, decision memoranda, financial statements, and real property titles were missing for several schools. In our file review, we found that files for 9 schools did not include the original application. Files for 8 schools did not include the required financial statements for demonstrating long-term financial stability, and 5 lacked DBA memoranda pertaining to the decision to make the loan. Indeed, until our review, key Education officials were unaware that such documents were missing. Officials from four HBCUs in the Gulf Region we spoke with (Dillard University, Southern University at New Orleans, Xavier University, and Tougaloo College) told us that, in light of the extensive hurricane damage to their campuses, they were pleased with the emergency loan provisions but concerned that the 1-year authorization would not provide sufficient time for them to take advantage of the special program features. School officials from each of the four schools noted that their institutions had incurred physical damages caused by water, wind, and, in the case of one institution, fire, and that the actual financial impact of the hurricanes may remain unknown for years. Although Education officials told us that they have not yet determined the extent to which the department would make use of its authority to waive or modify program provisions for hurricane-affected institutions, the department would be prepared to provide loans to hurricane-affected HBCUs. 
Officials from the three HBCUs we visited reported extensive damage to their campuses as a result of the 2005 hurricanes and noted that it may take another few years to determine the full financial impact. School officials told us that they have not been able to fully assess all hurricane-related costs, such as replacing property, repairing plumbing systems, landscaping, and replacing sidewalks, and as a result, current estimates are only preliminary. School officials noted that the assessment process was lengthy because of, among other things, the time required to prioritize campus restoration needs, undertake complex assessments of historic properties, follow state assessment processes, and negotiate insurance settlements. Each of the four schools we contacted incurred physical damages caused by water and wind; one school also incurred damage by fire. For example, the campuses of all three schools in New Orleans were submerged in 2 to 11 feet of water for about a month after the hurricanes, damaging the first floors of many buildings as well as their contents. As a result, schools required removal of debris and hazardous waste (e.g., mold and asbestos), repair and renovation, and the taking of actions recommended by FEMA to mitigate future risks. Xavier University officials, who preliminarily estimated $40 million to $50 million in damage to their school, said that they faced the need to undertake several capital projects, including replacing elevators, repairing roofs, and rehabilitating the campus auditorium and replacing its contents. According to officials from Southern University at New Orleans, state officials have estimated damages at about $17 million; at the time of our visit 10 months after the hurricanes, state insurance assessors were beginning their work on the campus library, where mold reached all three floors, covering books, art collections, card catalogues, and computers. 
Officials at Dillard University also reported extensive damage, preliminarily estimated as high as $207 million. According to officials, five buildings—which were used for academic support services and residential facilities—had to be demolished because of extensive damage; three of these buildings were destroyed by fire. They also reported that the international studies building, built adjacent to a canal levee in 2003, will have to be raised at least 18 feet to make it insurable. Officials at Tougaloo College, in Mississippi, reported wind and water damage to the roofs of some historic properties, which along with other damages, they preliminarily estimated at $2 million. Figures 2-4 show some of the damages and restoration under way at the three schools we visited. The school officials we spoke with found certain emergency provisions of the loan program favorable, but they expressed reservations about the time frame within which they are required to make application for the special loans. Most school officials appreciated the reduced interest rate and cost of issuance (both set at 1 percent or less) and that the Secretary of Education was provided discretion to waive or modify statutory or regulatory provisions, such as credit criteria, to better assist them with their recovery. They said the normal sources of information for credit evaluation—such as audited financial records from the last 5 years—would be difficult to produce. Other conditions of the emergency loan provisions some officials found favorable were the likelihood that loans would be awarded sooner—providing a timely infusion of funds—with more flexibility compared to other programs. 
Officials at both Dillard and Xavier Universities said that because their institutions had already spent a significant amount of their available resources, the emergency loans could be used to bridge any emerging financial difficulties they experience as they continue to pursue insurance settlements and assistance from other federal agencies, including FEMA and SBA. Additionally, some school officials said that the program may allow for greater flexibility compared to FEMA and SBA aid. For example, some officials told us that in addressing damages caused by the hurricanes they would like to improve upon their facilities to mitigate potential environmental damages in the future and, at one school, upgrade an obsolete science laboratory with state-of-the-art equipment. They said, however, that in some cases FEMA aid is limited to restoring campus facilities to their prestorm conditions and in other cases desired improvements might not be consistent with requirements for historic preservation. While most school officials we spoke with found select provisions favorable, they expressed concerns with stipulations that limit the extension of the special provisions to 1 year, primarily because all of the costs associated with damage from the hurricanes have not been fully identified. Further, officials at Southern University at New Orleans—a public institution—said that they are subject to an established capital improvement approval process involving both its board of directors and state government officials that alone normally requires a year to complete. Additionally, some of the schools are concerned that they may not be able to restore damaged and lost records needed to apply to the program. Officials reported that a time frame of at least 2 to 3 years would allow them to better assess the costs of the damages. 
Other concerns cited included eligibility requirements for the deferment provision, and officials from one institution expressed disappointment that the emergency provisions did not include some form of loan forgiveness. Education officials said that they are taking the steps necessary to ensure that the department is prepared to provide loans to hurricane-affected HBCUs. Education officials noted that in light of the statutory limit on the total amount of loans it can make under the program and the balance of loans outstanding as of August 2006, about $141 million in funding is available for private, and $66 million for public, HBCUs—both those affected by the hurricanes and others. The officials noted that the department had not yet determined to what extent the Secretary would use her discretion to waive or modify program requirements, including the statutory loan limits. They told us that some of their next steps included determining how the program’s application processes could be changed to ensure that funds can be provided to hurricane-affected schools in a timely manner. They said the department would need to consider to what extent it would apply credit criteria to hurricane-affected institutions in light of the fact that these institutions would likely be experiencing fiscal stresses as they seek to rebuild their campuses and attempt to return to their prior levels of enrollment. They noted that they would talk with school officials to gain a better understanding of which program criteria remain applicable, but anticipate using fewer credit criteria in their determinations. Education officials also noted that they will likely have to decide on the appropriate level of flexibility to exercise with respect to collateralizing loans for hurricane-affected HBCUs because some institutions may lack the collateral they had prior to the hurricanes.
Moreover, these officials stated that the department would need to consider establishing limits on the types of projects for which it would provide funding to ensure that loans are not provided for capital projects for which other federal aid is available, such as that provided by FEMA. For example, program officials recognized that a significant cost of recovery for the schools in the Gulf Coast region is debris removal, but believe FEMA is likely to provide funding for such costs. Even with these challenges and outstanding questions, program officials said that they are confident the department will be able to lend funds to hurricane-affected institutions prior to expiration of the special legislative provisions applicable to hurricane-affected HBCUs. They noted that the department has already notified eligible institutions of the availability of funds and would hold additional meetings with schools to gain an understanding of their capital improvement and restoration needs. HBCUs play an important role in fulfilling the educational aspirations of African-Americans and others and in helping the nation attain equal opportunity in higher education. In establishing the Capital Financing Program, Congress sought to help HBCUs continue and expand their educational mission. The program has in fact assisted some HBCUs in financing their capital projects. Factors, however, including awareness of the program; clear, timely, and useful information concerning the status of loan applications and approvals; and certain loan terms and conditions, may be discouraging other schools from participating in the program. Some HBCUs have accessed even more attractive financing outside of the program, while yet others may face financial challenges that make it unwise to borrow through the program—factors that affect program utilization and make the development of program performance goals and measures challenging. 
Despite the challenge, Education is attempting to design performance goals and measures—a positive step that if successfully completed could be useful in informing Congress and others about the extent to which the program is meeting Congress’ vision in establishing it. HBCU officials had a number of suggestions, such as changing the frequency of schools’ loan repayments from a monthly to a semiannual basis, that they believed could improve the program and positively influence program utilization. By soliciting and considering such feedback from HBCU officials, Education could ensure that the program is optimally designed to achieve its objectives effectively and efficiently. However, Education has not made consistent use of the mechanism—the HBCU Capital Financing Advisory Board—Congress provided to help ensure Education received input from critical program stakeholders. Receiving feedback from schools would also allow the department to better inform Congress about the progress made under the program. Effective management control is essential to ensuring that programs achieve results and depends on, among other things, effective communication. Agencies must promote relevant, reliable, and timely communication to achieve their objectives and for program managers to ensure the effective and efficient use of resources. Effective management control also entails ensuring that an agency complies with applicable laws and regulations and that ongoing monitoring occurs during the normal course of an agency’s operations. In failing to follow the requirements of the Federal Credit Reform Act, Education has overstated the budgetary cost of the program. Accurately accounting for the cost of federal programs is all the more important in light of the fiscal challenges facing the nation. 
Moreover, failing to adequately monitor the DBA’s performance with respect to critical program responsibilities—record keeping, marketing, accounting, and safeguarding the federal funds it has been collecting from program borrowers—increases the program’s exposure to potential fraud, waste, abuse, and mismanagement. To better ensure that the HBCU Capital Financing Program can assist these schools to continue and expand their educational missions, GAO is making the following five recommendations for Executive Action. To ensure that it obtains the relevant, reliable, and timely communication that could help ensure that program objectives are being met efficiently and effectively, and to meet statutory requirements, we recommend that the Secretary of Education regularly convene and consult with the HBCU Advisory Board. Among other things, the Advisory Board could assist Education in its efforts to develop program performance goals and measures, thereby enabling the department and the board to advise Congress on the program’s progress. Additionally, Education and the Advisory Board could consider whether alternatives to the escrow arrangement are feasible that both address schools’ concerns and the need to keep federal costs at a minimum. If Education determines that statutory changes are needed to implement more effective alternatives, it should seek such changes from Congress. To ensure program effectiveness and efficiency, we recommend that the Secretary of Education enhance communication with HBCU program participants by (1) developing guidance for HBCUs, based on other schools’ experiences with the program, on steps that applicants can take to expedite loan processing and receipt of loan proceeds, and (2) regularly informing program applicants of the status of their loan applications and department decisions. 
In light of the program’s existing credit requirements for borrowers and the funds placed in escrow by borrowers to protect against loan delinquency and default, we recommend that the Secretary of Education change its requirement that borrowers make monthly payments to a semiannual payment requirement consistent with the DBA’s requirement to make semiannual payments to the FFB. To improve its estimates of the budgetary costs of the program, and to comply with the requirements of the Federal Credit Reform Act, we recommend that the Secretary of Education ensure that the program subsidy cost estimation process include as a cash flow to the government the surcharge assessed by the FFB and paid by HBCU borrowers and pay such amount to the program’s financing account. Additionally, we recommend that the Secretary of Education audit the funds held by the DBA generated by this surcharge and ensure the funds are returned to the Department of the Treasury and paid to the program’s financing account. To ensure adequate management control and efficient program operations, we recommend that the Secretary of Education increase its monitoring of the DBA to ensure its compliance with contractual requirements, including record keeping, and that the DBA is properly marketing the program to all potentially eligible HBCUs. In written comments on a draft of this report, Education agreed with our findings and all but one of our recommendations and noted that our report would help it enhance the program and better serve the nation’s HBCUs. Education agreed with our recommendation to regularly convene and consult with the HBCU Advisory Board and noted that the department would leverage the board’s knowledge and expertise to improve program operations and that the department had scheduled a board meeting for October 27, 2006. 
Education also agreed with our recommendation to improve communications with HBCUs, noting that it would take steps including developing guidance based on lessons learned to expedite loan processing and receipt of proceeds, and regularly informing applicants of their loan status and department decisions. Moreover, Education agreed with our recommendation to improve its budget estimates for the program, indicating that it would work with OMB and Treasury to do so. Further, with regard to our recommendation that the department increase its monitoring of its DBA, the department stated that it would require the DBA to submit quarterly reports on program participation and financing, identify and locate missing loan documentation, and maintain these efforts for each subsequent loan disbursal. Additionally, the department said that it was planning to conduct an audit of the DBA’s handling of loan funds and associated fees, as we recommended. With respect to our recommendation that would allow participating schools to make semiannual payments, Education said it would be imprudent to implement the recommendation at this time because of the potential for default as well as the exposure from a default by a current program participant. We considered these issues in the development of our recommendation and continue to believe that the credit evaluation performed by the DBA, the funds set aside by borrowers held in escrow, and the security pledged by borrowers provide important and sufficient measures to safeguard taxpayers against potential delinquencies and default. Further, while not noted in our draft report reviewed by the department, the law requires that borrowers make payments to the DBA at least 60 days prior to the date for which payment on the bonds is expected to be needed. In addition, borrowers have been required to submit, on an annual basis, audited financial reports and 3-year projections of income and expenses to the DBA. 
These measures provide additional safeguards as well as a mechanism to alert the department of potential problems. We added this information to our description of program terms and conditions in table 1. Education also provided technical comments that we incorporated into this report where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretary of Education, appropriate congressional committees, the Director of OMB, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix IV. Appendix II: Number of HBCUs Eligible to Participate in Capital Financing Program by State (as of August 31, 2006) (state-by-state listing omitted). In addition to those named above, the following individuals made important contributions to the report: Jeff Appel, Assistant Director; Tranchau Nguyen, Analyst-in-Charge; Carla Craddock; Holly Gerhart; Lauren Kennedy; Sue Bernstein; Margie Armen; Christine Bonham; Jessica Botsford; Michaela Brown; Richard Burkard; Carlos Diz; Kevin Jackson; Tom McCool.

Historically Black Colleges and Universities (HBCU), which number around 100, undertake capital projects to provide appropriate settings for learning, but many face challenges in doing so.
In 1992, Congress created the HBCU Capital Financing Program to help HBCUs fund capital projects by offering loans with interest rates near the government's cost of borrowing. We reviewed the program by considering (1) HBCU capital project needs and program utilization, (2) program advantages compared to other sources of funds and schools' views on loan terms, (3) the Department of Education's (Education) program management, and (4) certain schools' perspectives on and Education's plan to implement loan provisions specifically authorized by Congress in June 2006 to assist in hurricane recovery efforts. To conduct our work, we reviewed applicable laws and program materials and interviewed officials from federal agencies and 34 HBCUs. HBCU officials we interviewed reported extensive and diverse capital project needs, yet just over half of available loan capital ($375 million) has ever been borrowed. About 23 HBCUs have taken steps to participate in the program, and 14 have become borrowers. Education has collected and reported limited data on the program's utilization and has not established performance measures or goals to gauge program effectiveness, though Education officials noted they are developing measures and goals. The HBCU loan program provides access to low-cost capital financing and flexibilities not always available elsewhere, but some loan terms and conditions discourage participation, though school officials said they remain interested in the program. The low interest rate and 30-year repayment period were regarded favorably by participants and nonparticipants alike, and the program makes funds available for a broader range of needs than some federal grant programs. 
However, the requirement to place in a pooled escrow 5 percent of loan proceeds--an insurance mechanism that reduces federal program costs due to any program borrower's potential delinquency or default--monthly payments versus semiannual ones traditionally available from private sources of loans, and the extent to which some loans have been collateralized could discourage participation. While Education has taken steps to improve the program, significant weaknesses in its management control could compromise the program's effectiveness and efficiency. Education has recently provided schools with both fixed and variable interest rate options, allowed for larger loans, and afforded more opportunities to negotiate loan terms. Also, Education has increased its marketing efforts for the program. However, Education has not established effective management control to ensure that it is (1) communicating with schools in a useful and timely manner, (2) complying with statutory requirements to meet twice each year with an advisory board composed of HBCU experts and properly account for the cost of the program, and (3) monitoring the performance of the program's contractor. Officials from 4 HBCUs in Louisiana and Mississippi told us that in light of the extensive 2005 hurricane damage to their campuses, they were pleased with certain emergency loan provisions but concerned that there would not be sufficient time to take advantage of Education's authority to waive or modify the program provisions. School officials from the 4 schools noted that their institutions had incurred extensive physical damage that was caused by water, wind, and, in one case, fire, and that the full financial impact of the hurricanes may remain unknown for years. 
Although Education officials told us that they have not yet determined the extent to which the authority under the emergency legislation to waive or modify program provisions for hurricane-affected institutions would be used, the department would be prepared to provide loans to hurricane-affected HBCUs.
Internal control generally serves as a first line of defense for public companies in safeguarding assets and preventing and detecting errors and fraud. Internal control is defined as a process, effected by an entity’s board of directors, management, and other personnel, designed to provide reasonable assurance regarding the achievement of the following objectives: (1) effectiveness and efficiency of operations; (2) reliability of financial reporting; and (3) compliance with laws and regulations. Internal control over financial reporting is further defined in the SEC regulations implementing Section 404 of the Sarbanes-Oxley Act. These regulations define internal control over financial reporting as a means of providing reasonable assurance regarding the reliability of financial reporting and the preparation of financial statements, including those policies and procedures that (1) pertain to the maintenance of records that, in reasonable detail, accurately and fairly reflect the transactions and dispositions of the assets of the company; (2) provide reasonable assurance that transactions are recorded as necessary to permit preparation of financial statements in conformity with generally accepted accounting principles, and that receipts and expenditures of the company are being made only in accordance with authorizations of management and directors of the company; and (3) provide reasonable assurance regarding prevention or timely detection of unauthorized acquisition, use, or disposition of the company’s assets that could have a material effect on the financial statements. Regulators regard an effective internal control system as a foundation for high-quality financial reporting by companies. Title IV, Section 404 of the Sarbanes-Oxley Act, aims to help protect investors by, among other things, improving the accuracy, reliability, and transparency of corporate financial reporting and disclosures.
Section 404 has the following two key sections: Section 404(a) requires company management to state its responsibility for establishing and maintaining an adequate internal control structure and procedures for financial reporting and assess the effectiveness of its internal control over financial reporting in each annual report filed with SEC. In 2007, SEC issued guidance for management regarding its report on internal control over financial reporting. Section 404(b) requires the firms that serve as external auditors for public companies to provide an opinion on the internal control assessment made by the companies’ management regarding the effectiveness of the company’s internal control over financial reporting as of year-end. In 2007, PCAOB issued Auditing Standard No. 5, which contains the requirements that apply when an auditor is engaged to perform an audit of management’s assessment of the effectiveness of internal control over financial reporting. While management is responsible for the implementation of an effective internal control process, the external auditor obtains reasonable assurance to provide an opinion on the effectiveness of a company’s internal control over financial reporting through an independent audit. Investors need to know that the financial statements on which they make investment decisions are reliable. The auditor attestation process involves the external auditor’s testing and evaluation of the company’s internal control over financial reporting and relevant documentation in order to provide an opinion on the effectiveness of the company’s internal control over financial reporting as of year-end; a company’s internal control over financial reporting cannot be considered effective if one or more material weaknesses exist. Auditor attestation of the effectiveness of internal control over financial reporting has been required for public companies with a public float of $75 million or more (accelerated filers) since 2004. 
However, SEC delayed implementing the auditor attestation for public companies with less than $75 million in public float (nonaccelerated filers) several times from the original compliance date of April 15, 2005, to June 15, 2010, in response to concerns about compliance costs and management and auditor preparedness. On July 21, 2010, the Dodd-Frank Act permanently exempted nonaccelerated filers from the auditor attestation requirement. The Dodd-Frank Act did not exempt nonaccelerated filers from Section 404(a) of the Sarbanes-Oxley Act (management’s assessment of internal controls). See table 1 for final compliance dates for internal control over financial reporting by issuer filer status. The number of exempt companies exceeded the number of nonexempt companies in each year from 2005 through 2011 (see table 2). According to our analysis of Audit Analytics data, the number of exempt companies fluctuated and ultimately declined from 6,333 in 2005 to 5,459 in 2011 (a 13.8 percent decline over that period). The number of nonexempt companies also fluctuated and ultimately declined from 4,256 in 2005 to 3,671 in 2011 (a 13.7 percent decline). SEC and PCAOB have issued regulations, standards, and guidance to implement the Sarbanes-Oxley Act. In 2007, in response to companies’ concerns about implementation costs, SEC provided implementation guidance to company management, and PCAOB issued a new auditing standard to external auditors to make the internal controls audit process more efficient and more cost-effective. SEC’s guidance for management in implementing Section 404(a) of the Sarbanes-Oxley Act and PCAOB’s Auditing Standard No. 5 for external auditors in implementing Section 404(b) of the Sarbanes-Oxley Act endorsed a “top-down, risk-based approach” that emphasizes preventing or detecting material misstatements in financial statements by focusing on those risks that are more likely to contribute to such misstatements.
These changes were provided to create a more flexible environment in which company management and external auditors can scale their internal controls evaluation based on the particular characteristics of a company, to reduce costs, and to align SEC and PCAOB requirements for evaluating the effectiveness of internal controls. Both SEC regulations and PCAOB Auditing Standard No. 5 state that management is required to base its assessment of the effectiveness of the company’s internal control over financial reporting on a suitable, recognized control framework established by a body of experts that followed due process procedures. Both the SEC guidance and PCAOB’s auditing standard cite the Committee of Sponsoring Organizations of the Treadway Commission (COSO) framework as an example of a suitable framework for purposes of Section 404 compliance. In 1992, COSO issued its “Internal Control—Integrated Framework” (the COSO framework) to help businesses and other entities assess and enhance their internal controls. Since that time, the COSO framework has been recognized by regulatory standard setters and others as a comprehensive framework for evaluating internal control, including internal control over financial reporting. The framework consists of five interrelated components: control environment, risk assessment, control activities, information and communication, and monitoring. SEC and PCAOB do not mandate the use of any particular framework. Since the implementation of the Sarbanes-Oxley Act, the number and percentage of exempt companies restating their financial statements has generally exceeded the number and percentage of nonexempt companies restating. However, from 2005 through 2011, restatements by exempt companies were generally proportionate to their percentage of our total population. Specifically, on average, almost 64 percent of companies restating were exempt companies and exempt companies made up, on average, 60 percent of our total population.
Exempt and nonexempt companies restated their financial statements for similar reasons, and the majority of these restatements produced a negative effect on the companies’ financial statements. The number of financial statement restatements by exempt and nonexempt companies has generally declined since 2005. As illustrated in figure 1, the number of financial restatements peaked in 2006 for exempt companies and declined gradually until 2011, despite a slight uptick in 2010. The number of restatements peaked in 2005 for nonexempt companies, declined gradually until 2009, and then trended upward for the remaining 2 years of the review period. As we have previously reported, some industry observers noted the financial reporting requirements of the Sarbanes-Oxley Act and PCAOB inspections may have led to a higher than average number of restatements in 2005 and 2006. A 2010 Audit Analytics report noted that some observers attributed the subsequent decline in restatements to a belief that SEC relaxed standards in 2008 relating to materiality of errors and the need to file restatements. The number of financial restatements by exempt companies exceeded the number of financial restatements by nonexempt companies each year from 2005 through 2011. However, although the overall number of financial restatements from 2009 through 2011 remained lower than in the prior period, the number of financial restatements by nonexempt companies increased about 23 percent from 2010 through 2011. The number of financial restatements by exempt companies declined almost 8 percent during the same period. SEC officials and one market expert with whom we spoke indicated that there is no clear explanation for these restatement trends. They also said that a review of each individual financial restatement would be necessary to determine the reasons for the restatement trends, but they offered a few factors to consider when assessing the trends.
In particular, a recent Audit Analytics report found that approximately 57 percent of restatements disclosed in 2011 were defined as revision restatements, the highest level since 2005 (the first full year of the disclosure requirement). According to the report, revision restatements generally do not undermine reliance on past financials and are less disruptive to the market. SEC officials noted that although restatements by nonexempt companies have increased, as illustrated in the Audit Analytics report, they may be less severe as a result of higher numbers of revision restatements, fewer issues per restatement, and a lower cumulative impact on the company’s net income. According to our analysis of Audit Analytics data, in 2011, the percentage of restatements that were revision restatements was approximately 62 percent for exempt companies compared to approximately 70 percent for nonexempt companies. SEC officials also suggested that the detection rate of financial restatements could affect restatement trends, especially when looking only at a one- or two-year period. The officials said that the lag time on detection and the likelihood of detection could be different between exempt and nonexempt companies. Finally, SEC officials said that it is important to consider the nature and severity of restatements. Except for 2005, the percentage of exempt companies restating their financial statements exceeded the percentage of nonexempt companies restating. From 2006 through 2009, there was a decline in the percentage of restatements for both exempt companies and nonexempt companies. The percentage of exempt companies restating their financial statements rose in 2010 to 7.6 percent and remained constant in 2011 (see fig. 2). At the same time, starting in 2010, the percentage of nonexempt companies restating has been on the increase.
In addition, from 2005 to 2011, on average, almost 64 percent of companies restating were exempt companies, which made up 60 percent of our total population. Our analysis is generally consistent with a number of studies that have found that exempt companies restate their financial statements at a higher rate than nonexempt companies. These studies suggest that having an auditor attest to the effectiveness of a company’s internal control over financial reporting generally reduces the likelihood of financial restatements. For example, in 2009, Audit Analytics found that for companies that did not obtain an auditor attestation and stated that they had effective internal controls, the financial restatement rate was 46 percent higher than the restatement rate for companies that had obtained an auditor attestation and stated that they had effective internal controls. Exempt companies that voluntarily complied with the auditor attestation requirement constitute a small percentage of exempt companies (see table 3). Prior to the passage of the Dodd-Frank Act in July 2010, the number of exempt companies voluntarily complying with the auditor attestation requirement grew 70 percent from 2008 through 2009. Although SEC deferred the requirement for nonaccelerated filers to comply until June 15, 2010, some exempt companies likely voluntarily complied in anticipation of SEC’s implementation of the requirement. Nonetheless, in 2009, during the peak compliance period for exempt companies that voluntarily complied, 6.9 percent (435) of a total population of 6,285 exempt companies voluntarily complied with the auditor attestation requirement. According to one academic study, exempt companies that voluntarily comply with the auditor attestation requirement are more likely than companies that do not comply to have evidence of the superior quality of their internal control over financial reporting and fewer restatements, among other factors.
As table 3 also shows, the percentage of financial restatements by exempt companies that voluntarily complied with the requirement is generally lower than that of exempt companies that did not voluntarily comply. From 2005 through 2011, on average, 7.5 percent of exempt companies that voluntarily complied restated their financial statements compared to 8.9 percent of restating exempt companies that did not voluntarily comply. From 2005 through 2011, based on our analysis of Audit Analytics data, the majority of exempt and nonexempt companies that restated their financial statements did so as the result of an accounting rule misapplication. That is, a company revised previously issued public financial information that contained an accounting inaccuracy. To analyze the reasons for financial restatements, we used Audit Analytics’ 69 classifications to classify the type of financial restatements into six categories (see table 4): revenue recognition, core expenses, noncore expenses, reclassifications and disclosures, underlying events, and other. Based on our classification, core expenses (i.e., ongoing operating expenses) were the most frequently identified category of restatement for both exempt and nonexempt companies. Specifically, core expenses accounted for 30.2 percent of disclosures by exempt companies and 28.5 percent of disclosures by nonexempt companies from 2005 through 2011 (see fig. 3). Core expenses include cost of sales, compensation expenses, lease and depreciation costs, selling, general and administrative expenses, and research and development costs. Noncore expenses (i.e., nonoperating expenses) were the second most frequently identified reason for restatement across exempt and nonexempt companies during this period. Each of the other reasons for restatements represented less than 20 percent of all restatements by exempt and nonexempt companies during the period. 
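The grouping of restatement disclosures into the six categories described above can be sketched as a simple tally. The following is a minimal illustration, assuming a hypothetical crosswalk from detailed classification codes to the six summary categories; the category names come from the text, but the individual codes and the sample records are invented for illustration and are not the actual 69-item Audit Analytics taxonomy:

```python
from collections import Counter

# Hypothetical crosswalk from detailed restatement classification codes to
# the six summary categories (illustrative only, not the real 69-code map).
CATEGORY_MAP = {
    "cost_of_sales": "core expenses",
    "compensation_expense": "core expenses",
    "depreciation": "core expenses",
    "interest_expense": "noncore expenses",
    "revenue_timing": "revenue recognition",
    "segment_reclass": "reclassifications and disclosures",
    "merger_accounting": "underlying events",
}

def category_shares(disclosures):
    """Return each category's share of total disclosures, in percent.

    Codes not covered by the crosswalk fall into the "other" category.
    """
    counts = Counter(CATEGORY_MAP.get(code, "other") for code in disclosures)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

# Illustrative sample of restatement disclosures for a set of companies.
sample = ["cost_of_sales", "revenue_timing", "depreciation",
          "interest_expense", "unmapped_code"]
print(category_shares(sample))
```

With this sample, core expenses accounts for 40 percent of disclosures and each remaining category for 20 percent, mirroring the shape (though not the values) of the shares reported for 2005 through 2011.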
From 2005 through 2011, the majority of financial restatements by exempt and nonexempt companies negatively impacted the company's financial statements. Specifically, 87.6 percent of financial restatements by exempt companies resulted in a negative net effect on the financial statements—the income statement, the balance sheet, the statement of cash flows, or the statement of shareholders' equity—of these companies. Similarly, 80.6 percent of financial restatements by nonexempt companies resulted in a negative net effect on the company's financial statements. The characteristics of exempt and nonexempt companies with financial restatements varied from 2005 through 2011. For example, in terms of industry characteristics, on average, most exempt companies restating were in the manufacturing sector (29.4 percent), followed by agriculture, construction, and mining (14.6 percent). On average, most of the nonexempt companies restating were in the manufacturing sector (29.3 percent), followed by the financial sector (16.6 percent). Further, in 2011, 91.4 percent of nonexempt companies restating, compared to 35.3 percent of exempt companies, were listed on an exchange. In addition, nonexempt companies had an average financial restatement period that was longer than that of exempt companies. Specifically, from 2005 through 2011, nonexempt companies had an average financial restatement period of 9 quarters compared to an average financial restatement period of almost 6 quarters for exempt companies. Companies and others identified various costs of the auditor attestation requirement. A number of studies and surveys show that since the passage of the Sarbanes-Oxley Act, and especially since the 2007 reforms by SEC and PCAOB, audit costs have declined for companies of all sizes. These studies and surveys also show that these costs, as a percentage of revenues, affect smaller companies disproportionately compared to their larger counterparts.
Companies and others also identified benefits of compliance, including stronger internal controls and more transparent and reliable financial reports. However, determining whether auditor attestation compliance costs outweigh the benefits is difficult because many costs and benefits cannot be readily quantified. A number of studies and surveys show that the estimated costs of obtaining an external auditor attestation on internal control over financial reporting are significant for companies of all sizes. Obtaining an auditor attestation incurs both direct and indirect costs, according to one study. Direct costs are expenses incurred to fulfill the auditor attestation requirement, such as the audit fees, external fees paid to outside contractors and vendors that help companies comply with the requirement, salaries of internal staff for hours spent preparing for auditor attestation compliance, and nonlabor expenses (e.g., technology, software, travel, and computers related to compliance). Indirect costs are those costs not directly linked to obtaining the auditor attestation. Two examples of indirect costs cited by one interviewee and one study are the time spent by management in preparing for and addressing auditors’ inquiries, which diverts their attention from strategic planning, and the diversion of funds from capital investments to auditor attestation-related expenses. Audit fees are a significant direct cost of the auditor attestation requirement. Sarbanes-Oxley Act and PCAOB standards require that the financial statement audit and the auditor attestation audit be conducted on an integrated basis. As a result, the auditor attestation is included in the total audit fees—that is, the total amount companies pay to their external auditors to conduct the integrated audit. 
Audit fees are based on several factors, including but not limited to the scope of an audit, which is a function of a company's complexity and risk; the total effort required by the external auditor to complete the audit; and the risk associated with performing the audit. However, according to SEC's 2011 study and one interviewee, the costs incurred by a company to comply with the auditor attestation requirement generally decline after the initial year. We analyzed total audit fees as a percentage of revenues from 2005 through 2011 for exempt and nonexempt companies. We found that exempt companies, which tend to be smaller, had higher average total audit costs, measured as a percentage of revenues, compared to nonexempt companies (see table 5). Among exempt companies, the data indicate that exempt companies that do not voluntarily comply with the auditor attestation requirement have (except for 2006) higher average total audit fees as a percentage of revenues than the exempt companies that voluntarily comply. While two academics we contacted about this trend could not provide a definitive explanation, there are many factors besides company size that can affect audit fees. Our data analysis results are consistent with our previous work on audit fees. Specifically, in 2006, we reported that smaller public companies paid disproportionately higher audit fees compared to larger public companies. Smaller public companies noted that they incur higher audit fees and other costs, such as hiring more staff or paying outside consultants, to comply with the internal control provisions of the Sarbanes-Oxley Act. One study noted that historically, these higher audit fees and other costs increased regulatory costs for smaller public companies because regulatory compliance, in general, involves a significant number of fixed costs regardless of the size of a company.
Thus, smaller companies with lower revenues are forced to bear these fixed costs over a smaller revenue base compared to larger companies. However, the auditor attestation is only one element of the total audit fees. To gauge the amount spent on the auditor attestation, we asked respondents to our survey to provide us with the amount of total audit fees and the approximate amount attributable to complying with the auditor attestation requirement. Based on our survey results, we estimate that all companies with a market capitalization of less than $10 billion that obtained an auditor attestation in 2012 spent, on average, about $350,000 for auditor attestation fees, representing about 29 percent of their average total audit fees. Although these costs remain significant for many companies, the cost of implementing the auditor attestation provision has been declining and varies by company size. For example, SEC's 2009 study on internal control over financial reporting found that, among other things, the mean auditor attestation costs declined from about $821,000 to about $584,000 (approximately 29 percent) pre- and post-2007 reforms for all companies that obtained an auditor attestation. Median costs declined from about $358,000 to $275,000 (approximately 23 percent) pre- and post-2007 reforms. According to the study and an academic we interviewed, costs have been declining for a variety of reasons, including companies and auditors gaining experience in the auditor attestation environment and the 2007 SEC and PCAOB guidance. The academic further stated that in the early years of implementation of Section 404(b), initial costs were high for all companies, in part, because they had not previously implemented effective internal controls. There are two types of potential benefits or positive impacts—direct and indirect—that companies can receive from complying with the auditor attestation requirement, according to one study.
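The percentage declines cited from SEC's 2009 study follow from simple arithmetic, which can be checked as follows:

```python
# Verify the approximate percentage declines in auditor attestation costs
# cited from SEC's 2009 study (pre- vs. post-2007 reforms).
def pct_decline(before, after):
    """Percentage decline from `before` to `after`."""
    return 100 * (before - after) / before

mean_decline = pct_decline(821_000, 584_000)    # mean attestation cost
median_decline = pct_decline(358_000, 275_000)  # median attestation cost
print(round(mean_decline), round(median_decline))  # approximately 29 and 23 percent
```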
Direct benefits are those directly related to improvements in the company's financial reporting process, such as the quality of the internal control structure, the audit committee's confidence in the internal control structure, the quality of financial reporting, and the company's ability to prevent and detect fraud. Indirect benefits are other dimensions that may be affected by changes in the quality of the financial reporting process, such as a company's ability to raise capital, the liquidity of the common stock, and the confidence investors and other users of financial statements may have in the company. Respondents to our survey identified a number of benefits or positive impacts stemming from compliance with the auditor attestation requirement, although fewer of them perceived indirect benefits compared to direct benefits. Many survey respondents noted that they experienced a number of direct benefits. For example, we estimate that: 80 percent of all companies view the quality of their company's internal control structure as benefiting from the auditor attestation; 73 percent view their audit committee's confidence in internal control over financial reporting as benefiting from the auditor attestation; 53 percent view the quality of their financial reporting as benefiting from the auditor attestation; and 46 percent view their ability to prevent and detect fraud as benefiting from the auditor attestation (see table 6). Our findings are consistent with other surveys. In particular, Protiviti's 2013 survey found that, among other things, 80 percent of respondents reported that their company's internal control over financial reporting structure had improved since they began complying with the auditor attestation requirement. Although many respondents reported improved confidence in the financial reports of other Section 404(b)-compliant companies, fewer companies perceived indirect benefits of the requirement.
Specifically, based on our survey results, no more than 30 percent of all companies with less than $10 billion in market capitalization perceived any of the identified indirect benefits (see table 6) as stemming from the auditor attestation requirement. Research suggests that auditor attestation generally has a positive effect on investor confidence. Although exempt companies are currently not required to disclose whether they voluntarily complied with the auditor attestation requirement in their annual reports, doing so would provide investors with important information that may influence their investment decisions. Recent empirical studies we reviewed found that auditor attestation of internal controls generally has a positive impact on investor confidence. Investor confidence is considered an indirect benefit to companies that comply with the auditor attestation requirement. Specifically, an auditor attestation of internal controls helps to reduce information asymmetries between a company's management and investors. With increased transparency and better financial reporting due to reliable third-party attestation, investors face a lower risk of losses from fraud. This lowered risk has a number of positive consequences for companies, such as enabling them to pay less for capital, as more confident investors require a lower rate of return on their money. Because investor confidence is difficult to measure directly, empirical research has examined the impact of auditor attestation on other variables that are considered proxies for investor confidence, including the cost of equity and debt capital, stock performance, and liquidity. As described below, such research has found that the auditor attestation increases investor confidence. A 2012 study examined exempt and nonexempt companies with market capitalization between $25 million and $125 million.
This study found that the market value of equity—as measured by the common stock price—is positively associated with the book value of equity—which is an element in financial statements—but that this relationship is stronger for nonexempt companies. In other words, investors appear to place greater trust in the book value of equity of companies that are subject to auditor attestation compared to those companies that are not. As a result, book value is more likely to have a positive effect on market value if the auditor attestation is present. These results are consistent with the notion that the auditor attestation provides useful and relevant information to investors. A 2013 study found that exempt companies that voluntarily comply with the auditor attestation enjoy a lower cost of capital. Specifically, both the cost of equity and the cost of debt are significantly lower for companies that voluntarily comply with the requirement compared to those exempt companies that do not (C. A. Cassell, L. A. Myers, and J. Zhou, "The Effects of Voluntary Internal Control Audits on the Cost of Capital," working paper, Feb. 13, 2013). Another study examined the market response to the exemption for companies with public float of less than $75 million. The study found a negative market response to the exemption but less so for those companies that voluntarily complied before 2009. It also found that to reduce information asymmetry, companies that voluntarily comply use their compliance as a signal to the marketplace of the superior quality of their financial reporting—a signal that is credible because it is costly and difficult to imitate by companies with weak internal controls. Also, companies that voluntarily complied with the auditor attestation had significant increases in liquidity. Other research supports the view that auditor attestation of internal control effectiveness matters for investors and other market participants insofar as adverse auditor reports have negative consequences for companies.
Such consequences include higher cost of debt (and possibly higher cost of equity), lower probability that lenders will extend lines of credit, stricter loan terms, and unfavorable stock recommendations. While most research findings we reviewed suggest auditor attestation provides valuable information to investors and has a positive effect on confidence, a 2011 study questions the value of the auditor attestation for small companies. Looking at exempt and small nonexempt companies with market capitalization of $300 million or less, the study finds that small companies that became nonexempt, and therefore subject to the auditor attestation requirement, in 2004 experienced a statistically significant increase in their material weakness disclosure rate, but companies that remained exempt saw similar increases through their management reports under Section 404(a) of the Sarbanes-Oxley Act. The results suggest that auditor attestation provides little additional information to investors in terms of detecting material weaknesses because there is no statistically significant difference in the rate of disclosure of material weakness between the two types of companies. The majority of academics and market participants we interviewed suggest that having auditor attestation positively impacts investor confidence. Specifically, they told us that the involvement of auditors in attesting to the effectiveness of internal controls improves the reliability of the financial reporting and serves to protect investors. As a result, they said, the exemption granted to small companies is likely to reduce investor confidence because these companies already have greater informational asymmetry. They said that according to academic and other studies, small companies are also more likely than large ones to have serious internal control problems. 
Furthermore, they commented that management’s report on internal controls alone is often uninformative because management often fails to detect internal control deficiencies or classifies them as less severe than they are. Some market participants also told us that any company accessing capital markets, regardless of size, should be required to comply with the auditor attestation requirement as investors in any company, large or small, are entitled to the same investor protection. Our survey results also indicate that some companies view auditor attestation as contributing to investor confidence, which is similar to findings from others’ studies and surveys. Our survey results show that the majority of respondents are more confident in the financial reports of companies that comply with the auditor attestation requirement than companies that do not. In addition, we estimate that 30 percent of responding nonexempt and exempt companies that voluntarily comply thought that the requirement increased investor confidence in their own company, while 20 percent were not sure and the remaining 50 percent reported no impact. This perspective is consistent with the results from an in-depth 2009 telephone survey SEC conducted of a small group of financial statement users—such as lenders, securities analysts, credit rating agencies, and other investors—regarding their views on the benefits of auditor attestation. These SEC survey respondents indicated that the auditor’s attestation report provides additional benefits to users and other investors beyond the management’s report under Section 404(a) and that the requirement generally has a positive impact on their confidence in companies’ financial reports. 
Moreover, in response to a 2010 Center for Audit Quality (CAQ) survey of individual investors, almost two-thirds of investors said they were concerned about exempting companies with annual revenues of under $75 million from the independent auditor attestation requirement, suggesting that the requirement has a positive effect on individual investors’ confidence in the financial information generated by smaller companies. Similarly, in a 2012 survey of investors conducted by the PCAOB Investor Advisory Group on the role, relevance, and value of the audit, over 60 percent of respondents said that the auditor’s opinion on the effectiveness of internal controls is critical in making investment decisions. Further, in a 2012 survey of individual investors by CAQ, 70 percent of the respondents identified independent audits in general as the most effective means of protecting their interests. Explicit disclosure of auditor attestation status in exempt companies’ annual reports could quickly provide investors useful information that may influence their investment decisions. Currently, exempt companies are not required to disclose in their annual reports whether they have voluntarily obtained an auditor attestation on their internal controls. From 2005 through 2010, SEC granted small public companies multiple extensions from having to comply with the auditor attestation requirement. During this time of forbearance, SEC required exempt companies to include a general statement in their annual report that the company was not required to comply with the auditor attestation requirement because of SEC’s grant of temporary exemption status. According to SEC officials, the statement served to provide investors who may have been looking for the attestation an explanation of its absence. SEC granted its final temporary exemption to take effect on June 15, 2010, prior to the passage of the Dodd-Frank Act. 
SEC did not require exempt companies to include the disclosure statement when implementing the provision of the Dodd-Frank Act that created the permanent exemption. SEC officials said that it is not common for the agency to require a company to disclose compliance status for requirements that are not applicable to the company—which, according to SEC officials, could potentially influence a company's behavior. Further, SEC officials noted that information on the company's filing status—and, therefore, exemption status—can be found in the company's annual reports and other documents, which are available to all investors. SEC officials stated that such information allows investors to determine whether an attestation has been obtained. However, while this information is available, a company's attestation status is not readily apparent without some knowledge or interpretation of the current reporting requirements. As noted earlier, SEC has previously required companies to provide additional clarity on their compliance with the auditor attestation requirement. Thus, requiring companies to explicitly disclose their auditor attestation status would be consistent with its past action. Further, federal securities laws require public companies to disclose relevant information to investors to aid them in their investment decisions. Many market participants we interviewed consider the external auditor's assessment of the effectiveness of a company's internal control over financial reporting to be important information for investors. Thus, many market participants we interviewed and companies we surveyed noted that exempt companies should be required to explicitly disclose whether or not they obtained an auditor attestation to make the information more transparent for investors.
In particular, according to the results of our survey, we estimate that 57 percent of all companies with less than $10 billion in market capitalization are in favor of requiring exempt companies to disclose whether they have voluntarily obtained an auditor attestation. A representative from one company said “I believe there is an assumption that SEC-listed companies are in compliance with 404. If companies are not, they should disclose such.” A representative from another company said that “If investors value the independent audit, then they should be made aware of situations where such audit has not been performed. Investors should not have to interpret the regulations to know if the audit is required.” Some companies we surveyed that were not in favor of such disclosure generally believed that investors can get the information from the audit opinion in the annual report. As of year-end 2011, approximately 300 exempt companies had voluntarily complied with the auditor attestation requirement. Although information on voluntary compliance with the auditor attestation requirement is determinable, having the information explicitly disclosed could benefit investors. Such disclosure would increase transparency and investor protection by making investors more aware of this important investment information. Investors need accurate financial information with which to make informed investment decisions, and effective internal controls are necessary for accurate and reliable financial reporting. The attestation requirement is part of legislation aimed at helping to protect investors by, among other things, improving the quality of corporate financial reporting and disclosures. Perceptions of the costs and benefits of auditor attestation continue to vary among companies and others, but among other benefits, obtaining auditor attestation appears to have a positive impact on investor confidence. 
In addition, our analysis found that companies (both exempt and nonexempt) that obtained an auditor attestation generally had fewer financial restatements than those that did not, which suggests that knowing whether a company has obtained the auditor attestation may be useful for investors in gauging the reliability of a company’s financial reporting. However, because SEC regulations currently do not require explicit statements regarding the voluntary attainment of auditor attestation, investors may have to interpret reporting requirements and filings to determine whether exempt companies have obtained an auditor attestation. Previously, when certain companies were temporarily exempt from the auditor attestation requirement, SEC required explicit disclosure of exemption status in companies’ annual reports. However, SEC eliminated this requirement in 2010 when companies of certain sizes were permanently exempted. Federal securities laws require public companies to disclose relevant information to investors to aid them in their investment decisions. Although information on a company’s exempt status is available to investors, explicit disclosure would increase transparency and investor protection by making investors readily aware of whether a company has obtained an auditor attestation on internal controls. The disclosure could serve as an important indicator of the reliability of a company’s financial reporting, which may influence investors’ decisions. To enhance transparency and investor protection, we recommend that SEC consider requiring public companies, where applicable, to explicitly disclose whether they obtained an auditor attestation of their internal controls. We provided a draft of the report to the SEC Chairman for her review and comment. SEC provided written comments that are summarized below and reprinted in appendix II. We also provided a draft of the report to PCAOB and relevant excerpts of the draft report to Audit Analytics for technical review. 
We received technical comments from SEC, PCAOB, and Audit Analytics that were incorporated as appropriate. In its written comments, SEC did not comment on our recommendation that it consider requiring public companies to explicitly disclose whether they have obtained an internal control attestation. Rather, SEC confirmed, as described in the draft report, that a nonaccelerated filer (referred to as an exempt company in our report) does not have to explicitly disclose whether it obtained an auditor attestation report on its internal controls in its annual report. However, SEC stated that this fact can be easily determined by investors from information that is already disclosed in the annual report. In addition, SEC stated that investors can also find information regarding the existence of an opinion on internal controls by looking at the audit report in the company’s filing. SEC also noted that PCAOB standards permit an auditor that is not engaged to opine on internal controls to include a statement in its report on the financial statements indicating that it is not opining on the internal controls. In our report, we acknowledge that information needed to determine a company’s auditor attestation status is available. However, because an explicit statement on the company’s status is not required, investors must deduce the company’s status from the available information. Explicit disclosure could significantly decrease the potential for investors to misinterpret the information regarding a company’s audit attestation status. Such disclosure would increase transparency and investor protection by making investors readily aware of this important investment information. We therefore maintain that the disclosure warrants further consideration by SEC. We are sending copies of this report to appropriate congressional committees, SEC, PCAOB, Audit Analytics and other interested parties. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report discusses: (1) how the number of financial statement restatements compares between exempt and nonexempt companies; (2) the costs and benefits for nonexempt companies as well as exempt companies that voluntarily comply with the auditor attestation requirement; and (3) what is known about the extent to which investor confidence in the integrity of financial statements is affected by whether or not companies comply with the auditor attestation requirement. For the purposes of this report, we define exempt companies as those with less than $75 million in public float (nonaccelerated filers) and nonexempt companies as those with $75 million or more in public float (accelerated filers). To address all three objectives, we reviewed and analyzed information from a variety of sources, including the Sarbanes-Oxley Act of 2002 (Sarbanes-Oxley Act), the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), relevant regulatory press releases and related public comment letters, and available research studies.
We also interviewed officials from the Securities and Exchange Commission (SEC) and the Public Company Accounting Oversight Board (PCAOB), and we interviewed chief financial officers of small public companies, representatives of relevant trade associations (representing individual and institutional investors, accounting companies, financial analysts and investment professionals, and financial executives), a large pension fund, a credit rating agency, academics knowledgeable about accounting issues, and industry experts. To determine the number of financial statement restatements (referred to as financial restatements) and trends, we analyzed data from the Audit Analytics database from 2005 through 2011. We used the Audit Analytics’ Auditor Opinion database to generate the population of exempt and nonexempt companies in each year from 2005 through 2011. Our analysis does not include 2012 data because 2012 small-company data was incomplete. According to Audit Analytics, the incomplete data was often due to the fact that small companies had not yet filed the relevant information with SEC. The sample we used to produce the population of exempt and nonexempt companies does not include subsidiaries of a public company, registered investment companies, or asset-backed securities issuers. Once we excluded these companies from the entire population, we grouped the remaining companies based on their filing status (i.e., nonaccelerated filer, smaller reporting company, accelerated filer, large accelerated filer, and filers that did not disclose their filing status). Exempt companies are nonaccelerated filers, including smaller reporting companies. For our purposes, we grouped companies that did not disclose their filing status but whose market capitalization was less than $75 million with exempt companies. We also identified for each year from 2005 through 2011 exempt companies that voluntarily complied with the integrated audit requirement as indicated in the data. 
Nonexempt companies are accelerated filers and large accelerated filers. For our purposes, we grouped companies that did not disclose their filing status but whose market capitalization was equal to or greater than $75 million with nonexempt companies. We excluded companies that did not disclose their filing status and did not have a reported market capitalization. We then used Audit Analytics' Restatement database, which contains company information (e.g., assets, revenues, restatements, market capitalization, location, and industry classification code) to identify the number of financial restatements from 2005 through 2011 based on our population of exempt companies, exempt companies that voluntarily complied, and nonexempt companies. Using this database, we identified 6,436 financial restatements by 4,536 public companies, 2,834 of which were exempt companies. We used Audit Analytics' 69 classifications to classify the type of financial restatements into six categories: core expenses (i.e., ongoing operating expenses), noncore expenses (i.e., nonoperating or nonrecurring expenses), revenue recognition (i.e., improperly recorded revenues), reclassifications and disclosures, underlying events (e.g., accounting for mergers and acquisitions), and other. The majority of restatements we classified were the result of an accounting rule misapplication. To identify audit costs of compliance, we analyzed data from Audit Analytics' Auditor Opinion database, which contains auditors' report information such as audit fees, nonaudit fees, auditor name, audit opinions, revenues, and company size, among other information, from 2005 through 2011. Our analyses of audit costs do not include 2012 data because 2012 small-company data was incomplete. The incomplete data was often due to the fact that small companies had not yet filed the relevant information with SEC. We tested a sample of the Audit Analytics database information and found it to be reliable for our purposes.
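The grouping rules described in the preceding two paragraphs can be sketched as follows. The function and field names are assumptions based on our description of the methodology, not Audit Analytics' actual schema.

```python
# Illustrative sketch of the exempt/nonexempt grouping rules: classify by
# disclosed filing status, fall back to the $75 million market capitalization
# threshold when status is undisclosed, and exclude companies with neither.
EXEMPT_STATUSES = {"nonaccelerated filer", "smaller reporting company"}
NONEXEMPT_STATUSES = {"accelerated filer", "large accelerated filer"}

def classify(filing_status, market_cap):
    """Return 'exempt', 'nonexempt', or None (excluded) for a company."""
    if filing_status in EXEMPT_STATUSES:
        return "exempt"
    if filing_status in NONEXEMPT_STATUSES:
        return "nonexempt"
    # Undisclosed filing status: fall back to market capitalization.
    if market_cap is None:
        return None  # excluded from the population
    return "exempt" if market_cap < 75_000_000 else "nonexempt"

print(classify("smaller reporting company", None))  # exempt
print(classify(None, 200_000_000))                  # nonexempt
print(classify(None, None))                         # None -> excluded
```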
For example, we cross-checked random samples from each of Audit Analytics' databases with information on financial restatements, filing status, and internal controls from SEC's Electronic Data Gathering, Analysis, and Retrieval system. We also spoke with other users of Audit Analytics data as well as Audit Analytics officials. In addition, we reviewed relevant research studies and papers on the impact of compliance with the internal control audits on financial restatements. We consider the information to be reliable for our purposes of determining financial restatement trends and calculating audit fees. To examine the characteristics of publicly traded companies that complied, either voluntarily or because required, with the requirement to obtain an independent auditor attestation of their internal controls, we conducted a web-based survey of companies that had either voluntarily complied or were required to comply with the integrated audit requirement in any year between 2004 and 2011. Based on a list of publicly traded companies obtained from Audit Analytics, we identified 4,053 companies that had either voluntarily complied with the integrated audit requirement in any year from 2004 through 2011 or were required to comply in 2011, as determined by their filing status. We stratified the population into three strata by first identifying the nonaccelerated voluntary filers. These are companies that voluntarily complied with the integrated audit requirement in any year from 2004 through 2011. Because our primary focus was on the nonaccelerated voluntary filers, we selected all 392 of these companies. From the remaining companies in the population, we created two additional strata based on 2011 filing status, and we took a random sample of companies from each of those strata.
The sample sizes for the remaining strata were determined to produce a proportion estimate within each stratum that would achieve a precision of plus or minus 10 percentage points or less, at the 95 percent confidence level. Finally, we increased the sample size based on the expected response rate of 40 percent. We submitted our survey to a total of 850 companies from the original population of 4,053. We identified 104 companies in our sample that were closed, had merged with another company, or were improperly included in the sampling frame. We received valid responses from 195 out of the remaining 746 sampled companies (see table 7). The weighted response rate, which accounts for the differential sampling fractions within strata, is 25 percent. We conducted this survey in a web-based format. The questionnaire was designed by a GAO survey specialist in collaboration with GAO staff with subject-matter expertise. The questionnaire was also reviewed by experts at SEC. We pretested drafts of our questionnaire with three public companies of different sizes to ensure that the questions and response categories were clear, that terminology was used correctly, and that the questions did not place an undue burden on the respondents. The pretests were conducted by telephone with company financial executives in Iowa, Virginia, and Washington, D.C., and included GAO methodologists and GAO subject-matter experts. Based on the feedback received from the pretests, we made changes to the content and format of some survey questions. We directed our survey to the chief executive officer, chief financial officer, or chief accounting officer, whose names and email addresses we obtained from Nexis. We activated our web-based survey on December 17, 2012, and closed the survey on February 19, 2013. We sent follow-up emails on three occasions to remind respondents to complete the survey and conducted telephone follow-ups to increase the response rate.
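The sample-size arithmetic described above (a plus or minus 10 percentage point precision target at the 95 percent confidence level, inflated for an expected 40 percent response rate) can be sketched in a few lines. The standard proportion-based formula and the conservative assumption of p = 0.5 are illustrative conventions, not details the report discloses.

```python
import math

def stratum_sample_size(moe=0.10, z=1.96, p=0.5, response_rate=0.40):
    """Responses needed for a proportion estimate within +/- moe at the
    given confidence level, then inflated for expected nonresponse."""
    n_needed = math.ceil(z ** 2 * p * (1 - p) / moe ** 2)
    n_to_sample = math.ceil(n_needed / response_rate)
    return n_needed, n_to_sample

base, inflated = stratum_sample_size()
```

With these inputs the precision target alone implies 97 completed responses per stratum, or 243 sampled companies after inflating for the 40 percent expected response rate.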
Because our survey was based on a random sample of the population, it is subject to sampling errors. In addition, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted or in the sources of information available to respondents may introduce errors. We took steps, such as those described above, to minimize such nonsampling errors during the development of the questionnaire and during data collection and analysis. For example, because this was a web-based survey, respondents entered their responses directly into the database, reducing the possibility of data-entry error. Finally, when the data were analyzed, a second independent analyst reviewed all computer programs. We conducted an analysis of our survey results to identify potential sources of nonresponse bias using two methods. First, we examined the response propensity of the sampled companies by several demographic characteristics. These characteristics included market capitalization size categories, region, and sector. Our second method consisted of comparing weighted estimates from respondents and nonrespondents to known population values for total market capitalization. We conducted statistical tests of differences, at the 95 percent confidence level, between estimates and known population values, and between respondents and nonrespondents. We determined that there was significant bias induced by the largest companies (measured by market capitalization) not responding to the survey. In other words, we found that companies with market capitalization over $10 billion were underrepresented in our sample. However, we found no evidence of substantial nonresponse bias based on these characteristics when generalizing to the population of companies with market capitalization less than or equal to $10 billion.
Therefore, we adjusted the scope of our survey to include only those companies with market capitalization of less than or equal to $10 billion (see table 8). Because we found no evidence of substantial nonresponse bias when generalizing to the adjusted target population, and given the weighted response rate of 25 percent, we determined that weighted estimates generated from these survey results are generalizable to the population of in-scope companies. We generated weighted estimates and generalized the results to the estimated in-scope population of 3,432 companies (plus or minus 42 companies). Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report includes the true values in the study population. All percentage estimates presented in this report have a margin of error of plus or minus 15 percentage points or fewer, and all estimates of averages have a relative margin of error of plus or minus 20 percent or less, unless otherwise noted. To obtain information on the impact of obtaining an auditor attestation on a company's cost of capital, we included questions in our web-based survey to large and small public companies of various industries about this matter; interviewed trade associations, industry experts, a large pension fund, and academics; and reviewed relevant academic and SEC research studies.
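The stratified weighted estimates and 95 percent confidence intervals described above can be illustrated with a minimal sketch. The stratum sizes and sample proportions below are invented for illustration and are not the survey's actual data; the estimator is a textbook stratified formula with a finite-population correction.

```python
import math

# Illustrative strata: (population size N_h, respondents n_h, sample proportion p_h)
strata = [(400, 100, 0.30), (1600, 80, 0.20), (2000, 60, 0.10)]
N = sum(N_h for N_h, _, _ in strata)

# Weighted estimate: each stratum contributes in proportion to its population share
p_hat = sum((N_h / N) * p_h for N_h, _, p_h in strata)

# Stratified variance of the estimate, with a finite-population correction
var = sum((N_h / N) ** 2 * p_h * (1 - p_h) / (n_h - 1) * (1 - n_h / N_h)
          for N_h, n_h, p_h in strata)
moe = 1.96 * math.sqrt(var)  # half-width of the 95 percent confidence interval
```

The reported interval is then p_hat plus or minus moe, the margin of error in the sense used in this report.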
To examine the extent to which investor confidence in the integrity of financial statements is affected by companies' compliance with the auditor attestation requirement, we reviewed relevant empirical literature written by academic researchers, as well as recent surveys, studies, reports, and articles by others. To identify these studies, we asked for recommendations from academics, SEC, PCAOB, and representatives of organizations that address issues related to the auditor attestation requirement. We reviewed bibliographies of papers we obtained to identify additional material. In addition, we conducted searches of online databases such as ProQuest and Nexis using keywords to link Section 404(b) of the Sarbanes-Oxley Act with investor confidence. We also conducted interviews with agencies and organizations, as well as academics and other knowledgeable individuals who focus on issues related to investor confidence and the auditor attestation requirement. Moreover, we interviewed small public companies that were exempt from auditor attestation but nonetheless complied with the requirement. In addition, we reviewed surveys undertaken by various government agencies and organizations to gauge the impact of the auditor attestation on investor confidence. We conducted a focused review of the research related to Section 404(b) of the Sarbanes-Oxley Act and summarized the recent studies most relevant to our objective. The empirical research discussed may have limitations, such as the accuracy of the measures and proxies used. We reviewed published works by academic researchers, government agencies, and organizations with expertise in the field. We performed our searches from September 2012 through May 2013. We assessed the reliability of these studies for use as corroborating evidence and found them to be reliable for our purposes. We also included questions in our web-based survey to large and small public companies of various industries about this matter.
Lastly, we reviewed relevant federal securities laws, the Securities Act of 1933 and the Securities Exchange Act of 1934. We conducted this performance audit from May 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Karen Tremba (Assistant Director), James Ashley, Bethany Benitez, William Chatlos, Janet Eackloff, Joe Hunter, Cathy Hurley, Stuart Kaufman, Marc Molino, Lauren Nunnally, Jennifer Schwartz, and Seyda Wentworth made key contributions to this report.

Alexander, C. R., S. W. Bauguess, G. Bernile, Y. A. Lee, and J. Marietta-Westberg. "The Economic Effects of SOX Section 404 Compliance: A Corporate Insider Perspective." Working paper. March 2010.
Asare, S. K., and A. Wright. "The Effect of Type of Internal Control Report on Users' Confidence in the Accompanying Financial Statement Audit Report." Contemporary Accounting Research, vol. 29, no. 1 (2012).
Ashbaugh-Skaife, H., D. Collins, W. Kinney, and R. LaFond. "The Effect of Internal Control Deficiencies on Firm Risk and Cost of Equity." Journal of Accounting Research, vol. 47, no. 1 (2009).
Audit Analytics. "2011 Financial Restatements: An Eleven Year Comparison." Sutton, Mass.: 2012.
Audit Analytics. "2009 Financial Restatements: A Nine Year Comparison." Sutton, Mass.: February 2010.
Audit Analytics. "Restatements Disclosed by the Two Types of SOX 404 Issuers: (1) Auditor Attestation Filers and (2) Management-Only Report Filers." Sutton, Mass.: November 2009.
Brown, K., P. Pacharn, J. Li, E. Mohammad, F. A. Elayan, and F. Chu. "The Valuation Effect and Motivations of Voluntary Compliance with Auditor's Attestation Under Sarbanes-Oxley Act Section 404(b)." Working paper. January 15, 2012.
Cassell, C. A., L. A. Myers, and J. Zhou. "The Effects of Voluntary Internal Control Audits on the Cost of Capital." Working paper. February 13, 2013.
Chief Financial Officers' Council and the President's Council on Integrity and Efficiency. Estimating the Costs and Benefits of Rendering an Opinion on Internal Control over Financial Reporting.
Coates IV, J. C. "The Goals and Promise of the Sarbanes-Oxley Act." Journal of Economic Perspectives, vol. 21, no. 1 (2007).
Crabtree, A., and J. J. Mahler. "Credit Ratings, Cost of Debt, and Internal Control Disclosures: A Comparison of SOX 302 and SOX 404." The Journal of Applied Business Research, vol. 28, no. 5 (2012).
Dhaliwal, D., C. Hogan, R. Trezevant, and M. Wilkins. "Internal Control Disclosures, Monitoring, and the Cost of Debt." The Accounting Review, vol. 86, no. 4 (2011).
GAO. Community Banks and Credit Unions: Impact of the Dodd-Frank Act Depends Largely on Future Rule Makings. GAO-12-881. Washington, D.C.: September 13, 2012.
GAO. Financial Restatements: Update of Public Company Trends, Market Impacts, and Regulatory Enforcement Activities. GAO-06-678. Washington, D.C.: March 5, 2007.
GAO. Sarbanes-Oxley Act: Consideration of Key Principles Needed in Addressing Implementation for Smaller Public Companies. GAO-06-361. Washington, D.C.: April 13, 2006.
Holder, A. D., K. E. Karim, and A. Robin. "Was Dodd-Frank Justified in Exempting Small Firms from Section 404b Compliance?" Accounting Horizons, vol. 27, no. 1 (2013).
Iliev, P. "The Effect of SOX Section 404: Costs, Earnings Quality, and Stock Prices." Journal of Finance, vol. 65, no. 3 (2010).
Kim, J. B., B. Y. Song, and L. Zhang. "Internal Control Weakness and Bank Loan Contracting: Evidence from SOX Section 404 Disclosures." The Accounting Review, vol. 86, no. 4 (2011).
Kinney, W. R., and M. L. Shepardson. "Do Control Effectiveness Disclosures Require SOX 404(b) Internal Control Audits? A Natural Experiment with Small U.S. Public Companies." Journal of Accounting Research, vol. 49, no. 2 (2011).
Krishnan, G. V., and W. Yu. "Do Small Firms Benefit from Auditor Attestation of Internal Control Effectiveness?" Auditing: A Journal of Practice and Theory, vol. 34, no. 4 (2012).
Nagy, A. L. "Section 404 Compliance and Financial Reporting Quality." Accounting Horizons, vol. 24, no. 3 (2010).
Orcutt, J. L. "The Case Against Exempting Smaller Reporting Companies from Sarbanes-Oxley Section 404: Why Market-Based Solutions Are Likely to Harm Ordinary Investors." Fordham Journal of Corporate and Financial Law, vol. 14, no. 2 (2009).
Schneider, A., A. Gramling, D. R. Hermanson, and Z. Ye. "A Review of Academic Literature on Internal Control Reporting Under SOX." Journal of Accounting Literature, vol. 28 (2009).
Schneider, A., and B. K. Church. "The Effect of Auditors' Internal Control Opinions on Loan Decisions." Journal of Accounting and Public Policy, vol. 27, no. 1 (2008).
Scholz, Susan. The Changing Nature and Consequences of Public Company Financial Restatements: 1997-2006. A special report prepared at the request of the Department of the Treasury. April 2008.
U.S. Securities and Exchange Commission. Study and Recommendations on Section 404(b) of the Sarbanes-Oxley Act of 2002 for Issuers with Public Float Between $75 and $250 Million. Washington, D.C.: 2011.
U.S. Securities and Exchange Commission. Study of the Sarbanes-Oxley Act of 2002 Section 404 Internal Control over Financial Reporting Requirements. Washington, D.C.: 2009.
Center for Audit Quality. The CAQ's Sixth Annual Main Street Investor Survey. September 2012.
Center for Audit Quality. The CAQ's Fourth Annual Individual Investor Survey. September 2010.
Financial Executives International and Financial Executives Research Foundation. 2012 Audit Fee Survey. Morristown, N.J.: 2012.
Financial Executives International and Financial Executives Research Foundation. Special Survey on Sarbanes-Oxley Section 404 Implementation. Morristown, N.J.: 2005.
PCAOB. 2012 SOX Compliance Survey: Role, Relevancy and Value of the Audit. 2012.
Protiviti. 2013 Sarbanes-Oxley Compliance Survey: Building Value in Your SOX Compliance Program. 2013.
Protiviti. 2012 Sarbanes-Oxley Compliance Survey: Where U.S.-Listed Companies Stand – Reviewing Cost, Time, Effort and Process. 2012.

Section 404(b) of the Sarbanes-Oxley Act requires a public company to have its independent auditor attest to and report on management's internal control over financial reporting; this is known as the auditor attestation requirement. In July 2010, the Dodd-Frank Wall Street Reform and Consumer Protection Act exempted companies with less than $75 million in public float from the auditor attestation requirement. The act mandated that GAO examine the impact of the permanent exemption on the quality of financial reporting by small public companies and on investors. This report discusses (1) how the number of financial statement restatements compares between exempt and nonexempt companies (i.e., those with $75 million or more in public float), (2) the costs and benefits of complying with the attestation requirement, and (3) what is known about the extent to which investor confidence is affected by compliance with the auditor attestation requirement. GAO analyzed financial restatements and audit fees data; surveyed 746 public companies with a response rate of 25 percent; interviewed regulatory officials and others; and reviewed laws, surveys, and studies.
Since the implementation of the auditor attestation requirement of the Sarbanes-Oxley Act of 2002 (Sarbanes-Oxley Act), companies exempt from the requirement have had more financial restatements (a company's revision of publicly reported financial information) than nonexempt companies, and the percentage of exempt companies restating generally has exceeded that of nonexempt companies. Exempt and nonexempt companies restated their financial statements for similar reasons (e.g., revenue recognition and expenses), and the majority of these restatements produced a negative effect on the companies' financial statements. Views on the costs and benefits of auditor attestation vary among companies and others. Although companies and others reported that the costs associated with compliance can be significant, especially for smaller companies, GAO's and others' analyses show that these costs have declined for companies of all sizes since 2004. Companies and others reported benefits of compliance, such as improved internal controls and reliability of financial reports. However, measuring whether auditor attestation compliance costs outweigh the benefits is difficult and views among companies and others were mixed as to whether the costs exceeded the benefits of compliance. A majority of empirical studies GAO reviewed suggest that compliance with the auditor attestation requirement has a positive impact on investor confidence in the quality of financial reports. Some interviewees said the independent scrutiny of a company's internal controls is an important investor protection safeguard. The Securities and Exchange Commission (SEC) does not require exempt companies to disclose in their annual report whether they voluntarily obtained an auditor attestation. SEC officials said it is not common for SEC to require a company to disclose voluntary compliance with requirements from which it is exempt. 
However, federal securities laws require companies to disclose relevant information to investors to aid in their investment decisions. Although information on auditor attestation status is available to investors, requiring a company to explicitly state whether it has obtained an auditor attestation on internal controls could increase transparency and investor protection. GAO recommends that SEC consider requiring public companies, where applicable, to explicitly disclose whether they obtained an auditor attestation of their internal controls. SEC responded that investors could determine attestation status from available information. But without clear disclosure, investors may misinterpret a company's status; therefore, this warrants SEC's further consideration. |
In recent years, we, Congress, the 9/11 Commission, and others have recommended that federal agencies with homeland security responsibilities utilize a risk management approach to help ensure that finite resources are dedicated to assets or activities considered to have the highest security priority. The purpose of risk management is not to eliminate all risks, as that is an impossible task. Rather, given limited resources, risk management is a structured means of making informed trade-offs and choices about how to use available resources effectively and monitoring the effect of those choices. Thus, risk management is a continuous process that includes the assessment of threats, vulnerabilities, and consequences to determine what actions should be taken to reduce or eliminate one or more of these elements of risk. To provide guidance to agency decision makers, we developed a risk management framework, which is intended to be a starting point for applying risk-informed principles. Our risk management framework entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, assessing risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. Additional information on risk management, including our risk management framework, can be found in appendix I. DHS is required by statute to utilize risk management principles with respect to various DHS functions. With regard to the Coast Guard, federal statutes call for the Coast Guard to use risk management in specific aspects of its homeland security efforts. The Maritime Transportation Security Act of 2002 (MTSA), for example, calls for the Coast Guard and other port security stakeholders, through implementing regulations, to carry out certain risk-based tasks, including assessing risks and developing security plans for ports, facilities, and vessels. 
In addition, the Coast Guard Authorization Act of 2010 requires, for example, the Coast Guard to (1) develop and utilize a national standard and formula for prioritizing and addressing assessed security risks at U.S. ports and facilities, such as the Maritime Security Risk Analysis Model (MSRAM); (2) require Area Maritime Security Committees to use this standard to regularly evaluate each port's assessed risk and prioritize how to mitigate the most significant risks; and (3) make MSRAM available, in an unclassified version, on a limited basis to regulated vessels and facilities to conduct risk assessments of their own facilities and vessels. From 2001 to 2006, the Coast Guard assessed maritime security risk using the Port Security Risk Assessment Tool (PSRAT), which was quickly developed and fielded after the terrorist attacks of September 11, 2001. PSRAT served as a rudimentary risk calculator that ranked maritime critical infrastructure and key resources (MCIKR) with respect to the consequences of a terrorist attack and evaluated vessels and facilities that posed a high risk of a transportation security incident. While PSRAT provided a relative risk of targets within a port region, it could not compare and prioritize relative risks of various infrastructures across ports, among other limitations. Recognizing the shortcomings of PSRAT that had been identified by the Coast Guard and us, in 2005 the Coast Guard developed and implemented MSRAM to provide a more robust and defensible terrorism risk analysis process. MSRAM is a risk-based decision support tool designed to help the Coast Guard assess and manage maritime security risks throughout the Coast Guard's area of responsibility. Coast Guard units throughout the country use this tool to assess security risks to over 28,000 key maritime infrastructure assets—also known as targets—such as chemical facilities, passenger terminals, and bridges, as well as vessels such as cruise ships, ferries, and vessels carrying hazardous cargoes, among other things.
Unlike PSRAT, MSRAM is designed to capture the security risks facing different types of targets, allowing comparison between different targets and geographic areas at the local, regional, and national levels. MSRAM's risk assessment methodology assesses the risk of a terrorist attack based on different scenarios; that is, it pairs each potential target with the different attack modes that could be used against it, forming target/attack mode combinations (see table 1). MSRAM automatically determines which attack modes are required to be assessed for each target type, though local MSRAM analysts have the ability to evaluate additional optional attack modes against any target. For each target/attack mode combination, MSRAM can provide different risk results, such as the inherent risk of a target and the amount of risk mitigated by Coast Guard security efforts. MSRAM calculates risk using the following risk equation: Risk = Threat x Vulnerability x Consequence. Numerical values representing the Coast Guard's assessment of threat (or relative likelihood of attack), vulnerability should an attack occur, and consequences of a successful attack are combined to yield a risk score for each maritime target. The model calculates risk using threat judgments provided by the Coast Guard Intelligence Coordination Center (ICC), and vulnerability and consequence judgments provided by MSRAM users at the sector level—typically Coast Guard port security specialists—which are reviewed at the district, area, and headquarters levels. The risk equation variables are as follows: Threat represents the relative likelihood of an attempted attack on a target. The ICC provides threat probabilities to MSRAM, based upon judgments regarding the specific intent, capability, and geographic preference of terrorist organizations to deliver an attack on a specific type of maritime target class—for example, a boat bomb attack on a ferry terminal.
To make these judgments, ICC officials use intelligence reports generated throughout the broader intelligence community to make qualitative determinations about certain terrorist organizations and the threat they pose to the maritime domain. At the sector level, Coast Guard MSRAM users do not input threat probabilities and are required to use the threat probabilities provided by the ICC. This approach is intended to ensure that threat information is consistently applied across ports. Vulnerability represents the probability of a successful attack given an attempt. MSRAM users at the sector level assess the vulnerability of targets within their respective areas of responsibility. Table 2 shows the factors included in the MSRAM vulnerability assessment. Consequence represents the projected overall impact of a successful attack on a given target or asset. Similar to vulnerability assessments, MSRAM users at the sector level assess the consequences of a successful attack on targets within their respective area of responsibility. Table 3 shows the factors included in the MSRAM consequence assessment. In addition to the consequence factors listed in table 3, sector MSRAM users also assess the response capabilities of the Coast Guard, port stakeholders, and other governmental agencies and their ability to mitigate death/injury, primary economic, and environmental consequences of a successful attack. Because there is a broad array of target types operating in the maritime domain that can result in different types of impacts if successfully attacked, MSRAM uses an approach for drawing equivalencies between the different types of impacts. This approach was based on establishing a common unit of measure, called a consequence point. One consequence point represents $1 million of equivalent loss to the American public. To support MSRAM development and risk analysis at the headquarters level, the Coast Guard has provided MSRAM-dedicated staff and resources. 
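The core calculation described above, Risk = Threat x Vulnerability x Consequence, with consequences expressed in consequence points (one point equals $1 million of equivalent loss), can be illustrated with a minimal sketch. The target/attack mode scenarios and scores below are hypothetical and do not reflect MSRAM's actual scales or data.

```python
# Hypothetical target/attack mode scenarios: (name, threat probability,
# vulnerability probability, consequence in points; 1 point = $1 million loss)
scenarios = [
    ("ferry terminal / boat bomb",     0.020, 0.60, 2_000),
    ("chemical facility / truck bomb", 0.010, 0.40, 8_000),
    ("bridge / sabotage",              0.005, 0.30, 5_000),
]

def risk_score(threat, vulnerability, consequence):
    """Expected loss in consequence points: Risk = Threat x Vulnerability x Consequence."""
    return threat * vulnerability * consequence

# Rank scenarios so the highest-risk target/attack mode combinations surface first
ranked = sorted(scenarios, key=lambda s: risk_score(s[1], s[2], s[3]), reverse=True)
```

Expressing every impact in a common unit is what lets the model rank dissimilar targets: in this illustration the chemical facility scenario outranks the ferry terminal scenario despite a lower threat probability, because its consequence score dominates.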
According to the Coast Guard, resources for MSRAM or port security risk analysis are not from a specific budget line item. From fiscal year 2006 to fiscal year 2011, the Coast Guard reported assigning from two to five staff (full-time equivalents) and from $0.6 million to $1.0 million annually to support MSRAM at headquarters. There are no MSRAM-dedicated staff at the area, district, and sector levels; rather, MSRAM assessment and analysis is generally conducted by port security specialists, who have other responsibilities. The port security specialist typically has responsibility for numerous activities, including the Port Security Grant Program, Area Maritime Security Committees, and the Area Maritime Security Training Exercise Program, among others. The National Infrastructure Protection Plan (NIPP) is DHS's primary guidance document for conducting risk assessments and includes core criteria that identify the characteristics and information needed to produce quality risk assessment results. The NIPP's basic analytical principles state that risk assessments should be complete, reproducible, documented, and defensible, as defined in table 4. MSRAM generally aligns with DHS's criteria for a complete and reproducible risk assessment, but some challenges remain, such as the limited time for Coast Guard personnel to complete assessments. MSRAM also generally aligns with the NIPP criteria for a documented and defensible risk assessment, but the Coast Guard could improve its documentation of the model's assumptions and other sources of uncertainty, such as the subjective judgments made by Coast Guard analysts about vulnerabilities and consequences, and how these assumptions and other sources of uncertainty affect MSRAM's results. In addition to providing decision makers with an understanding of how to interpret any uncertainty in MSRAM's risk estimates, greater transparency and documentation could facilitate periodic peer reviews of the model—a best practice in risk management.
MSRAM generally aligns with NIPP criteria for a complete risk assessment. In accordance with NIPP criteria for a complete risk assessment, MSRAM assesses risk using three main variables— consequence, vulnerability, and threat. MSRAM’s risk assessment methodology also follows the NIPP criteria for factors that should be assessed in each of the three risk variables. Specifically, for threat, MSRAM generally follows the NIPP criteria by identifying attack methods that may be employed and by considering the adversary’s intent and capability to attack a target. MSRAM generally follows the vulnerability assessment criteria by estimating the likelihood of an adversary’s success for each attack scenario and describing the protective measures in place, and MSRAM generally follows the consequence assessment criteria by estimating economic loss in dollars, estimating fatalities, and describing psychological impacts, among other things. MSRAM’s risk assessment methodology also generally aligns with the NIPP criteria for a reproducible risk assessment. To be reproducible, the methodology must produce comparable, repeatable results and minimize the number and impact of subjective judgments, among other things. Although Coast Guard officials acknowledge that MSRAM risk data are inherently subjective, the MSRAM model and data collection processes include features designed to produce comparable, repeatable results across sectors. For instance, the Coast Guard prepopulates threat data into MSRAM from the Coast Guard’s ICC. This allows for nationally vetted threat scores that do not rely on multiple subjective local judgments. DHS, in its 2010 Transportation Systems Sector-Specific Plan, stated that MSRAM produces comparable, repeatable results. The Coast Guard has taken numerous actions that contribute to MSRAM being a complete and reproducible risk assessment model. 
To improve the quality and accuracy of MSRAM data and reduce the amount of subjectivity in the MSRAM process, the Coast Guard conducts an annual review and validation of MSRAM data produced at each sector; provides MSRAM users with tools, calculators, and benchmarks to assist in calculating consequence and vulnerability; and provides training to sectors on how to enter data into MSRAM. Specific actions are detailed below. Annual validation and review. The Coast Guard uses a multilevel annual validation and review process, which helps to ensure that MSRAM risk data are comparable and repeatable across sectors. According to a 2010 review of MSRAM, conducting a thorough review process across sectors is especially important if the data are to be used for national-level decision making. This process includes sector, district, area, and headquarters officials and aims to normalize MSRAM data by establishing national averages of risk scores for attack modes and targets and by identifying outliers. The annual MSRAM validation and review process begins with sectors completing vulnerability and consequence assessments for targets within their areas of responsibility. Once the sector Captain of the Port validates the assessments, the risk assessment data are sent to district and area officials for review. Following these reviews, Coast Guard headquarters officials combine each sector's data into a national classified dataset and perform a statistical analysis of the data. The statistical analysis involves calculating national averages for vulnerability, consequence, and response capabilities risk scores. When determining whether a sector's risk score for a specific target is questionable or is an outlier, reviewers consider the results of the statistical analysis as well as supporting comments or rationale provided by sector officials.
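A simple version of the outlier screen described above might look like the following. The report does not specify the Coast Guard's exact statistical rule, so the two-standard-deviation threshold and the sector scores here are assumptions made for illustration.

```python
import statistics

def flag_outliers(scores_by_sector, k=2.0):
    """Flag scores more than k population standard deviations from the
    national mean. The report does not state the exact statistical rule
    used in the national review; k = 2 is an assumption here."""
    values = list(scores_by_sector.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return {sector: score for sector, score in scores_by_sector.items()
            if abs(score - mean) > k * sd}

# Hypothetical vulnerability scores for one target class across eight sectors
scores = {"Sector A": 0.40, "Sector B": 0.42, "Sector C": 0.45,
          "Sector D": 0.40, "Sector E": 0.43, "Sector F": 0.41,
          "Sector G": 0.44, "Sector H": 0.90}
```

On these illustrative scores, only Sector H's value falls outside the screen and would be sent back for additional justification or revision.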
According to the Coast Guard, for each outlier identified during the national review process, sector officials reconsider the data point in question and either change the inputs to reflect national averages or provide additional justification for why the risk score for the target in question should be outside of the national average. Headquarters officials explained that they generally accept justification for data outliers and that a goal of the review process is to spur discussions related to maritime risk rather than forcing compliance with national data averages. For example, officials from one sector told us that a small port in their sector is critical for their state’s energy imports, and accordingly, the port infrastructure is high risk on a national scale. The officials said that Coast Guard headquarters officials have questioned the relatively high risk rankings of the port’s infrastructure because they are statistical outliers, but have deferred to the expertise of the sector regarding the risk scores. Tools and calculators. Recognizing that sector port security specialists who assess risk using MSRAM generally do not have expertise in all aspects of assessing vulnerability and consequence, the Coast Guard has regularly added new tools and calculators to MSRAM to improve the quality, accuracy, and consistency of vulnerability and consequence assessments. For example, MSRAM now includes a blast calculator that allows users to more easily determine the death and injury consequences of an explosion close to population centers. Coast Guard officials from 29 sectors (82 percent of sectors) cited a variety of challenges with assessing vulnerability and consequence values in MSRAM, but officials from 10 sectors said that it was becoming easier to do over time and officials from 14 sectors said that the tools and calculators in MSRAM have helped. Benchmarks and recommended ranges. 
To limit inconsistencies caused by different judgments by individual MSRAM users and to minimize user subjectivity, the Coast Guard built into MSRAM a suggested range of scores for each risk factor—including vulnerability, consequence, and response capabilities—as well as averages, or benchmarks, of scores for each factor. The benchmarks are based on Coast Guard and expert evaluation of target classes and attack modes. The benchmarks and recommended ranges are reviewed and updated each year following the annual data revalidation cycle. Training. The Coast Guard has also provided annual training for MSRAM users, including beginning, intermediate, and advanced courses intended to standardize the data entry process across the Coast Guard. Officials from 34 sectors (97 percent) reported finding the training moderately to very useful in terms of enhancing their ability to assess, understand, and communicate the risks facing their sectors. In 2011, Coast Guard headquarters also started providing live web-based training sessions on various MSRAM issues, such as resolving national review comments, to help sector staff gain familiarity with MSRAM’s features on an as-needed basis. In addition to MSRAM training provided by headquarters, one Coast Guard district official we spoke with had developed and provided localized training to the sector-level port security specialists on assessing the vulnerability of chemical facilities. The district official told us that Coast Guard headquarters was interested in this local model for delivering training and was planning to pilot a similar training program in a different district. MSRAM generally aligns with DHS’s criteria for a complete and reproducible risk assessment, but challenges remain with the MSRAM methodology and risk assessment process. The Coast Guard has acknowledged these challenges and limitations and has actions underway to address them and make MSRAM more complete and reproducible. 
Coast Guard officials noted that some of these challenges are not unique to MSRAM and are faced by others in the homeland security risk assessment community. Specific challenges are detailed below. Data subjectivity. While the Coast Guard has taken actions to minimize the subjectivity of MSRAM data, officials acknowledged that assessing threat, vulnerability, and consequence is inherently subjective. To assess threat, the Coast Guard's ICC quantifies judgments related to the intent and capability of terrorist organizations to attack domestic maritime infrastructure. However, there are limited national historic data for domestic maritime attacks and thus intelligence officials must make a number of subjective judgments and draw inferences from international maritime attacks. Further, GAO has previously reported on the inherently difficult nature of assessing the capability and intent of terrorist groups. Vulnerability and consequence assessments in MSRAM are also inherently subjective. For example, officials from 20 sectors we interviewed said that even with training, tools, and calculators, assessing consequences can be challenging and that it often involved subjectivity and uncertainty. Officials noted that assessing economic impacts—both primary and secondary—was particularly challenging because it required some level of expertise in economics—such as supply chains and industry recoverability—which port security specialists said is often beyond their skills and training. The input for secondary economic impacts can have a substantial effect on how MSRAM's output ranks a target relative to other potential targets. Undervaluing secondary economic impacts could result in a lower relative risk ranking that underestimates the security risk to a target, or inversely, overvaluing secondary economic impacts could result in overestimating the security risk to a target.
Recognizing the challenges with assessing secondary economic impacts, Coast Guard officials said they are working with the DHS Office of Risk Management and Analysis to study ways to more accurately assess secondary economic impacts. Additionally, during the course of our review the Coast Guard implemented a tool called IMPLAN that has the potential to inform judgments of secondary economic impacts by showing what the impact could be for different terrorist scenarios. Limited time to complete assessments. Officials from 19 sectors (54 percent) told us that the lack of time to complete their annually required vulnerability and consequence assessments is a key challenge and many expressed that they believed their sector’s data suffered in quality as a result. Each year, sectors are required to update and validate their risk assessments for targets in their areas of responsibility, which can involve site visits to port facilities and discussions with facility security officers to obtain information on vulnerability and consequences. Officials from a Gulf Coast sector noted that obtaining this information from facilities can be challenging because of the number of facilities in the sector and the time involved in meeting with each facility. Officials from an inland river sector also noted that gathering data from certain facilities—such as information on a chemical plant’s security enhancements or the expected loss of life from a terrorist attack—is challenging because facilities may not want to share proprietary information that could be damaging in the hands of a competitor. As a result, it often takes additional visits, phone calls, e-mails, and time to obtain this information. 
Officials from a northeastern sector said that having the people and time to update MSRAM data is their key challenge and completing the update is a heavy lift because the update is required at the same time as several other requirements, such as reviewing investment justifications for the Port Security Grant Program. Coast Guard sector officials and one district official we spoke with reported raising concerns to headquarters about the time it takes to complete MSRAM assessments. Headquarters staff also said they were looking into additional ways to make the assessment process easier for sectors, such as providing job aids and examining the possibility of completing the data update at different times in the year. Limitations in modeling methodology—adaptive terrorist behavior. There are inherent limitations in the overall methodology the Coast Guard uses to model risk. For instance, MSRAM threat information does not account for adaptive terrorist behavior, which is defined by the National Research Council as an adversary adapting to the perceived defenses around targets and redirecting attacks to achieve its goals. Accounting for adaptive terrorist behavior could be modeled by making threat a function of vulnerability and consequence, rather than using the MSRAM formula, which treats threat, vulnerability, and consequence as independent variables. Not accounting for adaptive terrorist behavior is a critique of MSRAM raised by terrorism risk assessment experts. For example, officials from the DHS Office of Risk Management and Analysis have stressed the need to account for adaptive terrorist behavior in risk models. In addition, DHS's 2011 Risk Management Fundamentals guidance states that analysts should be careful when calculating risk by multiplying threats, vulnerabilities, and consequences (as MSRAM does), especially for terrorism, because of interdependencies between the three variables.
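The modeling distinction at issue, threat as an independent input versus threat conditioned on vulnerability and consequence, can be sketched as follows. The proportional reallocation rule below is an illustrative assumption; it is not the MSRAM formula or the Dynamic Risk Management Model.

```python
# Contrast between an independent-threat product (MSRAM-style) and a
# toy adaptive-adversary model in which threat shifts toward targets
# with higher expected payoff. The allocation rule is an illustrative
# assumption, not any agency's actual method.

def static_risk(threat: float, vulnerability: float, consequence: float) -> float:
    # Threat is treated as independent of the other two variables.
    return threat * vulnerability * consequence

def adaptive_risks(targets: dict, total_threat: float = 1.0) -> dict:
    """Spread a fixed budget of adversary intent across targets in
    proportion to expected payoff (vulnerability * consequence), so
    hardening one target redirects threat toward the others."""
    payoff = {name: v * c for name, (v, c) in targets.items()}
    total = sum(payoff.values())
    return {name: total_threat * (p / total) * p  # threat(V, C) * V * C
            for name, p in payoff.items()}

# Two hypothetical targets, given as (vulnerability, consequence) pairs.
targets = {"ferry_terminal": (0.2, 100.0), "chemical_dock": (0.5, 60.0)}
risks = adaptive_risks(targets)
print(risks)
```

In this toy model, lowering a target's vulnerability reduces not only its own risk but also the share of threat directed at it, which is the interdependence the DHS guidance warns is lost when threat, vulnerability, and consequence are simply multiplied.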
Coast Guard officials agreed with the importance of accounting for adaptive terrorist behavior and with the risks of treating threat, vulnerability, and consequence as independent variables. The officials explained that although they did not design MSRAM to account for adaptive terrorist behavior, they are working to develop the Dynamic Risk Management Model, which will potentially address this issue. Limitations in modeling methodology—network effects. MSRAM also does not capture network effects, although the Coast Guard has begun initiatives to identify and document networked systems of targets that, if successfully attacked, would have large ripple effects throughout the port or local economy. (For more information on network effects, see Gerald G. Brown, W. Matthew Carlyle, Javier Salmerón, and Kevin Wood, Operations Research Department, Naval Postgraduate School, Analyzing the Vulnerability of Critical Infrastructure to Attack and Planning Defenses (Monterey, Calif.: 2005).) Coast Guard officials agreed that assessing network effects is a challenge and they are examining ways to meet this challenge. However, the Coast Guard's work in this area is still in its infancy and there is uncertainty regarding the way in which the agency will move forward in measuring network effects. MSRAM is generally documented and defensible, but the Coast Guard could improve its documentation of the model's assumptions and other sources of uncertainty, such as subjective judgments made by Coast Guard analysts about threats, vulnerabilities, and consequences, and how these assumptions and other sources of uncertainty affect MSRAM's results. The NIPP states that for a risk assessment methodology to be documented, any assumptions and subjective judgments need to be transparent to the individuals who are expected to use the results. For a risk assessment methodology to be defensible, uncertainty associated with consequence estimates and the level of confidence in the vulnerability and threat estimates should also be communicated to users of the results.
There are multiple assumptions and other sources of uncertainty in MSRAM. For example, assumptions used in MSRAM include the particular dollar value for a statistical life or the assumed dollar amount of environmental damage resulting from oil or hazardous material spilled as the result of a terrorist attack. MSRAM also relies on multiple subjective judgments made by Coast Guard analysts, which means that the risk values calculated from the model fall within a range of possible values. For example, to assess risk in MSRAM, Coast Guard analysts make judgments regarding such factors as the likelihood of success in interdicting an attack and the number of casualties expected to result from an attack. These subjective judgments are sources of uncertainty with implications that, according to the NIPP and risk management best practices, should be documented and communicated to decision makers. MSRAM's primary sources of documentation provide information on how data are used to generate a risk estimate and information on some assumptions, and the Coast Guard has made efforts to document and reduce the number of assumptions made by the field-level user in order to increase the consistency of MSRAM's data. For example, the MSRAM training and software manual states that MSRAM users are expected to specify the assumptions they make in evaluating various attack modes and provides assumptions for users to consider when scoring attack scenarios, such as specifying the type and amount of biological agent used in a biological attack scenario and assuming that attackers are armed and suicidal in a boat bomb attack scenario.
While these documentation efforts are positive steps to reduce MSRAM data subjectivity and increase data consistency, we found that the Coast Guard has not documented all the sources of uncertainty associated with threat, vulnerability, and consequence assessments and what implications this uncertainty has for interpreting the results, such as an identification of the highest-risk targets in a port. As a result, decision makers do not know how robust the risk rankings of targets are and the degree to which a list of high-risk targets could change given the uncertainty in the risk model's inputs and parameters. Moreover, overlapping ranges of possible risk values caused by uncertainty could have implications for strategic decisions or resource allocation, such as allocating grant funding or targeting patrols. Overlapping ranges of risk values due to uncertainty also underscore the importance of professional judgment in decision making because risk models do not produce precise outcomes that should be followed without a degree of judgment and expertise. According to the NIPP, the best way to communicate uncertainty will depend on the factors that make the outcome uncertain, as well as the amount and type of information that is available. The NIPP states that in any given terrorist attack scenario there is often a range of outcomes that could occur, such as a range of dollar amounts for environmental damage or a range of values for a statistical life. For some incidents, the range of outcomes is small and a single estimate may provide sufficient data to inform decisions. However, if the range of outcomes is large, the scenario may require additional specificity about conditions to obtain appropriate estimates of the outcomes. Often, this means providing a range of possible outcomes rather than a single point estimate.
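The NIPP's point about ranges versus point estimates can be made concrete with a small Monte Carlo sketch. The uniform distributions, parameter values, and percentile band below are assumptions chosen for illustration only, not MSRAM values.

```python
# Sketch of propagating input uncertainty into a range of risk values
# rather than a single point estimate. Distributions and parameters
# are illustrative assumptions, not MSRAM values.
import random

def risk_range(threat, vuln_range, cons_range, trials=10_000, seed=42):
    """Sample vulnerability and consequence from uniform ranges and
    return an approximate 5th-95th percentile band for risk."""
    rng = random.Random(seed)
    samples = sorted(
        threat * rng.uniform(*vuln_range) * rng.uniform(*cons_range)
        for _ in range(trials)
    )
    return samples[trials // 20], samples[(trials * 19) // 20]

# Two hypothetical targets whose point estimates would rank one above
# the other, but whose uncertainty bands overlap substantially.
lo_a, hi_a = risk_range(0.03, (0.4, 0.8), (40e6, 90e6))
lo_b, hi_b = risk_range(0.03, (0.3, 0.7), (50e6, 100e6))
print(f"Target A: {lo_a:,.0f} to {hi_a:,.0f}")
print(f"Target B: {lo_b:,.0f} to {hi_b:,.0f}")
print("Bands overlap:", lo_a < hi_b and lo_b < hi_a)
```

When the bands overlap, a ranked list of high-risk targets is less robust than a single-point score suggests, which is why documenting and communicating the uncertainty to decision makers matters.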
Coast Guard officials agreed with the importance of documenting and communicating the sources and implications of uncertainty for MSRAM's risk estimates, and noted that they planned to develop this documentation as part of an internal MSRAM verification, validation, and accreditation (VV&A) process that they expect to complete in the fall of 2011. According to the Coast Guard, accreditation is an official determination that a model or simulation is acceptable to use for a specific purpose. While this accreditation process is expected to document the scope and limitations of MSRAM's capabilities and determine whether these capabilities are appropriate for MSRAM's current use, the Coast Guard's draft accreditation plan does not discuss how the Coast Guard plans to assess and document uncertainty in its model or communicate those results to decision makers. External peer reviews of risk models have addressed and should address the structure of the model, the types and certainty of the data, and how the model is intended to be used (see National Research Council of the National Academies, Review of the Department of Homeland Security's Approach to Risk Analysis). Peer reviews can also identify areas for improvement and can facilitate sharing best practices. As we have previously reported, external peer reviews cannot ensure the success of a model, but they can increase the probability of success by improving the technical quality of projects and the credibility of the decision-making process. MSRAM has been reviewed twice—in 2010 by risk experts affiliated with the Naval Postgraduate School and, to a lesser extent, in 2009 by CREATE at the University of Southern California. The authors of the Naval Postgraduate School report stated that their review was intended to validate and verify the equations used in MSRAM, evaluate MSRAM's quality control procedures, and review the use of MSRAM outputs to manage risk.
The authors of the CREATE report stated that their review focused on suggestions for improvement rather than a comprehensive evaluation, and they suggested that the Coast Guard continue to seek feedback and reviews from the risk and decision analysis community, as well as from practitioners of other disciplines. Coast Guard officials told us that they have generally benefited from reviews of MSRAM and have worked to implement many of the resulting recommendations. Officials noted they intend to pursue external reviews of MSRAM as part of the ongoing VV&A process, but they have not identified who would be conducting the reviews, or when the reviews would occur. As the Coast Guard's risk assessment model continues to evolve, the Coast Guard could benefit from periodic external peer review to ensure that the structure and outputs of the model are appropriate for its given uses and to identify possible areas for improvement. MSRAM is a security risk analysis and risk management tool and the Coast Guard intends for it to be used to inform risk management decisions at all levels of command. As such, in a May 2011 guidance document, the Coast Guard set expectations for how MSRAM should be used at the national and sector levels. At the national level, the Coast Guard expects its offices to use MSRAM to support strategic plans, policy, and guidance; to integrate MSRAM into maritime security programs; and to ensure that sectors have adequate personnel ready to perform MSRAM duties, among other goals. Operational activities include conducting boat escorts, implementing positive control measures—that is, stationing armed Coast Guard personnel in key locations aboard a vessel to ensure that the operator maintains control—and providing a security presence through various actions under the Maritime Security Response Operations (MSRO) program. By identifying the nation's highest-risk maritime targets, MSRAM helps establish the national list of maritime critical infrastructure and key resources (MCIKR), which sectors use to complete their annually required number of MCIKR visits.
According to Coast Guard officials, MSRAM has aided in reducing the MCIKR list from 740 assets to 324 assets and allowed the Coast Guard to further prioritize within that more focused list of 324, since MSRAM analysis demonstrated that a small number of assets make up the majority of the nation's risk. MSRAM has also been used as a tool to inform resource allocation and performance measurement, which is consistent with the Coast Guard's goals for MSRAM. For instance, risk-informed methods and processes or models, such as MSRAM, are used in the Coast Guard's annual Standard Operational Planning Process, which establishes a standardized process to apportion major assets, such as boats, aircraft, and deployable specialized forces. Coast Guard officials said that MSRAM data support the PWCS mission in this process by demonstrating how risk is distributed geographically. In addition, the Coast Guard used MSRAM to support a funding request for boats, personnel, and associated support costs to assist with Coast Guard efforts to reduce the risk of certain dangerous cargoes by escorting ships passing through coastal ports carrying cargoes such as liquefied natural gas. MSRAM also supports resource allocation through the Port Security Grant Program by informing the risk formula used by DHS to allocate grant funding. MSRAM data are also used in the Coast Guard's model for measuring its performance in the PWCS mission, which is discussed in depth later in this report. MSRAM has also supported strategic documents and efforts throughout DHS. Specifically, the Coast Guard reported that MSRAM data are an essential building block for a number of key strategic documents, such as the National Maritime Strategic Risk Assessment, the National Maritime Terrorism Threat Assessment, and the Combating Marine Terrorism Strategic and Performance Plan, among others.
In addition, the Coast Guard uses MSRAM, among other inputs, to provide DHS with maritime risk information for the Transportation Sector Security Risk Assessment tool. DHS also reported that the Coast Guard has shared MSRAM-based identification of critical assets beyond the transportation system with 13 of the 18 DHS critical infrastructure and key resource sectors. For example, MSRAM has been used to assess the risk of some chemical facilities and power plants. MSRAM has been used to inform a variety of efforts at the sector level, such as strategic planning, communication with port stakeholders, and operational and tactical decision making, but its use for operational and tactical risk management efforts has been limited by a lack of staff time, the complexity of the MSRAM tool, and competing mission demands, among other factors. The Coast Guard expects its 35 sectors, with support from its nine districts, to integrate MSRAM data into strategic, operational, and tactical plans, operations, and programs as necessary and required, among other actions. Based on results from our interviews with officials from all 35 Coast Guard sectors, officials from 26 sectors (74 percent) reported finding MSRAM moderately to very useful for informing strategic planning, which includes developing portions of local Area Maritime Security Plans and planning security exercises. For example, one sector reported using MSRAM to find the highest-risk areas in which to conduct exercises. Further, lessons learned from the exercises are incorporated into strategic plans, which officials said leads to planning process improvements and overall better plans. However, officials from a southeastern sector pointed out that MSRAM is a snapshot view of port risk and therefore long-term strategic plans require additional information from many sources. Area Maritime Security Plans have been established pursuant to the Maritime Transportation Security Act of 2002.
Content requirements for the plans were established by 33 C.F.R. § 103.505 and expanded by the Security and Accountability For Every Port (SAFE Port) Act of 2006 to include a Salvage Response Plan. The plans are intended to sponsor and support engagement with port community stakeholders to develop, test, and when necessary, implement joint efforts for responding to and mitigating the effects of a maritime transportation security incident. For communicating risk to port stakeholders, officials from a number of sectors said that MSRAM was moderately to very useful. For instance, officials from a southeastern sector said that MSRAM is used to communicate and justify additional security procedures. Further, during annual compliance inspections, MSRAM data are discussed with facility security officers and compared to security data that the facility security officers have calculated. In addition, officials from a Gulf Coast sector reported that MSRAM provides a convenient, objective way to communicate risk to port security stakeholders, and stakeholders appreciate that risk information from MSRAM is computer driven and based on a rigorous process. For informing sector operational and tactical decision making, such as planning MSRO activities, developing local critical infrastructure lists, and planning for special events, officials from 18 sectors (51 percent) reported that MSRAM moderately or greatly provided them with the information needed to make risk-informed decisions regarding port security. Regarding planning MSRO activities, one eastern sector reported that MSRAM was very helpful for identifying priority targets for MSRO patrols and escorts. Regarding developing local critical infrastructure lists, officials from an eastern sector said that since the sector has no assets on the national MCIKR list, they were able to use MSRAM to generate a local list to help determine patrols and other security efforts.
Regarding special event planning, officials from 16 sectors (45 percent) told us they used MSRAM to determine where to allocate resources for special events, such as the Fourth of July, dignitary visits, or political conventions. For example, officials from an inland river sector said that they used MSRAM to identify possible attack scenarios and to help identify what security resources they should request to provide security for a special event. See figure 1 for photographs of various Coast Guard security-related activities that can be informed by MSRAM. In addition to using MSRAM to inform maritime security decisions, officials from almost every sector noted that they also assess and manage risk using other tools or methods, such as the High Interest Vessel matrix, outreach to port partners, working relationships with Area Maritime Security Committees, or professional judgment. Although officials from most sectors found that MSRAM provided useful risk information for sector-level decision making, officials from 32 sectors (91 percent) reported that their overall use of MSRAM data in managing risk was hindered by a lack of staff time for data analysis, the complexity of the MSRAM tool, or competing mission demands, among other things. These challenges are discussed below. Limited staff time for analyzing and using MSRAM. Officials from 21 sectors (60 percent) told us that limited staff time posed a challenge to incorporating MSRAM into strategic, operational, and tactical planning efforts. For example, officials from a northeastern sector said that a lack of available staff time was one of the most significant limitations to utilizing MSRAM. These officials stated that they would like to have dedicated MSRAM personnel to develop the tool and make it useful on a daily basis. 
They added that even though MSRAM had many capabilities, they were unable to use it to its full capability because their port security specialist—the primary user of MSRAM—was busy with other programs, such as the Port Security Grant Program. Each of the port security specialists from the three districts we interviewed—which encompass 15 sectors over the West Coast, East Coast, Gulf Coast, and Mississippi River area—echoed the challenges with the level of sector resources for MSRAM. For example, one district official stated that although Coast Guard headquarters has dedicated MSRAM staff, there are no full-time MSRAM analysts at the sector level. He added that each sector would need a dedicated person for MSRAM and risk analysis to bring MSRAM analysis into operational and tactical decision making. Complexity of the MSRAM tool. Officials from 14 sectors (40 percent) reported that MSRAM use has been limited because data outputs require a substantial degree of analysis to use in decision making, or because the MSRAM tool itself is not easy to use. Some of the challenges raised by sectors that contribute to the complexity of the tool and interpreting its outputs included keeping abreast of yearly changes to the MSRAM tool and bridging knowledge gaps that occur when staff familiar with MSRAM rotate or leave the sector. In its MSRAM core document, the Coast Guard recognized that the frequent rotation of active duty personnel presents a risk to both the consistency of the MSRAM risk scoring efforts and the application of risk results. Competing mission demands and resource constraints. Officials from 14 sectors (40 percent) reported that competing mission demands or resource constraints limited the use of MSRAM. Specifically, officials from 11 sectors reported that MSRAM’s usefulness was limited by the fact that it only considers risk in the PWCS mission, which is 1 of the Coast Guard’s 11 statutorily required missions. 
For example, a Great Lakes sector told us that while MSRAM identifies the risks in the sector, the sector is limited in its ability to move assets to address those security risks because the assets are also fulfilling other Coast Guard mission requirements, such as search and rescue. Additionally, officials from 6 sectors said that limited resources, such as boats or personnel, constrained their sectors’ ability to address the risks identified by MSRAM. For example, officials from 2 inland river sectors said that MSRAM identifies their security risks and demonstrates where they should patrol and plan for special events, but that they do not have the resources to carry out the plans. Further, officials from 1 of the inland river sectors added that their response boats are often busy escorting the Army Corps of Engineers or engaged in flood relief efforts. This leaves the work of security patrols to the local harbor patrol, which the officials said does not have the same capabilities, in terms of boats and weapons, as the Coast Guard. Other challenges. Sector officials also identified other challenges with using MSRAM for informing decision making. Specifically, officials from 16 sectors (45 percent) said that MSRAM would be more useful if it was linked to other Coast Guard data systems, such as the Coast Guard’s inspections database, or if MSRAM was integrated into the sector command center. For example, officials from an east coast sector told us that they would like to see MSRAM linked to other databases in the sector command center, such as the Coast Guard’s vessel tracking system. Similarly, officials from a west coast sector said that integrating MSRAM into the Coast Guard’s inspections database would keep MSRAM continually updated and reflective of inspection results. 
Further, the command center has to consider other mission response needs, such as for pollution incidents or search and rescue, among others, and if MSRAM was integrated into the sector command center it could be used more in day-to-day operations. In addition, officials from 5 sectors noted that MSRAM does not capture dynamic risk, which limits its ability to inform daily decisions at the sector level. For instance, officials from a Gulf Coast sector said that they did not use MSRAM on a daily basis to allocate resources because daily fluctuations in vessel and barge risk are their greatest concern and this risk is not currently captured in MSRAM. The sectors that raised these issues believed that linking MSRAM into other data systems, integrating MSRAM into the command center, and having MSRAM account for dynamic risks could contribute to making its data more accurate, robust, and useful for decision making. Coast Guard headquarters officials told us that they were aware of the challenges field-level MSRAM users were facing and have taken some steps to address them, but providing additional training could help integrate MSRAM throughout sector decision making. The Coast Guard’s current actions to address MSRAM user challenges include assessing the feasibility of adding additional risk analyst staff, increasing the data’s usability, developing decision-supporting modules, and providing training. These actions are described below. Examining the feasibility of dedicated risk analysts. Presently, there is no dedicated risk analyst or MSRAM analyst position at the sector level, but headquarters officials told us in June 2011 that they are examining the feasibility of assigning additional port security specialists to the field and submitted a resource proposal for the additional staff. According to a senior Coast Guard budget official, given competing priorities and a constrained resource environment, it is unclear when or if this resource proposal will be funded. 
Deploying MSRAM to sector command centers. To help make MSRAM more dynamic and increase its usability, the Coast Guard is piloting an Enterprise Geographic Information System (EGIS) display for sector command centers, which layers facility and vessel locations onto a satellite-based map and visually displays changing risk as vessels move into and out of ports. Officials from 7 sectors that participated in or were familiar with the initial EGIS test group reported that the functionality was very useful and had the potential to substantially increase MSRAM’s use for sector risk management efforts. In addition, headquarters officials told us in June 2011 that efforts were under way to integrate MSRAM into the Coast Guard’s inspections database, which would allow MSRAM to be continually updated and reflective of year-round facility and vessel inspection results. Developing risk management modules. To assist with incorporating risk assessment information into decision making, in the fall of 2008, the Coast Guard began developing risk management modules within MSRAM that are able to provide specific types of analyses, such as comparing alternative security strategies. We asked officials from all 35 sectors their views on four modules—the Alternatives Evaluation Module, the Simplified Reporting Interface, the Daily Risk Profile, and the Risk Management Module. Sectors had mixed views on the utility of these modules. Specifically, officials from 14 sectors (40 percent) found the Alternatives Evaluation module very useful and cited such uses as evaluating Port Security Grant Program proposals and planning security for special events, and officials from 15 sectors (42 percent) found the Simplified Reporting Interface very useful for communicating risk information to port partners. 
However, with respect to the other two modules—the Daily Risk Profile and Risk Management Module—officials from 2 sectors (5 percent) found the Daily Risk Profile very useful and officials from 3 sectors (8 percent) found the Risk Management Module very useful. For both modules, officials from 18 sectors (51 percent) reported that either they had not seen them or they were aware of the modules but did not have the time or training, among other reasons, to use them. Many of the modules are new, and headquarters and some sector officials reported that they expected the modules would be more useful in the future as sectors gained familiarity with them through additional exposure and the annual MSRAM training. Providing training. While the Coast Guard offers annual MSRAM training, officials from 25 sectors (71 percent) identified areas of the training for improvement, which the Coast Guard could do more to address. Specifically, officials from these sectors said that increasing the number of people who take MSRAM training, providing MSRAM training to command-level staff or senior management, and offering training on how to conduct risk analysis to inform decision making, among other things, would help integrate MSRAM throughout sector decision-making processes. Since MSRAM is a collateral duty, MSRAM training is not part of any Coast Guard personnel’s required training curriculum. However, Coast Guard guidance from May 2011 states that area, district, and sector commanders are responsible for ensuring that adequate numbers of appropriate personnel are trained in MSRAM. Only one sector did not, at the time of our interview, have at least one staff person trained in MSRAM. Officials from a Gulf Coast sector said that the training provided on the MSRAM tool itself is good, but the training does not teach the skills needed to make decisions in the field.
Officials from a Great Lakes sector suggested that the Coast Guard develop an advanced course on how to use MSRAM to inform operational decisions. Officials from a southeastern sector added that the Coast Guard provides guidance on how to assess risks using MSRAM, but needs to provide more training on how to communicate MSRAM results and how those results can be used. In addition, a sector commanding officer who participated in one of our interviews told us that he was provided minimal training on MSRAM and wanted to understand more about how it can be used to support command-level decisions. MSRAM has the capability of informing operational, tactical, and resource allocation decisions at all levels of a sector, but the Coast Guard has generally provided MSRAM training to a limited number of sector staff with specific MSRAM risk assessment responsibilities, such as port security specialists, rather than sector staff who may have command or management responsibilities where MSRAM may apply. Coast Guard headquarters officials said that this was because of limited resources to provide training for numerous sector personnel and variations in how MSRAM responsibilities are managed at different sectors. Standards for Internal Control in the Federal Government states that effective management of an organization’s workforce is essential to achieving results. Further, only when the right personnel for the job are on board and are provided the right training and tools, among other things, is operational success possible. To this end, management should ensure that training is aimed at developing and retaining employee skill levels to meet changing organizational needs. Coast Guard headquarters officials agree that providing MSRAM training to additional sector staff, particularly those with command and management responsibilities, would be valuable. 
Such training on how MSRAM can be used at all levels of command for risk-informed decision making—including how MSRAM can assist with the selection of different types of security measures to address areas of risk and the evaluation of their impacts—could further the Coast Guard’s efforts to implement its risk management framework and meet its goal to institutionalize MSRAM as the risk management tool for maritime security. The Coast Guard developed a performance measure and supporting model to measure and report its overall performance in reducing maritime security risk. This measure identifies the percentage reduction of maritime security risk, subject to Coast Guard influence, resulting from various Coast Guard actions. The Coast Guard considers this performance measure its key outcome measure for its PWCS mission. According to DHS’s Risk Management Fundamentals and the NIPP, it is crucial that a process of performance measurement be established to evaluate whether actions taken ultimately achieve the intended performance objective, such as reducing risk. This is important not only in evaluating program performance but also in holding the organization accountable for progress. We have also previously reported on the importance of developing outcome-based performance goals and measures as part of results management efforts. From fiscal years 2006 to 2010, the Coast Guard annually reported reducing from 15 to 31 percent of the maritime risk it is responsible for, in each year either meeting or exceeding its target. For fiscal years 2011 and 2012, the Coast Guard’s planned performance targets are to reduce more than 44 percent of the maritime security risk for which it is responsible. To measure how its actions have reduced risk, the Coast Guard developed a model that uses a two-step approach. The first step is to estimate the total amount of terrorism risk that exists in the maritime domain, in the absence of any Coast Guard activities. 
This is referred to as raw risk, and this information comes primarily from MSRAM. The second step relies on an elicitation process whereby Coast Guard subject matter experts estimate how various security activities and operations, maritime domain awareness programs, and regulatory structures—referred to by the Coast Guard as regimes—that the Coast Guard has implemented have reduced risk to U.S. ports and waterways. This step involves Coast Guard subject matter experts assessing the probability of these Coast Guard efforts failing to prevent a successful terrorist attack for 16 potential maritime terrorist attack scenarios. Information also comes from DHS’s Risk Analysis Process for Informed Decision Making (RAPID) project, which is designed to provide strategic planning guidance and support resource allocation decisions at the DHS level. According to DHS’s Risk Management Fundamentals, elicitations involve using structured questions to gather information from individuals with in-depth knowledge of specific areas or fields. Unlike other Coast Guard missions, such as search and rescue, there is not a rich historical data set of maritime terrorism incidents that the Coast Guard can use to measure its actual performance. In other words, in the absence of an actual domestic maritime terrorism event, the Coast Guard uses internal subject matter experts to estimate risk reduction as a proxy measure of performance—an attempt to measure performance against a terrorism incident that did not occur. The Coast Guard’s efforts to develop an outcome measure to quantify the impact its actions have had on risk are a positive step. However, the use of the measure has been limited, and even with recent improvements, the Coast Guard faces challenges using this measure to inform decision making. Performance goals and measures are intended to provide Congress and agency management with information to systematically assess a program’s strengths, weaknesses, and performance.
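The arithmetic behind this two-step approach can be sketched as follows. All scenario names, raw-risk values, and failure probabilities below are invented for illustration only; actual inputs would come from MSRAM and from the Coast Guard's expert elicitation.

```python
# Step 1: raw risk (expected consequence absent any Coast Guard activity)
# for a few hypothetical attack scenarios. Units are arbitrary.
raw_risk = {"scenario_a": 100.0, "scenario_b": 60.0, "scenario_c": 40.0}

# Step 2: elicited probability that Coast Guard regimes fail to prevent
# a successful attack in each scenario (hypothetical values).
p_regime_fails = {"scenario_a": 0.4, "scenario_b": 0.7, "scenario_c": 0.5}

# Residual risk is the raw risk that remains despite Coast Guard efforts.
residual_risk = {s: raw_risk[s] * p_regime_fails[s] for s in raw_risk}

total_raw = sum(raw_risk.values())
total_residual = sum(residual_risk.values())

# Percentage of maritime security risk reduced by Coast Guard actions.
pct_reduced = 100 * (total_raw - total_residual) / total_raw
print(f"{pct_reduced:.1f}% of raw risk reduced")  # → 49.0% of raw risk reduced
```

This structure mirrors the description above: the measure is a ratio of expert-estimated quantities rather than a count of observable events, which is why the report characterizes it as a proxy measure.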
Thus, measures should provide information for management decision making. Coast Guard officials explained that the primary purpose of the risk reduction measure has been for external performance reporting, and to a more limited extent for informing strategic decision making and for conducting internal analysis of performance to identify areas for improvement. Specifically, officials said the measure has been used to compare risk across maritime terrorism scenarios and compare those results to other studies and analysis on maritime terrorism scenarios, which provided information on whether PWCS activities were appropriately balanced to address those risks. However, Coast Guard officials stated that over time, internal and external reviews identified limitations in the risk reduction measure, such as not allowing for comparisons of performance across sectors. Recognizing these limitations, in 2010, the Coast Guard made improvements to the risk reduction model intended to enhance its utility for management decision making and to provide a more accurate measure of risk reduction. For example, the updated model includes information on the locations of Coast Guard assets and potential targets, which can be used to calculate the probability that Coast Guard assets will be able to intercept attacks. The Coast Guard also improved the elicitation techniques by which subject matter experts provided their estimates of Coast Guard risk reduction performance, and expanded the size and diversity of the subject matter experts involved in the elicitation process. According to Coast Guard officials, these improvements have made the measure and supporting model more useful for informing strategic decisions by allowing, for example, the ability to calculate risk reduction at the sector, district, area, and national levels and the risk reduction value of each element of the Coast Guard’s strategy.
In other words, the updated model is able to show the risk reduction value of Coast Guard operational assets, such as small boats or helicopters, compared with regime activities, such as regulation enforcement. This information can help inform resource allocation decisions because it could identify which actions provide the greatest risk reduction, according to these officials. The Coast Guard plans to use the updated model to measure its performance in reducing risk for the 2011 fiscal year. According to the Coast Guard, in 2009 a total of 26 subject matter experts were used, mostly from headquarters. In 2010, a total of 46 subject matter experts were used, coming from headquarters, areas, districts, sectors, and operational units. However, challenges remain in using the measure to inform decision making. For example, given the inherent uncertainties in estimating risk reduction, it is unclear if a measure of risk reduction would provide meaningful performance information for tracking progress against goals and performance over time. According to our performance measurement criteria, to be able to assess progress toward the achievement of performance goals, the measures used must be reliable and valid. Reliability refers to the precision with which performance is measured, while validity is the extent to which the measure adequately represents actual performance. Therefore, the usefulness of agency performance information depends to a large degree on the reliability of performance data. We have also reported that decision makers must have assurance that the program data being used to measure performance are sufficiently reliable and valid if the data are to inform decision making.
Although the Coast Guard has taken steps to improve the quality of the supporting model to provide a more accurate measure, estimating risk reduction is inherently uncertain and this measure is based on largely subjective judgments of Coast Guard personnel, and therefore the risk reduction results reported by the Coast Guard are not based on measurable or observable activities. As a result, it is difficult to independently verify or assess the validity or appropriateness of the judgments or to determine if this is an accurate measure of Coast Guard performance in the PWCS mission. However, Coast Guard officials told us that they believe these reported results provide a useful proxy measure of Coast Guard performance, and noted that this is one of several metrics the Coast Guard uses to assess performance in the PWCS mission. According to DHS’s Risk Management Fundamentals, it is also important to be transparent about assumptions and key sources of uncertainty, so that decision makers are informed of the limitations of the risk information provided by the model. In its 2009 review of the risk reduction model, CREATE at the University of Southern California stated that it seemed likely that the model ignored important uncertainties and implied incorrectly high precision of risk estimates. Furthermore, OMB’s Updated Principles for Risk Analysis notes that because of the inherent uncertainties associated with estimates of risk, presentation of a single risk estimate may be misleading and provide a false sense of precision. OMB suggests that when a quantitative characterization of risk is provided, a range of plausible risk estimates should also be provided. From fiscal years 2006 to 2010, the Coast Guard reported the risk reduction measure as a specific risk reduction number rather than as a range of plausible risk reduction estimates. 
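In line with OMB's suggestion, a reporting format could pair the single numerical value required by a data system with an explanatory note giving the plausible range. A minimal sketch, with invented figures that are not actual Coast Guard results:

```python
# Hypothetical reported results: a central estimate of risk reduction
# plus a plausible range produced by the supporting model.
point_estimate = 0.31   # 31 percent risk reduction (central estimate)
low, high = 0.22, 0.44  # plausible range (invented for illustration)

# A reporting system that accepts only one numerical value can still
# convey uncertainty through an accompanying explanatory note.
reported_value = point_estimate
note = (f"Estimated risk reduction: {point_estimate:.0%} "
        f"(plausible range {low:.0%}-{high:.0%})")
print(note)  # → Estimated risk reduction: 31% (plausible range 22%-44%)
```

Presenting the range alongside the point estimate avoids implying more precision than the underlying expert judgments can support.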
The Coast Guard official responsible for this measure told us this was because the previous risk reduction model was not capable of producing a range of plausible risk reduction estimates. The official noted that while the new risk reduction model—which will be used to report results for fiscal year 2011—is capable of producing a range of estimated risk reduction, the Coast Guard will continue to report the risk reduction measure as a single number because the DHS data system for performance reporting does not accept ranges—only numerical values. However, the official added that there is value in reporting a range of risk reduction and officials are considering a transition to a range of estimated reduction for the PWCS mission in future years. One alternative could be to report the percentage of risk reduced as a single number, but having an explanatory note indicating the range of plausible risk reduction estimates. Using a risk reduction measure that more accurately reflects performance effectiveness can give Coast Guard leaders and Congress a better sense of progress toward goals, which can support efforts to identify areas for improvement. DHS officials have also raised some questions about the risk reduction measure. Recently, DHS determined that the Coast Guard’s risk reduction measure was not appropriate for inclusion as a DHS strategic performance measure and has designated it as a management measure. According to DHS, a strategic measure is designed to communicate achievement of strategic goals and objectives and be readily understandable to the public, and a management measure is designed to gauge program results and tie to resource requests and be used to support achievement of strategic goals. 
According to a senior DHS official, in 2010, DHS leadership reviewed all existing department measures and made decisions about which measures they believed were clearly tied to the DHS Quadrennial Homeland Security Review missions and were easily understandable by the public. This official noted that based on this review, DHS leadership did not feel the risk reduction measure and its methodology would be easily understandable by the public and therefore did not designate the measure as a strategic measure. As a result, the risk reduction measure will not be included in DHS’s annual performance plan, formally published with the Annual Performance Report, because this report only includes the smaller set of strategic measures. However, this official noted that the risk reduction measure is important as one piece of information to manage risk and is considered to be part of the full suite of DHS performance measures, and will continue to be published in the Coast Guard’s strategic context that is submitted with DHS’s Annual Performance Report. The Coast Guard has invested substantial effort incorporating risk management principles into its security priorities and investments, and continues to proactively strengthen its assessment, management, and evaluation practices. As a result, the Coast Guard’s risk assessments and risk model are generally sound and in alignment with DHS standards. However, there are some additional actions that the Coast Guard could take to further its risk management approach by facilitating a wider use of risk information and making the results more valuable to the users. 
For example, since risk management is a tool for informing policymakers’ decisions about assessing risks, allocating resources, and taking actions under conditions of uncertainty, the Coast Guard could better document and communicate the uncertainty or confidence levels of its risk assessment results, including any implications that the uncertainty may have for decision makers. This added information would allow Coast Guard decision makers to prioritize strategies, tactics, and long-term investments with greater insight about the range of likely results and associated trade-offs with each decision. Additional information would also allow external reviewers of the risk model to reach the most appropriate conclusions or provide the most useful improvement recommendations through periodic reviews. The Coast Guard could also enhance the risk-informed prioritization of its field-level strategies, operations, and tactics by ensuring that risk management training is expanded to multiple levels of Coast Guard decision makers at the sector level, including command-level personnel. Expanding training on how MSRAM could be used at all levels of command for risk-informed decision making—including how MSRAM can assist with the selection of different types of security measures and the evaluation of their impacts—would further the Coast Guard’s efforts to implement its risk management framework and meet its goal of institutionalizing MSRAM as the risk management tool for maritime security. Finally, accurately representing performance results is important and the Coast Guard could more accurately convey its risk reduction performance measure by reporting risk reduction results as a range rather than a point estimate. Presenting risk reduction as a single number without a corresponding range of uncertainty could hamper Coast Guard efforts to identify areas for improvement. Taking these steps would make the Coast Guard’s risk management approach even stronger. 
To help the Coast Guard strengthen MSRAM and better align it with NIPP risk management guidance, as well as facilitate the increased use of MSRAM across the agency, we recommend that the Commandant of the Coast Guard take the following three actions: (1) Provide more thorough documentation related to key assumptions and sources of uncertainty within MSRAM and inform users of any implications for interpreting the results from the model. (2) Make MSRAM available to appropriate parties for additional external peer review. (3) Provide additional training for sector command staff and others involved in sector management and operations on how MSRAM can be used as a risk management tool to inform sector-level decision making. To improve the accuracy of the risk reduction measure for internal and external decision-making, we recommend that the Commandant of the Coast Guard take action to report the results of the risk reduction measure as a range rather than a point estimate. We provided a draft of this report to DHS and the Coast Guard on October 17, 2011, for review and comment. DHS provided written comments, which are reprinted in appendix II. DHS and the Coast Guard concurred with the findings and recommendations in the report, and stated that the Coast Guard is taking actions to implement our recommendations. The Coast Guard concurred with our first recommendation that it provide more thorough documentation related to key assumptions and sources of uncertainty within MSRAM. Specifically, the Coast Guard stated that the documentation of uncertainty is part of the ongoing MSRAM VV&A process, and that the Coast Guard will continue to work with the DHS Office of Risk Management and Analysis in developing a feasible and deployable model that will benefit field-level security operations. 
These actions should improve the Coast Guard’s ability to document and inform MSRAM users of any implications for interpreting results from the model, thereby addressing the intent of our recommendation. Regarding the second recommendation that the Coast Guard make MSRAM available to appropriate parties for additional external peer review, the Coast Guard concurred. The Coast Guard stated that external peer review is part of the ongoing MSRAM VV&A process, and that additional external peer review will be part of an independent verification and validation of MSRAM expected to be completed in the fall of 2012. Such actions should address the intent of the recommendation. Regarding the third recommendation that the Coast Guard provide additional training for sector command staff and others involved in sector management on how MSRAM can be used as a risk management tool, the Coast Guard concurred. Specifically, the Coast Guard stated that MSRAM is part of the Coast Guard’s contingency planning course, and the Coast Guard will explore other opportunities to provide risk training to sector command staff, including online and webinar training opportunities. Such actions, once implemented, should address the intent of the recommendation. Finally, the Coast Guard also concurred with the fourth recommendation to take action to report the results of the risk reduction measure as a range rather than a point estimate. The Coast Guard stated that it is currently limited by the DHS data reporting system with regard to the format of presenting performance targets and results, but noted that it is currently working with DHS to determine options for reporting risk as a range. Such action, when fully implemented, should address the intent of the recommendation. DHS and the Coast Guard also provided us with technical comments, which we incorporated as appropriate. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix III. To provide guidance to agency decision makers, we developed a risk management framework which is intended to be a starting point for applying risk-informed principles. Our risk management framework, shown in figure 2, entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, assessing risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. Setting strategic goals, objectives, and constraints is a key first step in applying risk management principles and helps to ensure that management decisions are focused on achieving a purpose. Risk assessment, an important element of a risk-informed approach, helps decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the effects of the risks. Risk assessment is a qualitative determination, quantitative determination, or both of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk assessment in a homeland security application involves assessing three key components—threat, vulnerability, and consequence. 
A threat assessment is the identification and evaluation of adverse events that can harm or damage an asset. A vulnerability assessment identifies weaknesses in physical structures, personal protection systems, processes, or other areas that may be exploited. A consequence assessment is the process of identifying or evaluating the potential or actual effects of an event, incident, or occurrence. Information from these three assessments contributes to an overall risk assessment that characterizes risks, which can provide input for evaluating alternatives and prioritizing security initiatives. The risk assessment element in the overall risk management cycle informs each of the remaining steps of the cycle. Alternatives evaluation addresses the evaluation of risk reduction methods by consideration of countermeasures or countermeasure systems and the costs and benefits associated with them. Management selection addresses such issues as determining where resources and investments will be made, the sources and types of resources needed, and where those resources would be targeted. The next phase in the framework involves the implementation of the selected countermeasures. Following implementation, monitoring is essential to help ensure that the entire risk management process remains current and relevant and reflects changes in the effectiveness of the alternative actions and the risk environment in which it operates. Program evaluation is an important tool for assessing the efficiency and effectiveness of the program. As part of monitoring, consultation with external subject area experts can provide a current perspective and an independent review in the formulation and evaluation of the program. The National Infrastructure Protection Plan (NIPP), originally issued by the Department of Homeland Security (DHS) in 2006 and updated in 2009, includes a risk analysis and management framework, which, for the most part, mirrors our risk management framework. 
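One common way to combine the three assessment components into a single characterization is a multiplicative score, often written R = T x V x C in homeland security risk analysis. The sketch below uses that convention with invented targets and values; it is a generic illustration, not the formula of any particular agency model.

```python
# Illustrative multiplicative risk scoring (R = T x V x C). All target
# names and numbers are hypothetical; real assessments weigh many more
# factors and scenario-specific judgments.

def risk_score(threat, vulnerability, consequence):
    """Relative risk: P(attack attempted) x P(success given attempt)
    x consequence magnitude (here, notional dollars)."""
    return threat * vulnerability * consequence

targets = {
    "ferry_terminal": risk_score(0.02, 0.6, 500e6),
    "chemical_facility": risk_score(0.01, 0.4, 2e9),
    "cruise_ship": risk_score(0.015, 0.5, 1e9),
}

# Rank targets so countermeasures can be prioritized toward highest risk.
for name in sorted(targets, key=targets.get, reverse=True):
    print(f"{name}: {targets[name]:,.0f}")
```

Ranking the resulting scores is one simple way risk assessment output can feed the alternatives-evaluation and selection steps of the framework described above.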
This framework includes six steps—set goals and objectives; identify assets, systems, and networks; assess risks; prioritize; implement programs; and measure effectiveness. The NIPP is DHS’s base plan that guides how DHS and other relevant stakeholders should use risk management principles to prioritize protection activities. In 2009, DHS updated the NIPP to, among other things, increase its emphasis on risk management, including an expanded discussion of risk management methodologies and discussion of a common risk assessment approach that provided core criteria for these analyses. Beyond the NIPP, DHS has issued additional risk management guidance and directives. For example, in January 2009 DHS published its Integrated Risk Management Framework, which, among other things, calls for DHS to use risk assessments to inform decision making. In April 2011, DHS issued its Risk Management Fundamentals, which establishes specific doctrine and guidance for risk management across DHS. In addition to the contact named above, Dawn Hoff, Assistant Director and Adam Hoffman, Analyst-in-Charge, managed this assignment. Chuck Bausell, Charlotte Gamble, and Grant Sutton made significant contributions to this report. Colleen McEnearney provided assistance with interviews and data analysis. Michele Fejfar assisted with design, methodology, and data analysis. Jessica Orr provided assistance with report development, and Geoff Hamilton provided legal assistance. Port Security Grant Program: Risk Model, Grant Management, and Effectiveness Measures Could Be Strengthened. GAO-12-47. Washington, D.C.: November 17, 2011. Maritime Security: Progress Made but Further Actions Needed to Secure the Maritime Energy Supply. GAO-11-883T. Washington, D.C.: August 24, 2011. Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010. Maritime Security: Varied Actions Taken to Enhance Cruise Ship Security, but Some Concerns Remain. 
GAO-10-400. Washington, D.C.: April 9, 2010. Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. Transportation Security: Comprehensive Risk Assessments and Stronger Internal Controls Needed to Help Inform TSA Resource Allocation. GAO-09-492. Washington, D.C.: March 27, 2009. Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: September 9, 2005. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington D.C.: March 31, 2004. Managing for Results: Challenges Agencies Face in Producing Credible Performance Information. GAO/GGD-00-52. Washington, D.C.: February 4, 2000. The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans. GAO/GGD-10.1.20. Washington, D.C.: April 1998. | Since the terrorist attacks of September 11, 2001, the nation's ports and waterways have been viewed as potential targets of attack. The Department of Homeland Security (DHS) has called for using risk-informed approaches to prioritize its investments, and for developing plans and allocating resources that balance security and the flow of commerce. The U.S. Coast Guard--a DHS component and the lead federal agency responsible for maritime security--has used its Maritime Security Risk Analysis Model (MSRAM) as its primary approach for assessing and managing security risks. 
GAO was asked to examine (1) the extent to which the Coast Guard's risk assessment approach aligns with DHS risk assessment criteria, (2) the extent to which the Coast Guard has used MSRAM to inform maritime security risk decisions, and (3) how the Coast Guard has measured the impact of its maritime security programs on risk in U.S. ports and waterways. GAO analyzed MSRAM's risk assessment methodology and interviewed Coast Guard officials about risk assessment and MSRAM's use across the agency. MSRAM generally aligns with DHS risk assessment criteria, but additional documentation on key aspects of the model could benefit users of the results. MSRAM generally meets DHS criteria for being complete, reproducible, documented, and defensible. Further, the Coast Guard has taken actions to improve the quality of MSRAM data and to make them more complete and reproducible, including providing training and tools for staff entering data into the model. However, the Coast Guard has not documented and communicated the implications that MSRAM's key assumptions and other sources of uncertainty have on MSRAM's risk results. For example, to assess risk in MSRAM, Coast Guard analysts make judgments regarding such factors as the probability of an attack and the economic and environmental consequences of an attack. These multiple judgments are inherently subjective and constitute sources of uncertainty that have implications that should be documented and communicated to decision makers. Without this documentation, decision makers and external MSRAM reviewers may not have a complete understanding of the uses and limitations of MSRAM data. In addition, greater transparency and documentation of uncertainty and assumptions in MSRAM's risk estimates could also facilitate periodic peer reviews of the model--a best practice in risk management. 
MSRAM is the Coast Guard's primary tool for managing maritime security risk, but resource and training challenges hinder use of the tool by Coast Guard field operational units, known as sectors. At the national level, MSRAM supports Coast Guard strategic planning efforts, which is consistent with the agency's intent for MSRAM. At the sector level, MSRAM has informed a variety of decisions, but its use has been limited by lack of staff time, the tool's complexity, and competing mission demands, among other things. The Coast Guard has taken actions to address these challenges, but providing additional training on how MSRAM can be used at all levels of sector decision making could further the Coast Guard's risk management efforts. MSRAM is capable of informing operational, tactical, and resource allocation decisions, but the Coast Guard has generally provided MSRAM training only to a small number of sector staff who may not have insight into all levels of sector decision making. The Coast Guard developed an outcome measure to report its performance in reducing maritime risk, but has faced challenges using this measure to inform decisions. Outcome measures describe the intended result of carrying out a program or activity. The measure is partly based on Coast Guard subject matter experts' estimates of the percentage reduction of maritime security risk subject to Coast Guard influence resulting from Coast Guard actions. The Coast Guard has improved the measure to make it more valid and reliable and believes it is a useful proxy measure of performance, noting that developing outcome measures is challenging because of limited historical data on maritime terrorist attacks. However, given the uncertainties in estimating risk reduction, it is unclear if the measure would provide meaningful performance information with which to track progress over time. 
In addition, the Coast Guard reports the risk reduction measure as a specific estimate rather than as a range of plausible estimates, which is inconsistent with risk analysis criteria. Reporting and using outcome measures that more accurately reflect mission effectiveness can give Coast Guard leaders and Congress a better sense of progress toward goals. GAO recommends that the Coast Guard provide more thorough documentation on MSRAM's assumptions and other sources of uncertainty, make MSRAM available for peer review, implement additional MSRAM training, and report the results of its risk reduction performance measure in a manner consistent with risk analysis criteria. The Coast Guard agreed with these recommendations.
A safe and secure aviation system is critical to securing the nation’s overall physical infrastructure and maintaining its economic vitality. Billions of dollars and a myriad of programs and policies have been devoted to achieving such a system. Critical to ensuring aviation security are screening checkpoints, at which screening personnel check over 2 million individuals and their baggage each day for weapons, explosives, and other dangerous articles that could pose a threat to the safety of an aircraft and those aboard it. All passengers who seek to enter secure areas at the nation’s airports must pass through screening checkpoints and be cleared by screeners. In addition, many airline and airport employees, including flight crews, ground personnel, and concession vendors, have to be cleared by screeners. At the nation’s 429 commercial airports that are subject to security requirements, screeners use a variety of technologies and procedures to screen individuals. These include x-ray machines to examine carry-on baggage, metal detectors to identify any hidden metallic objects, and physical searches of items, including those that cannot be scanned by x-rays, such as baby carriers or baggage that has been x-rayed and contains unidentified objects. In response to the terrorist attacks of September 11, 2001, the Federal Aviation Administration (FAA) and the air carriers implemented new security controls. These actions included increased screening of baggage and passengers at airport checkpoints with the use of explosives trace detection devices and hand-held metal detectors, the mandatory removal of laptop computers from carrying cases, and the removal of shoes. They also included additional screening of randomly selected passengers at an airline’s boarding gate.
Although these initiatives have been a visible sign of heightened security procedures, they have also, in some instances, caused longer security delays, inconvenienced the traveling public, and raised questions about the merits of using these techniques on assumed lower-risk travelers, such as young children. Congress has also taken actions to improve aviation security. In November 2001, it passed the Aviation and Transportation Security Act, which transferred aviation security from FAA to the newly created TSA and directed TSA to take over responsibility for airport screening. The Act also left to TSA’s discretion whether to “establish requirements to implement trusted passenger programs and use available technologies to expedite security screening of passengers who participate in such programs, thereby allowing security screening personnel to focus on those passengers who should be subject to more extensive screening.” In response to this Act, officials representing aviation and business travel groups have proposed developing a registered traveler program. Under their proposals, travelers who voluntarily provide personal information and clear a background check would be enrolled as registered travelers. These participants would receive some form of identification, such as a card that includes a unique personal characteristic like a fingerprint, which they would use at an airport to verify their identity and enrollment in the program. Because they would have been prescreened, they would be entitled to different security screening procedures at the airport. These could be as simple as designating a separate line for registered travelers, or could include less intrusive screening. Although TSA had initially resisted such a program because of concerns that it could weaken the airport security system, it has recently changed its position and has begun assessing the feasibility and need for such a program and considering the implementation of a test program. 
The concept underlying a registered traveler program is similar to one that TSA has been studying for transportation workers—a Transportation Worker Identity Credential (TWIC)—that could be used to positively identify transportation workers such as pilots and flight attendants and to expedite their processing at airport security checkpoints. TSA had been studying the TWIC program for several months. Initially, the agency had planned to implement the TWIC program first, saying that any registered traveler program would be implemented after establishing the TWIC program. In recent months, congressional appropriations restrictions have caused TSA to postpone TWIC’s development. According to a senior agency official, however, TSA was still planning to go forward with studying the registered traveler program concept. Although most of the 22 stakeholders we interviewed supported a registered traveler program, several stakeholders opposed it. Our literature review and supporters of the program whom we interviewed identified two primary purposes for such a program—improving the quality and efficiency of airport security and reducing the inconvenience that some travelers have experienced by reducing uncertainties about the length of delay and the level of scrutiny they are likely to encounter. The literature we reviewed and more than a half-dozen of the 22 stakeholders we contacted suggested that such a program could help improve the quality and efficiency of security by allowing security officials to target resources at potentially higher risk travelers. Several stakeholders also indicated that it could reduce the inconvenience of heightened security measures for some travelers, thus encouraging Americans to fly more often, and thereby helping to improve the economic health of the aviation industry. 
Representatives of air traveler groups identified other potential uses of a registered traveler program that were not directly linked to improving aviation security, such as better tracking of frequent flier miles for program participants. Many of the 22 stakeholders we contacted and much of the literature we reviewed identified the improvement of aviation security as a key purpose for implementing a registered traveler program. Such a program would allow officials to target security resources at those travelers who pose a greater security risk or about whom little is known. This concept is based on the idea that not all travelers present the same threat to aviation security, and thus not everyone requires the same level of scrutiny. Our recent work on addressing homeland security issues also highlights the need to integrate risk management into the nation’s security planning and to target resources at high-priority risks. The concept is similar to risk-based security models that have already been used in Europe and Israel, which focus security on identifying risky travelers and more appropriately matching resources to those risks, rather than attempting to detect objects on all travelers. For example, one study suggested that individuals who had been prescreened through background checks and credentialed as registered travelers be identified as low risk and therefore subjected to less stringent security. This distinction would allow security officials to direct more resources and potentially better screening equipment at other travelers who might pose a higher security risk, presumably providing better detection and increased deterrence. In addition, several stakeholders also suggested that a registered traveler program would enable TSA to more efficiently use its limited resources.
Several of these stakeholders suggested that a registered traveler program could help TSA more cost-effectively focus its equipment and personnel needs to better meet its security goals. For example, two stakeholders stated that TSA would generally not have to intensively screen registered travelers’ checked baggage with explosives detection systems that cost about $1 million each. As a result, TSA could reduce its overall expenditures for such machines. In another example, a representative from a major airline suggested that because registered travelers would require less stringent scrutiny, TSA could provide a registered traveler checkpoint lane that would enable TSA to use fewer screeners at its checkpoint lanes; this would reduce the number of passenger screeners from the estimated 33,000 that it plans to hire nationwide. In contrast, several stakeholders and TSA officials said that less stringent screening for some travelers could weaken security. For example, two stakeholders expressed concerns that allowing some travelers to undergo less stringent screening could weaken overall aviation security by introducing vulnerabilities into the system. Similarly, the first head of TSA had publicly opposed the program because of the potential for members of “sleeper cells”—terrorists who spend time in the United States building up a law-abiding record—to become registered travelers in order to take advantage of less stringent security screening. The program manager heading TSA’s Registered Traveler Task Force explained that the agency has established a baseline level of screening that all passengers and workers will be required to undergo, regardless of whether they are registered. Nevertheless, a senior TSA official told us that the agency now supports the registered traveler concept as part of developing a more risk-based security system, which would include a refined version of the current automated passenger prescreening system.
While the automated prescreening system is used on all passengers, it focuses on those who are most likely to present threats. In contrast to a registered traveler program, the automated system is not readily apparent to air passengers. Moreover, the registered traveler program would focus on those who are not likely to present threats, and it would be voluntary. Some stakeholders we contacted said that a registered traveler program, if implemented, should serve to complement the automated system, rather than replace it. According to the literature we reviewed and our discussions with several stakeholders, reducing the inconvenience of security screening procedures implemented after September 11, 2001, constitutes another major purpose of a registered traveler program, in addition to potentially improving security. The literature and these stakeholders indicated that participants in a registered traveler program would receive consistent, efficient, and less intrusive screening, which would reduce their inconvenience and serve as an incentive to fly more, particularly if they are business travelers. According to various representatives of aviation and business travelers groups, travelers currently face uncertainty regarding the time needed to get through security screening lines and inconsistency about the extent of screening they will encounter at various airports. For example, one stakeholder estimated that prior to September 11, 2001, it took about 5 to 8 seconds, on average, for a traveler to enter, be processed, and clear a security checkpoint; since then, it takes about 20 to 25 seconds, on average, resulting in long lines and delays for some travelers. As a result, travelers need to arrive at airports much earlier than before, which can result in wasted time at the airport if security lines are short or significant time spent in security lines if they are long. 
Additionally, a few stakeholders stated that travelers are inconvenienced when they are subjected to personal searches or secondary screening at the gates for no apparent reason. While some stakeholders attributed reductions in the number of passengers traveling by air to these inconveniences, others attributed them to the economic downturn. Some literature and three stakeholders indicated that travelers, particularly business travelers making shorter trips (up to 750 miles), have, as a result of these inconveniences, reduced the number of flights they take or stopped flying altogether, causing significant economic harm to the aviation industry. For example, according to a survey of its frequent fliers, one major airline estimates that new airport security procedures and their associated inconveniences have caused 27 percent of its former frequent fliers to stop flying. Based on this survey’s data, the Air Transport Association, which represents major U.S. air carriers, estimates that security inconveniences have cost the aviation industry $2.5 billion in lost revenue since September 11, 2001. Supporters of a registered traveler program indicated that it would be a component of any industry recovery and that it is particularly needed to convince business travelers to resume flying. To the extent that registered travelers would fly more often, the program could also help revitalize related industries that are linked to air travel, including aviation-related manufacturing and such tourism-related businesses as hotels and travel agencies. However, not all stakeholders agreed that a registered traveler program would significantly improve the economic condition of the aviation industry. For example, officials from another major U.S. airline believed that the declining overall economy has played a much larger role than security inconveniences in reducing air travel.
They also said that most of their customers currently wait 10 minutes or less in security lines, on average—significantly less than immediately after September 11, 2001—and that security inconveniences are no longer a major issue for their passengers. In addition to the two major purposes of a registered traveler program, some stakeholders and some literature we reviewed identified other potential uses. For example, we found that such a program could be part of an enhanced customer service package for travelers and could be used to expedite check-in at airports and to track frequent flier miles. Some stakeholders identified potential law enforcement uses, such as collecting information obtained during background checks to help identify individuals wanted by the police, or tracking the movement of citizens who might pose criminal risks. Finally, representatives of air traveler groups envisioned extensive marketing uses for data collected on registered travelers by selling it to such travel-related businesses as hotels and rental car companies and by providing registered travelers with discounts at these businesses. Two stakeholders envisioned that these secondary uses would evolve over time, as the program became more widespread. However, civil liberties advocates we spoke with were particularly concerned about using the program for purposes beyond aviation security, as well as about the privacy issues associated with the data collected on program participants and with tracking their movements. Our literature review and discussions with stakeholders identified a number of policy and implementation issues that might need to be addressed if a registered traveler program is to be implemented. 
Stakeholders we spoke with held a wide range of opinions on such key policy issues as determining (1) who should be eligible to apply to the program; (2) the type and the extent of background checks needed to certify that applicants can enroll in the program, and who should perform them; (3) the security screening procedures that should apply to registered travelers, and how these would differ from those applied to other travelers; and (4) the extent to which equity, privacy, and liability issues would impede program implementation. Most stakeholders indicated that only the federal government has the resources and authority to resolve these issues. In addition to these policy questions, our research and stakeholders identified practical implementation issues that need to be considered before a program could be implemented. These include deciding (1) which technologies to use, and how to manage the data collected on travelers; (2) how many airports and how many passengers should participate in a registered traveler program; and (3) which entities would be responsible for financing the program, and how much it would cost. Most stakeholders we contacted agreed that, ultimately, the federal government should make the key policy decisions on program eligibility criteria, requirements for background checks, and specific security-screening procedures for registered travelers. In addition, the federal government should also address equity, privacy, and liability issues raised by such a program. Stakeholders also offered diverse suggestions as to how some of these issues could be resolved, and a few expressed eagerness to work with TSA. Although almost all the stakeholders we contacted agreed that a registered traveler program should be voluntary, they offered a wide variety of suggestions as to who should be eligible to apply to the program. These suggestions ranged from allowing any U.S.
or foreign citizen to apply to the program to limiting it only to members of airline frequent flier programs. Although most stakeholders who discussed this issue with us favored broad participation, many of them felt it should be limited to U.S. citizens because verifying information and conducting background checks on foreigners could be very difficult. Several stakeholders said that extensive participation would be desirable from a security perspective because it would enable security officials to direct intensive and expensive resources toward unregistered travelers who might pose a higher risk. Several stakeholders indicated that it would be unfair to limit the program only to frequent fliers, while representatives from two groups indicated that such a limitation could provide airlines an incentive to help lure these travelers back to frequent air travel. We also found differing opinions as to the type and extent of background check needed to determine whether an applicant should be eligible to enroll in a registered traveler program. For example, one stakeholder suggested that the background check should primarily focus on determining whether the applicant exists under a known identity and truly is who he or she claims to be. This check could include verification that an individual has paid income taxes over a certain period of time (for example, the past 10 years), has lived at the same residence for a certain number of years, and has a sufficient credit history. Crosschecking a variety of public and private data sources, such as income tax payment records and credit histories, could verify that an applicant’s name and social security number are consistent. However, access to income tax payment records would probably require an amendment to existing law. Another stakeholder said that the program’s background check should be similar to what is done when issuing a U.S. passport. 
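The cross-checking idea described above amounts to a simple consistency test: an applicant is considered verified only if every independent data source agrees with the claimed identity. A minimal sketch follows; the record layouts and field names here are invented for illustration and do not reflect any actual government or credit-bureau schema.

```python
# Hypothetical sketch of cross-checking an applicant's claimed identity
# against multiple data sources. All field names are illustrative.

def identity_consistent(applicant, records):
    """Return True only if every source record agrees with the applicant's
    claimed name and Social Security number."""
    return all(
        rec.get("name") == applicant["name"] and rec.get("ssn") == applicant["ssn"]
        for rec in records
    )

applicant = {"name": "A. TRAVELER", "ssn": "123-45-6789"}
tax_record = {"name": "A. TRAVELER", "ssn": "123-45-6789", "years_filed": 10}
credit_history = {"name": "A. TRAVELER", "ssn": "123-45-6789", "accounts": 4}
mismatched = {"name": "B. OTHER", "ssn": "123-45-6789"}

print(identity_consistent(applicant, [tax_record, credit_history]))  # True
print(identity_consistent(applicant, [tax_record, mismatched]))      # False
```

A single disagreeing source fails the check, which mirrors the stakeholders' point that access to each additional source (such as tax records) raises the bar for a fabricated identity.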
A passport check consists, in part, of a name check against a database that includes information from a variety of federal sources, including intelligence, immigration, and child support enforcement data. In contrast, others felt that applicants should undergo a more substantial check, such as an FBI-type background check, similar to what current airline or federal government employees must pass; or a criminal background check, to verify that the applicant does not have a criminal history. This could include interviewing associates and neighbors as well as credit and criminal history checks. In this case, applicants with criminal histories might be denied the right to participate in a registered traveler program. No matter what the extent of these checks, most stakeholders generally agreed that the federal government should perform or oversee them. They gave two reasons for this: (1) the federal government has access to the types of data sources necessary to complete them, and (2) airlines would be unwilling to take on the responsibility for performing them because of liability concerns. One stakeholder also suggested that the federal government could contract out responsibility for background checks to a private company, or that a third-party, nonprofit organization could be responsible for them. A majority of stakeholders also agreed that the federal government should be responsible for developing the criteria needed to determine whether an applicant is eligible to enroll and for making the final eligibility determination. Some stakeholders also stated that background checks should result in a simple yes or no determination, meaning that all applicants who passed the background check would be able to enroll in the program and the ones who did not pass would be denied. Other stakeholders alternatively recommended that all applicants be assigned a security score, determined according to the factors found during the background check.
This security score would establish the level of screening given an individual at a security checkpoint. TSA has indicated that, at a minimum, the government would have to be responsible for ensuring that applicants are eligible to enroll and that the data used to verify identities and perform background checks are accurate and up-to-date. All the stakeholders we contacted agreed that registered travelers should be subjected to some minimum measure of security screening, and that the level of screening designated for them should generally be less extensive and less intrusive than the security screening required for all other passengers. Most stakeholders anticipated that a participant would receive a card that possessed some unique identifier, such as a fingerprint or an iris scan, to identify the participant as a registered traveler and to verify his or her identity. When arriving at an airport security checkpoint, the registered traveler would swipe the card through a reader that would authenticate the card and verify the individual’s identity by matching him or her against the specific identifier on the card. If the card is authenticated and the holder is verified as a registered traveler, the traveler would proceed through security. Most stakeholders suggested that registered travelers pass through designated security lines, to decrease the total amount of time they spend waiting at the security checkpoint. If the equipment cannot read the card or verify the traveler’s identity, or if that passenger is deemed to be a security risk, then the traveler would be subjected to additional security screening procedures, which might also include full-body screening and baggage searches. If the name on the registered traveler card matches a name on a watch-list or if new concerns about the traveler emerge, the card could be revoked. 
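The checkpoint sequence stakeholders describe above reduces to a short decision flow: authenticate the card, verify the holder against the biometric on the card, check for revocation or a watch-list match, and route the traveler accordingly. The sketch below illustrates that flow only; the card fields, watch-list contents, and lane names are hypothetical, not actual TSA data or policy.

```python
# Minimal sketch of the checkpoint routing flow described above.
# WATCH_LIST, REVOKED_CARDS, and all card fields are invented examples.

from dataclasses import dataclass
from typing import Optional

WATCH_LIST = {"J. DOE"}        # names flagged by security officials
REVOKED_CARDS = {"RT-0002"}    # cards withdrawn after new concerns emerged

@dataclass
class RegisteredCard:
    card_id: str
    holder_name: str
    biometric_template: str    # stand-in for a stored fingerprint or iris scan

def route_traveler(card: Optional[RegisteredCard], live_biometric: str) -> str:
    """Decide which screening path a traveler is sent to at the checkpoint."""
    if card is None:
        return "standard screening"        # traveler is not enrolled
    if card.card_id in REVOKED_CARDS or card.holder_name in WATCH_LIST:
        return "additional screening"      # card revoked or watch-list match
    if live_biometric != card.biometric_template:
        return "additional screening"      # identity could not be verified
    return "registered traveler lane"      # card authenticated, identity verified

card = RegisteredCard("RT-0001", "A. TRAVELER", "fp-a1b2")
print(route_traveler(card, "fp-a1b2"))   # registered traveler lane
print(route_traveler(card, "fp-zzzz"))   # additional screening
print(route_traveler(None, "fp-a1b2"))   # standard screening
```

Note that every failure path falls back to more screening rather than less, reflecting the principle that an unreadable card or unverified identity should never grant expedited treatment.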
A common suggestion was that registered travelers would undergo pre-September 11th security-screening measures, which involved their walking through a magnetometer and the x-raying of their carry-on baggage. Moreover, they would not be subjected to random selection or additional security measures unless warranted, and they would be exempted from random secondary searches at the boarding gate. According to TSA officials, the agency is willing to consider some differentiated security procedures for program participants. As for security procedures for those not enrolled in such a program, several stakeholders agreed that nonparticipants would have to undergo current security screening measures, at a minimum. Current security measures involve walking through a magnetometer, having carry-on baggage run through an x-ray machine, and being subjected to random searches of baggage for traces of explosives, hand searches for weapons, and the removal of shoes for examination. Travelers may also be randomly selected for rescreening in the gate area, although TSA has planned pilot programs to determine whether to eliminate this rescreening. Other stakeholders suggested that travelers who were not enrolled in the registered traveler program should be subjected to enhanced security screening, including more stringent x-rays and baggage screening than are currently in place at the airports. These stakeholders thought that because little would be known about nonparticipants, they should be subjected to enhanced security screening measures. In addition, several stakeholders mentioned that a registered traveler program might be useful in facilitating checked-baggage screening. For example, one stakeholder suggested that the x-ray screening of registered travelers’ baggage could be less intensive than the screening required for all other passengers, thus reducing the time it would take to screen all checked baggage.
A few stakeholders even suggested that the most sophisticated baggage screening technology, such as explosives detection machines, would not be needed to screen a registered traveler’s checked baggage. However, the 2001 Aviation and Transportation Security Act requires the screening of all checked baggage, and using a registered traveler program to lessen the level of the checked baggage screening would not be permissible under the requirements of the Act. Finally, our research and discussions with stakeholders raised nonsecurity-related policy issues, including equity, privacy, and liability concerns that could impede implementation of a registered traveler program. With respect to equity issues, some stakeholders raised concerns that the federal government should carefully develop eligibility and enrollment criteria that would avoid automatically excluding certain classes of people from participating in the program. For example, requiring applicants to pay a high application or enrollment fee could deter some applicants for financial reasons. In addition, concern was expressed that certain races and ethnicities, mainly Arab-Americans, would be systematically excluded from program participation. Most stakeholders, however, did not generally view equity issues as being a major obstacle to developing the program, and one pointed to the precedent set by existing government programs that selectively confer known status to program participants. For example, the joint U.S./Canadian NEXUS pilot program, a program for travelers who frequently cross the U.S./Canadian border, is designed to streamline the movement of low-risk travelers across this border by using designated passage lanes and immigration-inspection booths, as well as some risk-management techniques similar to those proposed for use in a registered traveler program.
With respect to privacy issues, civil liberties advocates we spoke with expressed concerns that the program might be used for purposes beyond its initial one and that participants’ information would need protection. They were particularly concerned about the potential for such a program to lead to the establishment of a national identity card, or to other uses not related to air travel. For example, some suggested that there could be enormous pressure on those who are not part of the program to apply, given the advantages of the program, and this would therefore, in effect, lead to a national identity card. One stakeholder raised a concern about the card’s becoming a prerequisite for obtaining a job that includes traveling responsibilities, or the collected information’s being used for other purposes, such as identifying those sought by police. Others countered that because participation in a registered traveler program would be voluntary, privacy concerns should not be a significant issue. According to TSA attorneys, legal protections already in place to prevent the proliferation of private information are probably applicable, and additional safeguards for this program could be pursued. Through our review, we identified two particular liability issues potentially associated with the concept of a registered traveler program. First, it is uncertain which entity would be liable and to what extent that entity would be liable if a registered traveler were to commit a terrorist act at an airport or on a flight. Second, it is also unclear what liability issues might arise if an applicant were rejected based on false or inaccurate information, or the applicant did not meet the eligibility criteria. 
For the most part, stakeholders who addressed the liability issue maintained that, because the federal government is already responsible for aviation security, and because it is likely to play an integral role in developing and administering such a program, security breaches by registered travelers would not raise new liability concerns. Although the assumption of screening responsibilities has increased the federal government’s potential exposure to liability for breaches of aviation security, TSA representatives were unsure what the liability ramifications would be for the federal government for security breaches or terrorist acts committed by participants of a registered traveler program. Fewer stakeholders offered views on whether there would be liability issues if an applicant were denied participation in a registered traveler program because of false or inaccurate information. However, some indicated that the federal government’s participation, particularly in developing eligibility criteria, would be key to mitigating liability issues. One stakeholder said that the program must include appeal procedures to specify under what conditions an individual could appeal if denied access to the program, who or what entity would hear an appeal, and whether an individual would be able to present evidence in his or her defense. Other stakeholders, however, stressed the importance of keeping eligibility criteria and reasons for applicant rejection confidential, because they believe that confidentiality would be crucial to maintaining the security of the program. TSA maintained that if the program were voluntary, participants might have less ability to appeal than they would in a government entitlement program, in which participation might be guaranteed by statute. In addition to key policy issues, some stakeholders we spoke with identified a number of key program implementation issues to consider. 
Specifically, they involve choosing appropriate technologies, determining how to manage data collection and security, defining the program’s scope, and determining the program’s costs and financing structure. Our research indicated that developing and implementing a registered traveler program would require key choices about which technologies to use. Among the criteria cited by stakeholders were a technology’s ability to (1) provide accurate data about travelers, (2) function well in an airport environment, and (3) safeguard information from fraud. One of the first decisions that would have to be made in this area is whether to use biometrics to verify the identity of registered passengers and, if so, which biometric identifier to use. The term “biometrics” refers to a wide range of technologies that can be used to verify a person’s identity by measuring and analyzing human characteristics. Identifying a person’s physiological characteristics is based on data derived from scientifically measuring a part of the body. Biometrics provides a highly accurate confirmation of the identity of a specific person. While the majority of those we interviewed said that some sort of biometric identifier is critical to an effective registered traveler program, there was little agreement among stakeholders as to the most appropriate biometric for this program. Issues to consider when making decisions related to using biometric technology include the accuracy of a specific technology, user acceptance, and the costs of implementation and operation. Although there is no consensus on which biometric identifier should be used for a registered traveler program, three biometric identifiers were cited most frequently as offering the requisite capabilities for a program: iris scans (using the distinctive features of the iris), fingerprints, and hand geometry (using distinctive features of the hand). 
Although each of the three identifiers has been used in airport trials, there are disadvantages associated with each of them. (Appendix III outlines some of the advantages and disadvantages of each.) A few stakeholders also claimed that a biometric should not be part of a registered traveler program. Among the reasons cited were that biometric technology is expensive, does not allow for quick processing of numerous travelers, and is not foolproof. Some studies have concluded that current biometric technology is not as infallible as biometric vendors claim. For example, a German technology magazine recently demonstrated that reactivated latent images and forgeries could defeat fingerprint and iris recognition systems. In addition, one stakeholder stated that an identity card with a two-dimensional barcode that stores personal data and a picture would be sufficient to identify registered travelers. Such a card would be similar to those currently used as drivers’ licenses in many states. In addition to choosing specific technologies, stakeholders said that decisions will be needed regarding the storage and maintenance of data collected for the program. These include decisions regarding where a biometric or other unique identifier and personal background information should be stored. Such information could be stored either on a card embedded with a computer chip or in a central database, which would serve as a repository of information for all participants. Stakeholders thought the key things to consider in deciding how to store this information are speed of accessibility, levels of data protection, methods to update information, and protections against forgery and fraudulent use by others.
One stakeholder who advocates storing passenger information directly on a “smart” card containing an encrypted computer chip said that this offers more privacy protections for enrollees and would permit travelers to be processed more quickly at checkpoints than would a database method. On the other hand, advocates for storing personal data in a central database said that it would facilitate the updating of participants’ information. Another potential advantage of storing information in a central database is that it could make it easier to detect individuals who try to enroll more than once, by checking an applicant’s information against information on all enrollees in a database. In theory, this process would prevent duplication of enrollees. Another issue related to storing participant information is how to ensure that the information is kept up-to-date. If participant information is stored in a database, then any change would have to be registered in a central database. If, however, information is stored on an identification card, then the card would have to feature an embedded computer chip to which changes could be made remotely. Keeping information current is necessary to ensure that the status of a registered traveler has not changed because of that person’s recent activities or world events. One stakeholder noted the possibility that a participant could do something that might cause his or her eligibility status to change. In response to that concern, he stressed that a registered traveler program should incorporate some sort of “quick revoke” system. When that traveler is no longer entitled to the benefits associated with the program, a notification would appear the next time the card is registered in a reader. Stakeholders differed in their opinions as to how many airports and how many passengers should participate in a registered traveler program. 
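The “quick revoke” mechanism described above can be sketched as a status check performed each time a card is presented to a reader. The card IDs, years, and messages below are invented solely for illustration.

```python
# Hypothetical sketch of a "quick revoke" check: the reader consults a
# central revocation list on every card presentation, so a status change
# takes effect at the traveler's next checkpoint visit.

revoked_cards = {"RT-00042"}          # IDs whose eligibility was withdrawn

def card_status(card_id, expiry_year, current_year=2003):
    if card_id in revoked_cards:
        return "REVOKED: refer traveler to standard screening"
    if current_year > expiry_year:
        return "EXPIRED: renewal required"
    return "VALID: admit to registered traveler lane"

print(card_status("RT-00042", expiry_year=2004))   # revoked card
print(card_status("RT-00077", expiry_year=2004))   # card in good standing
```

A card-based design would instead push the revocation flag to the card’s chip at the next reader contact, but the decision logic at the checkpoint is the same.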
While some believe that the program should be as expansive as possible, others maintain that the program would function most efficiently and cost-effectively if it were limited to those airports with the most traffic and to those passengers who fly the most frequently. As for airports, some suggested that all 429 airports subject to security requirements in the United States should be equipped to support the program, to convince more passengers to enroll. Others contended that, because of equipment costs, the program should optimally include only the largest airports, such as the fewer than 100 airports that the FAA classifies as Category X and Category 1 airports, which the vast majority of the nation’s air travelers use. There were also different opinions as to whether the program should limit enrollment to frequent travelers or should strive for wider enrollment to maximize participation. Representatives of a passenger group asserted that the program should be limited to passengers who fly regularly because one of the goals of the program would be to process known passengers more quickly, and that having too many enrollees would limit the time saved. Others, however, maintained that the program should enroll as many passengers as possible. This case rests largely on security concerns: the more people who register, the more information is known about a flight’s passengers. It is unclear who would fund any registered traveler program, although a majority of the stakeholders we contacted who discussed the issue expect that participants would have to fund most of its costs. Representatives of aviation traveler groups said that participants would be willing to bear almost all of the costs. One airline representative estimated that frequent passengers would be willing to pay up to $100 for initial enrollment and an additional $25 to $50 annually for renewal.
For similar reasons, some stakeholders have suggested that the airlines bear some of the costs of the program, probably by offering subsidies and incentives for their passengers to join, since the aviation industry would also benefit. For instance, one stakeholder said that airlines might be willing to partially subsidize the cost if the airlines could have access to some of the participant information. A few stakeholders also expect that the federal government would pay for some of the cost to develop a registered traveler program. One stakeholder who said the government should pay for a significant portion of the program did so based on the belief that national security benefits will accrue from the program and so, therefore, funding it is a federal responsibility. Others maintained that significant long-term federal funding for the program is unrealistic because of the voluntary aspect of the program, the possibility that it might be offered only to selected travelers, and TSA’s current funding constraints. In addition to the uncertainty about which entity would primarily fund a registered traveler program, there are also questions about how much the program would cost. None of the stakeholders who were asked was able to offer an estimate of the total cost of the program. A technology vendor who has studied this type of program extensively identified several primary areas of cost, which include but are not limited to background checks, computer-chip–enabled cards, card readers, biometric readers, staff training, database development, database operations, and enrollment center staffing. The fact that the costs of many of these components are uncertain makes estimating the overall program costs extremely difficult. For example, one stakeholder told us that extensive background checks for enrollees could cost as much as $150 each, while another stakeholder maintained that detailed, expensive background checks would be unnecessary. 
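The influence of the background-check choice on overall cost can be illustrated with a back-of-envelope roll-up. Every figure below except the $150 per-check estimate cited above is hypothetical.

```python
# Back-of-envelope cost model; all inputs are hypothetical except the $150
# per-enrollee background-check figure one stakeholder cited.

def program_cost(enrollees, check_cost, card_cost=10.0, fixed_costs=5_000_000):
    """fixed_costs stands in for card readers, biometric readers, staff
    training, database development, and enrollment-center staffing."""
    return fixed_costs + enrollees * (check_cost + card_cost)

extensive = program_cost(1_000_000, check_cost=150.0)
basic     = program_cost(1_000_000, check_cost=25.0)
print(f"Extensive checks: ${extensive:,.0f}")   # $165,000,000
print(f"Basic checks:     ${basic:,.0f}")       # $40,000,000
```

Even with invented fixed costs, the per-enrollee check fee dominates at scale, which is why the type of background check chosen would likely drive the program’s total cost.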
Therefore, the choice of what type of background check to use if a program is implemented would likely significantly influence the program’s overall costs. Our research indicated that there are also significant price range differences in computer-chip–enabled cards and biometric readers, among other components. Regardless of the policy and program decisions made about a registered traveler program, we identified several basic principles TSA might consider if it implements such a program. We derived these principles from our discussions with stakeholders, our review of pertinent literature, and best practices for implementing new programs. Chief among these is the principle that vulnerabilities in the aviation system be assessed in a systematic way and addressed using a comprehensive risk management plan. Accordingly, the registered traveler program must be assessed and prioritized along with other programs designed to address security vulnerabilities, such as enhancing cockpit security, controlling access to secure areas of the airport, preventing unsafe items from being shipped in cargo or checked baggage, and ensuring the integrity of critical air traffic control computer systems. TSA officials also noted that the agency is responsible for the security of all modes of transportation, not just aviation. They added that a program such as registered traveler needs to be assessed in the broader context of border security, which can include the security of ports and surface border crossings overseen by a number of federal agencies, such as Customs, Coast Guard, and INS. TSA might consider the following principles if, and when, a registered traveler program is implemented: Apply lessons learned from and experience with existing programs that share similarities with the registered traveler program. This information includes lessons related to such issues as eligibility criteria, security procedures, technology choices, and funding costs.
Test the program initially on a smaller scale to demonstrate its feasibility and effectiveness, and that travelers will be willing to participate. Develop performance measures and a system for assessing whether the program meets stated mission and goals. Use technologies that are interoperable across different enrollment sites and access-control points, and select technologies that can readily be updated to keep pace with new developments in security technology, biometrics, and data sharing. At a minimum, interoperability refers to using compatible technologies at different airport checkpoints across the country and, more broadly, could be seen as including other access-control points, such as border crossings and ports of entry. Using lessons learned from existing programs offers TSA an opportunity to identify key policy and implementation issues as well as possible solutions to them. Although smaller in scope than a nationwide U.S. registered traveler program would likely be, several existing programs, both in the United States and abroad, address some of the same issues as the registered traveler concept and present excellent opportunities for policymakers to learn from real-life experiences. For example, in the United States, the INS already has border control programs both at airports and roadway checkpoints to expedite the entry of “known” border crossers. Internationally, similar programs exist at Ben Gurion Airport in Israel, Schiphol Airport in Amsterdam, and Dubai International Airport in the United Arab Emirates. In the past, similar pilot programs have also been run at London’s Gatwick and Heathrow airports. All of these programs rely on credentialing registered travelers to expedite their processing and are candidates for further study.
Finally, programs established by the Department of Defense and the General Services Administration that use cards and biometrics to control access to various parts of a building offer potential technology-related lessons that could help design a registered traveler program. (Appendix IV offers a brief description of some of the U.S. and foreign programs.) TSA’s program manager for the Registered Traveler Task Force stressed that his agency has no role in these other programs, which are different in purpose and scope from the registered traveler concept. He added that these programs focus on expediting crossing at international borders, while the registered traveler concept focuses on domestic security. In addition to these programs, information could also be gleaned from a registered traveler pilot program. For example, the Air Transport Association has proposed a passenger and employee pilot program. ATA’s proposed program would include over 6,000 participants, covering both travelers who passed a background check and airline employees. ATA’s proposal assumes that (1) the appropriate pool of registered traveler participants will be based on background checks against the FBI/TSA watch list, and (2) airlines would determine which employees could apply, and would initiate background checks for them. ATA estimates that the pilot program would initially cost about $1.2 million to implement. To allow TSA and the airlines to evaluate the effectiveness of the program’s technologies and procedures and their overall impact on checkpoint efficiency, ATA plans to collect data on enrollment procedures, including: the number of individuals who applied and were accepted, the reasons for rejection, and customer interest in the program; reliability of the biometric cards and readers; and checkpoint operational issues. 
In our discussions, the Associate Under Secretary for Security Regulation and Policy at TSA made it clear that he thought developing a registered traveler pilot program on a small scale would be a necessary step before deciding to implement a national program. TSA officials responsible for assessing a registered traveler program said that they hope to begin a pilot program by the end of the first quarter of 2003. They also noted that much of the available information about the registered traveler concept is qualitative, rather than quantitative. They added that, because the cost-effective nature of a registered traveler program is not certain, a financial analysis is needed that considers the total cost of developing, implementing, and maintaining the technology and the program. Along these lines, they believe that a pilot program and rigorous, fact-based analysis of the costs and benefits of this program will be useful for determining (1) whether the hassle factor really exists, and if so to what extent, (2) whether a registered traveler program will effectively address the need to expedite passenger flow or to manage risk, and (3) whether such a program would be the risk-mitigation tool of choice, given the realities of limited resources. In addition to developing performance-based metrics to evaluate the effectiveness of a pilot program, TSA could consider developing similar metrics to measure the performance of a nationwide program if one is created. Our previous work on evaluating federal programs has stressed the importance of identifying goals, developing related performance measures, collecting data, analyzing data, and reporting results. Collecting such information is most useful if the data-gathering process is designed during the program’s development and initiated with its implementation. Periodic assessment of the data should include comparisons with previously collected baseline data.
The implementation of a registered traveler program could be helped by following those principles. For example, determining whether, and how well, the program improves aviation security and alleviates passenger inconvenience requires that measurements be developed and data collected and analyzed to demonstrate how well these goals are being met. Such information could include the success of screeners at detecting devices not allowed on airplanes for both enrollees and nonparticipants, or the average amount of time it takes for enrollees to pass through security screening. An effective registered traveler program depends on using technologies that are interoperable across various sites and with other technologies, and can be readily updated to keep pace with new developments in security technology, biometrics, and data sharing. Such a program is unlikely to be airport- or airline-specific, which means that the various technologies will have to be sufficiently standardized for enrollees to use the same individual cards or biometrics at many airports and with many airlines. Consequently, the technologies supporting the nationwide system need to be interoperable so that they can communicate with one another. The FAA’s experience with employee access cards offers a good lesson on the dangers of not having standards to ensure that technologies are interoperable. As we reported in 1995, different airports have installed different types of equipment to secure doors and gates. While some airports have installed magnetic stripe card readers, others have installed proximity card readers, and still another has installed hand-scanning equipment to verify employee identity. As a result, an official from one airline stated that employees who travel to numerous airports have to carry several different identity cards to gain access to specific areas. 
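The screening-time measure mentioned above lends itself to a simple comparison against previously collected baseline data. The sample times below are invented solely to show the computation.

```python
# Sketch of one performance measure the report describes: average checkpoint
# time for enrollees compared against a pre-program baseline.  All sample
# data are invented for illustration.

baseline_minutes = [12.0, 15.5, 9.0, 14.0, 11.5]   # pre-program sample
enrollee_minutes = [3.0, 2.5, 4.0, 3.5, 2.0]       # registered travelers

def mean(xs):
    return sum(xs) / len(xs)

time_saved = mean(baseline_minutes) - mean(enrollee_minutes)
print(f"Average time saved per enrollee: {time_saved:.1f} minutes")
```

The same pattern, collect a baseline before launch, then periodically compare, applies equally to a detection-rate measure for prohibited items.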
Another important interoperability issue is the way in which the personal data associated with a registered traveler program relates to other existing information on travelers, the most important of which is the automated passenger prescreening system. Some stakeholders believe it will be crucial that the registered traveler program is integrated into the automated system. Given TSA’s focus on developing and launching a revised automated passenger prescreening system, such integration will likely be essential. Integrating the data depends on finding a workable technology solution. Furthermore, TSA officials added that interoperability may extend beyond aviation to passengers who enter the United States at border crossings or seaports. They noted that ensuring the interoperability of systems across modes of transportation overseen by a variety of different federal agencies will be a complex and expensive undertaking. An equally important factor to consider is how easily a technology can be upgraded as related technologies evolve and improve. As stakeholders made clear to us, because technologies surrounding identification cards and biometrics are evolving rapidly, often in unpredictable ways, the technology of choice today may not be cost-effective tomorrow. To ensure that a registered traveler program will not be dependent on outdated technologies, it is essential to design a system flexible enough to adapt to new technological developments as they emerge. For example, if fingerprints were initially chosen as the biometric, the supporting technologies should be easily adaptable to other biometrics, such as iris scans. An effective way to make them so is to use technology standards for biometrics, data storage, and operating systems, rather than to mandate specific technology solutions. A registered traveler program is one possible approach for managing some of the security vulnerabilities in our nation’s aviation and broader transportation systems.
However, numerous unresolved policy and programmatic issues would have to be addressed before developing and implementing such a program. These issues include, for example, the central question of whether such a program will effectively enhance security or will inadvertently provide a means to circumvent and compromise new security procedures. These issues also include programmatic and administrative questions, such as how much such a program would cost and what entities would provide its financing. Our analysis of existing literature and our interviews with stakeholders helped identify some of these key issues but provided no easy answers. The information we developed should help to focus and shape the debate and to identify key issues to be addressed when TSA considers whether to implement a registered traveler program. We provided the Department of Transportation (DOT) with a draft of this report for review and comment. DOT provided both oral and written comments. TSA’s program manager for the Registered Traveler Task Force and agency officials present with legal and other responsibilities related to this program said that the report does an excellent job of raising a number of good issues that TSA should consider as it evaluates the registered traveler concept. These officials provided a number of clarifying comments, which we have incorporated where appropriate. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter. At that time, we will send copies of this report to interested Members of Congress, the Secretary of Transportation, and the Under Secretary of Transportation for Security. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3650.
I can also be reached by E-mail at [email protected]. Key contributors are listed in appendix V. To obtain and develop information on the purpose of a registered traveler program and the key policy and implementation issues in designing and implementing it, we conducted an extensive search of existing information and carried out interviews with key stakeholders. These interviews included officials from the federal government, the aviation industry, aviation security consultants, vendors developing and testing registered traveler applications, and organizations concerned with issues of data privacy and civil liberties. We conducted a literature search that identified existing studies, policy papers, and articles from the federal government, the aviation industry, and other organizations on numerous issues associated with designing and implementing a registered traveler program. These issues included the goals or purposes of a registered traveler program and policy and programmatic issues such as the potential costs, security procedures, and technology choices for such a program. We also identified existing studies and papers on specific items, such as the applicability of biometric technologies for use in a registered traveler program and the extent to which programs already exist in the United States and abroad (this detailed information is presented in appendix IV). This literature search also identified key stakeholders regarding designing and implementing a registered traveler program. Based on our literature search, we identified a list of 25 key stakeholders who could provide professional opinions on a wide range of issues involved in a registered traveler program. We chose these stakeholders based on their influence in the aviation industry as well as their expertise in such issues as aviation security, identification technologies, civil liberties, and the air-travel experience. In total, we conducted 22 interviews. 
We also visited and interviewed officials associated with registered traveler–type programs in two European countries. The intent of our interviews was to gain a further understanding of the issues surrounding a registered traveler program and specific information on such items as the potential costs for implementing a registered traveler program and the technology needs of such a program. In conducting our interview process, we developed a standard series of questions on key policy and implementation issues, sent the questions to the stakeholders in advance, and conducted the interviews. We then summarized the interviews to identify any key themes and areas of consensus or difference on major issues. We did not, however, attempt to empirically validate the information provided to us by stakeholders through these interviews. To identify basic principles that TSA should consider if it decides to implement a registered traveler program, we analyzed existing studies to identify overriding themes that could impact the policy or implementation of such a program. We also analyzed the results of our interviews, to generate a list of key principles. We performed our work from July 2002 through October 2002 in accordance with generally accepted government auditing standards. The International Biometrics Group considers four types of biometric identifiers as the most suitable for air-travel applications. These identifiers are fingerprint recognition, iris recognition, hand geometry, and facial recognition. Each of these biometrics has been employed, at least on a small scale, in airports worldwide. The following information describes how each biometric works and compares the functionality of each. Fingerprint recognition: This technology extracts features from impressions made by the distinct ridges on the fingertips. The fingerprints can be either flat or rolled.
A flat print captures only an impression of the central area between the fingertip and the first knuckle; a rolled print captures ridges on both sides of the finger. The technology is one of the best known and most widely used biometric technologies. Iris recognition: This technology is based on the distinctly colored ring surrounding the pupil of the eye. The technology uses a small, high-quality camera to capture a black-and-white high-resolution image of the iris. It then defines the boundaries of the iris, establishes a coordinate system over the iris, and defines the zones for analysis within that coordinate system. Made from elastic connective tissue, the iris is a very plentiful source of biometric data, having approximately 450 distinctive characteristics. Hand geometry: This technology measures the width, height, and length of the fingers, distances between joints, and shapes of the knuckles. The technology uses an optical camera and light-emitting diodes with mirrors and reflectors to capture three-dimensional images of the back and sides of the hand. From these images, 96 measurements are extracted from the hand. Hand geometry systems have been in use for more than 10 years for access control at facilities ranging from nuclear power plants to day care centers. Facial recognition: This technology identifies people by areas of the face not easily altered: the upper outlines of the eye sockets, the areas around the cheekbones, and the sides of the mouth. The technology is typically used to compare a live facial scan with a stored template, but it can also be used to compare static images, such as digitized passport photographs. Facial recognition can be used in both verification and identification systems. In addition, because facial images can be captured from video cameras, facial recognition is the only biometric that can also be used for surveillance purposes. Purpose: To improve border security and passenger convenience. Eligibility: Passengers from the European Union, Norway, Iceland, and Liechtenstein.
In the enrollment phase, the traveler is qualified and registered. This process includes a passport review, background check, and iris scan. All collected information is encrypted and embedded on a smart card. 2,500 passengers have enrolled in the program. In the traveling phase, the passenger approaches a gated kiosk and inserts the smart card in a card reader. The system reads the card and allows valid registered travelers to enter an isolated area. The passenger then looks into an iris scan camera. If the iris scan matches the data stored on the card, the passenger is allowed to continue through the gate. If the system cannot match the iris scan to the information on the card, the passenger is directed to the regular passport check lane. As of October 1, 2002, there is a 99–119 Euro ($97–$118) annual fee for participating passengers. According to program officials, the entire automatic border passage procedure is typically completed in about 10–15 seconds. The system can process four to five people per minute. There are plans to expand the program so that airlines and airports can use it for passenger identification and for tracking such functions as ticketing, check-in, screening, and boarding. There are also plans to develop components of the technology to provide secure employee and staff access to restricted areas of travel and transportation facilities. Purpose: To expedite passenger processing at passport control areas. Eligibility: Israeli citizens and frequent international travelers. Travelers who have dual U.S./Israel citizenship can take advantage of the Ben Gurion program, as well as the INS’s INSPASS program. During enrollment, applicants submit biographic information and biometric hand geometry. Applicants also receive an in-depth interview. Approximately 80,000 Israeli citizens have enrolled in the program. During arrival and departure, participants use a credit card for initial identification in one of 21 automated inspection kiosks at the airport.
The participant then places his or her hand in the hand reader for identity verification. If verified, the system prints a receipt, which allows the traveler to proceed through a system-controlled gate. If the person’s identity cannot be verified, the individual is referred to an inspector. Fees: $20–$25 annual membership fee for participants. According to program officials, the entire automated verification process takes 20 seconds. Passport control lines at Ben Gurion airport can take up to 1 hour. The program allows airport personnel to concentrate on high-risk travelers, reduces bottlenecks with automated kiosks, improves airport cost-effectiveness, generates new revenue for the airport authority, and expands security capabilities at other Israeli borders. Purpose: To expedite passenger processing at passport control. Eligibility: Non-United Kingdom, non-European Union, non-visa frequent travelers (mostly American and Canadian business travelers) originating from John F. Kennedy International Airport or Dulles International Airport on Virgin Atlantic or British Airways. To enroll, participants record their iris images with EyeTicket, have their passports scanned, and submit to a background check with U.K. immigration. 900 of 1,000 applicants were approved for participation; 300 enrolled. Upon arrival in London, participants are able to bypass the regular immigration line and proceed through a designated border entry lane. Participants look into an iris scan camera, and the image is compared against the scan taken at enrollment. If the two iris images match, participants are able to proceed through immigration. There were no user fees associated with the pilot program. According to EyeTicket, the average processing time per passenger is 12 seconds. Status: Completed. Six-month trial ran from January 31, 2002, to July 31, 2002.
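The iris-based programs described above share a common decision flow at the automated gate: validate the credential, match the live scan against the enrolled image, and refer any failure to the manual lane. A hypothetical sketch of that logic:

```python
# Hypothetical decision logic common to the automated iris gates described
# above; real systems add liveness detection, logging, and audit trails.

def gate_decision(card_valid, iris_match):
    """Return the action an automated border gate would take."""
    if card_valid and iris_match:
        return "open gate"
    return "refer to regular passport lane"

print(gate_decision(card_valid=True, iris_match=True))
print(gate_decision(card_valid=True, iris_match=False))
```

At the 12-to-15-second per-passenger processing times program officials report, a gate built on this flow handles four to five travelers per minute.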
IP@SS (Integrated Passenger Security System): Newark International Airport, Newark, New Jersey (Continental Airlines); Gatwick Airport, London, England (Delta Airlines). Purpose: To expedite and simplify the processes of passenger identification and security screening. In June 2002, 6,909 passengers were processed through IP@SS. Officials report that about 99 percent of passengers volunteered for the program. Continental Airlines has two kiosks for tourist class, one for business and first classes, and one at the Continental gate for flights between Newark and Tel Aviv. Each station is staffed with a trained security agent who asks passengers for travel documents, including the individual’s passport, which is scanned by an automated reader. After being cleared, the passenger can enroll in a biometric program in which biometric information is transferred to a smart card. The passenger then takes the card to the boarding gate, inserts it into the card reader, and places his or her fingers in the fingerprint reader. If the information corresponds with the information contained on the smart card, the passenger is cleared to board the plane. Cards are surrendered to program officials after each use, and the information is scrambled to prevent misuse. There were no user fees associated with the pilot programs. Status: Ongoing. ICTS International plans to launch pilot programs at other U.S. and European airports. The pilot programs at Newark and Gatwick are technology demonstrations and are used only to aid in the departure process. ICTS may test a “sister city” concept, in which the participant can take the card to his or her destination to aid in the deplaning/arrival process there. Purpose: To expedite border crossings for low-risk frequent commuters. CANPASS is a project of the Canada-U.S. Shared Border Accord. Eligibility: Citizens and permanent residents of the United States and Canada are eligible to participate in the CANPASS program.
As part of the application process, an applicant provides personal identification, vehicle identification, and driver’s license information. Background checks are performed on all applicants. As of October 1, 2001, there were approximately 119,743 participants in the CANPASS program. Technology varies from site to site. At Douglas, the participant receives only a letter of authorization and a windshield decal; at Windsor, a participant receives a photo ID card. A participant receives a letter of authorization and a windshield decal, which can be used only on a vehicle registered in the CANPASS system. When a vehicle enters the lane, a license plate reader reads the plate on the car. Membership in the CANPASS program is validated with data available through the license plate reader and other sources. At the applicable crossings, a participant must show the CANPASS identification card to the border inspector. There are no fees associated with the CANPASS system. The CANPASS Highway program was closed as a result of the events of September 11, 2001; however, the program is still available at the Whirlpool Bridge in Niagara Falls, Ontario. The CANPASS program operates in conjunction with the SENTRI/PORTPASS program. SENTRI/PORTPASS (Secure Electronic Network for Travelers’ Rapid Inspection/Port Passenger Accelerated Service System) operates in Detroit, Michigan; Buffalo, New York; El Paso and Hidalgo, Texas; and Otay Mesa and San Ysidro, California. Citizens and permanent residents of the United States and Canada and certain citizens and non-immigrants of Mexico are eligible to apply for program participation. Applicants must undergo an FBI background check, an Interagency Border Inspection System (IBIS) check, vehicle search, and personal interview prior to participation. Applicants must provide evidence of citizenship, residence, and employment or financial support. Fingerprints and a digital photograph are taken at the time of application. 
If cleared for enrollment, the passenger receives an identification card and a transponder, which must be installed in the registered vehicle. During 2000, approximately 792 participants were registered for the Detroit program, and 11,700 were registered for the Otay Mesa program. Transponders and magnetic card readers recall electronic photographs of registered drivers and their passengers. Images are presented on a monitor for border inspectors to visually confirm participants. Participants use designated SENTRI lanes to cross the border. The system automatically identifies the vehicles and the participants authorized to use the program. Border inspectors compare digitized photographs that appear on computer screens in the inspectors’ booths with the vehicles’ passengers. There is no charge for the U.S./Canada program. The SENTRI program fee for the United States and Mexico is $129 ($25 enrollment fee per person, $24 fingerprinting fee, and $80 systems fee). According to an El Paso INS official, delays in border crossing are typically around 60–90 minutes, but can be more than 2 hours. The SENTRI lane at a bridge border crossing has wait times of no more than 30 minutes. According to program officials, in Otay Mesa, CA, SENTRI participants wait approximately 4–5 minutes in the inspection lane, while nonparticipants can wait up to 3 hours in a primary inspection lane. NEXUS, a pilot project of the Canada-U.S. Shared Border Accord, is designed to expedite border crossings for low-risk frequent commuters. Lawful nationals and permanent residents of Canada and the United States are eligible to apply for program participation. Applicants complete an application that is reviewed by the U.S. Customs Service, INS, Canada Customs and Revenue Service, and Citizenship and Immigration, Canada. Applicants are required to provide proof of citizenship and residency, employment authorizations, and visas. Background checks are performed by officials of both countries. 
Participants must also provide a fingerprint biometric of two index fingers, which is verified against an INS database for any American immigration violations. (Unlike the CANPASS/PORTPASS programs, NEXUS is a harmonized border-crossing program with common eligibility requirements, a joint enrollment process, and a common application and identity card.) Since 2000, program administrators have issued 4,415 identification cards to participants. Enrollees must provide a two-fingerprint biometric. Photo identification cards are given to all participants. The NEXUS identification card allows participants to use NEXUS-designated lanes in the United States and Canada and to cross the border without routine customs and immigration questioning. A nonrefundable processing fee of $80 Canadian or $50 U.S. must be paid every 5 years. According to a study on the NEXUS Program, participants can save 20 minutes, compared with using the regular primary inspection lanes. Officials may request full fingerprints to verify identity. The two-fingerprint biometric or full prints may be shared with other government and law enforcement agencies. In addition, any personal information provided will also be shared with other government and law enforcement agencies. Additional crossing points are scheduled to open in 2003. INSPASS (INS Passenger Accelerated Service System)/CANPASS Airport is designed to reduce immigration inspection time for low-risk travelers entering the U.S. via international flights. It is employed at seven airports in the United States (Detroit, Los Angeles, Miami, Newark, New York (JFK), San Francisco, and Washington-Dulles) and at U.S. pre-clearance sites in Vancouver and Toronto, Canada. 
INSPASS enrollment is open to all citizens of the United States, Canada, Bermuda, and visa-waiver countries who travel to the United States on business three or more times a year for short visits (90 days or less). INSPASS is not available to anyone with a criminal record or to aliens who are not otherwise eligible to enter the United States. The enrollment process involves capturing biographical information, hand geometry biometric data, a facial picture, and digital fingerprint information. A background check is done automatically for the inspector and, if approved, a machine-readable card is created for the traveler. The entire enrollment process typically takes 30–40 minutes. Over 98,000 enrollments have been performed in INSPASS, of which 37,000 are active as of September 2001. Once enrolled, the traveler is able to use an automated kiosk at passport control. A traveler is required to swipe the INSPASS card, enter flight information on a touchscreen, verify hand geometry, and complete a security check. Upon successful inspection, a receipt is printed that allows the traveler to proceed to U.S. Customs. Presently, there are no system or filing fees associated with INSPASS. The CANPASS Airport program has been suspended since September 11, 2001, and will be replaced by the Expedited Passenger Processing System in 2003. INSPASS is being reworked and plans for a new version are under way. Key contributors to this assignment were Jean Brady, David Dornisch, David Goldstein, David Hooper, Bob Kolasky, Heather Krause, David Lichtenfeld, and Cory Roman. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.

The aviation industry and business traveler groups have proposed the registered traveler concept as a way to reduce long waits in airport security lines caused by heightened security screening measures implemented after the September 11 terrorist attacks. In addition, aviation security experts have advocated this concept as a way to better target security resources to those travelers who might pose greater security risks. The Aviation and Transportation Security Act of November 2001 allows the Transportation Security Administration (TSA) to consider developing a registered traveler program as a way to address these two issues. GAO completed this review to inform Congress and TSA of policy and implementation issues related to the concept of a registered traveler program. 
Under a variety of approaches related to the concept of a registered traveler program proposed by industry stakeholders, individuals who voluntarily provide personal background information and who clear background checks would be enrolled as registered travelers. Because these individuals would have been pre-screened through the program enrollment process, they would be entitled to expedited security screening procedures at the airport. Through a detailed literature review and interviews with stakeholders, GAO found that a registered traveler program is intended to reduce the inconvenience many travelers have experienced since September 11 and improve the quality and efficiency of airport security screening. Although GAO found support for this program among many stakeholders, GAO also found concerns that such a program could create new aviation security vulnerabilities. GAO also identified a series of key policy and program implementation issues that affect the program, including (1) Criteria for program eligibility; (2) Level of background check required for participation; (3) Security-screening procedures for registered travelers; (4) Technology options, including the use of biometrics to verify participants; (5) Program scope, including the numbers of participants and airports; and (6) Program cost and financing options. Stakeholders offered many different options on how best to resolve these issues. Finally, GAO identified several best practices that Congress and TSA may wish to consider in designing and implementing a registered traveler program. GAO concluded that a registered traveler program is one possible approach for managing some of the security vulnerabilities in our nation's aviation systems. However, decisions concerning key issues are needed before developing and implementing such a program. TSA felt that GAO's report offered a good overview of the potential and the challenges of a registered traveler program. 
The agency affirmed that there are no easy answers to some of the issues that GAO raised and that these issues need more study.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies, where maintaining the public’s trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have changed the way our government, the nation, and much of the world communicate and conduct business. However, without proper safeguards, systems are unprotected from individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. This concern is well-founded for a number of reasons, including the increase in reports of security incidents, the ease of obtaining and using hacking tools, the steady advance in the sophistication and effectiveness of attack technology, and the dire warnings of new and more destructive attacks to come. Computer-supported federal operations are likewise at risk. Our previous reports and those of agency inspectors general describe persistent information security weaknesses that place a variety of federal operations at risk of disruption, fraud, and inappropriate disclosure. Thus, we have designated information security as a governmentwide high-risk area since 1997, a designation that remains in effect. We have specifically recognized the importance of information security related to critical infrastructures. Critical infrastructures are physical or virtual systems and assets so vital to the nation that their incapacitation or destruction would have a debilitating impact on national and economic security and on public health and safety. These systems and assets—such as the electric power grid, chemical plants, and water treatment facilities—are essential to the operations of the economy and the government. 
Recent terrorist attacks and threats have underscored the need to protect these critical infrastructures. If their vulnerabilities are exploited, our nation’s critical infrastructures could be disrupted or disabled, possibly causing loss of life, physical damage, and economic losses. Although the majority of our nation’s critical infrastructures are owned by the private sector, the federal government owns and operates key facilities that use control systems, including oil, gas, water, energy, and nuclear facilities. Control systems are used within these infrastructures to monitor and control sensitive processes and physical functions. Typically, control systems collect sensor measurements and operational data from the field, process and display this information, and relay control commands to local or remote equipment. Control systems perform functions that range from simple to complex. They can be used to simply monitor processes—for example, the environmental conditions in a small office building—or to manage the complex activities of a municipal water system or a nuclear power plant. In the electric power industry, control systems can be used to manage and control the generation, transmission, and distribution of electric power. For example, control systems can open and close circuit breakers and set thresholds for preventive shutdowns. There are two primary types of control systems: distributed control systems and supervisory control and data acquisition (SCADA) systems. Distributed control systems typically are used within a single processing or generating plant or over a small geographic area and communicate using local area networks, while SCADA systems typically are used for large, geographically dispersed operations and rely on long-distance communication networks. In general, critical infrastructure sectors and industries depend on both types of control systems to fulfill their missions or conduct business. 
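The preventive-shutdown thresholds described above can be illustrated with a minimal sketch. This is a hedged example only: the names (`Breaker`, `enforce_limit`, `feeder-12`) are hypothetical and do not reflect TVA's or any vendor's actual control logic; real breaker control involves relay hardware and protective coordination far beyond this.

```python
from dataclasses import dataclass


@dataclass
class Breaker:
    """A circuit breaker a control system can open (trip) or close."""
    name: str
    closed: bool = True

    def trip(self) -> None:
        # Opening the breaker interrupts the circuit.
        self.closed = False


def enforce_limit(breaker: Breaker, reading_mw: float, limit_mw: float) -> bool:
    """Trip the breaker when a reading exceeds its preventive-shutdown limit.

    Returns True if the breaker is still closed after the check.
    """
    if reading_mw > limit_mw and breaker.closed:
        breaker.trip()
    return breaker.closed


# Usage: the first reading is within the limit, the second exceeds it.
feeder = Breaker("feeder-12")                 # hypothetical line name
enforce_limit(feeder, 95.0, limit_mw=100.0)   # within limit: breaker stays closed
enforce_limit(feeder, 112.0, limit_mw=100.0)  # over limit: breaker trips open
```

In a real system this comparison would run continuously against live sensor measurements rather than two hard-coded readings.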
For example, a utility company that serves a large geographic area may use distributed control systems to manage power generation at each power plant and a SCADA system to manage power distribution to its customers. A SCADA system is generally composed of these six components (see fig. 1): (1) operating equipment, which includes pumps, valves, conveyors, and substation breakers; (2) instruments, which sense conditions such as pH, temperature, pressure, power level, and flow rate; (3) local processors, which communicate with the site’s instruments and operating equipment, collect instrument data, and identify alarm conditions; (4) short-range communication, which carries analog and discrete signals between the local processors and the instruments and operating equipment; (5) host computers, where a human operator can supervise the process, receive alarms, review data, and exercise control; and (6) long-range communication, which connects local processors and host computers using, for example, leased phone lines, satellite, and cellular packet data. A distributed control system is similar to a SCADA system but does not operate over a large geographic area or use long-range communications. We have previously reported that critical infrastructure control systems face increasing risks due to cyber threats, system vulnerabilities, and the potential impact of attacks as demonstrated by reported incidents. Cyber threats can be intentional or unintentional, targeted or nontargeted, and can come from a variety of sources. The Federal Bureau of Investigation has identified multiple sources of threats to our nation’s critical infrastructures, including foreign nation states engaged in information warfare, domestic criminals and hackers, and disgruntled employees working within an organization. Table 1 summarizes those groups or individuals that are considered to be key sources of threats to our nation’s infrastructures. 
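The six-component SCADA composition described earlier can be sketched as a simple data model. This is an illustration under stated assumptions: the class and site names are hypothetical, and the physical links (short- and long-range communication, items 4 and 6) are abstracted away as method calls.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LocalProcessor:
    """Site-level processor (item 3): communicates with the site's
    instruments and operating equipment, collects instrument data,
    and identifies alarm conditions."""
    site: str
    instruments: List[str]   # item 2: e.g., pressure, flow-rate sensors
    equipment: List[str]     # item 1: e.g., pumps, valves, substation breakers
    alarms: List[str] = field(default_factory=list)

    def record(self, instrument: str, value: float, limit: float) -> None:
        # Flag an alarm condition when a reading exceeds its limit.
        if value > limit:
            self.alarms.append(f"{self.site}: {instrument} high ({value})")


@dataclass
class ScadaHost:
    """Host computer (item 5) where an operator supervises the process;
    linked to remote sites over long-range communication (item 6)."""
    name: str
    sites: List[LocalProcessor]

    def poll_alarms(self) -> List[str]:
        # The host receives alarm conditions from every site.
        return [alarm for site in self.sites for alarm in site.alarms]


# Usage: one remote site reports a high-pressure alarm to the host.
pump_station = LocalProcessor("station-7", ["pressure"], ["pump", "valve"])
pump_station.record("pressure", value=9.5, limit=8.0)
host = ScadaHost("control-center", [pump_station])
```

A distributed control system would look structurally similar but with all sites on one local area network rather than geographically dispersed.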
Control systems are more vulnerable to cyber threats, including intentional attacks and unintended incidents, than in the past for several reasons, including their increasing standardization and their increased connectivity to other systems and the Internet. For example, in August 2006, two circulation pumps at Unit 3 of the Browns Ferry, Alabama, nuclear power plant operated by TVA failed, forcing the unit to be shut down manually. The failure of the pumps was traced to an unintended incident involving excessive traffic on the control system network caused by the failure of another control system device. Critical infrastructure owners face both technical and organizational challenges to securing control systems. Technical challenges—including control systems’ limited processing capabilities, real-time operations, and design constraints—hinder an infrastructure owner’s ability to implement traditional information technology (IT) security processes, such as strong user authentication and patch management. Organizational challenges include difficulty in developing a compelling business case for investing in control systems security and differing priorities of information security personnel and control systems engineers. To address the increasing threat to control systems governing critical infrastructures, both federal and private organizations have begun efforts to develop requirements, guidance, and best practices for securing control systems. For example, FISMA outlines a comprehensive, risk-based approach to securing federal information systems, which encompass control systems. Federal organizations, including the National Institute of Standards and Technology (NIST), the Federal Energy Regulatory Commission (FERC), and the Nuclear Regulatory Commission (NRC), have used a risk-based approach to develop guidance and standards to secure control systems. 
NIST guidance has been developed that currently applies to federal agencies; however, much FERC and NRC guidance and many standards have not been finalized. Once implemented, FERC and NRC standards will apply to both public and private organizations that operate covered critical infrastructures. We have previously reported on the importance of using a risk-based approach for securing critical infrastructures, including control systems. Risk management has received widespread support within and outside government as a tool that can help set priorities on how to protect critical infrastructures. While numerous and substantial gaps in security may exist, resources for closing these gaps are limited and must compete with other national priorities. Recognizing the importance of securing federal agencies’ information and systems, Congress enacted FISMA to strengthen the security of information and information systems within federal agencies, which include control systems. FISMA requires each agency to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. 
Specifically, this program is to include periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems; security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. Furthermore, FISMA established a requirement that each agency develop, maintain, and annually update an inventory of major information systems (including major national security systems) operated by the agency or under its control. This inventory is to include an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency. 
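The inventory requirement above, including identification of interfaces to systems not under agency control, can be sketched as a small data model. All names here are hypothetical; FISMA prescribes no particular data format, so this is only an illustration of the bookkeeping involved.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SystemEntry:
    """One major information system in the FISMA-required inventory."""
    name: str
    agency_operated: bool            # False for contractor/other-source systems
    interfaces: List[str] = field(default_factory=list)  # connected system names


@dataclass
class Inventory:
    systems: List[SystemEntry] = field(default_factory=list)

    def external_interfaces(self) -> List[Tuple[str, str]]:
        """List (system, peer) pairs where an agency-operated system
        interfaces with a system outside agency operation or control."""
        in_house = {s.name for s in self.systems if s.agency_operated}
        return [(s.name, peer)
                for s in self.systems if s.agency_operated
                for peer in s.interfaces if peer not in in_house]


# Usage: a plant control network that interfaces with both the corporate
# network and a contractor-run system (hypothetical names).
inv = Inventory([
    SystemEntry("plant-control-net", True, ["corp-net", "vendor-billing"]),
    SystemEntry("corp-net", True),
    SystemEntry("vendor-billing", False),
])
```

Flagging the external interfaces is the point of the exercise: those are the connections for which security depends on another organization's controls.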
FISMA also directs NIST to develop standards and guidelines for systems other than national security systems. As required by FISMA and based on the objectives of providing appropriate levels of information security, NIST developed standards for all agencies to categorize their information and information systems according to a range of risk levels, guidelines recommending the types of information and information systems to be included in each category, and minimum information security requirements for information and information systems in each category. NIST standards and guidelines establish a risk management framework that instructs agencies on providing an acceptable level of information security for all agency operations and assets and that guides the testing and evaluation of information security control effectiveness within an agencywide information security program. Recognizing the importance of documenting standards and guidelines as part of an agencywide information security program, NIST emphasizes that agencies must develop and promulgate formal, documented policies and procedures in order to ensure the effective implementation of security requirements. NIST also collaborates with federal and industry stakeholders to develop standards, guidelines, checklists, and test methods to help secure federal information and information systems, including control systems. For example, NIST is currently developing guidance for federal agencies that own or operate control systems to comply with federal information system security standards and guidelines. The guidance identifies issues and modifications to consider in applying information security standards and guidelines to control systems. In December 2007, NIST released an augmentation to Special Publication (SP) 800-53, Recommended Security Controls for Federal Information Systems, which provides a security control framework for control systems. 
According to NIST officials, while most controls in SP 800-53 are applicable to control systems as written, several controls do require supplemental guidance and enhancements. Under the Energy Policy Act of 2005, FERC was authorized to (1) appoint an electricity reliability organization to develop and enforce mandatory electricity reliability standards, including cyber security, and (2) approve or remand each proposed standard. The commission may also direct the reliability organization to develop a new standard or modify approved standards. Both the commission and the reliability organization have the authority to enforce approved standards, investigate incidents, and impose penalties (up to $1 million a day) on noncompliant electricity asset owners or operators. FERC has conducted several activities to begin implementing the requirements of the act. In July 2006, FERC certified the North American Electric Reliability Corporation (NERC) as the national electric reliability organization. In August 2003, prior to passage of the Energy Policy Act of 2005, NERC adopted Urgent Action 1200, a temporary, voluntary cyber security standard for the electric industry. Urgent Action 1200 directed electricity transmission and generation owners and operators to develop a cyber security policy, identify critical cyber assets, and establish controls for and monitor electronic and physical access to critical cyber assets. Urgent Action 1200 remained in effect on a voluntary basis until June 1, 2006, at which time NERC proposed eight critical infrastructure protection reliability standards to replace the Urgent Action 1200 standard. In July 2007, FERC issued a notice of proposed rulemaking in which it proposed to approve eight critical infrastructure reliability standards, which included standards for control systems security. FERC also proposed to direct NERC to modify the areas of these standards that required improvement. 
In January 2008, after considering public comments on the notice of proposed rulemaking, FERC approved the reliability standards and the accompanying implementation plan. It also directed NERC to develop modifications to strengthen the standards and to monitor the development and implementation of the NIST standards to determine if they contain provisions that will protect the bulk-power system better than NERC’s reliability standards. The organizations subject to the standards, including utilities like TVA, must be auditably compliant with the standards by 2010. The NRC, which has regulatory authority over nuclear power plant safety and security, has conducted several activities related to enhancing the cyber security of control systems. In 2005, an industry task force led by the Nuclear Energy Institute (NEI) developed and released the Cyber Security Program for Power Reactors (NEI 04-04) to provide nuclear power reactor licensees a means for developing and maintaining effective cyber security programs at their sites. In December 2005, the commission staff accepted the method outlined in NEI 04-04 for establishing and maintaining cyber security programs at nuclear power plants. TVA officials stated that the agency has begun a program to comply with NEI 04-04 guidelines and plans to complete implementation of corrective actions identified as a result of these guidelines over the next 3 years, consistent with planned plant outages and upgrade projects. In January 2006, the commission issued a revision to Regulatory Guide 1.152, Criteria for Use of Computers in Safety Systems of Nuclear Power Plants, which provides cyber security-related guidance for the design of nuclear power plant safety systems. In April 2007, the commission finalized a rule that added “external cyber attack” to the events that power reactor licensees are required to prepare to defend against. 
In addition, the commission initiated a rulemaking process that provides cyber security requirements for digital computer and communication networks, including systems that are needed for plant safety, security, or emergency response. The public comment period for this rulemaking closed in March 2007. Commission officials stated that they estimate this rulemaking process will be completed in early 2009. Once the rulemaking process is completed and requirements for nuclear power plant cyber security programs are finalized, the commission is planning to conduct a range of oversight activities, including inspections at power plants. According to commission officials, all nuclear plant operators have committed to complete implementation of the NEI 04-04 program at their sites. The TVA is a federal corporation and the nation’s largest public power company. Its mission is to supply affordable, reliable power, support a thriving river system, and stimulate sustainable economic development in the public interest. In addition to generating and transmitting power, TVA also manages the nation’s fifth-largest river system to minimize flood risk, maintain navigation, provide recreational opportunities, and protect water quality. TVA is governed by a nine-member Board of Directors that is led by the Chairman. Each board member is nominated by the President of the United States and confirmed by the Senate. The TVA Chief Executive Officer reports to the TVA Board of Directors. TVA’s power service area covers 80,000 square miles in the southeastern United States, an area that includes almost all of Tennessee and parts of Mississippi, Kentucky, Alabama, Georgia, North Carolina, and Virginia, and has a total population of about 8.7 million people (see fig. 2). TVA operates 11 coal-fired fossil plants, 8 combustion turbine plants, 3 nuclear plants, and a hydroelectric system that includes 29 hydroelectric dams and one pumped storage facility (see fig. 2 and fig. 3). 
Fossil plants produce about 60 percent of TVA’s power, nuclear plants about 30 percent, and the hydroelectric system about 10 percent. TVA also owns and operates one of the largest transmission systems in North America. TVA’s transmission system moves electric power from the generating plants where it is produced to distributors of TVA power and to industrial and federal customers across the region. TVA provides power to three main customer groups: distributors, directly served customers, and off-system customers. There are 159 distributors—109 municipal utility companies and 50 cooperatives—that resell TVA power to consumers. These groups represent the base of TVA’s business, accounting for 85 percent of TVA’s total revenue. Fifty-three large industrial customers and six federal installations buy TVA power directly. They represent 11 percent of TVA’s total revenue. Twelve surrounding utilities buy power from TVA on the interchange market. Sales to these utilities represent 4 percent of TVA’s total revenue. Control systems are essential to TVA’s operation. TVA uses control systems to both generate and deliver power. In generation, control systems are used within power plants to open and close valves, control equipment, monitor sensors, and ensure the safe and efficient operation of a generating unit. Many control systems networks connect with TVA’s corporate network to transmit information about system status. To deliver power, TVA monitors the status of its own and surrounding transmission facilities from two operations centers. Each center is staffed 24 hours a day and can serve as a backup for the other center. Control systems at these centers are used to open and close breakers and balance the transmission of power across the TVA network while accounting for changes in network capacity due to outages and changes in demand that occur continuously throughout the day. 
TVA’s control systems range in capacity from simple systems with limited functionality located in one facility to complex, geographically dispersed systems with multiple functions. The ages of these control systems range from modern systems to systems dating back 20 or more years to the original construction of a facility. As shown in table 2, TVA has designated certain senior managers to fill the key information security roles required by FISMA. Responsibility for control systems security is distributed throughout TVA (see fig. 4). TVA’s Information Services organization provides general guidance, assistance in FISMA compliance, and technical assistance in control systems security. The Information Services organization also manages the overall TVA corporate computer network that links facilities throughout the TVA service area and is connected to the Internet. As of February 2008, the Enterprise IT Security organization within Information Services was given specific responsibility for cyber security throughout the agency. However, the control systems located within a plant are integrated with and managed as part of the generation equipment, safety and environmental systems, and other physical equipment located at that plant. This means that development, day-to-day maintenance and operation, and upgrades of control systems are handled by the business units responsible for the facilities where the systems are located. Specifically, nuclear systems are managed by the Nuclear Power Group; coal and combustion turbine control systems are managed by the Fossil Power Group; and hydroelectric facilities are managed by River Operations. Transmission control systems are managed by TVA’s Transmission and Reliability Organization, located within its Power Systems Operations business unit. The Transmission and Reliability Organization is highly dependent on control systems.
To comply with NERC Urgent Action 1200, and in an effort to ensure its systems are secure, the Transmission and Reliability Organization has handled additional aspects of information security compared with other TVA organizations. For example, the organization manages portions of its own network infrastructure. It also has arranged for both internal and external security assessments in order to enhance the security of its control systems. TVA had not fully implemented appropriate security practices to secure the control systems used to operate its critical infrastructures. Both the corporate network infrastructure and control systems networks and devices at individual facilities and plants were vulnerable to disruption. In addition, physical security controls at multiple locations did not sufficiently protect critical control systems. The interconnections between TVA’s control system networks and its corporate network increase the risk that security weaknesses on the corporate network could affect control systems networks. For example, because of weaknesses in the separation of lower security network segments from higher security network segments on TVA networks, an attacker who gained access to a less secure portion of a network such as the corporate network could potentially compromise equipment in a more secure portion of the network, including equipment that has access to control systems. As a result, TVA’s control systems that operate its critical infrastructures are at increased risk of unauthorized modification or disruption by both internal and external threats. The TVA corporate network infrastructure had multiple weaknesses that left it vulnerable to intentional or unintentional compromise of the confidentiality, integrity, and availability of the network and devices on the network. These weaknesses applied both at TVA headquarters and to the portions of the corporate network located at the individual facilities we reviewed. 
For example, one remote access system used for the network that we reviewed was not securely configured. Further, individual servers and workstations lacked key patches and were insecurely configured. In addition, the configuration of numerous network infrastructure protocols and devices provided limited or ineffective security protections. Moreover, the intrusion detection system that TVA used had significant limitations. As a result, TVA’s control systems were at an increased risk of unauthorized access or disruption via access from the corporate network. Furthermore, weaknesses in the intrusion detection system could limit the ability of TVA to detect malicious or unintended events on its network. Remote access is any access to an organizational information system by a user (or an information system) that communicates through an external, nonorganization-controlled network (e.g., the Internet). NIST guidance states that information systems should establish a trusted communications path between remote users and an information system and that two-factor authentication should be part of an organization’s remote access authentication requirements. Additionally, TVA policy requires that if remote access technology is used to connect to the network, it must be configured securely. One device used for remote access is a virtual private network (VPN). TVA did not configure a VPN system to include effective security mechanisms. This could allow an attacker who compromised a remote user’s computer to remotely access the user’s secure session to TVA, thereby increasing the risk that unauthorized users could gain access to TVA systems and sensitive information. Federal and agency guidance call for effective patch management, firewall configuration, and application security settings. TVA has a patch management policy that requires it to regularly monitor, identify, and remediate vulnerabilities to applications in its software inventory. 
NIST guidance also states that firewalls should be carefully configured to provide adequate protection. Furthermore, NIST guidance states that organizations should effectively configure security settings in key applications to the highest level possible. However, almost all of the workstations and servers that we examined on the corporate network lacked key security patches or had inadequate security settings. Furthermore, TVA did not effectively implement host firewall controls on its laptops. In addition, inadequate security settings existed in key applications installed on laptops, servers, and workstations we examined. Consequently, TVA is at an increased risk that known vulnerabilities in these applications could allow an attacker to execute malicious code and gain control of or compromise a system. Federal and agency guidance state that organizations should have strong passwords, identification and authentication, and network segmentation. National Security Agency guidance states that Windows passwords should be 12 or more characters long; include upper and lower case letters, numbers, and special characters; and not consist of dictionary words. The agency has also advised against the use of weak encryption. NIST guidance states that systems should uniquely identify and authenticate users with passwords or other authentication mechanisms or implement other compensating controls. NIST guidance also states that organizations should take steps to secure their e-mail systems. Finally, NIST guidance states that organizations should partition networks containing higher risk systems from lower risk systems and configure interfaces between those systems to manage risk. However, the TVA corporate network used several protocols and devices that did not provide sufficient security controls. For example, certain network protocols and devices were not adequately protected by password or authentication controls or encryption.
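The National Security Agency criteria cited above (length, character variety, no dictionary words) can be expressed as a simple policy check. This is an illustrative sketch, not an agency tool:

```python
import string

def meets_policy(password, dictionary=()):
    """Illustrative check of the NSA-style password criteria cited in the
    report: 12 or more characters; upper and lower case letters, numbers,
    and special characters; and not a dictionary word."""
    return all([
        len(password) >= 12,
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
        password.lower() not in {w.lower() for w in dictionary},
    ])
```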
In addition, TVA had network services that spanned different security network segments. As a result, a malicious user could exploit these weaknesses to gain access to sensitive systems or to otherwise modify or disrupt network traffic. Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they take steps to promptly detect, report, and respond to them before significant damage is done. In addition, analyzing security events allows organizations to gain a better understanding of the threats to their information and the costs of their security-related problems. Such analyses can pinpoint vulnerabilities that need to be eliminated so that they will not be exploited again. NIST states that intrusion detection is the process of monitoring events occurring in a computer system or network and analyzing the events for signs of intrusion, which it defines as an attempt to compromise the confidentiality, integrity, or availability of a computer or network. NIST guidance prescribes network and host-based intrusion detection systems as a means of protecting systems from the threats that come with increasing network connectivity. TVA had limited ability to effectively monitor its network with its intrusion detection system. Although a network intrusion detection system was deployed by TVA to monitor network traffic, it could not effectively monitor key computer assets. As a result, there is an increased risk that unauthorized access to TVA’s networks may not be detected and mitigated in a timely manner. TVA’s control system networks and devices on these networks were vulnerable to disruption due to inadequate information security controls. 
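The NIST partitioning principle cited earlier, that higher risk networks be separated from lower risk ones with controlled interfaces, amounts to an invariant that can be checked mechanically against a firewall rule set. The zone names and rule format below are hypothetical:

```python
# Hypothetical trust ranking: higher number = more protected network segment.
ZONE_RANK = {"internet": 0, "corporate": 1, "dmz": 2, "control": 3}

def risky_rules(rules):
    """Flag 'allow' rules that let a lower-security segment initiate traffic
    into a higher-security one, violating the partitioning principle.
    Each rule is (source zone, destination zone, port, action)."""
    return [(src, dst, port)
            for src, dst, port, action in rules
            if action == "allow" and ZONE_RANK[src] < ZONE_RANK[dst]]
```

Any rule flagged by such a check, for example one allowing the corporate segment to initiate connections into a control segment, would warrant the kind of scrutiny this report describes.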
Specifically, firewalls were either bypassed or inadequately configured, passwords were either weak or not used at all, logging of certain activity was limited, configuration management policies for control systems software were not consistently implemented, and servers and workstations lacked key patches and effective virus protection. The combination of these weaknesses with the weaknesses in the TVA corporate network identified in the previous section places TVA’s control systems that operate its critical infrastructures at increased risk of unauthorized modification or disruption by both internal and external threats. A firewall is a hardware or software component that protects given computers or networks from attacks by blocking network traffic. NIST guidance states that firewalls should be configured to provide adequate protection for the organization’s networks and that the transmitted information between interconnected systems should be controlled and regulated. TVA had implemented firewalls to segment control systems networks from the corporate network at all facilities we reviewed with connections between these two networks. However, firewalls at three of six facilities reviewed were either bypassed or inadequately configured. As a result, the hosts on higher security control system networks were at increased risk of compromise or disruption from the other lower security networks. Passwords are used to establish the validity of a user’s claimed identity by requesting some kind of information that is known only by the user—a process known as authentication. The combination of identification, using, for example, a unique user account, and authentication, using, for example, a password, provides the basis for establishing individual accountability and for controlling access to the system. 
In cases where passwords cannot be implemented because of technological limitations or other concerns, such as impact on emergency response, NIST states that an organization should document controls that have been put in place to compensate for this weakness. TVA policy requires authentication of users except where security requirements or limitations in the hardware or software preclude it. In addition, agency policy requires users to establish complex passwords. TVA did not have effective passwords or other documented compensating controls governing control systems we reviewed. According to agency officials, in certain cases, passwords were not technologically possible to implement but in these cases, there were no documented compensating controls. Until the agency implements either effective password practices or documented compensating controls, it faces an increased risk of unauthorized access to its control systems. Determining what, when, and by whom specific actions are taken on a system is crucial to establishing individual accountability, monitoring compliance with security policies, and investigating security violations. Audit and monitoring involves the regular collection, review, and analysis of auditable events for indications of inappropriate or unusual activity and the appropriate investigation and reporting of such activity. Audit and monitoring can help security professionals routinely assess computer security, perform investigations during and after an attack, and even recognize an ongoing attack. Federal guidance states that organizations should develop formal audit policies and procedures. TVA guidance states that sufficient audit logs should be maintained that allow monitoring of key user activities. While TVA had taken steps to establish audit logs for its transmission control centers, it had not established effective audit logs or compensating controls at other facilities we reviewed. 
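The individual accountability that audit logs provide depends on each entry naming a single user, which is why shared accounts defeat their purpose. A minimal sketch of such an entry, with illustrative field names:

```python
import datetime

def audit_record(user, system, action, when=None):
    """Build one audit-log entry that attributes an action to a single,
    named user -- the accountability a shared account cannot provide.
    Field names are illustrative, not TVA's actual log schema."""
    when = when or datetime.datetime.now(datetime.timezone.utc)
    return {"timestamp": when.isoformat(),
            "user": user,
            "system": system,
            "action": action}
```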
According to agency officials, system limitations at these facilities have historically meant that multiple users shared a single account to access these control systems. Therefore, audit logs would not have served a useful purpose because activities could not be traced to a single user. Until TVA establishes detailed audit logs for its control systems at these facilities or compensating controls in cases where such logs are not feasible, it risks being unable to determine if malicious incidents are occurring and, after an event occurs, being able to determine who or what caused the incident. Federal guidance states that all applications and changes to those applications should go through a formal, documented process that identifies all changes to the baseline configuration. Also, procedures should ensure that no unauthorized software is installed. TVA has established configuration management policies and procedures for its information technology systems. Specifically, its policies define the roles and responsibilities of application owners and developers; require business units to implement procedural controls that define documentation and testing required for software changes; and establish procedures to ensure that all changes relating to infrastructure and applications be managed and controlled. However, TVA did not consistently apply its configuration management policies and procedures to control systems. The transmission control system had a configuration management process, and the hardware at individual plants was governed by a configuration management process, including plant drawings that tracked individual pieces of equipment. However, there was no formal configuration management process for software that was part of the control systems at the hydroelectric and fossil facilities that we reviewed. As a result, increased risk exists that unapproved changes to control systems could be made. 
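A formal configuration management process for control systems software typically rests on a baseline of approved file contents against which a running system can be compared. A hypothetical sketch using content hashes:

```python
import hashlib

def fingerprint(files):
    """Hash file contents to form a software baseline. `files` maps a
    path to its bytes; a real tool would read from disk."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def unapproved_changes(baseline, current):
    """Paths whose current contents differ from the approved baseline,
    including files that were never approved at all."""
    return sorted(path for path in current
                  if baseline.get(path) != current[path])
```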
Patch management, including up-to-date patch installation, helps to mitigate vulnerabilities associated with flaws in software code, which could be exploited to cause significant damage. According to NIST, agencies should identify, report, and correct their information system flaws. According to NIST, tracking patches allows organizations to identify which patches are installed on a system and provides confirmation that the appropriate patches have been applied. Moreover, TVA policy requires the agency to remediate these vulnerabilities in a timely manner. TVA had not installed current versions of patches for key applications on computers on control systems networks. While TVA had an agencywide policy and procedure for patch management, these policies did not apply to individual plant-level control systems. According to the operators at two of the facilities we reviewed, they applied vendor-approved patches to control systems but did not track versions of patches on these machines. Failure to keep software patches up-to-date could allow unauthorized individuals to gain access to network resources or disrupt network operations. Virus and worm protection for information systems is a serious challenge. Computer attack tools and techniques are becoming increasingly sophisticated; viruses are spreading faster as a result of the increasing connectivity of today’s networks; commercial off-the-shelf products can be easily exploited for attack by their users; and there is no single solution such as firewalls or encryption to protect systems. To combat viruses and worms specifically, entities should keep antivirus programs up-to-date. According to NIST, agencies should implement malicious code protection that includes a capability for automatic updates so that virus definitions are kept up-to-date on servers, workstations, and mobile computing devices. 
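The patch tracking NIST describes, knowing which patches are installed on which system, reduces to comparing each host's installed set against the vendor-approved set. Host names and patch identifiers below are hypothetical:

```python
def missing_patches(installed, required):
    """For each host, report vendor-approved patches that are not yet
    installed. `installed` maps host -> list of patch IDs; `required`
    is the approved patch list. All names are illustrative."""
    gaps = {}
    for host, patches in installed.items():
        needed = sorted(set(required) - set(patches))
        if needed:
            gaps[host] = needed
    return gaps
```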
Virus-scanning software should be provided at critical entry points, such as remote-access servers, and at each desktop system on the network. Although TVA implemented antivirus software on its transmission control systems network, it did not consistently implement antivirus software on other control systems we reviewed. In one case, according to agency officials, the vendor that developed the control systems software would not support an antivirus application, and the agency did not have plans to require the vendor to address this weakness. In another case, antivirus software was implemented, but it was not up-to-date. In the event that using antivirus software is infeasible on a control system, the agency must document the controls, such as training or physical security, that would compensate for this deficiency. TVA had not done this. According to agency officials, such documentation is under way for its hydroelectric facilities, but not for other facilities. As a result, there is increased risk that the integrity of these networks and devices could be compromised. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls restrict physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed and by periodically reviewing the access granted in order to ensure that access continues to be appropriate. TVA policy requires that appropriate physical and environmental controls be implemented to provide security commensurate with the level of risk and magnitude of harm that would result from loss, misuse, unauthorized access, or modification of information or information systems. Further, NIST policy requires that federal organizations implement a variety of physical security controls to protect information and industrial control systems and the facilities in which they are located. 
TVA had taken steps to provide physical security for its control systems. For example, it had issued electronic badges to agency personnel and contractors to help control access to many of its sensitive and restricted areas. TVA had also established law enforcement liaisons that help ensure additional backup security and facilitate the accurate flow of timely security information between appropriate government agencies. In addition, the agency had implemented physical security training for its employees to help achieve greater security awareness and accountability. However, the agency had not effectively implemented physical security controls at various locations, as the following examples illustrate:
- Live network jacks connected to TVA’s internal network at certain facilities we reviewed had not been adequately secured from access by the public.
- TVA did not adequately control or change its keys to industrial control rooms containing sensitive equipment at one facility we reviewed. For example, the agency could neither account for all keys issued at the facility, which relies on manual locks for the security of rooms containing sensitive computer and control equipment, nor could it determine when keys had last been changed.
- TVA did not have an effective visitor control program at one facility we reviewed. For example, the agency had not maintained a visitor log describing visitors’ names, organizations, purpose of visits, forms of identification, or the names of the persons visited.
- Physical security policies and plans were either in draft form or were nonexistent.
- Rooms containing sensitive IT equipment had not been adequately environmentally protected.
For example, sufficient emergency lighting was not available outside the control room at one facility we reviewed, a server room at the facility had no smoke detection capability, a control room at the facility contained a kitchen (a potential fire and water hazard), and a communications room had batteries collocated with sensitive communications gear. At one facility we reviewed, TVA had not always ensured that access to sensitive computing and industrial control systems resources was granted only to those who needed it to perform their jobs. About 75 percent of those who were issued facility badges had access to a facility computer room, but the vast majority of these badgeholders did not need access to the room. While TVA officials stated that all of those with access had been through the background investigation and training process required for all employees at the facility, an underlying principle for secure computer systems and data recommended by NIST is that users should be granted only those access rights and permissions needed to perform their official duties. As a consequence of weaknesses such as these, increased risk exists that sensitive computing resources and data could be inadvertently or deliberately misused or destroyed. Federal guidance and best practices in information security call for the use of multiple layers of defense to secure information resources. These multiple layers include the use of protection mechanisms and key network control points such as firewalls, routers, and intrusion detection systems to segment and control access to networks. Higher risk networks and devices, such as critical infrastructure control systems, may require additional security controls and should be on networks that are separate from lower risk devices. TVA had deployed a layered defense model to control access between and among the corporate and control systems networks.
For example, in all cases we examined, control systems were located on networks that had been segmented from business computing resources. The agency had also deployed protection mechanisms such as firewalls, router access control lists, virtual local area networking, and physical security controls at multiple locations throughout its network. For example, TVA’s transmission control organization used layered networks with increasing levels of security to separate critical control devices from the corporate network. However, these mechanisms and information security controls had been inconsistently applied. As a result, the effectiveness of the multiple layers of defense was limited. For example, while the transmission control organization network restricted access to control systems using multiple firewalls at outer and inner network boundaries, some plant systems had significantly fewer levels of security to reach control systems that impacted the same facilities. In addition, specific weaknesses in security configurations on key systems further reduced the overall effectiveness of security controls. The cumulative effect of these individual weaknesses and the interconnectedness of TVA critical infrastructure control systems places these systems at risk of compromise or disruption from internal and external threats. An underlying reason for TVA’s information security control weaknesses is that it had not consistently implemented significant elements of its information security program. The effective implementation of an information security program includes implementing the key elements required under FISMA and the establishment of a continuing cycle of activity—which includes developing an inventory of systems, assessing risk, developing policies and procedures, developing security plans, testing and monitoring the effectiveness of controls, identifying and tracking remedial actions, and establishing appropriate training. 
TVA had not consistently implemented key elements of these activities. As a result of not fully developing and implementing its information security program, an increased potential for disruption or compromise of its control systems exists. FISMA requires that each agency develop, maintain, and annually update an inventory of major information systems operated by the agency or that are under its control. A complete and accurate inventory of major information systems is a key element of managing the agency’s information technology resources, including the security of those resources. The inventory can be used to track agency systems for purposes such as periodic security testing and evaluation, patch management, contingency planning, and identifying system interconnections. TVA requires that the senior agency information security officer maintain an authoritative inventory of general support systems, major applications, major information systems, and minor applications. TVA did not have a complete and accurate inventory of its control systems. In its fiscal year 2007 FISMA submission, TVA included in its inventory of major applications the transmission and the hydro automation control systems. Although TVA stated that the plant control systems at its nuclear and fossil facilities were minor applications, these applications had not been included in TVA’s inventory of minor applications or accounted for as part of a consolidated general support system. These systems are essential to automated operation of generation facilities. At the conclusion of our review, agency officials stated they had developed a plan to produce a more complete and accurate system inventory by September 2008. Until TVA has a complete and accurate inventory of its control systems, it cannot ensure that the appropriate security controls have been implemented to protect these systems.
FISMA mandates that agencies assess the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of their information and information systems. The Federal Information Processing Standard (FIPS) 199, Standards for Security Categorization of Federal Information and Information Systems, and related NIST guidance provide a common framework for categorizing systems according to risk. The framework establishes three levels of potential impact on organizational operations, assets, or individuals should a breach of security occur—high (severe or catastrophic), moderate (serious), and low (limited)—and it is used to determine the impact for each of the FISMA-specified security objectives of confidentiality, integrity, and availability. Once determined, security categories are to be used in conjunction with vulnerability and threat information in determining minimum security requirements for the system and in assessing the risk to an organization. Risk assessments help ensure that the greatest risks have been identified and addressed, increase the understanding of risk, and provide support for needed controls. Office of Management and Budget (OMB) Circular A-130, appendix III, prescribes that risk be assessed when significant changes are made to major systems and applications in an agency’s inventory or at least every 3 years. Consistent with NIST guidance, TVA policy states that risk assessments should be updated to reflect the results of security tests and evaluations. TVA had not completed assigning risk levels or assessing the risk of its control systems. While TVA categorized the transmission and hydro automation control systems as high-impact systems using FIPS 199, its nuclear division and fossil business unit, which includes its coal and combustion turbine facilities, had not assigned risk levels to their control systems.
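Under FIPS 199 as described above, impact levels are assigned separately for confidentiality, integrity, and availability; a single overall category is then taken as the highest of the three (the high-water mark applied under FIPS 200). A minimal sketch:

```python
# FIPS 199 potential impact levels, lowest to highest.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def overall_category(confidentiality, integrity, availability):
    """High-water mark: the system's overall category is the highest
    impact level assigned to any of the three security objectives."""
    return max((confidentiality, integrity, availability), key=LEVELS.get)
```

A transmission control system rated high for availability would thus be a high-impact system even if its confidentiality impact were low.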
Further, although TVA had performed a risk assessment for the transmission control system, the risk assessment did not include the risks associated with the vulnerabilities identified during the latest security test and evaluation. TVA had not completed risk assessments for the control systems at its nuclear, hydroelectric, coal, and combustion turbine facilities. According to TVA officials, the agency plans to complete risk assessments by May 2008 at the nuclear facility and June 2008 at the hydroelectric facility. For the fossil facility and all remaining control systems throughout TVA, agency officials stated that they would complete the security categorization of these systems by the end of September 2008. However, no date has been set for completion of risk assessments. Without assigned risk levels, TVA cannot make risk-based decisions on the security needs of its information and information systems. Moreover, until TVA assesses the risks of all its control systems, the agency cannot be assured that the appropriate level of controls has been applied to help prevent unauthorized access, use, disclosure, disruption, modification, or destruction of those systems. A key task in developing, documenting, and implementing an effective information security program is to establish and implement risk-based policies, procedures, and technical standards that cover security over an agency’s computing environment. If properly implemented, policies and procedures can help to reduce the risk that could come from unauthorized access or disruption of services. Because security policies are the primary mechanism by which management communicates its views and requirements, it is important to document and implement them. Several shortcomings existed in TVA’s information security policies. First, the agency had not consistently applied information security policies to its control systems.
Second, business unit security policies were not always consistent with overall agency information security policies. Third, cyber security responsibilities for interfaces between TVA’s transmission control system and its fossil and hydroelectric generation units had not been documented. Fourth, TVA’s patch management process was not in compliance with federal guidance. Finally, physical security standards for control system sites were in draft. TVA had developed and documented policies, standards, and guidelines for information security; however, it had not consistently applied these policies to its control systems. Although neither FISMA nor TVA’s agencywide IT security policy explicitly mentions control systems, based on our analysis of NIST guidance and on the stated position of NIST officials, the guidance does apply to industrial control systems, such as the systems that TVA uses to operate critical infrastructures. Furthermore, NIST has recently developed and released guidance to assist agencies in applying federal IT security requirements to control systems. As a result of not applying this guidance with the same level of rigor to its control systems, numerous shortfalls existed in TVA’s information security management program for its control systems, including outdated risk assessments; incomplete system security categorizations, system security plans, and testing and evaluation activities; and an ineffective remediation process. TVA officials stated that they are in the process of applying current NIST criteria to their control systems and plan to complete this process by the end of fiscal year 2008. Until TVA consistently applies federal IT security policies to its control systems and addresses identified weaknesses, its control systems will remain at risk of compromise and disruption.
While two TVA business units had developed IT security policies to address anticipated cyber security guidance from their respective industries, these policies were not always consistent with agencywide IT security policy. According to TVA policy, business units may establish their own IT security policies but must still comply with agencywide IT security policy. For example, TVA’s Nuclear Power Group had developed a cyber security policy and the Power System Operations business unit had developed two cyber security policies: one business unit policy that was in draft, and one approved policy developed by and applicable to the unit’s Transmission and Reliability Organization. These policies addressed many of the same issues as TVA’s agencywide IT security policy, including establishing roles and responsibilities, access controls, configuration management, training, and emergency planning and response. However, the policies were not always consistent with the agencywide IT security policy. For example, although both the Nuclear Power Group and the Transmission and Reliability Organization policies had been developed to establish requirements for cyber security of plant systems, neither policy directed system security officers to implement minimum baseline security controls to protect the confidentiality, integrity, and availability of these systems, as is required by agency policy, nor did they establish a link or reference to agencywide IT security policy or federal IT security requirements. Although the Power System Operations cyber security policy reiterated requirements outlined by FISMA and the TVA IT security policy, this policy remained in draft. The existence of inconsistent policies at different levels of TVA could hinder its ability to apply IT security requirements consistently across the agency. 
Without developing and implementing consistent policies, procedures, and standards across all agency divisions and groups, TVA has less assurance that its systems controlling critical infrastructure are protected from unauthorized access and cyber threats. NIST guidance states that organizations should authorize all connections from an information system to another information system through the use of system connection agreements. Documentation should include security roles and responsibilities and any service level agreements, which should define the expectations of performance for each required security control, and remedy and response requirements for any identified instance of noncompliance. The agreements established by TVA’s Transmission and Reliability Organization with other TVA business units did not fully address information that should be included based on NIST guidance. For example, the control systems operated by the Transmission and Reliability Organization interface with power plant control systems operated by TVA’s fossil and hydroelectric business units. Although the transmission organization had established agreements with the fossil and hydroelectric business units, these agreements made no mention of cyber security roles and responsibilities, performance expectations for security controls, and remedy and response requirements for noncompliance. TVA officials stated that the type of interface between the transmission control system and individual plant systems means that, in most cases, a cyber security incident on a plant control network would not impact the overall transmission control network. 
While the likelihood of direct transmission of malware such as a virus might be small, without clear documentation of information required in an intergroup agreement, TVA faces the risk that security controls may not be in place or work as intended at an individual plant, resulting in a situation where critical generation equipment may not be able to start, safely shut down, or otherwise be controlled by the transmission control system when necessary. This is particularly of concern because of the variation in cyber security controls that we observed between the overall transmission control system and the individual plants. Without clear documentation of cyber security-related roles and responsibilities, TVA faces the risk that security controls may not be in place or work as intended. NIST guidance states that federal agencies should create a comprehensive patch management process. The process should include monitoring of security sources for vulnerability announcements; an accurate inventory of the organization’s IT resources, using commercially available automated inventory management tools whenever possible; prioritization of the order in which the vulnerabilities are addressed with a focus on high-priority systems such as those essential for mission-critical operations; and automated deployment of patches to IT devices using enterprise patch management tools. TVA had not fully implemented such a comprehensive process. It had a patch management process, including staff whose primary responsibility is to monitor security sources for vulnerability announcements. However, the agency lacked an accurate inventory of its IT resources produced using an automated management tool. For example, agency staff did not have timely access to version numbers and build numbers of software applications in the agency, although officials stated this information could be obtained manually. 
In addition, the agency’s patch management policy did not apply to individual plant-level control systems or network infrastructure devices such as routers and switches. Furthermore, TVA’s written guidance on patch management provided only limited guidance on how to prioritize vulnerabilities. For example, the guidance did not refer to the criticality of IT resources. In addition, as previously noted, the agency had not categorized the impact of many of its control systems. The guidance also did not specify situations for which it was acceptable to upgrade or downgrade a vulnerability’s priority from that given by industry standard sources such as the vendor or third-party patch tracking services. As a result, patches that were identified as critical, meaning they should be applied immediately to vulnerable systems, were not applied in a timely manner. For example, agency staff had reduced the priority of three vulnerabilities identified as critical or important by the vendor or a patch tracking service and did not provide sufficient documentation of the basis for this decision. TVA also did not document many vulnerabilities on its systems. For a 15-month period, TVA documented its analysis of 351 reported vulnerabilities, while NIST’s National Vulnerability Database reported about 2,000 vulnerabilities rated as high or medium-risk for the types of systems in operation at TVA for the same time period. Finally, the agency lacked an automated tool to assess the deployment of many types of application patches. As a result, certain systems were missing patches more than 6 months past TVA deadlines for patching. Without a fully effective patch management process, TVA faces an increased risk that critical systems may remain vulnerable to known vulnerabilities and be open to compromise or disruption. 
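The prioritization gap described above can be made concrete with a simple scoring sketch: vendor-assigned severity is combined with the criticality of the affected IT resource, and a vendor-critical item may be deprioritized only with a documented basis. The scoring model, field names, and CVE identifiers are illustrative assumptions, not TVA's or NIST's actual scheme.

```python
# Illustrative risk-based patch prioritization sketch. The scoring model
# below is a hypothetical example: priority combines vendor severity with
# the criticality of the affected IT resource, and any downgrade from the
# vendor-assigned severity requires a documented justification.

from dataclasses import dataclass

SEVERITY = {"low": 1, "important": 2, "critical": 3}
CRITICALITY = {"support": 1, "business": 2, "mission-critical": 3}

@dataclass
class Vulnerability:
    cve_id: str
    vendor_severity: str        # severity from the vendor or tracking service
    asset_criticality: str      # criticality of the affected system
    downgrade_reason: str = ""  # must be filled in to lower the priority

def priority(v: Vulnerability) -> int:
    """Higher score means patch sooner."""
    return SEVERITY[v.vendor_severity] * CRITICALITY[v.asset_criticality]

def downgrade_allowed(v: Vulnerability) -> bool:
    """Permit deprioritizing only with a documented basis, reflecting
    the documentation gap GAO identified at TVA."""
    return bool(v.downgrade_reason)

vulns = [
    Vulnerability("CVE-2007-0001", "critical", "mission-critical"),
    Vulnerability("CVE-2007-0002", "important", "support"),
]
for v in sorted(vulns, key=priority, reverse=True):
    print(v.cve_id, priority(v))
```

A model of this kind also gives auditors a record to check: any vulnerability whose local priority is below the vendor's rating should carry a nonempty justification, which is precisely what was missing for the three downgraded vulnerabilities noted above.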
NIST guidance states that organizations should develop formal documented physical security policies and procedures to facilitate the implementation of physical and environmental protection controls. However, TVA’s physical security standards for protection of its assets, including sensitive computer and industrial control equipment, as well as employees, contractors, visitors, and the general public, had been drafted but not approved by management. These standards are intended to provide clear and consistent physical security policy for all nonnuclear facilities. According to TVA Police officials, most sites budget for and implement their own physical security guidance and measures. Finalized physical security standards agencywide would provide consistent guidelines for facilities to make risk-based decisions on implementing these recommendations. Consequently, TVA has less assurance that control systems will be consistently and effectively protected from inadvertent or deliberate misuse including damage or destruction. The objective of system security planning is to improve the protection of IT resources. A system security plan provides a complete and up-to-date overview of the system’s security requirements and describes the controls that are in place—or planned—to meet those requirements. FISMA requires that agency information security programs include subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. OMB Circular A-130 specifies that agencies develop and implement system security plans for major applications and for general support systems and that these plans address policies and procedures for providing management, operational, and technical controls. 
NIST guidance states that minor applications that are not connected to a general support system or major application should be described in a general support system plan that has either a common physical location or is supported by the same organization. Further, TVA policy states that minor applications should be briefly described in a general support system security plan. NIST guidance states that security plans should contain key information needed to select the appropriate security controls, such as the FIPS 199 category and the certification and accreditation status of the connected systems. Plans should also be updated to include the latest security test and evaluation and risk assessment results. TVA had only developed a system security plan that covered two of the six facilities we reviewed, and this plan was incomplete and not up-to-date. The transmission control system security plan, which addressed systems at two transmission control centers, included many elements required by NIST, such as the description of the individuals responsible for security, and addressed management, operational, and technical controls. Although the plan listed interconnected systems, it did not completely address interconnectivity with other systems operated by other organizations. Specifically, it did not include essential information needed to select the appropriate security controls, such as the FIPS 199 category or the certification and accreditation status of the connected systems. Further, the plan was not updated to include the latest security test and evaluation or risk assessment results. According to agency officials, TVA is developing a system security plan for its hydroelectric automation control system as part of its certification and accreditation process. Agency officials stated that this plan will be completed by June 2008. TVA nuclear and fossil facilities had not developed security plans for their control systems. 
Agency officials stated that they were planning to develop security plans and complete the certification and accreditation process for these control systems. The plan for the nuclear facility is scheduled to be completed by June 2008. For the fossil facility, TVA officials stated that they intend to complete a security plan and certification and accreditation activities based on the results of security categorizations that will be completed by September 2008. However, no time frame has been set for completion of the plan or accreditation. Until these activities are completed, TVA cannot ensure that the security requirements have been identified and that the appropriate controls will be in place to protect these critical control systems. FISMA mandates that federal employees and contractors who use agency information systems be provided with periodic training in information security awareness. FISMA also requires agencies to provide appropriate training on information security to personnel who have significant security responsibilities. This training, described in NIST guidance, should inform personnel, including contractors and other users of information systems supporting the operations and assets of an agency, of information security risks associated with their activities and their roles and responsibilities to properly and effectively implement the practices that are designed to reduce these risks. Depending on an employee’s specific security role, training could include specialized topics such as incident detection and response, physical security, or firewall configuration. TVA also has a policy that requires that all employees and others who have access to its corporate network to complete annual security awareness training. The policy requires that employees and contractors who do not complete the training within a set time frame have their network access suspended. 
Although for fiscal year 2007 TVA reported that 98 percent of its employees and contractors completed its annual security awareness training, other shortfalls existed in TVA’s training program. For example, the agency policy of suspending network access for employees who did not complete security awareness training did not apply to control system-specific networks, such as those at the nuclear, hydroelectric, and fossil facilities we reviewed. At these sites, there were no controls in place to enforce completion of the required training by employees using these control systems. In addition, a substantial number of TVA employees who have significant security responsibilities did not complete role-based training in the last fiscal year, and the required training did not include specialized technical topics. In fiscal year 2007, TVA reported that only 25 percent of 197 applicable employees who had significant IT security responsibilities had completed role-based training, compared with 86 percent and 72 percent who reportedly received such training in fiscal years 2005 and 2006, respectively. According to agency officials, training had not been completed primarily due to a lack of staff to provide the training. Furthermore, the role-based training that was required was focused on management and procedural issues. TVA had technical security training available to its information security staff, which comprised approximately 14 of the 197 employees who needed role-based training, but this training was not required. For these 14 staff, TVA reported a 100 percent completion rate for the technical training. At the end of our review, agency officials provided a plan to improve the number of employees completing role-based training and to examine adding technical training to training requirements. The plan is to be completed by July 2008. 
Until this plan is fully implemented, security lapses are more likely to occur and could contribute to information security weaknesses at TVA. A key element of an information security program is ongoing testing and evaluation to ensure that systems are in compliance with policies and that the policies and controls are both appropriate and effective. Testing and evaluation demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies areas of noncompliance and ineffectiveness requiring remediation. Starting in fiscal year 2007, OMB required agencies to discontinue using SP 800-26 and to use NIST SP 800-53A for the assessment of security controls effectiveness when performing periodic security testing and evaluation of their information systems. In addition, TVA policy requires all minor applications to be assigned to a general support system or major application that is tested and evaluated as part of the certification and accreditation process performed every 3 years. TVA did not properly test and evaluate all of its control systems. Although TVA had performed annual self-assessments of the two control systems designated as major applications (transmission and hydro automation control systems) in fiscal year 2007, it did so using outdated NIST guidance contained in SP 800-26, rather than the current guidance in SP 800-53A. Of these two control systems, TVA performed a complete test and evaluation of the security controls on one of the systems—the transmission control system—within the last 3 years. Although TVA officials at the nuclear and fossil facilities considered their plant-level control systems to be minor applications, they were not part of any general support system. As a result, TVA did not appropriately identify, test, or evaluate the effectiveness of the security controls in place for the control systems at these facilities. 
Without appropriate tests and evaluations of all its control systems, the agency has limited assurance that policies and controls are appropriate and working as intended. Additionally, increased risk exists that undetected vulnerabilities could be exploited to allow unauthorized access to these critical systems. A remedial action plan is a key component described in FISMA. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. In its annual FISMA guidance to agencies, OMB requires agencies’ remedial action plans, also known as plans of action and milestones, to include, at a minimum, the resources necessary to correct an identified weakness, the original scheduled completion date, the status of the weakness as completed or ongoing, and key milestones with completion dates. According to TVA policy, the agency should document weaknesses found during security assessments and document any planned remedial actions to correct any deficiencies. TVA did not always address known significant deficiencies in its remedial action plans. The agency had developed a plan of action and milestones for its transmission control system; however, it did not do so for the control systems at the fossil, hydroelectric, or nuclear facilities. In addition, while the agency tracks weaknesses identified by the TVA Inspector General for its transmission control system, it did not include these weaknesses in its plan of action and milestones. Until the agency implements an effective remediation process for all control systems, it will not have assurance that the proper resources will be applied to known vulnerabilities or that those vulnerabilities will be properly mitigated. 
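The minimum plan of action and milestones content that OMB requires (resources needed, scheduled completion date, status, and key milestones) can be expressed as a simple record check. The sketch below is illustrative only, with hypothetical field names and example data; it does not represent TVA's actual tracking system.

```python
# Sketch of a plan of action and milestones (POA&M) record containing the
# minimum elements OMB requires; field names and data are hypothetical.

from datetime import date

REQUIRED = ("weakness", "resources_needed", "scheduled_completion",
            "status", "milestones")

def poam_entry(**fields):
    """Build a POA&M entry, rejecting any entry missing a required field."""
    missing = [f for f in REQUIRED if f not in fields]
    if missing:
        raise ValueError(f"incomplete POA&M entry, missing: {missing}")
    return fields

entry = poam_entry(
    weakness="Control system firewall rule set not reviewed",
    resources_needed="0.5 FTE in the third quarter",
    scheduled_completion=date(2008, 9, 30),
    status="ongoing",  # OMB tracks each weakness as completed or ongoing
    milestones=[("Document rule set", date(2008, 6, 30)),
                ("Management review", date(2008, 8, 31))],
)
print(entry["status"])  # ongoing
```

Enforcing completeness at the point of entry is one way to ensure that weaknesses identified by sources such as the Inspector General cannot be tracked informally outside the plan, the gap noted above for the transmission control system.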
Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they take steps to promptly detect, report, and respond to them before significant damage is done. In addition, analyzing security incidents allows organizations to gain a better understanding of the threats to their information and the costs of their security-related problems. Such analyses can pinpoint vulnerabilities that need to be eliminated so that they will not be exploited again. Incident reports can be used to provide valuable input for risk assessments, can help in prioritizing security improvement efforts, and can illustrate risks and related trends for senior management. FISMA and NIST guidance require that agency information security programs include procedures for detecting, reporting, and responding to security incidents, including reporting them to the U.S. Computer Emergency Readiness Team (US-CERT). Furthermore, NIST guidance prescribes network and host-based intrusion detection systems as a means of protecting systems from the threats that come with increasing network connectivity. TVA had developed incident detection, response, and reporting procedures. However, while the TVA organization responsible for operating its transmission control center had approved incident response and reporting procedures, the agencywide incident response and reporting procedure remained in draft form, although it is currently being used by TVA information security personnel. According to agency officials, the procedure is being revised and finalized to align with incident reporting guidelines developed by US-CERT. Until TVA finalizes these procedures, it cannot be assured that facilities are prepared to respond to and report incidents in an effective manner. 
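As a concrete illustration of the host-based intrusion detection that NIST guidance prescribes, the sketch below raises an alert when a host accumulates repeated failed logins, the kind of event an incident response process would escalate and report to US-CERT. The log format, host names, and alert threshold are assumptions for illustration, not TVA's configuration.

```python
# Minimal host-based detection sketch: alert on hosts with repeated
# failed logins. Log format, host names, and threshold are illustrative
# assumptions.

from collections import Counter

THRESHOLD = 5  # failed attempts before an alert is raised

def detect(log_lines):
    """Return hosts whose failed-login count reaches the alert threshold."""
    failures = Counter()
    for line in log_lines:
        # assumed record format: "<timestamp> <host> <result>"
        _, host, result = line.split()
        if result == "FAIL":
            failures[host] += 1
    return [host for host, count in failures.items() if count >= THRESHOLD]

log = ["t1 scada01 FAIL"] * 5 + ["t2 hmi02 OK"]
print(detect(log))  # ['scada01']
```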
Contingency planning includes developing and testing plans and activities so that when unexpected events occur, critical operations can continue without disruption or can be promptly resumed and that critical and sensitive data are protected. If contingency planning controls are inadequate, even relatively minor interruptions can result in a loss of system function and expensive recovery efforts. For some TVA control systems, system interruptions or malfunctions could result in loss of power, injuries, or loss of life. Given these severe implications, it is critical that an entity have in place (1) procedures for protecting information systems and minimizing the risk of unplanned interruptions and (2) a plan to recover critical operations should interruptions occur. To determine whether recovery plans will work as intended, they should be tested periodically in disaster-simulation exercises. FISMA requires that each federal agency implement an information security program that includes plans and procedures to ensure continuity of operations for information systems that support the operation and assets of the agency. TVA had taken steps to address contingency planning for physical incidents such as fire, explosion, and natural disasters, and for other events such as cyber incidents. At the facilities we reviewed, staff performed regular drills and tests to address physical contingencies. According to agency officials, in many cases, these same drills are applicable to cyber incidents that could have physical consequences. In addition, the agency had developed backup procedures for key information resources, including those that support its control systems. In TVA’s transmission control centers, written backup procedures existed; however, in the hydroelectric, coal, and gas turbine facilities we reviewed, the backup procedures were not documented. 
Until TVA consistently documents backup procedures across all of its facilities, it has limited assurance that all TVA facilities will be able to respond appropriately in the event of a physical or cyber incident. TVA’s power generation and transmission critical infrastructures are important to the economy of the southeastern United States and the safety, security, and welfare of millions of people. Control systems are essential to the operation of these infrastructures; however, multiple information security weaknesses existed in both the agency’s corporate network and individual control systems networks and devices. As a result, although TVA had implemented multiple layers of information security controls to protect its critical infrastructures, such as segmenting control systems networks from the corporate network, in many cases, these layers were not as effective as intended. An underlying cause for these weaknesses is that the agency had not consistently implemented its information security program throughout the agency. If TVA does not take sufficient steps to secure its control systems and implement an information security program, it risks not being able to respond properly to a major disruption that is the result of an intended or unintended cyber incident, which could affect the agency’s operations and its customers. To improve the implementation of information security program activities for the control systems governing TVA’s critical infrastructures, we are recommending that the Chief Executive Officer of TVA take the following 19 actions:

Establish a formal, documented configuration management process for changes to software governing control systems at TVA hydroelectric and fossil facilities.

Establish a patch management policy for all control systems.

Establish a complete and accurate inventory of agency information systems that includes each TVA control system either as a major application, or as a minor application to a general support system. 
Categorize and assess the risk of all control systems.

Update the transmission control system risk assessment to include the risk associated with vulnerabilities identified during security testing and evaluations and self-assessments.

Revise TVA information security policies and procedures to specifically mention their applicability to control systems.

Ensure that any division-level information security policies and procedures established to address industry regulations or guidance are consistent with, refer to, and are fully integrated with TVA corporate security policy and federal guidance.

Revise the intergroup agreements between TVA’s Transmission and Reliability Organization and its fossil and hydroelectric business units to explicitly define cyber security roles and responsibilities.

Revise TVA patch management policy to clarify its applicability to control systems and network infrastructure devices, provide guidance to prioritize vulnerabilities based on criticality of IT resources, and define situations where it would be appropriate to upgrade or downgrade a vulnerability’s priority from that given by industry standard sources.

Finalize draft TVA physical security standards.

Complete system security plans that cover all control systems in accordance with NIST guidance and include all information required by NIST in security plans, such as the FIPS 199 category and the certification and accreditation status of connected systems.

Enforce a process to ensure that employees who do not complete required security awareness training cannot access control system-specific networks.

Ensure that all designated employees complete role-based security training and that this training includes relevant technical topics.

Develop and implement a TVA policy to ensure that periodic (at least annual) assessments of control effectiveness use NIST SP 800-53A for major applications and general support systems. 
Perform assessments of control effectiveness following the methodology in NIST SP 800-53A.

Develop and implement remedial action plans for all control systems.

Include the results of inspector general assessments in the remedial action plan for the transmission control system.

Finalize the draft agencywide cyber incident response procedure.

Document backup procedures at all control system facilities.

In a separate report designated “Limited Official Use Only,” we are also making 73 recommendations to the Chief Executive Officer of TVA to address weaknesses in information security controls. In written comments on a draft of this report, the Executive Vice President of Administrative Services for TVA agreed on the importance of protecting critical infrastructures and described several actions TVA has taken to strengthen information security for control systems, such as centralizing responsibility for cyber security within the agency. The Executive Vice President concurred with all 19 recommendations in this report and provided information on steps the agency was taking to implement the recommendations. A copy of the agency’s response is included in appendix II. Additionally, in a meeting with GAO officials, TVA officials expressed concerns about the level of detail in this report. Based on that meeting and subsequent discussions with agency officials, we have modified the wording in this report to address the agency’s concerns. The agency also provided technical comments that we have incorporated where appropriate. We are sending copies of this report to OMB, the TVA Inspector General, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions on matters discussed in this report, please contact Gregory Wilshusen at (202) 512-6244 or Nabajyoti Barkakati at (202) 512-4499, or by e-mail at [email protected] and [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objective of our review was to determine if the Tennessee Valley Authority (TVA) has effectively implemented appropriate information security practices for the control systems used to operate its critical infrastructure. We conducted our review using our Federal Information System Controls Audit Manual, a methodology for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized data. We focused our work on the control systems located at six TVA facilities. These facilities were selected to provide a cross-section of the variety of control systems by type of generation facility (coal, combustion turbine, hydroelectric, and nuclear) and function (generation and transmission). To evaluate the effectiveness of TVA’s information security practices, we conducted tests and observations using federal guidance, checklists, and vendor best practices for information security. Where federal requirements or guidelines, including National Institute of Standards and Technology (NIST) guidance, were applicable, we used them to assess the extent to which TVA had complied with specific requirements. Specifically, we used NIST guidance for the security of federal information systems. 
For example, we analyzed the password hashing implementation used for identification and reviewed the complexity and expiration of passwords on servers to determine if strong password management was enforced; examined user and application system authorizations to determine whether they had more permissions than necessary to perform their assigned functions; analyzed system configurations to determine whether sensitive data were protected; observed whether system security software was configured to log successful system changes; inspected key servers, workstations, and network infrastructure devices to determine whether critical patches had been installed or were up-to-date; tested and observed physical access controls to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft; and synthesized the information obtained about networks and applications to develop an accurate understanding of overall network and system architecture. The Federal Information Security Management Act of 2002 (FISMA) establishes key elements of an effective agencywide information security program. 
We evaluated TVA’s implementation of these key elements by reviewing TVA’s system inventory to determine whether it contained an accurate and comprehensive list of control systems; analyzing risk assessments for key TVA systems to determine whether risks and threats were documented; examining security plans to determine if management, operational, and technical controls were in place or planned and whether these security plans were updated; analyzing TVA policies, procedures, practices, and standards to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems; inspecting training records for personnel with significant responsibilities to determine if they received training commensurate with those responsibilities; analyzing test plans and test results for key TVA systems to determine whether management, operational, and technical controls were adequately tested at least annually and were based on risk; evaluating TVA’s process to correct weaknesses and determining whether remedial action plans complied with federal guidance; and examining contingency plans for key TVA systems to determine whether those plans had been tested or updated. To conduct our work, we reviewed and analyzed relevant documentation and held discussions with key security representatives, system administrators, and management officials to determine whether information system controls were in place, adequately designed, and operating effectively. We also reviewed previous reports issued by the TVA Inspector General’s Office. We conducted this performance audit from March 2007 to April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individuals named above, Nancy DeFrancesco and Lon Chin, Assistant Directors; Angela Bell; Bruce Cain; Mark Canter; Heather Collins; West Coile; Kirk Daubenspeck; Neil Doherty; Vijay D’Souza; Nancy Glover; Sairah Ijaz; Myong Kim; Stephanie Lee; Lee McCracken; Duc Ngo; Sylvia Shanks; John Spence; and Chris Warweg made key contributions to this report. | Securing the control systems that regulate the nation's critical infrastructures is vital to ensuring our economic security and public health and safety. The Tennessee Valley Authority (TVA), a federal corporation and the nation's largest public power company, generates and distributes power in an area of about 80,000 square miles in the southeastern United States. GAO was asked to determine whether TVA has implemented appropriate information security practices to protect its control systems. To do this, GAO examined the security practices in place at several TVA facilities; analyzed the agency's information security policies, plans, and procedures against federal law and guidance; and interviewed agency officials who are responsible for overseeing TVA's control systems and their security. TVA has not fully implemented appropriate security practices to secure the control systems and networks used to operate its critical infrastructures. Both its corporate network infrastructure and control systems networks and devices were vulnerable to disruption. The corporate network was interconnected with control systems networks GAO reviewed, thereby increasing the risk that security weaknesses on the corporate network could affect those control systems networks. 
On TVA's corporate network, certain individual workstations lacked key software patches and had inadequate security settings, and numerous network infrastructure protocols and devices had limited or ineffective security configurations. In addition, the intrusion detection system had significant limitations. On control systems networks, firewalls reviewed were either inadequately configured or had been bypassed, passwords were not effectively implemented, logging of certain activity was limited, configuration management policies for control systems software were inconsistently implemented, and servers and workstations lacked key patches and effective virus protection. In addition, physical security at multiple locations did not sufficiently protect critical control systems. As a result, systems that operate TVA's critical infrastructures are at increased risk of unauthorized modification or disruption by both internal and external threats. An underlying reason for these weaknesses is that TVA had not consistently implemented significant elements of its information security program. Although TVA had developed and implemented program activities related to contingency planning and incident response, it had not consistently implemented key activities related to developing an inventory of systems, assessing risk, developing policies and procedures, developing security plans, testing and monitoring the effectiveness of controls, completing appropriate training, and identifying and tracking remedial actions. For example, the agency lacked a complete inventory of its control systems and had not categorized all of its control systems according to risk, thereby limiting assurance that these systems were adequately protected. Agency officials stated that they plan to complete these risk assessments and related activities but have not established a completion date. Key information security policies and procedures were also in draft or under revision. 
Additionally, the agency's patch management process lacked a way to effectively prioritize vulnerabilities. TVA had only completed one system security plan, and another plan was under development. The agency had also tested the effectiveness of its control systems' security using outdated federal guidance, and many control systems had not been tested for security. In addition, only 25 percent of relevant agency staff had completed required role-based security training in fiscal year 2007. Furthermore, while the agency had developed a process to track remedial actions for information security, this process had not been implemented for the majority of its control systems. Until TVA fully implements these security program activities, it risks a disruption of its operations as a result of a cyber incident, which could impact its customers. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Solar energy can be used to heat, cool, and power homes and businesses with a variety of technologies that convert sunlight into usable energy. Examples of solar energy technologies include photovoltaics, concentrated solar power, and solar hot water. Solar cells, also known as photovoltaic cells, convert sunlight directly into electricity. Photovoltaic technologies are used in a variety of applications. They can be found on residential and commercial rooftops to power homes and businesses; utility companies use them for large power stations, and they power space satellites, calculators, and watches. Concentrated solar power uses mirrors or lenses to concentrate sunlight and produce intense heat, which is used to generate electricity via a thermal energy conversion process; for example, by using concentrated sunlight to heat a fluid, boil water with the heated fluid, and channel the resulting steam through a turbine to produce electricity. Most concentrated solar power technologies are designed for utility-scale operations and are connected to the electricity-transmission system. Solar hot water technologies use a collector to absorb and transfer heat from the sun to water, which is stored in a tank until needed. Solar hot water systems can be found in residential and industrial buildings. Innovation in solar energy technology takes place across a spectrum of activities, which we refer to as technology advancement activities, and which include basic research, applied research, demonstration, and commercialization. 
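The concentrated solar power chain described above (mirrors concentrate sunlight, the captured heat raises steam, and the steam drives a turbine) can be sketched as a product of stage efficiencies. The efficiency values below are illustrative assumptions for the sketch, not figures from this report.

```python
def csp_electric_output_mw(solar_input_mw, collector_eff=0.6, thermal_to_electric_eff=0.35):
    """Illustrative net electric output of a concentrated solar power plant.

    Each stage of the chain (mirrors -> heated fluid -> steam turbine) loses
    energy, so output is the input scaled by the assumed stage efficiencies.
    The default efficiency values are assumptions chosen for illustration.
    """
    return solar_input_mw * collector_eff * thermal_to_electric_eff

# Under these assumed efficiencies, 100 MW of concentrated sunlight
# yields roughly 21 MW of electricity.
print(csp_electric_output_mw(100.0))
```

Actual collector and turbine efficiencies vary widely by plant design; the sketch only illustrates that each conversion stage compounds losses.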
For purposes of this report, we defined basic research to include efforts to explore and define scientific or engineering concepts or to investigate the nature of a subject without targeting any specific technology; applied research includes efforts to develop new scientific or engineering knowledge to create new and improved technologies; demonstration activities include efforts to operate new or improved technologies to collect information on their performance and assess readiness for widespread use; and commercialization efforts transition technologies to commercial applications by bridging the gap between research and demonstration activities and venture capital funding and marketing activities. These technology advancement activities can produce benefits for society as a whole but not necessarily for the firms that invested in the activities (Congressional Budget Office, Federal Financial Support for the Development and Production of Fuels and Energy Technologies (Washington, D.C.: March 2012)). For example, basic research can create general scientific knowledge that is not itself subject to commercialization but that can lead to multiple applications that private companies can produce and sell. As activities get closer to the commercialization stage, the private sector may increase its support because its return on investment increases. We identified 65 solar-related initiatives with a variety of key characteristics at six federal agencies. Over half of the 65 initiatives supported solar projects exclusively; the remaining initiatives supported solar energy technologies in addition to other renewable energy technologies. The initiatives demonstrated a variety of key characteristics, including focusing on different types of solar technologies and supporting a range of technology advancement activities from basic research to commercialization, with an emphasis on applied research and demonstration activities. 
Additionally, the initiatives supported several types of funding recipients, including universities, industry, nonprofit organizations, and federal labs and researchers, primarily through grants and contracts. Agency officials reported that they obligated around $2.6 billion for the solar projects in these initiatives in fiscal years 2010 and 2011. In fiscal years 2010 and 2011, six federal agencies—DOD, DOE, EPA, NASA, NSF, and USDA—undertook 65 initiatives that supported solar energy technology, at least in part. (See app. II for a full list of the initiatives.) Of these initiatives, 35 of 65 (54 percent) supported solar projects exclusively and 30 (46 percent) also supported projects that were not solar. For example, in fiscal years 2010 and 2011, DOE's Solar Energy Technologies Program—Photovoltaic Research and Development initiative had 263 projects, all of which focused on solar energy. In contrast, in fiscal years 2010 and 2011, DOE's Hydrogen and Fuel Research and Development initiative—which supports wind and other renewable sources that could be used to produce hydrogen—had 209 projects, 26 of which were solar projects. Although initiatives support solar energy technologies, in a given year, they might not support any solar projects. For example, NSF officials noted that the agency funds research across all fields and disciplines of science and engineering and that individual initiatives invite proposals for projects across a broad field of research, which includes solar-related research in addition to other renewable energy research. However, in any given year, NSF may not fund proposals that address solar energy because either no solar proposals were submitted or the submitted solar-related proposals were not deemed meritorious for funding based upon competitive, merit-based reviews. 
Although more than half of the agencies’ initiatives supported solar energy projects exclusively, the majority of projects supported by all 65 initiatives were not focused on solar. As shown in table 1, of the 4,996 total projects active in fiscal years 2010 and 2011 under the 65 initiatives, 1,506 (30 percent) were solar projects, and 3,490 (70 percent) were not solar projects. Agencies’ solar-related initiatives supported different types of solar energy technologies. According to agency officials responding to our questionnaire, 47 of the 65 initiatives supported photovoltaic technologies, and 18 supported concentrated solar power; some initiatives supported both of these technologies or other solar technologies. For example, NSF’s CHE-DMR-DMS Solar Energy Initiative (SOLAR) supports both photovoltaic and concentrated solar power technologies, including a project that is developing hybrid organic/inorganic materials to create ultra-low-cost photovoltaic devices and to advance solar concentrating technologies. These initiatives supported solar energy technologies through multiple technology advancement activities, ranging from basic research to commercialization. As shown in figure 1, five of the six agencies supported at least three of the four technology advancement activities we examined, and four of the six supported all four. Our analysis showed that of the 65 initiatives, 20 initiatives (31 percent) supported a single type of technology advancement activity; 45 of the initiatives (69 percent) supported more than one type of technology advancement activity; and 4 of those 45 initiatives (6 percent) supported all four. For example, NASA’s Solar Probe Plus Technology Development initiative—which tests the performance of solar cells in elevated temperature and radiation environments such as near the sun— supported applied research exclusively. 
In contrast, NASA’s Small Business Innovation Research/Small Business Technology Transfer Research initiative—which seeks high-technology companies to participate in government-sponsored research and development efforts critical to NASA’s mission—supported all four technology advancement activities. The technology advancement activities supported by the initiatives were applied research (47 initiatives), demonstration (41 initiatives), basic research (27 initiatives), and commercialization (17 initiatives). The initiatives supported these technology advancement activities by providing funding to four types of recipients: universities, industry, nonprofit organizations, and federal laboratories and researchers. The initiatives most often supported universities and industry. In many cases, initiatives provided funding to more than one type of recipient. Specifically, our analysis showed that of the 65 initiatives, 23 of the initiatives (35 percent) supported one type of recipient; 21 of the initiatives (32 percent) provided funding to at least two types of recipients; 17 initiatives (26 percent) supported three types; and 4 initiatives (6 percent) supported all four. In two cases, agency officials reported that their initiatives supported “other” types of recipients, which included college students and military installations. Initiatives often supported a variety of recipient types, but individual agencies more often supported one or two types. As shown in figure 2, DOE’s initiatives most often supported federal laboratories and researchers; DOD’s most often supported industry recipients; NASA’s supported federal laboratories and industry equally; NSF’s supported universities exclusively. 
For example, NASA’s Small Business Innovation Research/Small Business Technology Transfer Research initiative provided contracts to industry to participate in government-sponsored research and development for advanced photovoltaic technologies to improve efficiency and reliability of solar power for space exploration missions. NSF’s Emerging Frontiers in Research and Innovation initiative provided grants to universities for, among other purposes, promoting breakthroughs in computational tools and intelligent systems for large-scale energy storage suitable for renewable energy sources such as solar energy. Federal solar-related initiatives provided funding to these recipients through multiple mechanisms, often using more than one mechanism per initiative. As shown in figure 3, the initiatives primarily used grants and contracts. Of the 65 initiatives, 27 awarded grants, and 36 awarded contracts; many awarded both. Agency officials also reported funding solar projects via cooperative agreements, loans, and other mechanisms. Agency officials reported that the 65 initiatives as a group used multiple funding mechanisms, but we found that individual agencies tended to use primarily one or two funding mechanisms. For example, USDA exclusively used grants, while DOD tended to use contracts. DOE reported using grants and cooperative agreements almost equally. For example, DOE’s Solar ADEPT initiative, an acronym for “Solar Agile Delivery of Electrical Power Technology,” awards cooperative agreements to universities, industry, nonprofit organizations, and federal laboratories and researchers. Through a cooperative agreement, the initiative supported a project at the University of Colorado at Boulder that is developing advanced power conversion components that can be integrated into individual solar panels to improve energy yields. According to the project description, the power conversion devices will be designed for use on any type of solar panel. 
The University of Colorado at Boulder is partnering with industry and DOE’s National Renewable Energy Laboratory on this project. In responding to our questionnaire, officials from the six agencies reported that they obligated around $2.6 billion for the 1,506 solar projects in fiscal years 2010 and 2011. These obligations data represented a mix of actual obligations and estimates. Actual obligations were provided for both years for 51 of 65 initiatives. Officials provided estimated obligations for 12 initiatives for at least 1 of the 2 years, and officials from another 2 initiatives were unable to provide any obligations data. Those officials who provided estimates or were unable to provide obligations data noted that the accuracy or the availability of the obligations data was limited because isolating the solar activities from the overall initiative obligations can be difficult. (See app. II for a full list of the initiatives and their related obligations.) As shown in table 2, over 90 percent of the funds (about $2.3 billion of $2.6 billion) were obligated by DOE. The majority of DOE’s obligations (approximately $1.7 billion) were obligated as credit subsidy costs—the government’s estimated net long-term cost, in present value terms, of the loans—as part of Title XVII Section 1705 Loan Guarantee Program from funds appropriated by Congress under the American Recovery and Reinvestment Act (Recovery Act). Even excluding the Loan Guarantee Program funds, DOE obligated $661 million, which is more than was obligated by the other five agencies combined. The 65 solar-related initiatives are fragmented across six agencies and many overlap to some degree, but agency officials reported a number of coordination activities to avoid duplication. 
We found that many initiatives overlapped in the key characteristics of technology advancement activities, types of technologies, types of funding recipients, or broad goals; however, these areas of overlap do not necessarily lead to duplication of efforts because the initiatives sometimes differ in meaningful ways or leverage the efforts of other initiatives, and we did not find clear evidence of duplication among initiatives. Officials from most initiatives reported that they engage in a variety of coordination activities with other solar-related initiatives, at times specifically to avoid duplication. The 65 solar-related initiatives are fragmented in that they are implemented by various offices across six agencies and address the same broad area of national need. In March 2011, we reported that fragmentation has the potential to result in duplication of resources. However, such fragmentation is, by itself, not an indication that unnecessary duplication of efforts or activities exists. For example, in our March 2011 report, we stated that there can be advantages to having multiple federal agencies involved in a broad area of national need— agencies can tailor initiatives to suit their specific missions and needs, among other things. In particular, DOD is able to focus its efforts on solar energy technologies that serve its energy security mission, among other things, and NASA is able to focus its efforts on solar energy technologies that aid in aeronautics and space exploration, among other things. As table 3 illustrates, we found that many initiatives overlap because they support similar technology advancement activities and types of funding recipients. For example, initiatives that support basic and applied research most often fund universities, and those initiatives that support demonstration and commercialization activities most often fund industry. 
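The overlap screen described in this section — two initiatives overlap when they support similar technology advancement activities, types of technologies, and eligible funding recipients — can be sketched as set intersections over initiative characteristics. The three initiative records below are invented for illustration and are not actual GAO data.

```python
from itertools import combinations

# Hypothetical initiative records: each maps a characteristic to the set of
# values the initiative supports. These examples are invented, not from GAO.
initiatives = {
    "A": {"activities": {"applied research"},
          "technologies": {"photovoltaic"},
          "recipients": {"universities"}},
    "B": {"activities": {"applied research", "demonstration"},
          "technologies": {"photovoltaic"},
          "recipients": {"universities", "industry"}},
    "C": {"activities": {"commercialization"},
          "technologies": {"concentrated solar"},
          "recipients": {"industry"}},
}

def overlaps(x, y):
    """Two initiatives overlap if they share at least one value in every characteristic."""
    return all(x[k] & y[k] for k in ("activities", "technologies", "recipients"))

pairs = [(a, b) for (a, xa), (b, xb) in combinations(initiatives.items(), 2)
         if overlaps(xa, xb)]
print(pairs)  # only A and B share a value in all three characteristics
```

As the report notes, such characteristic-level overlap is only a screen: initiatives flagged this way may still differ in meaningful ways or complement one another rather than duplicate effort.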
Almost all of the initiatives overlapped to some degree with at least one other initiative in that they support broadly similar technology advancement activities, types of technologies, and eligible funding recipients. Twenty-seven initiatives support applied research for photovoltaic technologies by universities. For example, NSF’s Engineering Research Center for Quantum Energy and Sustainable Solar Technologies at Arizona State University pursues cost-competitive photovoltaic technologies with sustained market growth. The Air Force’s Space Propulsion and Power Generation Research initiative partners with various universities to develop improved methods for powering spacecraft, including solar cell technologies. Sixteen initiatives support demonstration activities focused on photovoltaic technologies by federal laboratories and researchers. For example, NASA’s High-Efficiency Space Power Systems initiative conducts activities at NASA’s Glenn Research Center to develop technologies to provide low cost and abundant power for deep space missions, such as highly reliable solar arrays, to enable a crewed mission to explore a near Earth asteroid. DOE’s Solar Energy Technologies Program (SETP), which includes the Photovoltaic Research and Development initiative, works with national laboratories such as the National Renewable Energy Laboratory, Sandia National Laboratories, Brookhaven National Laboratory, and Oak Ridge National Laboratory to advance a variety of photovoltaic technologies to enable solar energy to be as cost competitive as traditional energy sources by 2015. Seven initiatives supported applied research on concentrated solar power technologies by industry. For example, DOE’s SETP Concentrated Solar Power subprogram, which focuses on reducing the cost of and increasing the use of solar power in the United States, funded a company to develop the hard coat on reflective mirrors that is now being used in concentrated solar power applications. 
In addition, DOD’s Fast Access Spacecraft Testbed Program, which concluded in March 2011, funded industry to demonstrate a suite of critical technologies including high-efficiency solar cells, sunlight concentrating arrays, large deployable structures, and ultra-lightweight solar arrays. Additionally, 40 of the 65 initiatives overlap with at least one other initiative in that they supported similar broad goals, types of technologies, and technology advancement activities. Providing lightweight, portable energy sources. Officials from several initiatives within DOD reported that their initiatives supported demonstration activities with the broad goal of providing lightweight, portable energy sources for military applications. For example, the goal of the Department of the Army’s Basic Solar Power Generation Research initiative is to determine the feasibility and applicability of lightweight flexible, foldable solar panels for remote site power generation in tactical battlefield applications. Similarly, the goal of the Office of the Secretary of Defense’s Engineered Bio-Molecular Nano-Devices and Systems initiative is to provide a low-cost, lightweight, portable photovoltaic device to reduce the footprint and logistical burden on the warfighter. Artificial photosynthesis. Several initiatives at DOE and NSF reported having the broad goal of supporting artificial photosynthesis, which converts sunlight, carbon dioxide, and water into a fuel, such as hydrogen. For example, one of DOE’s Energy Innovation Hubs, the Fuels from Sunlight Hub, supports basic research to develop an artificial photosynthesis system with the specific goals of (1) understanding and designing catalytic complexes or solids that generate chemical fuel from carbon dioxide and/or water; (2) integrating all essential elements, from light capture to fuel formation components, into an effective system; and (3) providing a pragmatic evaluation of the system under development. 
NSF’s Catalysis and Biocatalysis initiative has a specific goal of developing new materials that will be catalysts for converting sunlight into usable energy for direct use, or for conversion into electricity, or into fuel for use in fuel cell applications. Integrating solar energy into the grid. Officials from several initiatives reported focusing on demonstration activities for technologies with the broad goal of integrating solar or renewable energies into the grid or onto military bases. For example, DOE’s Smart Grid Research and Development initiative has a goal of developing smart grid technologies, particularly those that help match supply and demand in real time, to enable the integration of renewable energies, including solar energy, into the grid by helping stabilize variability and facilitate the safe and cost-effective operation by utilities and consumers. The goal of this initiative is to achieve a 20 percent improvement in the ratio of the average power supplied to the maximum demand for power during a specified period by 2020. DOD’s Installation Energy Research initiative has a goal of developing better ways to integrate solar energy into a grid system, thereby optimizing the benefit of renewable energy sources. Some initiatives may overlap on key characteristics such as technology advancement activities, types of technologies, types of recipients, or broad goals, but they also differ in meaningful ways that could result in specific and complementary research efforts, which may not be apparent when analyzing the characteristics. For example, an Army official told us that both the Army and Marine Corps were interested in developing a flexible solar substrate, which is a photovoltaic panel laminated onto fabric that can be rolled up and carried in a backpack. 
The Army developed technology that included a battery through its initiative, while the Marine Corps, through a separate initiative, altered the Army’s technology to create a flexible solar substrate without a battery. Other initiatives may also overlap on key characteristics, but the efforts undertaken by their respective projects may complement each other rather than result in duplication. For example, DOE officials told us that one solar company may receive funding from multiple federal initiatives for different components of a larger project, thus simultaneously supporting a common goal without providing duplicative support. While we did not find clear instances of duplicative initiatives, it is possible that there are duplicative activities among the initiatives that could be consolidated or resolved through enhanced coordination across agencies and at the initiative level. Also, it is possible that there are instances in which recipients receive funding from more than one federal source or that initiatives may fund some activities that would have otherwise sought and received private funding. Because it was beyond the scope of this work to look at the vast number of activities and individual awards that are encompassed in the initiatives we evaluated, we were unable to rule out the existence of any such duplication of activities or funding. Officials from 57 of the 65 initiatives (88 percent) reported coordinating with other solar-related initiatives. Coordination is important because, as we have previously reported, a lack of coordination can waste scarce funds and limit the overall effectiveness of the federal effort. We have also previously reported that coordination across programs may help address fragmentation, overlap, and duplication. Officials from nearly all initiatives that we identified as overlapping in their broad goals, types of technologies, and technology advancement activities, reported coordinating with other solar-related initiatives. 
In October 2005, we identified key practices that can help enhance and sustain federal agency coordination, such as (1) establishing joint strategies, which help align activities, core processes, and resources to accomplish a common outcome; (2) developing mechanisms to evaluate and report on the progress of achieving results, which allow agencies to identify areas for improvement; (3) leveraging resources, which helps obtain additional benefits that would not be available if agencies or offices were working separately; and (4) defining a common outcome, which helps overcome differences in missions, cultures, and established ways of doing business. Agency officials at solar-related initiatives reported coordination activities that are consistent with these key practices, as described below. Some agency officials reported undertaking formal activities within their own agency to coordinate the efforts of multiple initiatives. For example: Establishing a joint strategy. NSF initiatives reported participating in an Energy Working Group, which includes initiatives in the agency’s Directorates for Mathematical and Physical Sciences and for Engineering. Officials from initiatives we identified as overlapping reported participating in the Energy Working Group. NSF formed this group to initiate coordination of energy-related efforts between the two directorates, including solar efforts, and tasked it with establishing a uniform clean, sustainable energy strategy and implementation plan for the agency. Developing mechanisms to monitor, evaluate, and report results. DOD officials from initiatives in the Army, Marine Corps, and Navy that we identified as overlapping reported they participated in the agency’s Energy and Power Community of Interest. The goal of this group is to coordinate the R&D activities within DOD. 
The group is scheduled to meet every quarter, but an Army official told us the group has been meeting every 3 to 4 weeks recently to produce R&D road maps and to identify any gaps in energy and power R&D efforts that need to be addressed. Because of the information sharing that occurs during these meetings, the official said the risk of such duplication of efforts across initiatives within DOD is minimized. In responding to our questionnaire, agency officials also reported engaging in formal activities across agencies to coordinate the efforts of multiple initiatives. For example: Leveraging resources. The Interagency Advanced Power Group (IAPG), which includes the Central Intelligence Agency, DOD, DOE, NASA, and the National Institute of Standards and Technology, is a federal membership organization that was established in the 1950s to streamline energy efforts across the government and to avoid duplicating research efforts. A number of smaller working groups were formed as part of this effort, including the Renewable Energy Conversion Working Group, which includes the coordination of solar efforts. The working groups are to meet at least once each year, but according to a DOD official, working group members often meet more often than that in conjunction with outside conferences and workshops. The purpose of the meetings is to present each agency’s portfolio of research efforts and to inform and ultimately leverage resources across the participating agencies. According to IAPG documents, group activities allow agencies to identify and avoid duplication of efforts. Several of the initiatives that we identified as overlapping also reported participating in the IAPG. Leveraging resources and defining a common outcome. 
DOE’s SETP in the Office of Energy Efficiency and Renewable Energy (EERE) coordinates with DOE’s Office of Science and the Advanced Research Projects Agency-Energy (ARPA-E) through the SunShot Initiative, which, according to SunShot officials, was established expressly to prevent duplication of efforts while maximizing agencywide impact on solar energy technologies. The goal of the SunShot Initiative is to reduce the total installed cost of solar energy systems by 75 percent. SunShot officials said program managers from all three offices participate on the SunShot management team, which holds “brain-storming” meetings to discuss ideas for upcoming funding announcements and subsequently vote on proposed funding announcements. Officials from other DOE offices and other federal agencies are invited to participate, with coordination occurring as funding opportunities arise in order to leverage resources. Officials said meetings may include as few as 25 or as many as 85 attendees, depending on the type of project and the expertise required of the attending officials. Additionally, DOE and NSF coordinate through the SunShot Initiative on the Foundational Program to Advance Cell Efficiency (F-PACE), which identifies and funds solar device physics and photovoltaic technology research and development that will improve photovoltaic cell performance and reduce module cost for grid-scale commercial applications. The initiatives that reported participating in SunShot activities also included many that we found to be overlapping. Developing joint strategies; developing mechanisms to monitor, evaluate, and report results; and defining a common outcome. The National Nanotechnology Initiative (NNI), an interagency program, which includes DOD, DOE, NASA, NSF, and USDA, among others, was established to coordinate the nanotechnology-related activities across federal agencies that fund nanoscale research or have a stake in the outcome of this research. 
The NNI is directed to (1) establish goals, priorities, and metrics for evaluation for federal nanotechnology research, development, and other activities; (2) invest in federal R&D programs in nanotechnology and related sciences to achieve these goals; and (3) provide for interagency coordination of federal nanotechnology research, development, and other activities. The NNI implementation plan states that the NNI will maximize the federal investment in nanotechnology and avoid unnecessary duplication of efforts. NNI includes a subgroup that focuses on nanotechnology for solar energy collection and conversion. Specifically, this subgroup is to (1) improve photovoltaic solar electricity generation with nanotechnology, (2) improve solar thermal energy generation and conversion with nanotechnology, and (3) improve solar-to-fuel conversions with nanotechnology. In addition to the coordination efforts above, officials reported through our questionnaire that their agencies coordinate through discussions with other agency officials or as part of the program and project management and review processes. Some officials said such discussions and reviews among officials occur explicitly to determine whether there is duplication of funding occurring. For example, SETP projects include technical merit reviews, which include peer reviewers from outside of the federal government, as well as a federal review panel composed of officials from several agencies. Officials from SETP also participate in the technical merit reviews of other DOE offices’ projects. ARPA-E initiatives also go through a review process that includes federal officials and independent experts. DOE officials told us that at an ARPA-E High Energy Advanced Thermal Storage review meeting, an instance of potential duplicative funding was found with an SETP project. Funding of the project through SETP was subsequently removed because of the ARPA-E review process, and no duplicative funds were expended.
In addition to coordinating to avoid duplication, officials from 59 of the 65 initiatives (91 percent) reported that they determine whether applicants have received other sources of federal funding for the project for which they are applying. Twenty-one of the 65 initiatives (32 percent) further reported that they have policies that either prohibit or permit recipients from receiving other sources of federal funding for projects. Some respondents to our questionnaire said it is part of their project management process to follow up with funding recipients on a regular basis to determine whether they have subsequently received other sources of funding. For example, DOE’s ARPA-E prohibits recipients from receiving duplicative funding from either public or private sources, and requires disclosure of other sources of funding both at the time of application and on a quarterly basis throughout the performance of the award. Even if an agency requires that such funding information be disclosed on applications, applicants may choose not to disclose it. In fact, it was recently discovered that a university researcher did not identify other sources of funding on his federal applications, as required, and accepted funding for the same research on solar conversion of carbon dioxide into hydrocarbons from both NSF and DOE. Ultimately, the professor was charged with and pleaded guilty to wire fraud, false statements, and money laundering in connection with the federal research grant. We provided DOD, DOE, EPA, NASA, NSF, and USDA with a draft of this report for review and comment. USDA generally agreed with the overall findings of the report. NASA and NSF provided technical or clarifying comments, which we incorporated as appropriate. DOD, DOE, and EPA indicated that they had no comments on the report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the Secretaries of Agriculture, Defense, and Energy; the Administrators of EPA and NASA; the Director of NSF; the appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of our report were to identify (1) solar-related initiatives supported by federal agencies in fiscal years 2010 and 2011 and key characteristics of those initiatives and (2) the extent of fragmentation, overlap, and duplication, if any, among federal solar-related initiatives, as well as the extent of coordination among these initiatives. To inform our objectives, we reviewed a February 2012 GAO report that identified federal agencies’ renewable energy initiatives, which included solar-related initiatives, and examined the federal roles the agencies’ initiatives support. That report identified nearly 700 initiatives that were implemented in fiscal year 2010 across the federal government, of which 345 initiatives supported solar energy. For purposes of this report, we only included those solar-related initiatives that we determined were focused on research and development (R&D) and commercialization, which we defined as follows: Research and development. Efforts ranging from defining scientific concepts to those applying and demonstrating new and improved technologies. Commercialization. Efforts to bridge the gap between research and development activities and the marketplace by transitioning technologies to commercial applications.
We did not include those initiatives that focused solely on deployment activities, which include efforts to facilitate or achieve widespread use of existing technologies either in the commercial market or for nonmarket uses such as defense, through their construction, operation, or use. Initiatives that focus on deployment activities include a variety of tax incentives. We also narrowed our list to only those initiatives that focused research on advancing or developing new and innovative solar technologies. Next, we shared our list with agency officials and provided our definitions of R&D and commercialization. We asked officials to determine whether the list was complete and accurate for fiscal year 2010 initiatives that met our criteria, whether those initiatives were still active in fiscal year 2011, and whether there were any new initiatives in fiscal year 2011. If officials wanted to remove an initiative from our list, we asked for additional information to support the removal. In total, we determined that there were 65 initiatives that met our criteria. To identify and describe the key characteristics of solar-related initiatives implemented by federal agencies, we developed a questionnaire to collect information from officials of those 65 federal solar energy-related initiatives. The questionnaire was prepopulated with information that was obtained from the agencies for GAO’s renewable energy report including program descriptions, type of solar technology supported, funding mechanisms, and type of funding recipients. Questions included the type of technology advancement activities, obligations for solar activities in fiscal years 2010 and 2011, initiative-wide and solar-specific goals, and coordination efforts with other solar-related initiatives. 
We conducted pretests with officials of three different initiatives at three different agencies to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the questionnaire was comprehensive and unbiased. An independent GAO reviewer also reviewed a draft of the questionnaire prior to its administration. On the basis of feedback from these pretests and independent review, we revised the survey in order to improve its clarity. After completing the pretests, we administered the questionnaire. We sent questionnaires to the appropriate agency liaisons in an attached Microsoft Word form, who in turn sent the questionnaires to the appropriate officials. We received questionnaire responses for each initiative and, thus, had a response rate of 100 percent. After reviewing the responses, we conducted follow-up e-mail exchanges or telephone discussions with agency officials when responses were unclear or conflicting. When necessary, we used the clarifying information provided by agency officials to update answers to questions to improve the accuracy and completeness of the data. Because this effort was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, sources of information available to respondents, or entering data into a database or analyzing them can introduce unwanted variability into the survey results. However, we took steps to minimize such nonsampling errors in developing the questionnaire—including using a social science survey specialist for design and pretesting the questionnaire. 
We also minimized the nonsampling errors when collecting and analyzing the data, including using a computer program for analysis, and using an independent analyst to review the computer program. Finally, we verified the accuracy of a small sample of keypunched records by comparing them with their corresponding questionnaires, and we corrected the errors we found. Less than 0.5 percent of the data items we checked had random keypunch errors that would not have been corrected during data processing. To conduct our analysis, a technologist compared all of the initiatives and identified overlapping initiatives as those sharing at least one common technology advancement activity, one common technology, and having similar goals. A second technologist then completed the same analysis, and the two then compared their findings and, where they differed, came to a joint decision as to which initiatives broadly overlapped on their technology advancement activities, technologies, and broad goals. If the two technologists could not come to an agreement, a third technologist determined whether there was overlap. To assess the reliability of obligations data, we asked officials of initiatives that comprised over 90 percent of the total obligations follow-up questions on the data systems used to generate that data. While we did not verify all responses, on the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data used in this report were of sufficient quality for our purposes. We conducted this performance audit from September 2011 to August 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
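The overlap rule the technologists applied (at least one shared technology advancement activity, at least one shared technology, and similar goals) can be sketched as a pairwise set comparison. This is a minimal illustration, not the analysts' actual tooling: the initiative names and attributes below are hypothetical, and an exact goal match stands in for the analysts' judgment of "similar goals."

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Initiative:
    name: str
    activities: frozenset   # technology advancement activities, e.g. "applied research"
    technologies: frozenset # e.g. "photovoltaic", "solar thermal"
    goal: str               # broad goal; exact match stands in for "similar goals"

def overlaps(a: Initiative, b: Initiative) -> bool:
    """Overlap requires a shared activity, a shared technology, and a common goal."""
    return (bool(a.activities & b.activities)
            and bool(a.technologies & b.technologies)
            and a.goal == b.goal)

def find_overlaps(initiatives):
    """Compare every pair once and collect the overlapping initiative names."""
    return [(a.name, b.name)
            for i, a in enumerate(initiatives)
            for b in initiatives[i + 1:]
            if overlaps(a, b)]
```

In the report's process, two technologists performed this comparison independently and reconciled any differences, with a third technologist resolving disagreements.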
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tables 4, 5, 6, 7, 8, and 9 provide descriptions, by agency, of the 65 initiatives that support solar energy technologies and the obligations for those initiatives’ solar activities in fiscal years 2010 and 2011. In addition to the individual named above, key contributors to this report included Karla Springer (Assistant Director), Tanya Doriss, Cindy Gilbert, Jessica Lemke, Cynthia Norris, Jerome Sandau, Holly Sasso, Maria Stattel, and Barbara Timmerman.

The United States has abundant solar energy resources and solar, along with wind, offers the greatest energy and power potential among all currently available domestic renewable resources. In February 2012, GAO reported that 23 federal agencies had implemented nearly 700 renewable energy initiatives in fiscal year 2010, including initiatives that supported solar energy technologies (GAO-12-260). The existence of such initiatives at multiple agencies raised questions about the potential for duplication, which can occur when multiple initiatives support the same technology advancement activities and technologies, direct funding to the same recipients, and have the same goals. GAO was asked to identify (1) solar-related initiatives supported by federal agencies in fiscal years 2010 and 2011 and key characteristics of those initiatives and (2) the extent of fragmentation, overlap, and duplication, if any, of federal solar-related initiatives, as well as the extent of any coordination among these initiatives. GAO reviewed its previous work and interviewed officials at each of the agencies identified as having federal solar initiatives active in fiscal years 2010 and 2011.
GAO developed a questionnaire and administered it to officials involved in each initiative to collect information on initiative goals, technology advancement activities, funding obligations, number of projects, and coordination activities. This report contains no recommendations. In response to the draft report, USDA generally agreed with the findings, while the other agencies had no comments. Sixty-five solar-related initiatives with a variety of key characteristics were supported by six federal agencies. Over half of these 65 initiatives supported solar projects exclusively; the remaining initiatives supported solar and other renewable energy technologies. The 65 initiatives exhibited a variety of key characteristics, including multiple technology advancement activities ranging from basic research to commercialization, and provided funding to various types of recipients, including universities, industry, and federal laboratories and researchers, primarily through grants and contracts. Agency officials reported that they obligated about $2.6 billion for the solar projects in these initiatives in fiscal years 2010 and 2011, an amount higher than in previous years, in part, because of additional funding from the 2009 American Recovery and Reinvestment Act. The 65 solar-related initiatives are fragmented across six agencies and overlap to some degree in their key characteristics, but most agency officials reported coordination efforts to avoid duplication. The initiatives are fragmented in that they are implemented by various offices across the six agencies and address the same broad areas of national need. However, the agencies tailor their initiatives to meet their specific missions, such as DOD's energy security mission and NASA's space exploration mission. Many of the initiatives overlapped with at least one other initiative in the technology advancement activity, technology type, funding recipient, or goal.
However, GAO found no clear instances of duplicative initiatives. Furthermore, officials at 57 of the 65 initiatives (88 percent) indicated that they coordinated in some way with other solar-related initiatives, including both within their own agencies and with other agencies. Such coordination may reduce the risk of duplication. Moreover, 59 of the 65 initiatives (91 percent) require applicants to disclose other federal sources of funding on their applications to help ensure that they do not receive duplicative funding. |
Intellectual property is an important component of the U.S. economy, and the United States is an acknowledged global leader in the creation of intellectual property. However, industries estimate that annual losses stemming from violations of intellectual property rights overseas are substantial. Further, counterfeiting of products such as pharmaceuticals and food items fuels public health and safety concerns. USTR’s Special 301 reports on the adequacy and effectiveness of intellectual property protection around the world demonstrate that, from a U.S. perspective, intellectual property protection is weak in developed as well as developing countries and that the willingness of countries to address intellectual property issues varies greatly. Eight federal agencies, as well as the Federal Bureau of Investigation (FBI) and the U.S. Patent and Trademark Office (USPTO), undertake the primary U.S. government activities to protect and enforce U.S. intellectual property rights overseas. The agencies are the Departments of Commerce, State, Justice, and Homeland Security; USTR; the Copyright Office; the U.S. Agency for International Development (USAID); and the U.S. International Trade Commission. The efforts of U.S. agencies to protect U.S. intellectual property overseas fall into three general categories—policy initiatives, training and technical assistance, and U.S. law enforcement actions. U.S. policy initiatives to increase intellectual property protection around the world are primarily led by USTR, in coordination with the Departments of State and Commerce, USPTO, and the Copyright Office, among other agencies. A centerpiece of policy activities is the annual Special 301 process. “Special 301” refers to certain provisions of the Trade Act of 1974, as amended, that require USTR to annually identify foreign countries that deny adequate and effective protection of intellectual property rights or fair and equitable market access for U.S. 
persons who rely on intellectual property protection. USTR identifies these countries with substantial assistance from industry and U.S. agencies and publishes the results of its reviews in an annual report. Once a pool of such countries has been determined, the USTR, in coordination with other agencies, is required to decide which, if any, of these countries should be designated as a Priority Foreign Country (PFC). If a trading partner is identified as a PFC, USTR must decide within 30 days whether to initiate an investigation of those acts, policies, and practices that were the basis for identifying the country as a PFC. Such an investigation can lead to actions such as negotiating separate intellectual property understandings or agreements between the United States and the PFC or implementing trade sanctions against the PFC if no satisfactory outcome is reached. Between 1994 and 2004, the U.S. government designated three countries as PFCs—China, Paraguay, and Ukraine—as a result of intellectual property reviews. The U.S. government negotiated separate bilateral intellectual property agreements with China and Paraguay to address IPR problems. These agreements are subject to annual monitoring, with progress cited in each year’s Special 301 report. Ukraine, where optical media piracy was prevalent, was designated a PFC in 2001. The United States and Ukraine found no mutual solution to the IPR problems, and in January 2002, the U.S. government imposed trade sanctions in the form of prohibitive tariffs (100 percent) aimed at stopping $75 million worth of certain imports from Ukraine over time. In addition, most of the agencies involved in efforts to promote or protect IPR overseas engage in some training or technical assistance activities. Key activities to develop and promote enhanced IPR protection in foreign countries are undertaken by the Departments of Commerce, Homeland Security, Justice, and State; the FBI; USPTO; the Copyright Office; and USAID. 
Training events sponsored by U.S. agencies to promote the enforcement of intellectual property rights have included enforcement programs for foreign police and customs officials, workshops on legal reform, and joint government-industry events. According to a State Department official, U.S. government agencies have conducted intellectual property training for a number of countries concerning bilateral and multilateral intellectual property commitments, including enforcement, during the past few years. For example, intellectual property training was conducted by numerous agencies over the last year in Poland, China, Morocco, Italy, Jordan, Turkey, and Mexico. A small number of agencies are involved in enforcing U.S. intellectual property laws, and the nature of these activities differs from other U.S. government actions related to intellectual property protection. Working in an environment where counterterrorism is the central priority, the FBI and the Departments of Justice and Homeland Security take actions that include engaging in multicountry investigations involving intellectual property violations and seizing goods that violate intellectual property rights at U.S. ports of entry. For example, the Department of Justice has an office that directly addresses international IPR problems. Justice has been involved with international investigation and prosecution efforts and, according to a Justice official, has become more aggressive in recent years. For instance, Justice and the FBI recently coordinated an undercover IPR investigation, with the involvement of several foreign law enforcement agencies. The investigation focused on individuals and organizations, known as “warez” release groups, which specialize in the Internet distribution of pirated materials. 
In April 2004, these investigations resulted in 120 simultaneous searches worldwide (80 in the United States) by law enforcement entities from 10 foreign countries and the United States in an effort known as “Operation Fastlink.” Although investigations can result in international actions such as those cited above, FBI officials told us that they cannot determine the number of past or present IPR cases with an international component because they do not track or categorize cases according to this factor. Department of Homeland Security (DHS) officials emphasized that their investigations include an international component when counterfeit goods are brought into the United States. However, DHS does not track cases by a specific foreign connection. The overall number of IPR-oriented investigations that have been pursued by foreign authorities as a result of DHS efforts is unknown. DHS does track seizures of goods that violate IPR and reports seizures that totaled more than $90 million in fiscal year 2003. Seizures of IPR-infringing goods have involved imports primarily from Asia. In fiscal year 2003, goods from China accounted for about two-thirds of the value of all IPR seizures, many of which were shipments of cigarettes. Other seized goods from Asia that year originated in Hong Kong and Korea. A DHS official pointed out that providing protection against IPR-infringing imported goods for some U.S. companies—particularly entertainment companies—can be difficult, because companies often fail to record their trademarks and copyrights with DHS. Several interagency mechanisms exist to coordinate overseas intellectual property policy initiatives, development and assistance activities, and law enforcement efforts, although these mechanisms’ level of activity and usefulness varies. According to government and industry officials, an interagency trade policy mechanism established by the Congress in 1962 to assist USTR has operated effectively in reviewing IPR issues.
The mechanism, which consists of tiers of committees as well as numerous subcommittees, constitutes the principal means for developing and coordinating U.S. government positions on international trade, including IPR. A specialized subcommittee is central to conducting the Special 301 review and determining the results of the review. This interagency process is rigorous and effective, according to U.S. government and industry officials. A Commerce official told us that the Special 301 review is one of the best tools for interagency coordination in the government, while a Copyright Office official noted that coordination during the review is frequent and effective. A representative for copyright industries also told us that the process works well and is a solid interagency effort. The National Intellectual Property Law Enforcement Coordination Council (NIPLECC), created by the Congress in 1999 to coordinate domestic and international intellectual property law enforcement among U.S. federal and foreign entities, seems to have had little impact. NIPLECC consists of (1) the Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office; (2) the Assistant Attorney General, Criminal Division; (3) the Under Secretary of State for Economic and Agricultural Affairs; (4) the Deputy United States Trade Representative; (5) the Commissioner of Customs; and (6) the Under Secretary of Commerce for International Trade. NIPLECC’s authorizing legislation did not include the FBI as a member of NIPLECC, despite its pivotal role in law enforcement. However, according to representatives of the FBI, USPTO, and Justice, the FBI should be a member. USPTO and Justice cochair NIPLECC, which has no independent staff or budget. In the council’s nearly 4 years of existence, its primary output has been three annual reports to the Congress, which are required by statute.
According to interviews with industry officials and officials from its member agencies, and as evidenced by its own legislation and reports, NIPLECC continues to struggle to define its purpose and has had little discernible impact. Indeed, officials from more than half of the member agencies offered criticisms of NIPLECC, remarking that it is unfocused, ineffective, and “unwieldy.” In official comments to the council’s 2003 annual report, major IPR industry associations expressed a sense that NIPLECC is not undertaking any independent activities or having any impact. One industry association representative stated that law enforcement needs to be made more central to U.S. IPR efforts and said that although he believes the council was created to deal with this issue, it has “totally failed.” The lack of communication regarding enforcement results in part from complications such as concerns regarding the sharing of sensitive law enforcement information and from the different missions of the various agencies involved in intellectual property actions overseas. According to an official from USPTO, NIPLECC is hampered primarily by its lack of independent staff and funding. According to a USTR official, NIPLECC needs to define a clear role in coordinating government policy. A Justice official stressed that, when considering coordination, it is important to avoid creating an additional layer of bureaucracy that may detract from efforts devoted to each agency’s primary mission. Despite its difficulties thus far, we heard some positive comments regarding NIPLECC. For example, an official from USPTO noted that the IPR training database Web site resulted from NIPLECC efforts. Further, an official from the State Department commented that NIPLECC has had some “trickle-down” effects, such as helping to prioritize the funding and development of the intellectual property database at the State Department.
Although the agency officials that constitute NIPLECC’s membership meet infrequently and NIPLECC has undertaken few concrete activities, this official noted that NIPLECC provides the only forum for bringing enforcement, policy, and foreign affairs agencies together at a high level to discuss intellectual property issues. A USPTO official stated that NIPLECC has potential but needs to be “energized.” Other coordination mechanisms include the National Intellectual Property Rights Coordination Center (IPR Center) and informal coordination. The IPR Center in Washington, D.C., a joint effort between DHS and the FBI, began limited operations in 2000. According to a DHS official, the coordination between DHS, the FBI, and industry and trade associations makes the IPR Center unique. The IPR Center is intended to serve as a focal point for the collection of intelligence involving copyright and trademark infringement, signal theft, and theft of trade secrets. However, the center is not widely used by industry. An FBI official associated with the IPR Center estimated that about 10 percent of all FBI industry referrals come through the center rather than going directly to FBI field offices. DHS officials noted that “industry is not knocking the door down” and that the IPR Center is perceived as underutilized. Policy agency officials noted the importance of informal but regular communication among staff at the various agencies involved in the promotion or protection of intellectual property overseas. Several officials at various policy-oriented agencies, such as USTR and the Department of Commerce, noted that the intellectual property community was small and that all involved were very familiar with the relevant policy officials at other agencies in Washington, D.C. Further, State Department officials at U.S. embassies regularly communicate with agencies in Washington, D.C., regarding IPR matters and U.S. government actions.
Agency officials noted that this type of coordination is central to pursuing U.S. intellectual property goals overseas. Although communication between policy and law enforcement agencies can occur through forums such as the NIPLECC, these agencies do not systematically share specific information about law enforcement activities. According to an FBI official, once a criminal investigation begins, case information stays within the law enforcement agencies and is not shared. A Justice official emphasized that criminal law enforcement is fundamentally different from the activities of policy agencies and that restrictions exist on Justice’s ability to share investigative information, even with other U.S. agencies. U.S. efforts have contributed to strengthened foreign IPR laws, but enforcement overseas remains weak. The impact of U.S. activities is challenged by numerous factors. Industry representatives report that the situation may be worsening overall for some intellectual property sectors. The efforts of U.S. agencies have contributed to the establishment of strengthened intellectual property legislation in many foreign countries; however, the enforcement of intellectual property rights remains weak in many countries, and U.S. government and industry sources note that improving enforcement overseas is now a key priority. USTR’s most recent Special 301 report states that “although several countries have taken positive steps to improve their IPR regimes, the lack of IPR protection and enforcement continues to be a global problem.” For example, although the Chinese government has improved its statutory IPR regime, USTR remains concerned about enforcement in that country. According to USTR, counterfeiting and piracy remain rampant in China and increasing amounts of counterfeit and pirated products are being exported from China. Although U.S.
law enforcement does undertake international cooperative activities to enforce intellectual property rights overseas, executing these efforts can prove difficult. For example, according to DHS and Justice officials, U.S. efforts to investigate IPR violations overseas are complicated by a lack of jurisdiction as well as by the fact that U.S. officials must convince foreign officials to take action. Further, a DHS official noted that in some cases, activities defined as criminal in the United States are not viewed as an infringement by other countries and that U.S. law enforcement agencies can therefore do nothing. In addition, U.S. efforts confront numerous challenges. Because intellectual property protection is one of many U.S. government objectives pursued overseas, it is viewed internally in the context of broader U.S. foreign policy objectives that may receive higher priority at certain times in certain countries. Industry officials with whom we met noted, for example, their belief that policy priorities related to national security were limiting the extent to which the United States undertook activities or applied diplomatic pressure related to IPR issues in some countries. Further, the impact of U.S. activities is affected by a country’s own domestic policy objectives and economic interests, which may complement or conflict with U.S. objectives. U.S. efforts are more likely to be effective in encouraging government action or achieving impact in a foreign country where support for intellectual property protection exists. It is difficult for the U.S. government to achieve impact in locations where foreign governments lack the “political will” to enact IPR protections. Many economic factors complicate and challenge U.S. and foreign governments’ efforts, even in countries with the political will to protect intellectual property. These factors include low barriers to entering the counterfeiting and piracy business and potentially high profits for producers. 
In addition, the low prices of counterfeit products are attractive to consumers. The economic incentives can be especially acute in countries where people have limited income. Technological advances allowing for high-quality inexpensive and accessible reproduction and distribution in some industries have exacerbated the problem. Moreover, many government and industry officials believe that the chances of getting caught for counterfeiting and piracy, as well as the penalties when caught, are too low. The increasing involvement of organized crime in the production and distribution of pirated products further complicates enforcement efforts. Federal and foreign law enforcement officials have linked intellectual property crime to national and transnational organized criminal operations. Further, like other criminals, terrorists can trade any commodity in an illegal fashion, as evidenced by their reported involvement in trading a variety of counterfeit and other goods. Many of these challenges are evident in the optical media industry, which includes music, movies, software, and games. Even in countries where interests exist to protect domestic industries, such as the domestic music industry in Brazil or the domestic movie industry in China, economic and law enforcement challenges can be difficult to overcome. For example, the cost of reproduction technology and copying digital media is low, making piracy an attractive employment opportunity, especially in a country where formal employment is hard to obtain. The huge price differentials between pirated CDs and legitimate copies also create incentives on the consumer side. For example, when we visited a market in Brazil, we observed that the price for a legitimate DVD was approximately ten times the price for a pirated DVD. Even if consumers are willing to pay extra to purchase the legitimate product, they may not do so if the price differences are too great for similar products. 
Further, the potentially high profit makes optical media piracy an attractive venture for organized criminal groups. Industry and government officials have noted criminal involvement in optical media piracy and the resulting law enforcement challenges. Recent technological advances have also exacerbated optical media piracy. The mobility of the equipment makes it easy to transport it to another location, further complicating enforcement efforts. Likewise, the Internet provides a means to transmit and sell illegal software or music on a global scale. According to an industry representative, the ability of Internet pirates to hide their identities or operate from remote jurisdictions often makes it difficult for IPR holders to find them and hold them accountable. Despite improvements such as strengthened foreign IPR legislation, international IPR protection may be worsening overall for some intellectual property sectors. For example, according to copyright industry estimates, losses due to piracy grew markedly in recent years. The entertainment and business software sectors, for example, which are very supportive of USTR and other agencies, face an environment in which their optical media products are increasingly easy to reproduce, and digitized products can be distributed around the world quickly and easily via the Internet. According to an intellectual property association representative, counterfeiting trademarks has also become more pervasive in recent years. Counterfeiting affects more than just luxury goods; it also affects various industrial goods. The U.S. government has demonstrated a commitment to addressing IPR issues in foreign countries using multiple agencies. However, law enforcement actions are more restricted than other U.S. activities, owing to factors such as a lack of jurisdiction overseas to enforce U.S. law. 
Several IPR coordination mechanisms exist, with the interagency coordination that occurs during the Special 301 process standing out as the most significant and active. Conversely, the mechanism for coordinating intellectual property law enforcement, NIPLECC, has accomplished little that is concrete. Currently, there is a lack of compelling information to demonstrate a unique role for this group, bringing into question its effectiveness. In addition, it does not include the FBI, a primary law enforcement agency. Members, including NIPLECC leadership, have repeatedly acknowledged that the group continues to struggle to find an appropriate mission. The effects of U.S. actions are most evident in strengthened foreign IPR legislation. U.S. efforts are now focused on enforcement, since effective enforcement is often the weak link in intellectual property protection overseas and the situation may be deteriorating for some industries. As agencies continue to pursue IPR improvements overseas, they will face daunting challenges. These challenges include the need to create political will overseas, recent technological advancements that facilitate the production and distribution of counterfeit and pirated goods, and powerful economic incentives for both producers and consumers, particularly in developing countries. Further, as the U.S. government focuses increasingly on enforcement, it will face different and complex factors, such as organized crime, that may prove quite difficult to address. With a broad mandate under its authorizing legislation, NIPLECC has struggled to establish its purpose and unique role. If the Congress wishes to maintain NIPLECC and take action to increase its effectiveness, the Congress may wish to consider reviewing the council’s authority, operating structure, membership, and mission. Such considerations could help NIPLECC identify appropriate activities and operate more effectively to coordinate intellectual property law enforcement issues. Mr. 
Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the committee may have at this time. Should you have any questions about this testimony, please contact me by e-mail at [email protected] or Emil Friberg at [email protected]. We can also be reached at (202) 512-4128 and (202) 512-8990, respectively. Other major contributors to this testimony were Leslie Holen, Ming Chen, and Sharla Draemel. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In the last several decades, Congress has passed various laws to increase federal agencies’ abilities to identify and address the health and environmental risks associated with toxic chemicals. Some of these laws, such as the Clean Air Act; the Clean Water Act; the Federal Food, Drug and Cosmetic Act; and the Federal Insecticide, Fungicide, and Rodenticide Act, authorize the control of hazardous chemicals in, among other things, the air, water, and soil and in food, drugs, and pesticides. Other laws, such as the Occupational Safety and Health Act and the Consumer Product Safety Act, can be used to protect workers and consumers from unsafe exposures to chemicals in the workplace and the home. Nonetheless, the Congress found that human beings and the environment were being exposed to a large number of chemicals and that some could pose an unreasonable risk of injury to health or the environment. In 1976, the Congress passed TSCA to provide EPA with the authority to obtain information on chemicals and regulate those substances that pose an unreasonable risk to human health or the environment. While other environmental and occupational health laws generally control only the release of chemicals into the environment, exposures in the workplace, or the disposal of chemicals, TSCA allows EPA to control the entire life cycle of chemicals, from their production and distribution to their use and disposal. In October 2003, the European Commission presented a proposal for a new EU regulatory system for chemicals. REACH was proposed because the Commission believed that the current legislative framework for chemicals in the EU did not produce sufficient information about the effects of chemicals on human health and the environment. In addition, the risk assessment process was slow and resource-intensive and did not allow the regulatory system to work efficiently and effectively.
Under REACH, authority exists to establish restrictions for any chemical that poses unacceptable risks and to require authorization for the use of chemicals identified as being of very high concern. These restrictions could include banning uses in certain products, banning uses by consumers, or even completely banning the chemical. Where a safe exposure threshold can be determined for a chemical, authorization will be granted if the manufacturer can demonstrate that the risks from a given use of the chemical can be adequately controlled. If no threshold can be determined, the manufacturer has to demonstrate that the socioeconomic benefits outweigh the risks associated with continued use and that there are no suitable alternatives or technologies available. In addition, a key aspect of REACH is that it places the burden on manufacturers, importers, and downstream users to ensure that the substances they manufacture, place on the market, or use do not adversely affect human health or the environment. Its provisions are underpinned by the precautionary principle. REACH was approved in December 2006 and went into effect in June 2007. To avoid overloading regulators and companies with the work arising from the registration process, full implementation of all the provisions of REACH will be phased in over an 11-year period (or by 2018). TSCA does not require companies to develop information for either new or existing chemicals, whereas REACH generally requires companies to submit such information for both kinds of chemicals and, in some circumstances, to develop it. For new chemicals, TSCA requires companies to submit to EPA any available human health and environmental data, but companies do not have to develop additional information unless EPA requires additional test data through a test rule or other EPA action. For existing chemicals, companies do not have to develop such information unless EPA requires them to do so.
In contrast, companies generally are required under REACH to provide the European Chemicals Agency with health and environmental data and, where needed, to develop such data. The extent of such data depends on the annual production volume of the chemical. TSCA does not require chemical companies to test new chemicals for their effect on human health or the environment, but it requires companies to submit such information if it already exists when they submit a premanufacture notice (PMN) notifying EPA of their intent to manufacture a new chemical. This notice provides, among other things, certain information on the chemical’s intended uses and potential exposure. TSCA also requires chemical companies to submit data and other information on the physical/chemical properties, fate, or health and environmental effects of a chemical, which we refer to in this report as "hazard information," that the companies possess or that is reasonably ascertainable by them when they submit a PMN to EPA. In part because TSCA does not require chemical companies to develop hazard information before submitting a PMN, EPA employs several other approaches for assessing hazards, including using models that compare new chemicals with existing chemicals with similar molecular structures for which test data on health and environmental effects are available. In June 2005, we recommended that EPA develop a strategy for improving and validating the models that EPA uses to assess and predict the hazards of chemicals. EPA is currently devising such a strategy, according to agency officials. EPA receives approximately 1,500 new chemical notices each year, half of which are exemption requests, and has reviewed more than 45,000 from 1979 through 2005. PMNs include information such as specific chemical identity; estimated maximum production volume for 12 months of production; a description of how the chemical will be processed and used; and estimates of how many workers may be exposed to the chemical.
Additionally, EPA requires that the following information be submitted with a PMN: all existing health and environmental data in the possession of the submitter, parent company, or affiliates, and a description of any existing data known to or reasonably ascertainable by the submitter. EPA estimates that most PMNs do not include test data of any type, and only about 15 percent include health and safety data—such as acute toxicity or skin and eye irritation data. In some cases, EPA may determine during the review process that more data are needed for an analysis of a chemical’s potential risks and often will negotiate an agreement with the chemical company to conduct health hazard or environmental effects testing. According to EPA, more than 300 testing agreements have been issued since EPA began reviewing new chemicals in 1979. In some cases, however, the chemical company may voluntarily withdraw the PMN rather than incur the costs of hazard testing requested by EPA, or for other reasons. EPA does not maintain records as to how many PMNs chemical companies have withdrawn because of potential EPA action. While TSCA does not require chemical companies to develop information on the harmful effects of existing chemicals on human health or the environment, TSCA provides that EPA, by issuing a test rule, can require such information on a case-by-case basis. Before promulgating such a rule, EPA must find, among other things, that current data are insufficient, that testing is necessary, and that either (1) the chemical may present an unreasonable risk or (2) the chemical is or will be produced in substantial quantities and that there is or may be substantial human or environmental exposure to the chemical. EPA officials responsible for administering the act said that TSCA’s test rule provision and data-gathering authorities can be burdensome and too time consuming for EPA to administer.
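As an illustration only, the PMN contents enumerated above can be sketched as a simple record type. This is a hedged sketch, not EPA's actual form: the class, field, and method names are hypothetical names invented for this example.

```python
# Hedged sketch of the PMN contents described in the report. All names here
# are hypothetical; EPA's actual premanufacture notice form differs.
from dataclasses import dataclass, field


@dataclass
class PremanufactureNotice:
    chemical_identity: str
    max_production_volume_lbs_12mo: int      # estimated maximum for 12 months of production
    processing_and_use_description: str
    estimated_workers_exposed: int
    # Existing health/environmental studies possessed by the submitter,
    # parent company, or affiliates. Per the report, EPA estimates most
    # PMNs include no test data of any type.
    existing_health_env_studies: list = field(default_factory=list)

    def has_test_data(self) -> bool:
        """True if any existing study accompanies the notice."""
        return len(self.existing_health_env_studies) > 0
```

A notice with an empty study list models the common case the report describes, in which EPA must fall back on structure-comparison models rather than submitted test data.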
Because EPA has limited information on existing chemicals and faces difficulty in promulgating test rules, the agency uses voluntary programs to help gather more data to assess risks for certain chemicals. While TSCA authorizes EPA to require testing of existing chemicals, the act does not authorize the agency to do so unless EPA first determines on the basis of risk or exposure information that the chemicals warrant such testing. TSCA provides EPA the authority to obtain hazard information needed to assess chemicals by issuing rules under Section 4 of TSCA requiring chemical companies to test to determine the health and environmental effects of chemicals and submit the test data to EPA. However, in order for EPA to issue a test rule, the agency must determine that a chemical (1) may present an unreasonable risk of injury to health or the environment or (2) is or will be produced in substantial quantities and (a) there is or may be significant or substantial human exposure to the chemical or (b) it enters or may reasonably be anticipated to enter the environment in substantial quantities. EPA must also determine that there are insufficient data to reasonably determine or predict the effects of the chemical on health or the environment and that testing is necessary to develop such data. Once EPA has made the required determination, the agency can issue a proposed rule for public comment, consider the comments it receives, and promulgate a final rule ordering chemical testing. OPPT officials responsible for implementing TSCA told us that finalizing rules under Section 4 of TSCA can take from 2 to 10 years and require the expenditure of substantial resources. EPA has used its authority to require testing for about 200 existing chemicals since the agency began reviewing chemicals under TSCA in 1979. EPA does not maintain estimates of the cost of implementing these rules.
However, in our September 1994 report on TSCA, we noted that EPA officials told us that issuing a rule under Section 4 can cost up to $234,000. Given the difficulties and cost of requiring testing, EPA could review substantially more chemicals in less time if it had authority to require chemical companies to conduct testing and provide test data on chemicals once they reach a substantial production volume. In June 2005, we stated that Congress may wish to consider amending TSCA to provide EPA such authority. As an alternative to formal rule making, EPA asserts that Section 4 of TSCA provides EPA implied authority to enter into "enforceable consent agreements" with chemical companies that would require them to conduct testing when there are insufficient data available to assess a chemical’s risk. EPA uses enforceable consent agreements to accomplish testing where a consensus exists among EPA, affected manufacturers and/or processors, and interested members of the public concerning the need for and scope of testing. According to EPA, these agreements allow greater flexibility in the design of the testing program, and negotiating these agreements is generally less costly and time consuming than promulgating test rules. EPA has entered into consent agreements with chemical companies to develop test data for about 60 chemicals where the agency determined additional data were needed to assess the chemicals’ risks. Under Section 8 of TSCA, EPA promulgates rules directing chemical companies to maintain records and submit such information as the EPA Administrator reasonably requires. This information can include, among other things, chemical identity, categories of use, production levels, by-products, existing data on adverse health and environmental effects, and the number of workers exposed to the chemical. Section 8(d) authorizes EPA to promulgate rules under which chemical companies are required to submit lists or copies of any health and safety studies to EPA.
Finally, Section 8 requires chemical companies to report any information to EPA that reasonably supports a conclusion that a chemical presents a substantial risk of injury to health or the environment. According to EPA, the agency has issued about 50 Section 8(d) rules covering approximately 1,000 chemicals. As a result of these rules, EPA has received nearly 50,000 studies covering environmental fate, human health effects, and environmental effects. However, TSCA Section 8(d) only applies to existing studies and does not require companies to develop new studies. The TSCA Inventory Update Rule (IUR) currently requires chemical companies to report to EPA every 5 years the site and manufacturing information for chemicals in the TSCA inventory that they manufacture or import in amounts of 25,000 pounds or greater at a single site. For the most current reporting cycle and for subsequent reporting cycles, chemical companies must report additional information—such as uses; the types of consumer products in which the chemical will be used, including those intended for use by children; and the number of workers who could potentially be exposed—for chemicals manufactured or imported in amounts of 300,000 pounds or more at a single site. In response to the lack of information on existing chemicals and the relative difficulty the agency faces in requiring companies to conduct additional testing under TSCA, EPA has made efforts to increase the amount of information it can access on chemicals by implementing a voluntary program called the High Production Volume (HPV) Challenge Program. The HPV Challenge Program focuses on obtaining chemical company sponsors to voluntarily provide data on approximately 2,800 chemicals that chemical companies reported in 1990 were domestically produced or imported at a high volume—over 1 million pounds.
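To make the two IUR reporting thresholds concrete, the tiers described above can be sketched as a small function. This is an illustrative sketch only; the function name and tier labels are hypothetical, and the actual rule contains details (reporting cycles, exemptions) not captured here.

```python
# Hedged sketch of the IUR reporting tiers described in the report, keyed to
# pounds of a chemical manufactured or imported at a single site per year.
# Function name and returned labels are invented for this example.

def iur_reporting_tier(pounds_at_site: int) -> str:
    """Return the IUR reporting obligation for one chemical at one site."""
    if pounds_at_site >= 300_000:
        # Basic site/manufacturing data plus expanded information: uses,
        # consumer products (including those intended for children), and
        # the number of workers who could potentially be exposed.
        return "basic + expanded use/exposure information"
    if pounds_at_site >= 25_000:
        # Site and manufacturing information, reported every 5 years.
        return "basic site/manufacturing information"
    return "no IUR report required"
```

For example, a site producing 30,000 pounds of a listed chemical would fall into the basic reporting tier, while one producing 500,000 pounds would also owe the expanded use and exposure information.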
Through this program, sponsors develop a basic set of screening-level information on the chemicals either by gathering available data, using models to predict the chemicals’ properties, or conducting testing of the chemicals. The six data endpoints collected under the HPV Challenge Program are acute toxicity, repeat dose toxicity, developmental and reproductive toxicity, mutagenicity, ecotoxicity, and environmental fate. EPA believes that these basic data are needed to make an informed, preliminary judgment about the hazards of HPV chemicals. In June 2005, we recommended that EPA develop a methodology for using information collected through the HPV Challenge Program to prioritize chemicals for further review. EPA’s Director of OPPT told us the agency developed such a methodology as data from chemical companies became available and is currently applying the methodology to assess HPV chemicals. The methodology was developed based on input received from an advisory committee, the National Pollution Prevention and Toxics Advisory Committee (NPPTAC). Despite these promising voluntary efforts regarding high-production-volume chemicals, several difficulties remain, as we have noted in our prior work. For example, (1) chemical companies have not agreed to test approximately 300 chemicals identified by EPA as high-production-volume chemicals; (2) additional chemicals will become high-production-volume chemicals in the constantly changing commercial chemical marketplace; and (3) chemicals without a particularly high production volume may also warrant testing, based on their toxicity and the nature of exposure to them. In addition, this program may not provide enough information for EPA to use in making risk-assessment decisions.
While the data in the HPV Challenge Program and the new exposure and use reporting under the IUR may help EPA prioritize chemicals of concern, the data may not provide sufficient evidence for EPA to determine whether a reasonable basis exists to conclude that a chemical presents an unreasonable risk of injury to health or the environment and that regulatory action is necessary. Although the chemical industry may be willing to take action, even before EPA has the evidence required for rule making under TSCA, the industry is nonetheless large and diverse, and it is uncertain that all companies will always take action voluntarily. To ensure that adequate data are made publicly available to assess the special impact that industrial chemicals may have on children, EPA launched the Voluntary Children’s Chemical Evaluation Program (VCCEP). In December 2000, EPA implemented VCCEP first as a pilot program. EPA’s goal is to learn from this pilot program before a final VCCEP process is determined and before additional chemicals are selected. For the VCCEP pilot, EPA identified 23 commercial chemicals to which children have a high likelihood of exposure and the information needed to assess the risks to children from these chemicals. Recently, EPA requested comments on the implementation of the pilot program from stakeholders and other interested parties but has not yet responded to the comments or evaluated the program for its effectiveness. EPA is running a pilot of the VCCEP to gain insight into how best to design and implement the program so that it effectively provides the agency and the public with the means to understand the potential health risks to children associated with exposure to these and, ultimately, other chemicals. EPA intends the pilot to be the means of identifying efficiencies that can be applied to any subsequent implementation of the VCCEP.
Another purpose for running the pilot is the opportunity it will offer to test the performance of the peer consultation process. For the VCCEP pilot, the purpose of the peer consultation process is to provide a forum for scientists and relevant experts from various stakeholder groups to exchange scientific views on the chemical sponsor's data submissions and, in particular, on the recommended data needs. Under the VCCEP pilot, EPA is pursuing a three-tiered approach for gathering information, with tier 3 involving more detailed toxicology and exposure studies than tier 2, and tier 2 involving more detailed toxicology and exposure studies than tier 1. EPA asked companies that produce and/or import 23 specific chemicals to volunteer to sponsor their chemical in the first tier of the VCCEP pilot. EPA selected these 23 chemicals because the agency believed them to be especially relevant to children’s chemical exposures, based on factors such as the presence of the chemical in human tissue or blood, in food and water children eat and drink, and in air children breathe. In addition, many of these chemicals were known to be relatively “data rich” in that chemical data were already available. Chemical companies have volunteered to sponsor 20 of the 23 chemicals in the VCCEP. EPA believes that these 20 chemicals provide an adequate basis for evaluating the VCCEP pilot. Chemical companies volunteering to sponsor a chemical under the program have agreed to make chemical-specific public commitments to make certain hazard, exposure, and risk assessment data and analyses publicly available. For toxicity data, specific types of studies have been assigned to each of the three tiers. For exposure data, the depth of exposure information increases with each tier. If data needs are identified through the peer consultation process, the sponsor will choose whether to volunteer for any additional data generation or testing and whether to provide additional assessments in subsequent tiers.
However, company sponsors are under no obligation to volunteer for tiers 2 and 3, even if EPA determines additional information is needed. After the submission of tier 1 information and its review by the peer consultation group—consisting of scientific experts with extensive and broad experience in toxicity testing and exposure evaluations—EPA reviews the sponsor’s assessment and develops a response, focusing primarily on whether any additional information is needed to adequately evaluate the potential risks to children. If additional information is needed, EPA will indicate what information should be provided in tier 2. Companies will then be given an opportunity to sponsor chemicals at tier 2. EPA plans to repeat this process to determine whether tier 3 information is needed. Information from all three tiers may not always be necessary to adequately evaluate the risk to children. According to EPA officials, since the program’s inception, sponsors have submitted 15 of the 20 assessments on chemicals to EPA and the peer consultation group. The peer consultation group has issued reports on 13 of the 15 chemical submissions. EPA has issued Data Needs Decisions on 11 of these 13 chemicals, determining that 5 of the chemicals needed additional data. The sponsor of 1 of these chemicals agreed to commit to tier 2 and to provide the additional data to EPA. The sponsor of 2 other chemicals declined to commit to tier 2 because it had ceased manufacturing the chemicals in 2004. The sponsor of the remaining 2 chemicals told EPA it will decide whether to commit to the additional testing by the end of July 2007. In November 2006, EPA requested comments on the implementation of the pilot program from stakeholders and interested parties. As part of its request for comments, EPA included a list of questions that the agency believed would be helpful in its evaluation of the pilot program.
The questions ranged from asking about the sufficiency of the hazard, exposure, and risk assessments provided by the chemical sponsors; to the effectiveness and efficiency of the peer review panel; to the timeliness of the VCCEP pilot in providing data. EPA received comments from 11 interested parties, including industry representatives, environmental organizations, children’s health advocacy groups, and other interested parties. Generally, the industry groups provided positive comments about the pilot, while the children’s health advocacy and environmental groups provided negative comments about VCCEP. For example, the American Chemistry Council commented that the pilot is proceeding well, that the current tiered approach is sound, and that only minimal improvements are needed. One of the improvements the chemistry council suggested is that EPA should make the data generated under the pilot more accessible to the public, other EPA program offices, and other federal and state agencies. Conversely, the American Academy of Pediatrics commented that the VCCEP pilot is failing in its goal to provide timely or useful information on chemical exposures and their implications to the public or to health care providers. EPA plans to prepare a comments document summarizing the comments received from the stakeholders and publish it on the VCCEP Web site. In addition, EPA plans to complete a final evaluation of the effectiveness of the VCCEP pilot in late 2007. REACH created a single system for the regulation of new and existing chemicals and, once implemented, will generally require chemical companies to register chemicals produced or imported at 1 ton or more per producer or importer per year with a newly created European Chemicals Agency. Information requirements at registration will vary according to the production volume and suspected toxicity of the chemical.
For chemicals produced at 1 ton or more per producer or importer per year, chemical companies subject to registration will be required to submit information for the chemical, such as the chemical’s identity; how it will be produced; how it will be used; guidance on its safe use; exposure information; and study summaries of physical/chemical properties and their effects on human health or the environment. REACH specifies the amount of information to be included in the study summaries based on the chemical’s production volume, i.e., how much of the chemical will be produced or imported each year. The information requirements may be met through a variety of methods, including existing data, scientific modeling, or testing. REACH separates the production volume information requirements into four metric tonnage bands—1 ton or more, 10 tons or more, 100 tons or more, and 1,000 tons or more. Hazard information must be submitted for each tonnage band, with each higher band requiring the information for the lower bands in addition to the information specified for that band. For example, at the 1-ton-or-more band, REACH requires information on environmental effects that includes short-term toxicity on invertebrates, toxicity to algae, and ready biodegradability. At the 10-ton-or-more band, REACH requires such information in addition to a chemical safety assessment, which includes an assessment of the chemical’s human health and environmental hazards; a physiochemical hazard assessment; an environmental hazard assessment; and an assessment of the chemical’s potential to be a persistent, bioaccumulative, and toxic (PBT) pollutant (that is, a chemical that persists in the environment, bioaccumulates in food chains, and is toxic).
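The cumulative band structure described above, in which each higher tonnage band requires its own information in addition to everything required at the lower bands, can be sketched as a simple lookup. This is an illustrative sketch only: the thresholds come from the text, but the per-band information items beyond the 1-ton and 10-ton examples are hypothetical placeholders, not the actual REACH annex requirements.

```python
# Sketch of REACH's cumulative tonnage-band logic. Thresholds are the four
# bands named in the text; items for the 100- and 1,000-ton bands are
# placeholders, since the text does not enumerate them.
REACH_BANDS = [
    (1,    ["short-term invertebrate toxicity", "algae toxicity",
            "ready biodegradability"]),
    (10,   ["chemical safety assessment"]),
    (100,  ["additional 100-ton-band items (placeholder)"]),
    (1000, ["additional 1,000-ton-band items (placeholder)"]),
]

def required_information(tons_per_year):
    """Return all information items due at a given annual production volume.

    Each higher band requires its own items plus those of every lower band.
    """
    items = []
    for threshold, band_items in REACH_BANDS:
        if tons_per_year >= threshold:
            items.extend(band_items)
    return items
```

For example, a chemical produced at 10 tons per year would owe the three 1-ton-band environmental endpoints plus the chemical safety assessment, matching the paragraph above.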
Table 1 shows the total number of chemical endpoints—the chemical or biological effect that is assessed by a test method—required for chemicals produced at various production volumes, where applicable, for TSCA, the HPV Challenge Program, and REACH. While industry participation in the EPA’s HPV Challenge Program is voluntary, we have included information on the number of endpoints to be produced for chemicals in the program for comparison purposes. As the table shows, companies will provide a greater number of endpoints on chemicals under REACH than TSCA or the HPV Challenge Program. Additionally, appendix IV provides a listing of specific information requirements or endpoints for three testing categories: physical/chemical, human health, and environmental effects/fates. Both TSCA and REACH provide regulators with authorities to control chemical risks by restricting the production or use of both new and existing chemicals. Under TSCA, EPA must generally compile data needed to assess the potential risks of chemicals and must also develop substantial evidence in the rule-making record in order to withstand judicial review. However, REACH is based on the principle that chemical companies—manufacturers, importers, and downstream users—should ensure that the chemicals they manufacture, place on the market, or use do not adversely affect human health or the environment. Even when EPA has toxicity and exposure information on existing chemicals, the agency has had difficulty demonstrating that chemicals present or will present an unreasonable risk and that they should have limits placed on their production or use. Since the Congress enacted TSCA in 1976, EPA has issued regulations under Section 6 of the act to limit the production or restrict the use of five existing chemicals or chemical classes. The five chemicals or chemical classes are polychlorinated biphenyls (PCB), fully halogenated chlorofluoroalkanes, dioxin, asbestos, and hexavalent chromium. 
In addition, under Section 5(a)(2) of TSCA, for 160 existing chemicals, EPA issued significant new use rules that require chemical companies to submit notices to EPA prior to commencing the manufacture, import, or processing of the substance for a significant new use. In order to regulate an existing chemical under Section 6(a) of TSCA, EPA must find that there is a reasonable basis to conclude that the chemical presents or will present an unreasonable risk of injury to health or the environment. Before regulating a chemical under Section 6(a), the EPA Administrator must consider and publish a statement regarding the effects of the chemical on human health and the magnitude of human exposure to the chemical; the effects of the chemical on the environment and the magnitude of the environment’s exposure to the chemical; the benefits of the chemical for various uses and the availability of substitutes for those uses; and the reasonably ascertainable economic consequences of the rule, after consideration of the effect on the national economy, small business, technological innovation, the environment, and public health. Further, the regulation must apply the least burdensome requirement that will adequately protect against such risk. For example, if EPA finds that it can adequately manage the unreasonable risk of a chemical through requiring chemical companies to place warning labels on the chemical, EPA could not ban or otherwise restrict the use of that chemical. Additionally, if the EPA Administrator determines that a risk of injury to health or the environment could be eliminated or sufficiently reduced by actions under another federal law, then TSCA prohibits EPA from promulgating a rule under Section 6(a) of TSCA, unless EPA finds that it is in the public interest considering all aspects of the risk, the estimated costs of compliance, and the relative efficiency of such action to protect against risk of injury. 
Finally, EPA must also develop substantial evidence in the rule-making record in order to withstand judicial review. Under TSCA, a court reviewing a TSCA rule “shall hold unlawful and set aside…if the court finds that the rule is not supported by substantial evidence in the rule-making record.” According to EPA officials responsible for administering TSCA, the economic costs of regulating a chemical are usually more easily documented than the risks of the chemical or the benefits associated with controlling those risks, and it is difficult to show by substantial evidence that EPA is promulgating the least burdensome requirement. According to EPA officials in OPPT who are responsible for implementing TSCA, the use of Section 6(a) has presented challenges as the agency must, in effect, perform a cost-benefit analysis, considering the economic and societal costs of placing controls on the chemical. Specifically, these officials say that EPA must take into account the benefits provided by the various uses of the chemical, the availability of substitutes, and the reasonably ascertainable economic consequences of regulating the chemical after considering the effects of such regulation on the national economy, small business, technological innovation, the environment, and public health. EPA’s 1989 asbestos rule illustrates the evidentiary requirements that TSCA places on EPA to control chemicals under TSCA Section 6(a). The rule prohibited the future manufacture, importation, processing, and distribution of asbestos in almost all products. Some of the manufacturers of these asbestos products filed suit against EPA, arguing that the rule was not promulgated on the basis of substantial evidence regarding unreasonable risk. In October 1991, the U.S. Court of Appeals for the Fifth Circuit agreed with the manufacturers, concluding that EPA had failed to muster substantial evidence to justify its asbestos ban and returning parts of the rule to EPA for reconsideration. 
In reaching this conclusion, the court found that EPA did not consider all necessary evidence and failed to show that the control action it chose was the least burdensome reasonable regulation required to adequately protect human health or the environment. As articulated by the court, the proper course of action for EPA, after an initial showing of product danger, would have been to consider the costs and benefits of each regulatory option available under Section 6, starting with the less restrictive options, such as product labeling, and working up through a partial ban to a complete ban. The court further criticized EPA's ban of asbestos in products for which no substitutes were currently available stating that, in such cases, EPA “bears a tough burden” to demonstrate, as TSCA requires, that a ban is the least burdensome alternative. The court’s decision on the asbestos rule is especially revealing about Section 6 because EPA spent 10 years preparing the rule. In addition, asbestos is generally regarded as one of the substances for which EPA has the most scientific evidence or documentation of substantial adverse health effects. Since the U.S. Court of Appeals for the Fifth Circuit’s ruling in October 1991, EPA has not used TSCA Section 6 to restrict any chemicals. However, EPA has used Section 6 to issue a proposed ban on certain grouts, which was later withdrawn when industry agreed to use personal protection equipment to address worker exposure issues, and issue an Advance Notice of Proposed Rule Making for methyl-t-butyl ether because of widespread drinking water contamination. Although TSCA’s Section 6 has been used infrequently, the Director of OPPT and other EPA officials responsible for implementing TSCA told us that they believe that taking action under this section remains a practicable option for the agency. 
Section 5(a)(2) requires chemical companies to notify EPA at least 90 days before beginning to manufacture or process a chemical for a use that EPA has determined by rule is a significant new use. EPA has these 90 days to review the chemical information in the premanufacture notice and identify the chemical’s potential risks. Under Section 5(e), if EPA determines that there is insufficient information available to permit a reasoned evaluation of the health and environmental effects of a chemical and that (1), in absence of such information, the chemical may present an unreasonable risk of injury to health or the environment or (2) it is or will be produced in substantial quantities and (a) it either enters or may reasonably be anticipated to enter the environment in substantial quantities or (b) there is or may be significant or substantial human exposure to the substance, then EPA can issue a proposed order or seek a court injunction to prohibit or limit the manufacture, processing, distribution in commerce, use, or disposal of the chemical. Under Section 5(f), if EPA finds that the chemical will present an unreasonable risk, EPA must act to protect against the risk. If EPA finds that there is a reasonable basis to conclude that a new chemical may pose an unreasonable risk before it can protect against such risk by regulating it under Section 6 of TSCA, EPA can (1) issue a proposed rule, effective immediately, to require the chemical to be marked with adequate warnings or instructions, to restrict its use, or to ban or limit the production of the chemical or (2) seek a court injunction or issue a proposed order to prohibit the manufacture, processing, or distribution of the chemical. According to the Director of OPPT, it is less difficult for the agency to demonstrate that a chemical “may present” an unreasonable risk than it is to show that a chemical “will present” such a risk. Thus, EPA has found it easier to impose controls on new chemicals when warranted. 
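The nested conditions of Section 5(e) described above can be restated as a single boolean test. The function below is a hedged paraphrase for illustration only, not legal guidance; the flag names are my own shorthand for the statutory findings.

```python
def section_5e_applies(insufficient_info, may_present_risk,
                       substantial_quantities, enters_environment,
                       significant_exposure):
    """Paraphrase of the TSCA Section 5(e) trigger described in the text.

    EPA can issue a proposed order or seek an injunction if available
    information is insufficient for a reasoned evaluation AND either
    (1) the chemical may present an unreasonable risk, or
    (2) it is or will be produced in substantial quantities and
        (a) enters or may enter the environment in substantial
            quantities, or
        (b) there is or may be significant or substantial human exposure.
    """
    return insufficient_info and (
        may_present_risk
        or (substantial_quantities
            and (enters_environment or significant_exposure))
    )
```

Note that insufficient information is a necessary condition in every branch: without it, neither the risk finding nor the production-volume findings trigger Section 5(e).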
Despite limitations in the information available on new chemicals, EPA’s reviews have resulted in some action being taken to reduce the risks of over 3,800 of the 33,000 new chemicals that chemical companies have submitted for review since 1979. These actions included, among other things, chemical companies voluntarily withdrawing their notices of intent to manufacture new chemicals, and entering into consent orders with EPA to produce a chemical only under specified conditions. In addition, EPA has promulgated significant new use rules requiring chemical companies to notify EPA of their intent to manufacture or process certain chemicals for any uses that EPA has determined to be a "significant new use." For over 1,700 chemicals, companies withdrew their PMNs sometimes after EPA officials indicated that the agency planned to initiate the process for placing controls on the chemicals, such as requiring testing or prohibiting the production or certain uses of the chemical. The Director of OPPT told us that after EPA has screened a new chemical or performed a detailed analysis of it, chemical companies may drop their plans to market the chemical when the chemical’s niche in the marketplace is uncertain and EPA requests that the company develop and submit test data or apply exposure controls. According to EPA officials, companies may be uncertain that they will recoup costs associated with the test data and controls and prefer to withdraw their PMN. In addition, for over 1,300 chemicals, EPA issued orders requiring chemical companies to implement workplace controls or practices during manufacturing pending the development of information on the risks posed by the chemicals and/or to perform toxicity testing if the chemicals’ production volumes reached certain levels. 
For over 570 of the 33,000 new chemicals submitted for review, EPA required chemical companies to submit notices for any significant new uses of the chemical, providing EPA the opportunity to review the risks of injury to human health or the environment before new uses begin. For example, in 2003, EPA promulgated a significant new use rule requiring chemical companies to submit a notice for the manufacture or processing of substituted benzenesulfonic acid salt for any use other than as described in the PMN. To control chemical risks, REACH provides procedures for both authorizing and restricting the use of chemicals. Authorization procedures under REACH have three major steps. First, the European Chemicals Agency will publish a list of chemicals—known as the candidate list—that potentially need authorization before they can be used. The chemical agency will determine which chemicals to place on the candidate list after it has reviewed the information that chemical companies submit to the agency at the time the chemicals are registered under REACH and after considering the input provided by individual EU member states and the European Commission. In making this determination, the agency is to use criteria set forth in REACH, covering issues such as bioaccumulation, carcinogenicity, and reproductive toxicity. Secondly, the European Commission will determine which chemicals on the candidate list will require authorization and which will be exempted from the authorization requirements. According to the Environment Counselor for the Delegation of the European Commission to the United States, some chemicals may be exempted from authorization requirements because, so far, sufficient controls established by other legislation are already in place. Finally, once a chemical has been deemed to require authorization, a chemical company will have to apply to the European Commission for an authorization for each use of the chemical. 
The application for authorization must include an analysis of the technical and economic feasibility of using safer substitutes and, if appropriate, information about any relevant research and development activities by the applicant. If such an analysis shows that suitable alternatives are available for any use of the chemical, then the application must also include a plan for how the company plans to substitute the safer chemical for the chemical of concern in that particular use. The European Commission is generally required to grant an authorization if the applicant meets the burden of demonstrating that the risks from the manufacture, use, or disposal of the chemical can be adequately controlled, except for (1) PBTs; (2) very persistent, very bioaccumulative chemicals (vPvBs); and (3) certain other chemicals including those that are carcinogenic or reproductive toxins. However, even these chemicals may receive authorization if a chemical company can demonstrate that social and economic benefits outweigh the risks. In addition, 6 years after REACH goes into effect (or in 2013), the European Commission will review whether endocrine disrupters should also be excluded from authorization unless chemical companies can demonstrate that the social and economic benefits outweigh their risks. Eventually, all chemicals granted authorizations under REACH will be reviewed to ensure that they can be safely manufactured, used, and disposed. The time frame for such reviews will be determined on a case- by-case basis that takes into account information such as the risks posed by the chemical, the availability of safer alternatives, and the social and economic benefits of the use of the chemical. For example, if suitable substitutes become available, the authorization may be amended or withdrawn, even if the chemical company granted the authorization has demonstrated that the chemical can be safely controlled. 
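The authorization rule described above (grant when risks are adequately controlled, except for PBTs, vPvBs, and certain carcinogens or reproductive toxins, which can be authorized only on a socioeconomic showing) can be sketched as a small decision function. This is a simplified illustration of the logic as summarized in the text, not the regulation's actual decision procedure.

```python
def authorization_outcome(adequately_controlled, special_category,
                          benefits_outweigh_risks):
    """Sketch of the REACH authorization rule described in the text.

    special_category covers PBTs, vPvBs, and certain other chemicals such
    as carcinogens and reproductive toxins; for these, adequate control is
    not a sufficient basis, and authorization requires showing that social
    and economic benefits outweigh the risks. For all other chemicals,
    authorization is generally granted if risks can be adequately
    controlled.
    """
    if special_category:
        return "granted" if benefits_outweigh_risks else "denied"
    return "granted" if adequately_controlled else "denied"
```

Even a granted authorization is not permanent under REACH: as the text notes, it may later be amended or withdrawn, for instance if suitable substitutes become available.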
In addition to such authorization procedures, REACH provides procedures for placing restrictions on chemicals that pose an unacceptable risk to health or the environment. The restriction may completely ban a chemical or limit its use by consumers or by manufacturers of certain products. REACH’s restrictions procedures enable the EU to regulate communitywide conditions for the manufacture, marketing, or use of certain chemicals where there is an unacceptable risk to health or the environment. Proposals for restrictions will be prepared by either a Member State or by the European Chemicals Agency at the request of the European Commission. The proposal must demonstrate that there is a risk to human health or the environment that needs to be addressed at the communitywide level and to identify the most appropriate set of risk reduction measures. Interested parties will have an opportunity to comment on the restriction proposal. However, the final determination on the restriction proposal will be made by the European Commission. Because no chemicals have undergone REACH’s authorization and restriction procedures, it is not possible to comment on the ability of these procedures to control the risks of chemicals to human health or the environment. TSCA and REACH require public disclosure of certain information on chemicals and both laws protect confidential or sensitive business information, although the extent to which information can be claimed as confidential or sensitive varies under the two laws. In this regard, one of the objectives of REACH is to make information on chemicals more widely available to the public. Accordingly, REACH places greater limitations on the kinds of information that companies may claim as confidential or sensitive. TSCA has provisions to protect information claimed by chemical companies as confidential or sensitive business information, such as information on chemical production volumes and trade secret formulas. 
Health and safety studies, however, generally cannot be considered confidential business information, and TSCA has provisions for making such studies available to the public. Additionally, EPA can disclose confidential business information when it determines such disclosure is necessary to protect human health or the environment from an unreasonable risk. EPA interprets the term health and safety study broadly and, as such, it may include but is not limited to epidemiological, occupational exposure, toxicological, and ecological studies. However, TSCA generally allows chemical companies to claim any information provided to EPA, other than health and safety studies, as confidential. TSCA requires EPA to protect the information from unauthorized disclosure. More specifically, TSCA restricts EPA’s ability to share certain information it collects from chemical companies, such as information about the company (including its identity), the chemical’s identity, or the site of operation, even with state officials or officials of foreign governments. If a request is made for disclosure of the confidential information, EPA regulations require the chemical company to substantiate the claims by providing the agency information on a number of issues, such as whether the identity of the chemical had been kept confidential from competitors and what harmful effects to the company’s competitive position would result from publication of the chemical on the TSCA Inventory. State environmental agencies and others are interested in obtaining chemical information, including that claimed as confidential, for use in various activities, such as developing contingency plans to alert emergency response personnel of the presence of highly toxic substances at local manufacturing facilities.
Likewise, the general public may find information collected under TSCA useful to engage in dialogues with chemical companies about reducing chemical risks and limiting chemical exposures at nearby facilities that produce or use toxic chemicals. While EPA believes that some claims of confidential business information may be unwarranted, challenging the claims is resource-intensive. According to a 1992 EPA study, the latest performed by the agency, problems with inappropriate claims were extensive. This study examined the extent to which companies made confidential business information claims, the validity of the claims, and the impact of inappropriate claims on the usefulness of TSCA data to the public. The study found that many of the confidentiality claims submitted under TSCA were not appropriate, particularly for health and safety data. For example, between September 1990 and May 1991, EPA reviewed 351 health and safety studies that chemical companies submitted with a claim of confidentiality. EPA challenged the confidentiality claimed for 77, or 22 percent of the studies and, in each case, the submitter amended the confidentiality claim when challenged by EPA. Currently, while EPA may suspect that some chemical companies’ confidentiality claims are unwarranted, the agency does not have data on the number of inappropriate claims. As we reported in June 2005, EPA focuses on investigating primarily those claims that it believes may be both inappropriate and among the most potentially important—that is, claims relating to health and safety studies performed by chemical companies. According to the EPA official responsible for initiating challenges to confidentiality claims, the agency challenges about 14 such claims each year, and the chemical companies withdraw nearly all of the claims challenged. 
Chemical companies have expressed interest in working with EPA to identify ways to enable other organizations to use the information, provided that appropriate safeguards are adopted. In addition, chemical company representatives told us that, in principle, they have no concerns about revising TSCA or EPA regulations to require that confidentiality claims be periodically reasserted and reviewed. However, neither TSCA nor EPA regulations require periodic reviews to determine when information no longer needs to be protected as confidential. In our June 2005 report, we recommended that EPA revise its regulations to require that companies reassert claims of confidentiality submitted to EPA under TSCA within a certain time period after the information is initially claimed as confidential. In July 2006, EPA responded to Congress that the agency planned to initiate a pilot process, using its existing authorities, to review selected older submissions containing CBI claims. According to EPA officials, the agency is examining PMNs and notices of commencement submitted to EPA from fiscal year 1993 through March 2007 and plans to compile statistics on the numbers and percentages of submissions and the types of CBI claims made. Based on the agency’s review, and in light of its other regulatory priorities, EPA will consider whether rule making is appropriate to maximize the benefits of a reassertion program, including benefits to the public. However, no completion date has been determined for the pilot. Similar to TSCA, REACH has provisions to protect information claimed by chemical companies as confidential or sensitive, including trade secret formulas and production volumes.
In addition, REACH treats some information as confidential, including the following, even if a company did not claim it as confidential: (1) details of the full composition of the chemical’s preparation; (2) the precise use, function, or application of the chemical or its preparation; (3) the precise tonnage or volume of the chemical manufactured or placed on the market; or (4) relationships between manufacturers/importers and downstream users. In exceptional cases where there are immediate risks to human health and safety or to the environment, REACH authorizes the European Chemicals Agency to publicly disclose this information. Furthermore, unlike TSCA, REACH places substantial restrictions on the types of data that chemical companies may claim as confidential. Consistent with one of the key objectives of REACH, the legislation makes information on hazardous chemicals widely available to the public by limiting the types of hazard information that chemical companies may claim as confidential. REACH generally does not allow confidentiality claims related to, among other things, guidance on the chemical’s safe use; the chemical’s physical and chemical properties, such as melting and boiling points; and results of toxicological and ecotoxicological studies, including analytical methods that make it possible to detect a dangerous substance when discharged into the environment and to determine the effects of direct exposure to humans. In addition, other information, such as study summaries and tonnage band information, will be available unless the chemical companies justify that disclosing the information would be harmful to their commercial interests. REACH also requires that safety data sheets for PBTs, vPvBs, and other chemicals classified as dangerous be provided to ensure that commercial users (known as downstream users) and distributors of a chemical, as well as chemical manufacturers and importers, have the information they need to use chemicals safely.
The data sheets, which chemical companies are required to prepare, include information on health, safety, and environmental properties, as well as risks and risk management measures. Similar to TSCA, REACH requires public disclosure of health and safety information and has provisions for making information available to the public. REACH also includes a provision for public access to basic chemical information, including brief profiles of hazardous properties, labeling requirements, authorized uses, and risk management measures. The European Union’s rules regarding public access to information balance the public’s right to know against the need to keep certain information confidential in a variety of ways. As such, nonconfidential information will be published on the chemical agency’s Web site. However, some types of information are always to be treated as confidential under REACH, such as precise production volume. REACH also includes a provision under which confidential information can generally be shared with government authorities of other countries or international organizations under an agreement between the parties, provided that the following conditions are met: (1) the purpose of the agreement is cooperation on the implementation or management of legislation concerning the chemicals covered by REACH and (2) the foreign government or international organization protects the confidential information as mutually agreed. In our June 2005 report, we suggested that Congress should consider amending TSCA to authorize EPA to share with the states and foreign governments the confidential business information that chemical companies provide to the agency, subject to regulations to be established by EPA in consultation with the chemical industry and other interested parties that would set forth the procedures to be followed by all recipients of the information in order to protect the information from unauthorized disclosures.
Furthermore, chemical industry representatives told us that chemical companies would not object to Congress revising TSCA to allow those with a legitimate reason to obtain access to the confidential business information provided that adequate safeguards exist to protect the information from inappropriate disclosures. In addition, EPA officials said that harmonized international chemical assessments would be improved if the agency had the ability to share this information under appropriate procedures to protect confidentiality. Substantial differences exist between TSCA and REACH in their approaches to obtaining the information needed to identify chemical risks; controlling the manufacture, distribution, and use of chemicals; and providing the public with information on harmful chemicals. Assuming that the EU has the ability to review chemical information in a timely manner, specific provisions under REACH provide a means for addressing long-standing difficulties experienced both under TSCA and previous European chemicals legislation in (1) obtaining information on chemicals’ potentially harmful characteristics and their potential exposure to people and the environment and (2) making the chemical industry more accountable for ensuring the safety of their products. Furthermore, REACH is structured to provide a broader range of data about chemicals that could enable people to make more informed decisions about the products they use in their everyday lives. We have identified, in our previous reports on TSCA, various potential revisions to the act that could strengthen TSCA to obtain additional chemical information from the chemical industry, shift more of the burden to chemical companies for demonstrating the safety of their chemicals, and enhance the public’s understanding of the risks of chemicals to which they may be exposed. 
We provided EPA and the Environment Counselor for the Delegation of the European Commission to the United States a draft of this report for review and comment. Both EPA and the Environment Counselor for the Delegation of the European Commission provided technical comments, which we have incorporated into this report as appropriate. EPA also provided written comments. EPA highlighted the regulatory actions it has taken under TSCA and noted that TSCA is a “fully implemented statute that has withstood the test of time” and that, in contrast, “REACH is not yet in force, and there is no practical experience with any aspect of its implementation.” Furthermore, while EPA agreed that it is possible to compare the approaches used to protect against the risks of toxic chemicals under TSCA and REACH, “it is not yet possible to evaluate or compare the effectiveness of the different chemical management approaches or requirements.” EPA’s written comments are presented in appendix V. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the congressional committees with jurisdiction over EPA and its activities; the Administrator, EPA; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512- 3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. 
Our objectives were to describe how Toxic Substances Control Act (TSCA) compares with Registration, Evaluation and Authorization of Chemicals (REACH) in its approaches to (1) identifying chemicals harmful to public health and the environment, (2) controlling chemical risks, and (3) disclosing chemical data to the public while protecting confidential business information. In addressing these issues, we also obtained information on Environmental Protection Agency’s (EPA) voluntary chemical control programs that complement TSCA. We reviewed the relevant provisions of TSCA, identified and analyzed EPA’s regulations on how the new and existing chemical review and control programs work, including the handling of confidential information, and determined the extent of actions taken by EPA to control chemicals. These efforts were augmented by interviews with EPA officials in the agency’s Office of Pollution Prevention and Toxics (OPPT), the EPA office with primary responsibility for implementing TSCA, the High Production Volume (HPV) Challenge Program, and the Voluntary Children’s Chemical Evaluation Program (VCCEP) pilot. In addition, we interviewed representatives of the American Chemistry Council (a national chemical manufacturers association), Environmental Defense (a national, nonprofit, environmental advocacy organization), and the Synthetic Organic Chemical Manufacturers Association (a national, specialty chemical manufacturer’s association). We also attended meetings of EPA’s National Pollution Prevention and Toxics Advisory Committee (NPPTAC) and attended various conferences sponsored by EPA and others. We selected the industry and environmental experts we interviewed based on discussions with NPPTAC representatives and based on our prior work on TSCA. Finally, we obtained and reviewed EPA documents related to its chemical program. 
For reviewing REACH, we obtained laws, technical literature, and government documents that describe the European Union's (EU) chemical control program. We also interviewed EU officials who helped develop and who will be involved in implementing REACH, including the Environment Counselor for the Delegation of the European Commission to the United States and representatives from the European Commission and the European Parliament. Our descriptions of these laws are based on interviews with government officials and written materials they provided. In addition, we interviewed representatives of the American Chamber of Commerce to the EU, American Chemistry Council (a national chemical manufacturers association), Environmental Defense (a national, nonprofit environmental advocacy organization), the European Chemical Industry Council (an EU chemical manufacturers association), the European Environmental Bureau (a federation of environmental advocacy organizations based in the EU Member States), and the Synthetic Organic Chemical Manufacturers Association (a national, specialty chemical manufacturer's association). Furthermore, we interviewed staff from the U.S. Mission to the EU. Finally, for the purposes of this report, we compared TSCA to the REACH legislation that was approved in December 2006, as the basis for analysis. Our review was performed between January 2006 and May 2007 in accordance with generally accepted government auditing standards. New chemicals are those not on the TSCA Inventory; existing chemicals are those listed in the TSCA Inventory. REACH creates a single system, so there will be virtually no distinction between new and existing chemicals. Of the more than 82,000 chemicals currently in the TSCA Inventory (originally about 62,000), approximately 20,000 were added to the inventory since EPA began reviewing chemicals in 1979. 
EU officials estimated the number of chemicals with production or import levels of at least 1 metric ton (2,205 pounds) to be about 30,000. Chemical registration will be phased in over 11 years after enactment of REACH. Under TSCA, companies are required to notify EPA prior to manufacturing a new chemical. A company notifies EPA of its intent to manufacture a new chemical through submission of a Premanufacture Notice (PMN) or an application for exemption. After the PMN review period has expired and within 30 days of the chemical's manufacture, the company submits a Notice of Commencement of Manufacture or Import to EPA. The chemical is then added to the TSCA Inventory and is classified as an existing chemical. TSCA generally does not require chemical companies to notify EPA of changes in use or production volume. However, every 5 years companies are required to update EPA on information such as the processing, use, and production volume of chemicals produced in volumes over 25,000 pounds, and companies must notify EPA if they obtain information that reasonably supports the conclusion that a chemical presents a substantial risk to human health or the environment. In general, REACH treats new and existing chemicals the same. Chemical companies register chemicals with the European Chemicals Agency once production or import of a chemical reaches 1 metric ton (2,205 pounds). After registration, companies are required to immediately notify the European Chemicals Agency of significant changes in use or production volumes of the registered chemical. Based on information compiled through a series of steps, including a chemical review strategy meeting, structure-activity relationship analysis, and exposure-based reviews, EPA makes a decision ranging from "dropping" a chemical from further review to banning a chemical pending further information. TSCA does not require EPA to systematically prioritize and assess existing chemicals. 
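The two regimes use different volume triggers stated in different units. A minimal sketch, illustrative only and not an official rule engine, of how the thresholds described above compare (the function names are invented for this example; the figures come from the text):

```python
# Illustrative comparison of the volume triggers described above.
# Per the text: TSCA's Inventory Update Rule reporting applies to
# chemicals produced at 25,000 pounds or more, while REACH
# registration applies once production or import reaches
# 1 metric ton (2,205 pounds).

LB_PER_METRIC_TON = 2205  # rounded conversion used in the report

def tsca_iur_reporting_required(annual_volume_lb: float) -> bool:
    """TSCA: companies update EPA every 5 years on chemicals
    produced at 25,000 pounds or more."""
    return annual_volume_lb >= 25_000

def reach_registration_required(annual_volume_lb: float) -> bool:
    """REACH: registration once production or import reaches
    1 metric ton (2,205 pounds)."""
    return annual_volume_lb >= 1 * LB_PER_METRIC_TON

# REACH's trigger is far lower: a 10,000-pound chemical is
# registrable under REACH but below the TSCA IUR threshold.
print(tsca_iur_reporting_required(10_000))   # False
print(reach_registration_required(10_000))   # True
```

The comparison makes concrete why REACH captures many more low-volume chemicals: its 2,205-pound trigger is roughly one-eleventh of TSCA's 25,000-pound reporting threshold.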
However, TSCA established an Interagency Testing Committee—an advisory committee created to identify chemicals for which there are suspicions of toxicity or exposure and for which there are few, if any, ecological-effects, environmental-fate, or health-effects testing data—to recommend chemicals to which EPA should give priority consideration in promulgating test rules. EPA also plans to use the High Production Volume (HPV) Challenge Program and the information collected under the Inventory Update Rule to help the agency prioritize the chemicals it will review. Under REACH, the European Chemicals Agency will develop the criteria for prioritizing chemicals for further review based on, among other things, hazard data, exposure data, and production volume; Member States may use these criteria when developing their lists of chemicals to be reviewed. New chemicals, once they have commenced manufacture, are added to the TSCA Inventory. Such former new chemicals can be subject to significant new use rules (SNUR) or to restrictions on the manufacture, processing, distribution in commerce, use, or disposal of the chemical under TSCA 5(e) consent orders. Chemical companies report use information once every 5 years under TSCA's Inventory Update Rule (IUR), which is primarily used to gather certain information on chemicals produced at the threshold of 25,000 pounds or more. However, in the absence of a SNUR on a particular chemical, there is no requirement for chemical companies to notify EPA of significant new uses of existing chemicals in the intervening years or for chemicals produced at less than 25,000 pounds. Manufacturers and processors of existing chemicals subject to a SNUR must notify EPA 90 days before manufacture of or processing for a significant new use. Under REACH, chemical companies must immediately inform the European Chemicals Agency in writing of new uses of a chemical about which the company may reasonably be expected to have become aware. 
Chemical companies are not required to perform risk assessments on new chemicals under TSCA; however, if a company has voluntarily performed risk assessments, it must submit those data with the PMN. Nor are chemical companies required to complete assessments of the risks of existing chemicals, although TSCA requires them to notify EPA immediately of new unpublished information on chemicals that reasonably supports a conclusion of substantial risk. Under REACH, chemical companies must conduct a risk assessment, in addition to European Chemicals Agency review, for all chemicals produced at a level of 1 metric ton or more per year, and must conduct a chemical safety assessment for all chemicals produced at a level of 10 metric tons or more per year. TSCA contains no specific language relating to reducing animal testing for either new or existing chemicals. However, according to EPA officials, TSCA's approach of not requiring companies to test new chemicals for health hazards or environmental effects absent EPA action, combined with EPA's use of Structure Activity Relationship (SAR) analysis, reduces the need for animal testing compared with requiring a base set of data without the use of SAR analysis. For existing chemicals, under the HPV Challenge Program, EPA encourages companies to consider approaches—such as using existing data, sharing data, and using SAR and read-across approaches—that would reduce the amount of animal testing needed. Further, EPA does not require retesting for chemicals with adequate Screening Information Data Set data. EPA has expressed its commitment to examining alternative test methods that reduce the number of animals needed for testing, reduce pain and suffering to test animals, or replace test animals with validated in vitro (nonanimal) test systems. REACH states that testing on vertebrate animals for the purposes of regulation shall be undertaken as a last resort. 
To reduce the amount of animal testing, REACH encourages the sharing and joint submission of information, and REACH implementation guidance encourages the use of SAR and read-across approaches. Further, registrants may use any study summaries or robust study summaries performed within the previous 12 years by another manufacturer or importer to register, after due compensation of the costs to the owner of the data. In addition, under the Voluntary Children's Chemical Evaluation Program (VCCEP), EPA encouraged participating companies to reduce or eliminate animal testing. On production volumes, chemical companies must provide EPA a reasonable third-year estimate of the total production volume of a new chemical at the time a PMN is submitted, and they report production quantities every 5 years through the Inventory Update Rule (IUR) for those chemicals on the TSCA Inventory produced in quantities of 25,000 pounds or more. Under REACH, chemical companies must include information on the overall manufacture or import of a chemical, in metric tons per year, in a technical dossier with their registration and must immediately report any significant changes in the annual or total quantities manufactured or imported. TSCA contains no specific requirements relating to downstream users. Under REACH, downstream users must assemble and keep available all information required to carry out their duties under REACH for a period of at least 10 years after the substance has been used, and must prepare a chemical safety report for any use outside the conditions described in an exposure scenario (or, if appropriate, a use and exposure category) described in a safety data sheet, or for any use the supplier advises against. Downstream users may also provide information to assist in the preparation of a registration. 
Under TSCA, EPA can issue a proposed order or seek a court injunction to prohibit or limit the manufacture, processing, distribution in commerce, use, or disposal of a new chemical if EPA determines that there is insufficient information available to permit a reasoned evaluation of the health and environmental effects of the chemical and that (1) in the absence of such information, the chemical may present an unreasonable risk of injury to health or the environment or (2) it is or will be produced in substantial quantities and (a) it either enters or may reasonably be anticipated to enter the environment in substantial quantities or (b) there is or may be significant or substantial human exposure to the substance. For existing chemicals, TSCA requires EPA to apply regulatory requirements to chemicals for which EPA finds a reasonable basis to conclude that the chemical presents or will present an unreasonable risk to human health or the environment. To adequately protect against a chemical's risk, EPA can promulgate a rule that bans or restricts the chemical's production, processing, distribution in commerce, use, or disposal, or that requires warning labels to be placed on the chemical. Section 6(a) authorizes EPA to regulate existing chemicals, including through restriction or prohibition; EPA is required to apply the least burdensome requirement, and the rule must be supported by substantial evidence in the rulemaking record. Under REACH, chemicals may be regulated under provisions known as authorization and restriction. Authorization is required for the use of substances of very high concern. These include substances that are (1) carcinogenic, mutagenic, or toxic for reproduction; (2) persistent, bioaccumulative, and toxic or very persistent and very bioaccumulative; or (3) identified as causing serious and irreversible effects to humans or the environment, such as endocrine disrupters. 
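The authorization criteria for substances of very high concern amount to a three-branch test. A minimal illustrative sketch of that test as a predicate; the property flags and function name are invented for this example and are not part of any official REACH tooling:

```python
# Illustrative predicate for REACH "substances of very high concern"
# (SVHC), following the three criteria described above. The property
# flags are hypothetical labels, not an official vocabulary.
def requires_authorization(props: set[str]) -> bool:
    """SVHC if (1) carcinogenic, mutagenic, or toxic for reproduction
    (CMR); (2) PBT or vPvB; or (3) identified as causing serious and
    irreversible effects (e.g., endocrine disruption)."""
    cmr = {"carcinogenic", "mutagenic", "toxic_for_reproduction"}
    persistence = {"PBT", "vPvB"}
    return bool(props & cmr
                or props & persistence
                or "serious_irreversible_effects" in props)

print(requires_authorization({"PBT"}))        # True
print(requires_authorization({"flammable"}))  # False
```

The sketch only captures the classification step; under REACH, a substance meeting any branch cannot be used without an authorization application.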
Under REACH, restrictions on a substance relating to its manufacture, marketing, or use, including banning, may be imposed where there is an unacceptable risk to health or the environment. On compliance monitoring, EPA maintains compliance officials to monitor compliance with TSCA, while REACH requires EU Member States to monitor compliance with its provisions. TSCA contains no specific language relating to substitution or finding safer alternatives; under REACH, authorization applications (for chemicals of very high concern) require an analysis of possible alternatives or substitutes. On confidentiality, TSCA allows companies to make confidentiality claims on nearly all information they provide to EPA, whereas REACH allows chemical companies to make confidentiality claims but places restrictions on what kinds of information companies may claim as confidential. On public disclosure, TSCA requires that existing health and safety-related information be made available to the public, and EPA uses its HPV Challenge Program to voluntarily gather information from industry and ensure that a minimum set of basic data on approximately 2,800 high-production-volume chemicals is available to the public. REACH requires public disclosure of information such as the trade name of the substance, certain physicochemical data, guidance on safe use, and all health and safety-related information. Neither TSCA nor REACH contains specific language relating to children's health. 
However, under the TSCA Inventory Update Reporting Regulation of December 2005, manufacturers of chemicals in volumes of 300,000 pounds or more must report use in or on products intended for use by children. As requested, we identified a number of options that could strengthen EPA's ability under TSCA to assess chemicals and control those found to be harmful. These options have been previously identified in earlier GAO reports on ways to make TSCA more effective. Representatives of environmental organizations and subject matter experts subsequently concurred with a number of these options and commented on them in congressional testimony. These options are not meant to be comprehensive but illustrate actions that the Congress could take to strengthen EPA's ability to regulate chemicals under TSCA. The Congress may wish to consider revising TSCA to place more of the burden on industry to demonstrate that new chemicals are safe. Some of the burden could be shifted by requiring industry to test new chemicals based on substantial production volume and the necessity for testing, and to notify EPA of significant increases in production, releases, and exposures or of significant changes in manufacturing processes and uses after new chemicals are marketed. To put existing chemicals on a more equal footing with new chemicals, the Congress could consider revising TSCA to set specific deadlines or targets for the review of existing chemicals. These deadlines or targets would help EPA to establish priorities for reviewing those chemicals that, on the basis of their toxicity, production volumes, and potential exposure, present the highest risk to health and the environment. The Congress could also consider revising TSCA to shift more of the burden for reviewing existing chemicals to industry. If more of the responsibility for assessing existing chemicals were shared by industry, EPA could review more chemicals with current resources. 
In deciding how much of the burden to shift to industry, the Congress would need to consider the extent to which providing data to show that chemicals are safe should be a cost of doing business for the chemical industry. To ensure that EPA can implement its initiatives without having to face legal challenges and delays, the Congress may wish to consider revising TSCA to:
- provide explicit authority for EPA to enter into enforceable consent agreements under which chemical companies are required to conduct testing;
- clarify that health and safety data cannot be claimed as confidential;
- require substantiation of confidentiality claims at the time that the claims are submitted to EPA;
- limit the length of time for which information may be claimed as confidential without reaffirming the need for confidentiality;
- establish penalties for the false filing of confidentiality claims; and
- authorize states and foreign governments to have access to confidential business information when they can demonstrate to EPA that they have a legitimate need for the information and can adequately protect it against unauthorized disclosure.
Once a company begins production of a chemical, it is placed on the TSCA Inventory and is classified as an existing chemical. For the HPV Challenge Program, only one of the three route-of-exposure tests (oral, inhalation, or dermal) is required. For REACH, the oral route test is the only one required at 1 metric ton or above, and all three (oral, inhalation, and dermal) are required at 10 metric tons or above. These tests may be required at production volumes of 1 million pounds (about 454 metric tons) or more. Three biotic degradation tests are specified: simulation testing on ultimate degradation in surface water; soil simulation testing (for substances with a high potential for adsorption to soil); and sediment simulation testing (for substances with a high potential for adsorption to sediment). 
The choice of the appropriate test(s) depends on the results of the chemical safety assessment. In addition to the individual named above, David Bennett, John Delicath, Richard Johnson, Valerie Kasindi, Ed Kratzer, and Tyra Thompson made key contributions to this report.

Chemicals play an important role in everyday life. However, some chemicals are highly toxic and need to be regulated. In 1976, the Congress passed the Toxic Substances Control Act (TSCA) to authorize the Environmental Protection Agency (EPA) to control chemicals that pose an unreasonable risk to human health or the environment, but some have questioned whether TSCA provides EPA with enough tools to protect against chemical risks. Like the United States, the European Union (EU) has laws governing the production and use of chemicals. The EU has recently revised its chemical control policy through legislation known as Registration, Evaluation and Authorization of Chemicals (REACH) in order to better identify and mitigate risks from chemicals. GAO was asked to review the approaches used under TSCA and REACH for (1) requiring chemical companies to develop information on chemicals' effects, (2) controlling risks from chemicals, and (3) making information on chemicals available to the public. To review these issues, GAO analyzed applicable U.S. and EU laws and regulations and interviewed U.S. and EU officials, industry representatives, and environmental advocacy organizations. GAO is making no recommendations. REACH requires companies to develop information on chemicals' effects on human health and the environment, while TSCA does not require companies to develop such information absent EPA rule-making requiring them to do so. While TSCA does not require companies to develop information on chemicals before they enter commerce (new chemicals), companies are required to provide EPA any information that may already exist on a chemical's impact on human health or the environment. 
Companies do not have to develop information on the health or environmental impacts of chemicals already in commerce (existing chemicals) unless EPA formally promulgates a rule requiring them to do so. Partly because of the resources and difficulties the agency faces in order to require testing to develop information on existing chemicals, EPA has moved toward using voluntary programs as an alternative means of gathering information from chemical companies in order to assess and control the chemicals under TSCA. While these programs are noteworthy, data collection has been slow in some cases, and it is unclear if the programs will provide EPA enough information to identify and control chemical risks. TSCA places the burden of proof on EPA to demonstrate that a chemical poses a risk to human health or the environment before EPA can regulate its production or use, while REACH generally places a burden on chemical companies to ensure that chemicals do not pose such risks or that measures are identified for handling chemicals safely. In addition, TSCA provides EPA with differing authorities for controlling risks, depending on whether the risks are posed by new or existing chemicals. For new chemicals, EPA can restrict a chemical's production or use if the agency determines that insufficient information exists to permit a reasoned evaluation of the health and environmental effects of the chemical and that, in the absence of such information, the chemical may present an unreasonable risk. For existing chemicals, EPA may regulate a chemical for which it finds a reasonable basis exists to conclude that it presents or will present an unreasonable risk. Further, TSCA requires EPA to choose the regulatory action that is least burdensome in mitigating the unreasonable risk. However, EPA has found it difficult to promulgate rules under this standard. Under REACH, chemical companies must obtain authorization to use chemicals that are listed as chemicals of very high concern. 
Generally, to obtain such authorization, chemical companies need to demonstrate that they can adequately control risks posed by the chemical or otherwise ensure that the chemical is used safely. TSCA and REACH both have provisions to protect information claimed by chemical companies as confidential or sensitive business information, but REACH requires greater public disclosure of certain information, such as basic chemical properties, including melting and boiling points. In addition, REACH places greater restrictions on the kinds of information chemical companies may claim as confidential. 
Our preliminary results indicate that, in the absence of RAMP, FPS currently is not assessing risk at the over 9,000 federal facilities under the custody and control of GSA in a manner consistent with federal standards such as NIPP’s risk management framework, as FPS originally planned. According to this framework, to be considered credible a risk assessment must specifically address the three components of risk: threat, vulnerability, and consequence. As a result, FPS has accumulated a backlog of federal facilities that have not been assessed for several years. According to FPS data, more than 5,000 facilities were to be assessed in fiscal years 2010 through 2012. However, we were not able to determine the extent of the FSA backlog because we found FPS’s FSA data to be unreliable. Specifically, our analysis of FPS’s December 2011 assessment data showed nearly 800 (9 percent) of the approximately 9,000 federal facilities did not have a date for when the last FSA was completed. We have reported that timely and comprehensive risk assessments play a critical role in protecting federal facilities by helping decision makers identify and evaluate potential threats so that countermeasures can be implemented to help prevent or mitigate the facilities’ vulnerabilities. Although FPS is not currently assessing risk at federal facilities, FPS officials stated that the agency is taking steps to ensure federal facilities are safe. According to FPS officials, its inspectors (also referred to as law enforcement security officers) monitor the security posture of federal facilities by responding to incidents, testing countermeasures, and conducting guard post inspections. 
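The data-reliability finding above rests on a simple completeness check: counting facilities with no recorded date for their last facility security assessment (FSA). A minimal illustrative sketch of that kind of analysis; the facility records and the field name are made up for this example:

```python
# Illustrative completeness check mirroring the analysis described
# above: flag facilities whose last-FSA date is missing. The records
# below are fabricated for demonstration.
facilities = [
    {"id": "A-001", "last_fsa": "2009-04-17"},
    {"id": "A-002", "last_fsa": None},   # no completion date recorded
    {"id": "A-003", "last_fsa": "2008-11-02"},
]

missing = [f for f in facilities if f["last_fsa"] is None]
share = len(missing) / len(facilities)
print(f"{len(missing)} of {len(facilities)} facilities "
      f"({share:.0%}) lack an FSA date")

# At the scale in the report, roughly 800 of about 9,000 facilities
# had no recorded FSA date:
print(round(800 / 9_000 * 100))  # 9 (percent)
```

A record with no completion date cannot be placed in or out of the backlog, which is why the missing dates made the overall backlog unquantifiable.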
In addition, since September 2011, FPS’s inspectors have collected information—such as location, purpose, agency contacts, and current countermeasures (e.g., perimeter security, access controls, and closed-circuit television systems) at over 1,400 facilities—which will be used as a starting point to complete FPS’s fiscal year 2012 assessments. However, FPS officials acknowledged that this approach is not consistent with NIPP’s risk management framework. Moreover, several FPS inspectors told us that they received minimal training or guidance on how to collect this information, and expressed concern that the facility information collected could become outdated by the time it is used to complete an FSA. We reported in February 2012 that multiple federal agencies have been expending additional resources to conduct their own risk assessments, in part because they have not been satisfied with FPS’s past assessments. These assessments are taking place even though, according to FPS’s Chief Financial Officer, FPS received $236 million in basic security fees from federal agencies to conduct FSAs and other security services in fiscal year 2011. For example, officials we spoke with at the Internal Revenue Service, Federal Emergency Management Agency, Environmental Protection Agency and the U.S. Army Corps of Engineers stated that they conduct their own risk assessments. GSA is also expending additional resources to assess risk. We reported in October 2010 that GSA officials did not always receive timely FPS risk assessments for facilities GSA considered leasing. GSA seeks to have these assessments completed before it takes possession of a property and leases it to tenant agencies. However, our preliminary work indicates that as of June 2012, FPS has not coordinated with GSA and other federal agencies to reduce or prevent duplication of its assessments. 
In September 2011, FPS signed an interagency agreement with Argonne National Laboratory for about $875,000 to develop an interim tool for conducting vulnerability assessments by June 30, 2012. According to FPS officials, on March 30, 2012, Argonne National Laboratory delivered this tool, called the Modified Infrastructure Survey Tool (MIST), to FPS on time and within budget. MIST is an interim vulnerability assessment tool that FPS plans to use until it can develop a permanent solution to replace RAMP. According to MIST project documents and FPS officials, among other things, MIST will:
- allow FPS's inspectors to review and document a facility's security posture and current level of protection, and recommend countermeasures;
- provide FPS's inspectors with a standardized way of gathering and recording facility data; and
- allow FPS to compare a facility's existing countermeasures against the Interagency Security Committee's (ISC) countermeasure standards, based on the ISC's predefined threats to federal facilities (e.g., blast-resistant windows for a facility designed to counter the threat of an explosive device), to create the facility's vulnerability report.
According to FPS officials, MIST will provide several potential improvements over FPS's prior assessment tools, such as using a standard way of collecting facility information and allowing edits to GSA's facility data when FPS inspectors find it is inaccurate. In addition, according to FPS officials, after completing a MIST vulnerability assessment, inspectors will use additional threat information gathered outside of MIST by FPS's Threat Management Division, as well as local crime statistics, to identify any additional threats and generate a threat assessment report. FPS plans to provide the facility's threat and vulnerability reports, along with any countermeasure recommendations, to the federal tenant agencies. 
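The ISC comparison MIST performs can be pictured as a set difference: the countermeasures a standard calls for, minus those present at the facility. The sketch below is illustrative only; the countermeasure names, threat label, and data layout are assumptions, not MIST's actual design:

```python
# Illustrative sketch of the comparison described above: a facility's
# existing countermeasures versus an ISC-style baseline for a
# predefined threat. All names here are hypothetical.
ISC_BASELINE = {
    "explosive_device": {
        "blast_resistant_windows",
        "standoff_distance",
        "vehicle_barriers",
    },
}

def countermeasure_gaps(threat: str, existing: set[str]) -> set[str]:
    """Return baseline countermeasures the facility lacks for a threat;
    the gaps would feed the facility's vulnerability report."""
    return ISC_BASELINE[threat] - existing

facility = {"blast_resistant_windows", "cctv"}
print(sorted(countermeasure_gaps("explosive_device", facility)))
# ['standoff_distance', 'vehicle_barriers']
```

Countermeasures present but not in the baseline (here, "cctv") simply do not appear as gaps; the comparison only surfaces what the standard requires and the facility lacks.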
In May 2012, FPS began training inspectors on MIST and how to use the threat information obtained outside MIST and expects to complete the training by the end of September 2012. According to FPS officials, inspectors will be able to use MIST once they have completed training and a supervisor has determined, based on professional judgment, that the inspector is capable of using MIST. At that time, an inspector will be able to use MIST to assess level I or II facilities. According to FPS officials, once these assessments are approved, FPS will subsequently determine which level III and IV facilities the inspector may assess with MIST. Our preliminary analysis indicates that in developing MIST, FPS increased its use of GAO's project management best practices, including alternatives analysis, managing requirements, and conducting user acceptance testing. For example, FPS completed, although it did not document, an alternatives analysis prior to selecting MIST as an interim tool to replace RAMP. It appears that FPS also better managed MIST's requirements. Specifically, FPS's Director required that MIST be an FSA-exclusive tool and thus helped avoid changes in requirements that could have resulted in cost or schedule increases during development. In March 2012, FPS completed user acceptance testing of MIST with some inspectors and supervisors, as we recommended in 2011. According to FPS officials, user feedback on MIST was positive from the user acceptance test, and MIST produced the necessary output for FPS's FSA process. However, FPS did not obtain GSA or federal tenant agencies' input in developing MIST's requirements. Without this input, FPS's customers may not receive the information they need to make well-informed countermeasure decisions. FPS has yet to decide what tool, if any, will replace MIST, which is intended to be an interim vulnerability assessment tool. According to FPS officials, the agency plans to use MIST for at least the next 18 months. 
Consequently, until FPS decides what tool, if any, will replace MIST and RAMP, it will still not be able to assess risk at federal facilities in a manner consistent with NIPP, as we previously mentioned. Our preliminary work suggests that MIST has several limitations: Assessing Consequence. FPS did not design MIST to estimate consequence, a critical component of a risk assessment. Assessing consequence is important because it combines vulnerability and threat information to evaluate the potential effects of an adverse event on a federal facility. Three of the four risk assessment experts we spoke with generally agreed that a tool that does not estimate consequences does not allow an agency to fully assess the risks to a federal facility. However, FPS officials stated that incorporating consequence information into an assessment tool is a complex task. FPS officials stated that they did not include consequence assessment in MIST’s design because it would have required additional time to develop, validate, and test MIST. As a result, while FPS may be able to identify a facility’s vulnerabilities to different threats using MIST, without consequence information, federal tenant agencies may not be able to make fully informed decisions about how to allocate resources to best protect federal facilities. FPS officials do not know if this capability can be developed in the future, but they said that they are working with the ISC and DHS’s Science and Technology Directorate to explore the possibility. Comparing Risk across Federal Facilities. FPS did not design MIST to present comparisons of risk assessment results across federal facilities. Consequently, FPS cannot take a comprehensive approach to managing risk across its portfolio of 9,000 facilities to prioritize recommended countermeasures to federal tenant agencies. 
Instead, FPS takes a facility-by-facility approach to risk management in which all facilities with the same security level are assumed to have the same security risk, regardless of their location. We reported in 2010 that FPS’s approach to risk management provides limited assurance that the most critical risks at federal facilities across the country are being prioritized and mitigated. FPS recognized the importance of having such a comprehensive approach to its FSA program when it developed RAMP, and FPS officials stated that they may develop this capability for the next version of MIST. Measuring Performance. FPS has not developed metrics to measure MIST’s performance, such as feedback surveys from tenant agencies. Measuring performance allows organizations to track progress toward their goals and gives managers critical information on which to base decisions for improving their programs. This is a necessary component of effective management and should provide agency managers with timely, action-oriented information. Without such metrics, FPS’s ability to improve MIST will be hampered. FPS officials stated that they are planning to develop performance measures for MIST, but did not give a time frame for when they will do so. Although we reported on these challenges in 2011, FPS did not stop using RAMP for guard oversight until June 2012, when the RAMP operations and maintenance contract was due to expire. (See GAO, Homeland Security: The Federal Protective Service Faces Several Challenges That Hamper its Ability to Protect Federal Facilities, GAO-08-683 (Washington, D.C.: June 11, 2008).) In the absence of RAMP, in June 2012, FPS decided to deploy an interim method to enable inspectors to record post inspections. FPS officials said this capability is separate from MIST, will not allow FPS to generate post inspection reports, and does not include a way for FPS inspectors to check guard training and certification data during a post inspection. 
FPS officials acknowledged that this method is not a comprehensive system for guard oversight. Consequently, it is now more difficult for FPS to verify that guards on post are trained and certified and that inspectors are conducting guard post inspections as required. Although FPS collects guard training and certification information from the companies that provide contract guards, it appears that FPS does not independently verify that information. FPS currently requires its guard contractors to maintain their own files containing guard training and certification information and began requiring them to submit a monthly report with this information to FPS’s regions in July 2011. To verify the guard companies’ reports, FPS conducts monthly audits. As part of its monthly audit process, FPS’s regional staff visits the contractor’s office to select 10 percent of the contractor’s guard files and check them against the reports guard companies send FPS each month. In addition, in October 2011, FPS undertook a month-long audit of every guard file to verify that guards had up-to-date training and certification information for its 110 contracts across its 11 regions. FPS provided preliminary October 2011 data showing that 1,152 (9 percent) of the 12,274 guard files FPS reviewed at that time were deficient, meaning that they were missing one or more of the required certification document(s). However, FPS does not have a final report on the results of the nationwide audit that includes an explanation of why the files were deficient and whether deficiencies were resolved. FPS’s monthly audits of contractor data provide limited assurance that qualified guards are standing post, as FPS is verifying that the contractor-provided information matches the information in the contractor’s files. 
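The limited assurance GAO describes can be seen in a short sketch of the monthly audit: because the guard files and the monthly report both come from the contractor, a sampled cross-check can pass even when neither source is accurate. This is an illustration of the process as described, not FPS's actual audit procedure; the names, data, and helper function are hypothetical:

```python
import random

# Hypothetical contractor records: the contractor maintains both the guard
# files and the monthly report it sends to FPS.
contractor_files = {f"guard{i:03d}": {"cpr_cert": True, "firearms_cert": True}
                    for i in range(100)}
monthly_report = {gid: dict(certs) for gid, certs in contractor_files.items()}

def monthly_audit(files, report, sample_fraction=0.10, seed=None):
    """Check a random sample of guard files against the monthly report,
    returning the IDs whose file and report entries disagree."""
    rng = random.Random(seed)
    sample = rng.sample(sorted(files), int(len(files) * sample_fraction))
    return [gid for gid in sample if files[gid] != report.get(gid)]

# Because the report and the files come from the same source, they agree
# even if neither reflects a guard's true training status.
print(monthly_audit(contractor_files, monthly_report, seed=1))  # []
```

The audit here compares one contractor-supplied record against another, which is why matching results say little about whether a guard is actually trained and certified; an independent check would need a source the contractor does not control.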
We reported in 2010 that FPS’s reliance on contractors to self-report guard training and certification information without a reliable tracking system of its own may have contributed to a situation in which a contractor allegedly falsified training information for its guards. In addition, officials at one FPS region told us they maintain a list of the files that have been audited previously to avoid reviewing the same files, but FPS has no way of ensuring that the same guard files are not repeatedly reviewed during the monthly audits, while others are never reviewed. In the place of RAMP, FPS plans to continue using its administrative audit process and the monthly contractor-provided information to verify that qualified contract guards are standing post in federal facilities. We plan to finalize our analysis and report to the Chairman in August 2012, including recommendations. We discussed the information in this statement with FPS and incorporated technical comments as appropriate. Chairman Lungren, Ranking Member Clarke, and members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For further information on this testimony, please contact me at (202) 512-2834, or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tammy Conquest, Assistant Director; Geoffrey Hamilton; Greg Hanna; Justin Reed; and Amy Rosewarne. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
| FPS provides security and law enforcement services to over 9,000 federal facilities managed by the General Services Administration (GSA). GAO has reported that FPS faces challenges providing security services, particularly completing FSAs and managing its contract guard program. To address these challenges, FPS spent about $35 million and 4 years developing RAMP, essentially a risk assessment and guard oversight tool. However, RAMP ultimately could not be used to do either because of system problems. This testimony is based on preliminary work for the Chairman and discusses the extent to which FPS is (1) completing risk assessments, (2) developing a tool to complete FSAs, and (3) managing its contract guard workforce. GAO reviewed FPS documents, conducted site visits at 3 of FPS’s 11 regions and interviewed officials from FPS, Argonne National Laboratory, GSA, Department of Veterans Affairs, the Federal Highway Administration, Immigration and Customs Enforcement, and guard companies; as well as 4 risk management experts. GAO’s preliminary results indicate that the Department of Homeland Security’s (DHS) Federal Protective Service (FPS) is not assessing risks at federal facilities in a manner consistent with standards such as the National Infrastructure Protection Plan’s (NIPP) risk management framework, as FPS originally planned. Instead of conducting risk assessments, since September 2011, FPS’s inspectors have collected information, such as the location, purpose, agency contacts, and current countermeasures (e.g., perimeter security, access controls, and closed-circuit television systems). This information notwithstanding, FPS has a backlog of federal facilities that have not been assessed for several years. According to FPS’s data, more than 5,000 facilities were to be assessed in fiscal years 2010 through 2012. However, GAO was not able to determine the extent of FPS’s facility security assessment (FSA) backlog because the data were unreliable. 
Multiple agencies have expended resources to conduct risk assessments, even though they also already pay FPS for this service. FPS has an interim vulnerability assessment tool, referred to as the Modified Infrastructure Survey Tool (MIST), which it plans to use to assess federal facilities until it develops a longer-term solution. In developing MIST, FPS generally followed GAO’s project management best practices, such as conducting user acceptance testing. However, our preliminary analysis indicates that MIST has some limitations. Most notably, MIST does not estimate the consequences of an undesirable event occurring at a facility. Three of the four risk assessment experts GAO spoke with generally agreed that a tool that does not estimate consequences does not allow an agency to fully assess risks. FPS officials stated that they did not include consequence information in MIST because it was not part of the original design and thus requires more time to validate. MIST also was not designed to compare risks across federal facilities. Thus, FPS has limited assurance that critical risks at federal facilities are being prioritized and mitigated. GAO’s preliminary work indicates that FPS continues to face challenges in overseeing its approximately 12,500 contract guards. FPS developed the Risk Assessment and Management Program (RAMP) to help it oversee its contract guard workforce by verifying that guards are trained and certified and by conducting guard post inspections. However, FPS faced challenges using RAMP for guard oversight, such as verifying guard training and certification information, and has recently determined that it would no longer use RAMP. Without a comprehensive system, it is more difficult for FPS to oversee its contract guard workforce. FPS is verifying guard certification and training information by conducting monthly audits of guard information maintained by guard contractors. However, FPS does not independently verify the contractors’ information. 
Additionally, according to FPS officials, FPS recently decided to deploy a new interim method to record post inspections that replaces RAMP. GAO is not making any recommendations in this testimony. GAO plans to finalize its analysis and report to the Chairman in August 2012, including recommendations. GAO discussed the information in this statement with FPS and incorporated technical comments as appropriate. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Unlike other U.S. school districts, DCPS, due to its location in the nation’s capital, has a unique administrative environment. Because Washington, D.C., is not located in a state, DCPS does not benefit from the oversight and assistance often provided by states. Furthermore, recent organizational changes in both the city and its school system have changed administration of the schools. To reform the District’s school system, the Congress recently passed the District of Columbia School Reform Act of 1995, which includes requirements for counting District students. Counting student enrollment, a process involving several interconnected elements, is usually fundamental to assessing funding needs and required of most other U.S. school districts. DCPS’ enrollment count process in school year 1996-97 was centered in the local schools and modified somewhat to address criticisms. DCPS lacks the state-level oversight that most other school districts in the country have. The state’s role in school operations is an important one. States generally provide guidance to their school districts on important issues, including student enrollment counts. The state determines the rules to be used in the enrollment count—who should be counted, by what method, and when. States also distribute state and federal funds to their districts, usually on the basis of enrollment, and they routinely audit various school district operations, including the enrollment count. The governance of DCPS had been performed for many years by an elected Board of Education. In November 1996, however, the specially appointed District of Columbia Financial Responsibility and Management Assistance Authority (Authority) declared a state of emergency in DCPS and transferred DCPS management—until June 30, 2000—to the Authority’s agents, a nine-member, specially appointed Emergency Transitional Education Board of Trustees. In so doing, the Authority transferred to the Board of Trustees “. . . 
all authority, powers, functions, duties, responsibilities . . .” of the former Board of Education (with some exceptions not relevant to this report). Meanwhile, the Authority also replaced DCPS’ superintendent with a Chief Executive Officer/ Superintendent. These changes have resulted in a shift of control from elected officials toward those appointed for a specific purpose: to reform the system. Early reform initiatives have included the administrative reorganization of DCPS and the closure of 11 schools. Even before the Authority’s takeover of DCPS, the Congress, relying on its plenary power to legislate for the District of Columbia, acted directly to reform DCPS. In April 1996, the Congress passed the District of Columbia School Reform Act of 1995, calling for the calculation of the number of students enrolled in DCPS. The law requires the District of Columbia Board of Education to do the following: calculate by October 15 of each year the number of students enrolled in the District’s public schools and students whose tuition in other schools is paid by DCPS funds, including students with special needs and nonresident students, in the following categories by grade level if applicable: kindergarten through grade 12, preschool and prekindergarten, adult students, and students in nongrade level programs; calculate the amount of fees and tuition assessed and collected from nonresident students in these categories; prepare by October 15 and submit to the Authority, the Comptroller General of the United States, appropriate congressional committees, and others an annual report summarizing those counts; and arrange with the Authority to provide for the conduct of an independent audit of the count. Within 45 days of the Authority’s receipt of the annual report—or as soon thereafter as is practicable—the Authority is to submit the independent audit report to the appropriate congressional committees. The requirement to count students is common to most other U.S. 
school districts. Forty-one of the 50 states use some type of direct student count to assess resource needs and to distribute state funds to their school districts. Enrollment counts also usually determine budgets and resource allocations to the individual schools. Three basic methods are used for counting enrollment. One method— called Enrolled Pupils (often called ENR)—counts all enrolled students on a specified day of the year. Definitions of “enrolled students” vary among districts, but they usually include elements of attendance. That is, students must be in attendance at least once during some preceding time period. ENR is used by 12 states and the District of Columbia. Another similar method is called Pupils in Average Daily Membership (often called ADM). This method, used by 22 states, calculates the average of total enrollment figures over a specified time period. A third method, called Pupils in Average Daily Attendance (often called ADA), calculates the average total daily attendance over a specified time period. Seven states use this method. Enrollment counts may occur several times throughout the school year in response to both state and local information needs and may use different counting methods depending on the purpose of the count. For example, officials in one district reported that they perform a count about 5 days after school opens, using the ENR method. The district uses this count to make final adjustments to school-level resource allocations for the current school year. On September 30, the district conducts the first of three state-required enrollment counts, also using the ENR method. The state uses this count to assess compliance with state quality standards (such as pupil/teacher ratios) and to estimate enrollment before the March 31 count. On March 31, the district conducts the second state-required count, this time using the ADM method. The state uses this count to distribute state funds. 
Finally, the district conducts the third state-required enrollment count at the end of the school year, also using the ADM method. The state uses this count as a final report on enrollment for the entire school year. In addition to fulfilling reporting requirements, the school district uses the state-required enrollment counts for local planning and monitoring purposes. States vary in their approach to monitoring and auditing their districts’ enrollment counts. Some states do little monitoring or auditing of their districts’ counts, while others stringently monitor and audit. For example, one state simply reviews district enrollment reports for the fall and spring and contacts districts if large discrepancies exist. In contrast, another state not only conducts an electronic audit of its districts’ spring and fall official enrollment counts, but also visits districts and examines a random sample of student records in detail. School district officials in this state reported that the state withdraws from its districts state funds paid for students improperly enrolled or retained on the rolls. Regardless of when the count is performed or by what method, whether audited or not, accuracy is critical. A student count may be inaccurate if it has problems in any of at least three critical areas: enrollment, residency verification, and pupil accounting. Enrollment and residency verification take place when a student enters the school system. They determine a student’s initial eligibility and therefore who may potentially be included in the count. Pupil accounting refers to the tracking of students after initial enrollment. Monitoring student attendance, status, and transfers in and out of school are all part of pupil accounting, which often involves an automated student database. 
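The three counting methods described earlier (ENR, ADM, and ADA) differ only in what they count and over which days. A minimal sketch follows, using hypothetical daily rolls; the student IDs, dates, and data structure are illustrative, not any district's actual records:

```python
from datetime import date

# Hypothetical daily rolls: for each school day, the set of enrolled
# student IDs and the subset actually present that day.
daily_rolls = {
    date(1996, 10, 1): {"enrolled": {"s1", "s2", "s3", "s4"}, "present": {"s1", "s2", "s3"}},
    date(1996, 10, 2): {"enrolled": {"s1", "s2", "s3", "s4", "s5"}, "present": {"s1", "s2", "s4", "s5"}},
    date(1996, 10, 3): {"enrolled": {"s1", "s2", "s3", "s4", "s5"}, "present": {"s1", "s3", "s4", "s5"}},
}

def enr(rolls, count_day):
    """Enrolled Pupils: head count of enrollment records on a single day."""
    return len(rolls[count_day]["enrolled"])

def adm(rolls):
    """Average Daily Membership: mean enrollment over the period."""
    return sum(len(day["enrolled"]) for day in rolls.values()) / len(rolls)

def ada(rolls):
    """Average Daily Attendance: mean attendance over the period."""
    return sum(len(day["present"]) for day in rolls.values()) / len(rolls)

print(enr(daily_rolls, date(1996, 10, 3)))  # 5
print(adm(daily_rolls))                     # (4 + 5 + 5) / 3, about 4.67
print(ada(daily_rolls))                     # (3 + 4 + 4) / 3, about 3.67
```

ENR counts enrollment records on one specified day, which is why a student need not be present on the count day itself; ADM and ADA average membership and attendance over a period, making them less sensitive to any single day's roll.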
The pupil accounting system provides the basis for determining continued eligibility to be counted—based upon a student’s attendance—and it helps determine which school may count a particular student in its enrollment. Critics have often charged that the District’s reported official enrollment numbers have been overstated. One reviewer asserted, for example, that results of the 1990 U.S. census suggest that the District’s school-age population in 1990 might have been as much as 13,000 less than DCPS’ official enrollment count. Subsequent reviewers, including a certified public accounting firm, the Office of the District of Columbia Auditor, and us, examined the process that DCPS used to count pupils in school years 1994-95 and 1995-96 and found flaws. These flaws included DCPS’ lack of documentation to support enrollment status and lack of sanctions if false enrollment information was provided. These reviewers also reported that DCPS lacked adequate procedures to verify residency and that the student database had errors, including duplicate records, incomplete transfers, and incorrect enrollment status. For a more detailed discussion of audit findings and recommendations, see appendix II. DCPS’ process for enrolling, verifying residency of, and tracking students remained centered in the local school in school year 1996-97, while central office staff monitored portions of the process. To respond to past criticisms, DCPS instituted some changes for school year 1996-97, including new forms, residency verification procedures, and additional preparatory counts. The actual official enrollment count was done manually, and school principals were ultimately responsible for ensuring the accuracy of their schools’ counts. DCPS’ local schools conducted all enrollment activities in school year 1996-97 for new and returning students, and the schools’ principals made all determinations about enrollment eligibility. 
Principals were allowed to enroll students who lived outside school boundaries without limitation. Principals could also temporarily enroll students who had not provided evidence of meeting eligibility criteria, including health certificates and proofs of District of Columbia residency. Upon completion of initial paperwork, the schools’ data entry clerks created an electronic record for each newly enrolled—or temporarily enrolled—student in the student information system (SIS). The system maintained records for returning students from the previous school year, and the records were updated during the summer with promotion information. Similarly, withdrawals were processed during the summer, and these records were removed from the schools’ rolls. Figure 1 shows the enrollment count process for school year 1996-97. The process in school year 1996-97 incorporated the use of a new enrollment card designed to address auditors’ concerns about validating enrollment status. Students were to complete two copies of the enrollment card on the first day of attendance, and teachers were to sign and certify the cards. A completed card was to serve as proof that a child had appeared the single day required to be considered enrolled. In addition to serving as proof of enrollment status, the card was to be used to update SIS. In addition to the enrollment card, DCPS’ enrollment process for 1996-97 required all students to provide evidence of District of Columbia residency. If the student provided no evidence, DCPS’ rules allowed the student to enroll, but the student was to be assessed tuition. Tuition for a full-time program for school year 1996-97 ranged from $3,349 to $7,558, depending on grade level. Providing evidence of District of Columbia residency was required as part of revised DCPS procedures for school year 1996-97 to answer critics who charged that DCPS’ process for verifying residency was inadequate. 
In previous years, only students entering DCPS schools for the first time would have been required to submit proof of residency. A new form, the Student Residency and Data Verification Form, which had been piloted at selected schools during the previous school year, was to be completed for all students during school year 1996-97. Students were expected to have their parents or guardians complete the form and return it to the school with proofs of residency attached. Schools were to give students 3 days to complete and submit the form and proofs. Within 10 days, the school was to provide one copy of the form to the Nonresident Tuition Enforcement Branch of the Central Office along with a list of those students for whom residency had not been verified. The Nonresident Tuition Enforcement Branch was responsible for assessing and collecting tuition. In addition to enrollment and residency verification procedures, local schools also tracked student attendance, status, and transfers in school year 1996-97. Each of DCPS’ schools had online access to school data, and the schools’ data entry personnel (or enrollment clerks) were responsible for ensuring data were accurate and up to date. The MIS Branch, however, in the Central Office, managed the overall database. Classroom or homeroom teachers took attendance once a day, and data entry staff recorded it in SIS. Transfers were often done electronically, with transfer procedures initiated by the losing school and completed by the gaining school, although a manual back-up transfer process was also available. Monitoring activities for school year 1996-97 focused exclusively on overseeing the schools’ implementation of the enrollment card and on identifying nonresidents. During the early part of the school year, DCPS’ Central Office staff visited each of the schools three times to monitor enrollment cards. Eighteen members of the Central Office staff were temporarily reassigned to monitor the cards. 
Staff paid the first monitoring visit within the first 2 weeks of school and focused on the extent to which schools were following the process, that is, distributing and completing enrollment cards and filing them in the appropriate locations. Staff paid the interim monitoring visit before the official enrollment count and manually tallied students, comparing the enrollment cards, SIS reports, and the preliminary count documents. Staff paid the final monitoring visit after the October 3 count and were to verify that names on the enrollment cards matched those on SIS homeroom rosters. Nonresident students of the District of Columbia were to be identified through local schools’ monitoring of the completed data verification forms. The Nonresident Tuition Enforcement Branch was to investigate cases the schools identified. In addition, staff from this branch were to visit the schools to survey cars transporting students to and from school, identifying all out-of-state license plates. The monitors were also to review enrollment cards and residency verification forms to determine if the forms indicated residency issues. The branch was to investigate all identified cases and assess tuition for students found not meeting the District’s residency requirements. As previously mentioned, for school year 1996-97, DCPS used the ENR method to count its students—counting all enrolled students on a single day—October 3, 1996. Students did not have to attend school on this day to be included in the count because enrollment records were counted— not actual students. DCPS defined an “enrolled student” as any student who had appeared at school at least once—and who had not withdrawn from DCPS—between the beginning of the school year on September 3, 1996, and October 3, 1996, the day of the count. DCPS’ October 3, 1996, count was conducted manually by each homeroom teacher using homeroom rosters prepared from SIS. 
School staff compiled the count, classroom by classroom, and recorded the numbers on the school’s official report. The Central Office received the schools’ reports, and schools’ data were aggregated by the Office of Educational Accountability (OEA), which prepared the official enrollment report. Each school’s principal was to ensure not only the accuracy of the school’s manual count, but also the enrollment, residency, and pupil accounting data that supported it. DCPS’ policy for the October 3, 1996, count called for unspecified rewards and sanctions to be applied on the basis of the extent to which staff maintained and reported accurate, up-to-date information. Beyond the official October count, DCPS also performed other counts throughout the year using this same process. These included official counts in December and again in February. The February count aided in computing projections for school year 1997-98. In addition to these counts, DCPS began two new preparatory counts this year. Each school took daily enrollment counts and communicated them by telephone to the Central Office every morning for the first 11 days of the school year. In addition, in September, each school completed a preliminary count using forms established for the official October 3 count. DCPS’ new student enrollment card was intended to document that students had met the 1-day attendance requirement for inclusion in the official enrollment count. Although the card may have met this requirement in some respects, it appears to have burdened both school and DCPS staff and may not offer much advantage over more traditional methods of documenting attendance, such as teachers’ attendance tracking. Perhaps even more importantly, the card alone did not ensure that enrollment records were correct before the count. The card did not address a critical problem—one revealed by prior audits—a lack of internal controls of the student database. 
This problem allowed multiple records to be created for a single student. Furthermore, DCPS continued to include in its enrollment some categories of students often excluded in official enrollment counts used for funding purposes in other states. In contrast to DCPS procedures, officials in other school districts reported using various strategies for ensuring accuracy and minimizing duplicate records. Teachers and school staff reported that DCPS’ new enrollment card was burdensome and difficult to implement. Each child, on the first day of attendance, had to complete and sign two separate copies of the card. However, many students—primarily the very young, disabled, or non-English-speaking—could not complete the card themselves because they could not read or write at all or do so in English. In these cases, teachers had to complete the enrollment cards, although the students were asked to sign the cards when possible. Teachers, particularly in the primary grades, reported that completing the cards was troublesome for them, adding to their paperwork burden. Furthermore, the legitimacy of a child’s signature as a method of validation—particularly when the child cannot read or write—is questionable. In addition, the enrollment card did not contain vital enrollment information needed by the schools, such as emergency contact numbers. Consequently, it could not substitute for other enrollment forms that schools had been using. Several of the schools we visited augmented the enrollment card with other forms to obtain needed information. Consequently, the busy school staff had to complete and manage multiple forms to collect and maintain basic enrollment data. Moreover, the procedures that DCPS established for completing the enrollment card were difficult to implement after the first days of the school year. 
The procedures, which required the teacher to certify the student’s signature, were designed for the initial few days of school when an entire class enrolled together and could complete the form in the teacher’s presence. No provision had been established for students arriving later, who normally enroll at the school office. School staff in the schools we visited reported that they could not sign the card for the teacher, and obtaining the teacher’s signature and certification for these late enrollments was sometimes difficult. As a result, the process sometimes failed when enrollment cards for late enrollees were not completed or signed and certified by teachers. Finally, DCPS officials reported that Central Office monitoring for implementation of the new enrollment card was labor intensive. Enrollment card monitoring efforts did not use statistical sampling. Instead, we were told, monitors visited all the schools on three separate occasions, often reviewing 100 percent of the enrollment records. To perform this task, monitoring teams were formed, without regard to their normal responsibilities, from available staff within the former OEA, according to DCPS officials. During our review, we could not confirm the extent of these enrollment card monitoring visits because DCPS could not provide us with any of the monitoring reports prepared on the basis of these visits. The procedures that DCPS used for enrolling students in school year 1996-97 allowed multiple records to be entered into SIS for a single student. When school staff entered a new record, the SIS processing procedure automatically queried the database for any matching names and dates of birth. If a match occurred—as would be the case if the student had previously enrolled in a DCPS school—SIS informed the person entering the data that a record already existed for an individual with that name and date of birth. 
SIS, however, also provided the option of overriding the system and creating a new record for the student. DCPS officials reported that some data entry personnel were choosing this override capability and creating the new record. With safeguards overridden and additional records created, two schools could have each had access to a separate record for the same individual, allowing both schools to count the student. DCPS’ mechanisms for resolving this error were limited. Although Central Office MIS personnel maintained SIS, they had no authority to correct the errors once detected. Only the local school had such authority. MIS personnel had limited influence over the schools to ensure that corrections were made quickly and accurately, according to DCPS officials. Furthermore, while duplicate record checks were done, officials told us, the checks were not done on a regular, routine schedule. In addition, individuals who had helped with data quality control in the past as well as those who had monitored attendance were moved in early 1997 to facilities without office telephones or data lines. DCPS’ practice of allowing schools to enroll, without restriction, students who live outside school attendance boundaries increased the possibility of a student’s having multiple enrollment records for school year 1996-97. Students did not have to enroll in the school serving the geographic area where they lived but could enroll in any DCPS school if the principal allowed. For example, a student could have gone first to the school serving his or her area, filled out an enrollment card, and been entered into SIS. Subsequently, the student may have gone to another school, filled out another enrollment card, and—if the person entering this record in SIS chose to override the safeguard—been entered into SIS a second time. In addition, some principals reported that schools actively sought to attract out-of-boundary students to increase their enrollment. 
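The duplicate-record weakness described above can be sketched in a few lines. This is an illustration of the behavior DCPS officials reported, not DCPS's actual SIS code; the class, field names, and sample data are all hypothetical:

```python
# Minimal model of a student database whose duplicate check can be
# overridden (as in SIS in 1996-97) or hard-blocked (as in the
# comparison districts described later).
class StudentDatabase:
    def __init__(self, allow_override):
        self.records = []
        self.allow_override = allow_override  # True models the 1996-97 SIS

    def enroll(self, name, dob, school, override=False):
        # Query for an existing record with the same name and date of birth.
        duplicates = [r for r in self.records
                      if r["name"] == name and r["dob"] == dob]
        if duplicates and not (self.allow_override and override):
            return None  # duplicate refused: a non-overridable block
        record = {"name": name, "dob": dob, "school": school}
        self.records.append(record)
        return record

# Override allowed: the same child enrolls at two schools and both
# records persist, so both schools could count the student.
sis = StudentDatabase(allow_override=True)
sis.enroll("J. Doe", "1985-04-02", "School A")
sis.enroll("J. Doe", "1985-04-02", "School B", override=True)
print(len(sis.records))  # 2

# Hard block: the second enrollment is simply refused.
blocked = StudentDatabase(allow_override=False)
blocked.enroll("J. Doe", "1985-04-02", "School A")
blocked.enroll("J. Doe", "1985-04-02", "School B", override=True)
print(len(blocked.records))  # 1
```

With the override taken, each school holds its own record for the same child, which is exactly the condition that let two schools include one student in their counts.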
DCPS’ official enrollment count of 78,648 included not only regular elementary and secondary students, but also other categories of students excluded from enrollment counts in other districts when the counts are used for funding purposes. For example, DCPS included in its enrollment count students identified as tuition-paying nonresidents of the District of Columbia and students above and below the mandatory age for public education in the District, including Head Start participants, prekindergarten students (age 4), preschool students (age 0 to 3), and some senior high and special education students aged 20 and older. In contrast, the three states we visited reported that they exclude any student who is above or below mandatory school age or who is fully funded from other sources from enrollment counts used for funding purposes. Furthermore, even though the District of Columbia Auditor has suggested that students unable to document their residency be excluded from the official enrollment count, whether they pay tuition or not, DCPS included these students in its enrollment count for school year 1996-97. In contrast with the DCPS process, students in the Boston and Chelsea, Massachusetts, school districts enroll at central Parent Information Centers (PIC), which are separate and independent from the schools, officials told us. Individual schools in these two districts cannot enroll new students, we were told. All enrollment activities, including assignment of all students to schools, take place at PICs. Boston’s PICs were established as a key part of the U.S. District Court’s desegregation plan to alleviate the Court’s concerns about the accuracy of Boston’s reported enrollment numbers and to satisfy the Court’s requirements for credibility and accountability in pupil enrollment, assignment, and accounting. Centralizing student enrollment at PICs has helped reduce errors, according to officials in both districts.
For example, staff in Boston have specialized in and become knowledgeable about the process. Limiting access to the student database has also helped to reduce errors. For example, in Boston, only six people may enter data into the database. Furthermore, PICs prevent students from being enrolled at two or more schools simultaneously, reducing duplicate counting and preventing schools from inflating their enrollment. In the other four districts we visited, schools—rather than a central site—usually handle student enrollment, but they use other safeguards. To enroll, a student goes to the school serving the geographic area in which he or she lives. Out-of-boundary enrollment is not usually allowed. In addition, officials in all four of these districts reported having student database safeguards to aid enrollment accuracy. For example, all four districts have procedures and edits in their student databases that automatically block the creation of duplicate enrollment records. If an enrolling student has attended another school in the district, these procedures will not allow a new record to be created once the old record has been located. School staff, officials told us, cannot override this blocking mechanism. In addition, Prince George’s County has a procedure in its student database that automatically checks student addresses with school attendance boundaries as enrollment information is entered. If the address falls outside the enrolling school’s boundaries, the database blocks enrollment. During school year 1996-97, District of Columbia schools had features that attracted nonresidents. Elementary schools in the District had free all-day prekindergarten and kindergarten, and some elementary schools had before- and after-school programs at low cost. One school we visited had before- and after-school care for $25 per week. This program extended the school day’s hours to accommodate working parents—the program began at 7 a.m. and ended at 6 p.m.
In addition, several high schools had highly regarded academic and artistic programs; and some high schools had athletic programs that reportedly attracted scouts from highly rated colleges. Furthermore, students could participate in competitive athletic programs until age 19 in the District, compared with age 18 in some nearby jurisdictions. DCPS established new procedures for school year 1996-97 to detect nonresidents and collect tuition from those who attended DCPS schools, but both school and Central Office staff failed to implement the new procedures completely. In addition, DCPS failed to monitor and enforce its new procedures effectively. Most of the schools we visited failed to comply with the new residency verification process. As discussed previously, all students’ parents or legal guardians had to complete a Student Residency and Data Verification Form (residency form) and provide at least two proofs of residency. Students were told that failure to provide either the completed residency form or proofs would result in an investigation of their residency, and, if appropriate, either tuition payments or exclusion from DCPS. Most of the schools we visited, however, did not obtain completed residency forms for all their students. In fact, only 2 of the 15 schools had—or reported having—residency forms for 100 percent of the student files we reviewed. In addition, schools did not collect all required proofs of residency. Students and their families presented two proofs of residency in only isolated cases, and many students submitted no proofs. In many other cases, the proofs that the schools collected did not meet the standards established by DCPS and printed on the residency form. 
Although the residency form specified proofs of residency, such as copies of deeds, rental leases, utility bills, or vehicle registrations, as acceptable, schools sometimes accepted proofs such as newspaper or magazine subscriptions, copies of envelopes mailed to the student’s family, stubs from paid utility bills with no name attached, and informal personal notes (rather than leases or rental agreements) from individuals from whom the family reportedly rented housing. We also found some instances in which the names or addresses on the proof did not match those on the form. School staff often complained to us about the difficulty they had trying to get students to return completed residency forms and proofs. Some acknowledged that they placed little emphasis on this effort. Schools we visited also varied in their compliance with the requirements to report residency issues to OEA. Schools were supposed to forward copies of all students’ completed residency forms to OEA. These copies were to be attached to a list of students whose residency was considered questionable. Some schools sent copies of their student residency forms along with the list as required. Others sent the proofs with the forms. At least six schools sent no verifications of residency to the Central Office. Some of these implementation issues may have resulted from poorly specified requirements and procedures. For example, though DCPS officials reported to us that the requirements were for at least two proofs of residency, we found no written documentation communicating to the school staff or to the students a requirement for more than one proof. DCPS officials also gave us conflicting information about the number of proofs required. At one meeting, we were told that three proofs were required; at a later meeting, that two to three were required. 
Similarly, DCPS’ guidance to the schools did not specify how the schools were to maintain their students’ completed residency documentation—or even exactly what documentation was to be maintained. Consequently, schools’ maintenance of residency documents varied considerably. For example, about one-third of the schools we visited maintained the residency forms alphabetically; the remaining schools grouped them by classroom. The schools’ disposition of the proofs of residency varied even more. Eight schools filed proofs of residency with the students’ completed residency forms; one filed the proofs in the students’ permanent (cumulative) record folder; one filed them either with the completed form or in the folder; one placed all proofs in a file drawer without annotating them to permit subsequent identification of the student to whom they belonged; two forwarded all proofs to OEA, along with copies of the completed form; and two schools had no proofs at all for the student records we reviewed. And, because procedures did not provide for the schools to document the proofs on the residency forms, schools not retaining the proofs with the forms could not demonstrate that they had adequately verified residency. Other audits of schools’ compliance with residency verification would face similar obstacles because of the schools’ inability to link student records with proof of residency. Monitors for student residency, in general, did not report the level of school and student noncompliance that we observed in our review. For the nine schools for which we could directly assess compliance, with few exceptions, proofs of residency were missing for large portions of the student population. But, most DCPS Daily Activity Reports (monitoring reports) failed to cite the missing proofs, focusing instead on students who lived with someone other than a parent or whose forms indicated a nonresident address or phone number. 
For example, in one school we visited, we determined that about one-fourth of the students (or 108) did not return a proof to the school. The DCPS monitoring report, however, identified only one student living with a grandmother and two students with nonresident addresses. In another school, we found no proofs, and staff reported that they could not get students to provide proofs. But the monitoring report showed that only two students had nonresident addresses or phone numbers. Moreover, DCPS officials did not provide monitoring reports for 3 of the 15 schools we visited, telling us that monitoring reports were prepared only for schools where issues of nonresidency had been identified on enrollment cards or residency verification forms. At one of the three schools without monitoring reports, we found no proofs of residency on file for any student. Some of the monitors’ failure to detect and report residency problems may have resulted from poorly specified guidance. Instructions to monitors were not specific enough to guide implementation, for example, asking monitors to identify students for whom parents had not “sufficiently documented” residence. Monitoring instructions did not specify what to examine to determine whether residency was documented or what documentation was considered sufficient. Furthermore, despite recommendations of previous audits, monitors had no instructions to review the files to determine whether students had submitted a residency form. Consequently, when monitors failed to compare names on the student roster with those on completed residency forms, DCPS missed a key element in determining school and student compliance. We found forms missing for at least some of the students at 13 of the 15 schools we visited. At one school, the staff estimated that about 25 to 30 percent of the students did not return the residency forms, and, at another school, the staff could not find about one-third of the forms.
Despite monitoring efforts and threats of sanctions, DCPS administration did not ensure that the schools completed the residency verification procedures. DCPS conducted no follow-up of schools failing to submit the office copy of the residency form. In addition, on the basis of the reports from the schools we visited, it conducted only minimal follow-up of schools failing to collect adequate proofs. Furthermore, as noted earlier, DCPS conducted no follow-up of those schools failing to collect residency forms for all students because no one in the Central Office checked to see if all forms had been received. In addition, the Central Office did not consistently apply the established sanctions to the students or their families for failing to submit forms or proofs. As noted earlier, parents and guardians were told that failure to provide proof could result in an investigation, a tuition bill, or exclusion from DCPS. On the basis of our visits to 15 schools, we assessed the degree of student noncompliance as very high. In one school alone, staff estimated that about 80 percent of the students—or about 700 students— did not comply. Yet, for all 158 schools, the Nonresident Tuition Enforcement Branch reported that, as of May 1, 1997, it issued only 469 letters to students requesting them to submit proofs of residency, collected tuition from only 35, and excluded only 156 students from DCPS schools. Action was pending for another 136. DCPS officials in the Nonresident Tuition Enforcement Branch told us that, at the request of one of the assistant superintendents, they were focusing their enforcement action mainly on high school athletes largely because the athletic program may have been attracting nonresidents. Like DCPS, all the other districts reported that all new students must verify residency upon enrolling. Residency verification occurs either at the individual schools or at central service centers. 
Officials in Boston and Chelsea reported that the PICs verify residency. Officials in the other four districts told us that all or most new students enroll and verify residency at the school they will attend. School staff verify residency and check to see that the student’s address falls within the attendance boundary of the school. If the parent fails to provide satisfactory proof of residency, the child is not allowed to enroll. Other districts reported relying upon the schools to verify residency for continuing students. For example, officials in Arlington, Fairfax, and Prince George’s counties told us that teachers and principals are expected to monitor continually for students’ possible relocation, and students must provide information on address changes. Schools also often make use of returned mail as a reliable data source for address changes. None of the other districts we visited requires annual residency verification for all students as DCPS does. The foundation of the pupil accounting system—SIS—lacked adequate safeguards to ensure that students were accurately tracked when they transferred from one school to another. Furthermore, some schools did not follow attendance rules, affecting later counts and projections. These rules, if implemented, may have allowed some students who no longer attended to be included in the school’s count. The student transfer process may have allowed a single student to be enrolled in at least two schools simultaneously. During most of the school year, a student’s record could be accessed and modified only by the school in which the student was enrolled. When a student transferred, however, the losing school was to submit the student’s record to a computer procedure that allowed both the losing and gaining school to have identical copies of the student’s record. 
During this process, both schools could enter the student’s status as “active” or “inactive.” The computer procedure provided no safeguards to ensure that the student was only active at one school at a time. Until the losing school completed the computer procedure with a withdrawal code, both schools could have claimed the student as active or enrolled. The possible impact of this vulnerability upon the count may have been sizeable. DCPS officials reported that the number of transfers between schools in the District during school year 1996-97 was well in excess of 20,000. DCPS officials in the MIS Branch, concerned with this problem, performed periodic data runs to detect cases in which students were shown as enrolled in two schools. Resolving these issues and completing the transfers, however, sometimes involved a lengthy delay. We found cases that took as long as 1 to 2 months to resolve. Local schools made all changes—the MIS Branch did not have authority to change the data—and some school staff did not use the electronic transfer procedures. Furthermore, DCPS did not specify a time limit for completing the transfer. In addition, students could also be counted at more than one school when the massive transfers took place at year end during “roll-over”—when students transferred as a group to either middle or high school. During school year 1996-97, well over 6,800 roll-overs took place, and the process was multistaged and generally occurred when students were still enrolled in the elementary or middle schools, officials said. SIS has a programming anomaly allowing students to have active status in both schools’ databases, according to DCPS officials. Sometimes students were legitimately enrolled in two schools simultaneously, for example, when attending a regular high school program in addition to one of the School-to-Aid-Youth (STAY) programs. 
In these cases, the database of the school with the secondary program— STAY—should have shown the student with the special status of “enrolled” and the student’s regular school should have shown his or her status as “active.” The student should have only been counted at the school where active. School clerks did not use the “enrolled” code properly, however, and, because the status code had no safeguards, the student could be counted at both schools, according to DCPS officials. During school year 1996-97, two attendance rules directly affected student status and therefore the number of students eligible to be counted. First, schools were to reclassify as inactive, or in this case as a “no-show,” any student expected to enroll but not actually attending school at least once during the first 10 days of school. Students classified as inactive would not be included in the official enrollment count. No-shows, however, were sometimes not reclassified as inactive as required by the attendance rules. While most schools we visited appeared to be following this rule, at least one school we visited apparently had difficulty changing these students’ status to inactive. At this school, the data entry staff reported that they were having trouble maintaining student status as “inactive” for the no-shows. Some of these students were appearing on their active rolls as late as February, possibly affecting DCPS’ official count. Second, schools were required to change to inactive status those students who showed up for at least 1 day but subsequently accumulated 45 consecutive days of absences. For students who had 45 days of absences, schools reported that they only rarely changed their status to inactive. School officials often told us that they did not change a student’s status unless they could obtain accurate information about the student’s whereabouts, confirming that the student should be dropped from the rolls. 
School administrators expressed reluctance to “give up on a student,” and they viewed changing the student’s status to inactive as doing just that. Unlike the no-show rule, failing to implement the 45-day rule would not have directly affected the October count. It would have affected, however, subsequent counts and the accuracy of projections from them. The 45-day attendance rule, even if implemented, may have allowed some nonattending students to be considered active and enrolled. The rule enabled any student who reported 1 day to be considered enrolled until evidence was obtained that he or she had transferred elsewhere or until 45 days had elapsed. If a student went to another school district without notifying the school, the school would not have known to drop the student from its rolls. Consequently, even if the student appeared only on the first day of school, the 45-day time period would not have expired before the official enrollment count, allowing a student to be counted who no longer attended a DCPS school. This 45-day time period might be considered lengthy by comparison with some other nearby districts. Other school districts we visited reported that they have shorter time periods. For example, Virginia law requires that students with 15 or more consecutive days of absence be withdrawn from school, district officials told us. Therefore, neither Arlington County nor Fairfax County counts any student with 15 or more days of consecutive absence. Neither does Boston count any student in this category. SIS provided no safeguards to ensure that the schools followed either the no-show rule or the 45-day rule. It had no feature that would allow students’ status to be automatically changed to inactive on the basis of absences. Nor could SIS identify students with 45 consecutive days of absence—it does not readily permit calculating consecutive days of absence for students throughout the school year.
Consequently, quality control or management assistance from the MIS Branch on this issue was not possible. Other districts we visited reported using essentially the same approaches for controlling errors in tracking student transfers as they use for controlling enrollment and residency verification. For example, in Boston, all student transfers take place through the PICs, where a limited number of staff may process the transfers. The schools lack the authority or ability to transfer students. In most of the other districts, officials reported that the individual schools handle student transfers. These districts rely on a variety of automatic edits and procedures in their student database systems to prevent such errors and serve as ongoing checks and balances on the schools. For example, in Arlington, Fairfax, Prince George’s, and Montgomery counties, the student database systems either do not allow a transfer to proceed unless the losing school removes the student from its rolls or automatically removes the student from the losing school as part of the transfer process. The school cannot override these safeguards. In addition, Arlington, Fairfax, and Prince George’s counties reported using two centralized oversight mechanisms for further enhancing accuracy in accounting for student transfers. First, they regularly and frequently check their student databases for duplicate student entries using students’ names and dates of birth as well as identification numbers. These checks also help to safeguard against multiple student entries arising from other sources such as enrollments. Arlington County performs this check every 15 days; Fairfax County, every 2 weeks; and Prince George’s County, daily concerning transfers. Second, if these districts identify duplicates, they notify the school immediately and work with the school to resolve the situation, officials reported. 
For example, Prince George’s County reports duplicates from transfers to the schools every day; when school staff log onto the computer system in the morning, the first thing that appears is an error screen showing duplicates from transfers as well as any other errors. Prince George’s County officials also review these schools’ error screens and follow up daily. If schools do not respond, according to these officials, database management staff can readily access senior district officials to quickly resolve such problems. In addition, in Arlington, Fairfax, and Prince George’s counties, Boston, and Chelsea, the database staff may make changes to the student database. As in DCPS, all six of the districts we visited reported to us that teachers are responsible for tracking daily attendance and schools for recording attendance data in the student database. Most of the other districts reported that they also use their central student databases to track all student absences as a check on the schools’ tracking. In addition, several districts withdraw students from school after substantially fewer days of consecutive absences than DCPS. For example, in Boston and Arlington and Fairfax counties, students absent 15 days in a row are withdrawn from school. They are therefore not included in school or district enrollment counts. These students must re-enroll if they return. The District of Columbia School Reform Act of 1995 imposed enrollment count reporting and audit requirements upon DCPS, the District of Columbia Board of Education—all of the responsibilities of which have been delegated to the Board of Trustees—and the Authority. The Reform Act requires the District’s schools to report certain kinds of information. The schools did not collect all the information required to be reported, and the official enrollment count that was released did not comply with the Reform Act’s requirements. 
In addition, the Reform Act requirements to independently audit the count have not been met. The Reform Act requires an enrollment count that includes—in addition to data historically reported by DCPS—a report of special needs and nonresident students by grade level and tuition assessed and collected. The official enrollment count report released for school year 1996-97—the first year of the new reporting requirements—failed to provide information on special needs and nonresident students as well as on tuition assessed and collected. DCPS has not provided any evidence that additional documentation was released that would include the required information. Despite October 1996 correspondence from the U.S. Department of Education referring them to the law, DCPS officials repeatedly expressed to us unfamiliarity with the law or the type of information it requires. The Reform Act also stipulates that the Authority, after receiving the annual report, is to provide for the conduct of an independent audit. The Authority, however, had delegated this function to DCPS earlier this year, according to DCPS procurement officials. With that understanding, DCPS’ Procurement Office, with technical assistance provided by the U.S. Department of Education Inspector General’s Office, issued a Request for Proposals (RFP). DCPS received proposals in response, and, in early June 1997, the Procurement Office was preparing to make an award. When we queried Authority officials at that time about their role in this effort, however, they reported that they did not know of any DCPS efforts to procure the audit and were preparing to advertise an RFP for the audit. Subsequent correspondence from the Authority indicated that the inadequacies that led to the restructuring of the public school system would make auditing the count counterproductive. In addition, the Authority’s comments in response to our draft report reiterated its notion that auditing the flawed count would be counterproductive. 
In short, the Reform Act’s requirements to count and report student enrollment and audit that enrollment count have not been met. Although DCPS has tried to respond to criticisms raised by previous audits, its efforts have overlooked larger systemic issues. Consequently, fundamental weaknesses remain in the enrollment count process that make it vulnerable to inaccuracy and weaken its credibility. For example, the lack of internal controls allows multiple records and other errors that raise questions about the accuracy of the database used as a key part of the count. Furthermore, unidentified nonresident students may be included in the count when they avoid detection because DCPS’ sanctions are not enforced. An accurate and credible enrollment count demands a process with stringent accountability and strong internal controls. Moreover, the need to correct DCPS’ problems is more critical now than ever before. Current reform initiatives have heightened public awareness of the issues and increased scrutiny of the process. Meanwhile, new budget initiatives for per pupil accounting will increase this level of scrutiny. Even without the new initiatives, an accurate enrollment count is essential if DCPS is to spend its educational dollars wisely. Because the enrollment count will become the basis for funding DCPS, the Congress may wish to direct DCPS to report separately, in its annual reporting of the enrollment count, those students fully funded from other sources, such as Head Start participants or tuition-paying nonresidents; above and below the mandatory age for compulsory public education, such as prekindergarten or those aged 20 and above; and for whom District residency cannot be confirmed. We recommend that the DCPS Chief Executive Officer/Superintendent do the following: Clarify, document, and enforce the responsibilities and sanctions for employees in all three areas of the enrollment count process—enrollment, residency verification, and pupil accounting. 
Clarify, document, and enforce the residency verification requirements for students and their parents. Institute internal controls in the student information database, including database management practices and automatic procedures and edits to control database errors. Comply with the reporting requirements of the District of Columbia School Reform Act of 1995. We also recommend that the District of Columbia Financial Responsibility and Management Assistance Authority comply with the auditing requirements of the District of Columbia School Reform Act of 1995. DCPS’ Chief Executive Officer/Superintendent stated that DCPS concurs with the major findings and recommendations of the audit and will correct the identified weaknesses. He also acknowledged that the enrollment numbers for school year 1996-97 are subject to question for the reasons we cited— especially because the enrollment count credibility hinges almost entirely on the written verification provided by local administrators. No substantial checks and balances, no aggressive central monitoring, and few routine reports were in place. In addition, virtually no administrative sanctions were applied, indicating that the submitted reports were hardly reviewed. DCPS’ comments appear in appendix III. The Authority shared DCPS’ view that many findings and recommendations in this report will help to correct what it characterized as a flawed student enrollment process. Its comments did, however, express concerns about certain aspects of our report. More specifically, the Authority was concerned that our review did not discuss the effects of the Authority’s overhaul of DCPS in November 1996. It also commented that our report did not note that the flawed student count was one of the issues prompting the Authority to change the governance structure and management of DCPS as noted in its report, Children in Crisis: A Failure of the D.C. Public Schools. 
Although we did not review the Authority’s overhaul of DCPS or the events and concerns leading to that overhaul, we have revised the report to clarify the Authority’s transfer of powers and responsibilities from the District of Columbia Board of Education to the Emergency Board of Trustees. The Authority was also concerned about the clarity of our discussion of the District of Columbia School Reform Act, suggesting that we enhance this discussion to include the portion of the Reform Act that addresses the funding of the audit. We have clarified in the report that the relevant responsibilities of the Board of Education—including that of funding the audit—were transferred to the Emergency Board of Trustees. Finally, the Authority questioned statements made in our report about its role in preparing an RFP for an audit. Specifically, it disputes our statement that the Authority was “. . . unaware of any of DCPS’ efforts to produce the audit and were preparing to advertise an RFP for the audit.” In disputing our statement, the Authority asserts that this is a misrepresentation of a conversation between a new employee of the Authority who would have known nothing about the Authority’s contracting process and our staff. We disagree that this misrepresents our conversations with Authority staff. In preparing to meet with the Authority the first time, we spoke with a more senior, long-time member of the Authority’s staff about the audit issues who referred us to the new staff member as the expert on District education issues. When we met with the new staff member, she stated that she had reviewed the act and had spoken with other staff who were preparing to develop an RFP. Furthermore, after meeting with this new staff member, we met a second time with other Authority staff present. At both meetings, Authority staff expressed unfamiliarity with DCPS’ efforts to produce an audit. The Authority’s comments appear in appendix IV. The U.S. 
Department of Education, in commenting on our draft report, noted that its Office of Inspector General had no role in preparing DCPS’ enrollment count for school year 1996-97 but provided some clarifications about correspondence between it and DCPS regarding an audit of the count. We have revised the report where appropriate. Education’s comments appear in appendix V. We are sending copies of this report to the U.S. Department of Education; the Office of the Chief Executive Officer/Superintendent, District of Columbia Public Schools; the District of Columbia Financial Responsibility and Management Assistance Authority; appropriate congressional committees; and other interested parties. Please call Carlotta Joyner, Director, Education and Employment Issues, at (202) 512-7014 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix VI. We designed our study to gather information about DCPS’ enrollment count process for school year 1996-97 and the process used by other selected urban school districts. To do so, we visited DCPS administrative offices, interviewed administration officials, and reviewed documents. We also visited randomly selected DCPS schools unannounced, interviewing school faculty and staff and reviewing student records. In addition, we interviewed officials in other urban school districts, officials in the U.S. Department of Education and the District of Columbia, and other experts in the field. We did our work between October 1996 and June 1997 in accordance with generally accepted government auditing standards. We visited 15 randomly sampled DCPS elementary and secondary schools to review documents and interview faculty and staff about DCPS’ enrollment count process. We selected these schools from a list of 158 elementary and secondary schools provided to us by school district officials. 
We focused our review on regular elementary and secondary schools and excluded the two School-to-Aid-Youth (STAY) programs, two educational centers, and one elementary art center. Therefore, our final population included 153 schools. Fifteen schools were randomly selected by city quadrant (Northeast, Northwest, Southeast, and Southwest) and by level of school (elementary, middle/junior high, and senior high). Table I.1 shows the population distribution, and table I.2 shows the sample distribution for schools visited. We also interviewed officials in other selected urban school districts to gather general information about their enrollment count processes. Table I.3 shows the districts we visited with their enrollment count, counting method, and number of schools for school year 1996-97. We did not visit schools or interview school faculty or staff in these other districts.

Critics have charged that DCPS’ reported enrollment numbers are overstated. Questions raised about the credibility of DCPS’ enrollment count have led to a series of reviews and audits. This appendix discusses in detail these efforts, which varied in scope and involved several organizations. Table II.1 summarizes these efforts. In 1995, the Grier Partnership, as part of a study commissioned by DCPS, asserted that results of the 1990 U.S. census suggested that the District’s total school-age population in 1990 might have been as many as 13,000 fewer than DCPS reported in its official enrollment count. Grier also expressed concern about the apparent relative stability of DCPS’ official enrollment count in the face of the District’s declining resident population. Limitations to the methodology the Grier Partnership used, however, may have caused the apparent differences to be overstated. For example, Grier did not include some subgroups—preschool (Head Start), prekindergarten, and kindergarten students—that DCPS routinely includes in its official count.
Even if these groups had been included in the estimates, using census data to estimate public school enrollment can be problematic. For example, the Census Bureau reports that estimates generated from its official files undercount some groups. From the 1990 census, the largest group undercounted was “renters.” Census estimates of pre-primary students enrolled in school are also understated because parents reporting the number of students enrolled in “regular school” often fail to include their pre-primary children. Finally, declines in residency do not necessarily mean declines in school enrollment. Census currently projects a loss of 31,000 in the District’s population over the next 5 years, while projecting an increase in the number of school-aged children. The first of several independent audits took place following the September 29, 1994, enrollment count. At that time, DCPS organized an internal audit and validation of the count. DCPS randomly selected a sample of students and focused on validating these students’ actual attendance in schools before the enrollment count. We were asked to observe DCPS’ internal audit effort. We questioned the reliability of the student database, finding that the database used to enroll and track students—the Student Information Membership System (SIMS)—included students who had not enrolled before the official enrollment count. We also found that transfer students were never removed from SIMS when they transferred. In addition, SIMS had other errors, was not regularly updated, and had at least 340 duplicate student records. We also criticized DCPS’ inability to identify nonresident students and the absence of procedures to validate residency. DCPS estimated that at that time approximately 2 percent of its students were probably undetected nonresidents. DCPS also estimated that this equaled more than $6 million in lost tuition revenues. 
We consequently recommended that DCPS periodically check SIMS for duplicates and errors, particularly before the official enrollment count, and update it regularly to reflect the changes in the enrollment status of DCPS students. We also recommended that DCPS develop systematic procedures at the school level to verify student residency and that schools refer names of nonresident students to DCPS administration for enforcement and collection of nonresident tuition. The DCPS Superintendent, after the October 1995 enrollment count, contracted for an independent audit and validation of the count. In addition to a 100-percent validation of the count, DCPS expected that the independent auditor would assess the accuracy of DCPS’ Student Information System (SIS) and determine if school and headquarters staff had followed DCPS’ policies and procedures. The independent auditor chosen by DCPS conducted a full validation of the enrollment count and examined SIS for duplicates and errors. The auditor failed, however, to determine if DCPS school and headquarters staff consistently implemented the policies and procedures developed by the DCPS administration. The independent auditor found several weaknesses in the October 1995 count, including problems with the way the enrollment count was taken and documented by DCPS staff; lack of residency documentation and validation; the questionable accuracy of SIS; and the lack of guidance for withdrawing students and excluding them from the schools’ rolls. For example, a new form, the Student Residency and Data Verification Form, used to document residency, was piloted in some schools during school year 1995-96. The auditor found that these forms were sent home to parents but were not always returned to the schools, and the forms were not reconciled to student enrollment reports to determine the number of missing forms. The auditor also found 550 sets of students with the same name and date of birth, that is, duplicate entries in SIS. 
In addition, the auditor criticized the time lapse—about 4 months—from the October 5, 1995, enrollment count to the audit. This meant that the auditor could not validate the enrollment of some students—students who were no longer in school at the time of the audit and for whom the school could provide no documentation demonstrating attendance before the count. To remedy the problem with duplicate database entries, the auditor recommended that DCPS periodically search the database for duplicates and errors before the enrollment count. Because of differences found in SIS and the manually prepared enrollment count report, the auditor also recommended that these two data sources be reconciled periodically to help update SIS. Regarding timing of the audit, the auditor recommended that the audit of the official enrollment count take place closer to the date of the count. And, to facilitate future audits, the auditor suggested that documentation exist to support a student’s attendance in school before the enrollment count. The independent auditor also suggested that after an enrollment count is taken, the staff responsible for monitoring attendance problems have the opportunity to review the enrollment count so they can remove from the count those students who have not attended at least 1 day of school or who have withdrawn from DCPS. The District of Columbia Auditor, in its audit of the October 5, 1995, enrollment count, found that DCPS needed significantly improved procedures for student enrollment counts to ensure more reliable and valid counts. The Auditor’s office expressed concerns about the security and reliability of SIS, the absence of any penalty for providing false enrollment information, and the lack of oversight or controls to ensure the accuracy of the information reported on the enrollment count. 
In addition, the Auditor found that SIS was not updated regularly to reflect changes in the enrollment status of students, particularly before the official enrollment count. The Auditor also discussed the weak controls in place to detect nonresidency and the weak procedures to collect nonresident tuition. The Auditor found that DCPS did not maintain records on the number of Student Residence and Data Verification Forms completed and returned by students’ parents, and it did not test the information on these forms or the documents provided to support the forms. As a result, the Auditor reported that according to the DCPS Nonresident Tuition Enforcement Branch estimates, about 4,000 to 6,000 DCPS students were nonresidents but did not pay nonresident tuition. Consequently, the Auditor recommended that each local school periodically reconcile SIS-generated reports with the attendance records it maintains. This would allow for adjustments to SIS to include those students who have physically presented themselves in class and to remove those who have not presented themselves or who have withdrawn or transferred. In addition, the Auditor suggested that unless students could document their residency, including proof of residency, they should be excluded from the official enrollment count. Furthermore, the Auditor suggested that those nonresidents who pay tuition be excluded from the enrollment count.

In addition to those named above, the following individuals made important contributions to this report: Christine McGagh led numerous site visits, reviewed DCPS’ enrollment count process, and cowrote portions of this report; James W. Hansbury, Jr., performed numerous site visits, reviewed prior audit reports, and summarized those audits. Wayne Dow, Edward Tuchman, and Deborah Edwards assisted with the visits to the schools; Sylvia Shanks and Robert Crystal provided legal assistance; and Liz Williams and Ann McDermott assisted with report preparation.
The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO examined the enrollment count process that the District of Columbia Public Schools (DCPS) used in school year 1996-97, focusing on: (1) whether the process appeared sufficient to produce an accurate count; (2) enrollment count processes used by some other urban school systems; and (3) the role of the Department of Education's Inspector General in preparing DCPS' official enrollment count for school year 1996-97.
GAO noted that: (1) even though DCPS changed parts of its enrollment count process in school year 1996-97 to address criticisms, the process remains flawed; (2) some of these changes increased complexity and work effort but did little to improve the count's credibility; (3) errors remained in the Student Information System (SIS), but DCPS had only limited mechanisms for correcting these errors; (4) problems also persisted in the critical area of residency verification; (5) in school year 1996-97, schools did not always verify student residency as required by DCPS' own procedures; (6) proofs of residency, when actually obtained, often fell short of DCPS' standards; (7) Central Office staff did not consistently track failures to verify residency; (8) school staff and parents rarely suffered sanctions for failure to comply with the residency verification requirements; (9) the pupil accounting system failed to adequately track students; (10) SIS allowed more than one school to count a single student when the student transferred from one school to another; (11) schools did not always follow attendance rules, and SIS lacked the capability to track implementation of the rules; (12) some attendance rules, if implemented, could have allowed counting of nonattending students; (13) other school districts report that they use several approaches to control errors and to increase the accuracy of their enrollment counts; (14) these include using centralized enrollment and pupil accounting centers and a variety of automated student information system edits and procedures designed to prevent or disallow pupil accounting errors before they occur; (15) the recently enacted District of Columbia School Reform Act of 1995 requires the enrollment count process to produce enrollment numbers for nonresidents and students with special needs; (16) DCPS (acting on behalf of the District of Columbia Board of Education) and the District of Columbia Financial Responsibility and Management 
Assistance Authority are not in compliance with requirements of this new law; (17) the Department of Education helped DCPS develop its request for proposals for the independent audit of the enrollment count for school year 1996-97, but it had no role in preparing DCPS' official enrollment count for school year 1996-97; and (18) the Authority subsequently decided, however, that auditing the count for school year 1996-97 would be counterproductive. |
WIA specifies one funding source for each of the act’s main client groups—adults, youths, and dislocated workers. Labor estimated that approximately 927,000 dislocated workers would be served with these funds in program year 2000. A dislocated worker is an individual who (1) has been terminated or laid off, or who has received a notice of termination or layoff, from employment; is eligible for, or has exhausted entitlement to, unemployment insurance or is not eligible but has been employed for a sufficient duration to demonstrate attachment to the workforce; and is unlikely to return to previous industry or occupation; (2) has been terminated or laid off, or has received a notice of termination or layoff, from employment as a result of any permanent plant closure of, or substantial layoff at, a plant, facility, or enterprise; (3) was self-employed but is unemployed as a result of general economic conditions in the community in which the individual resides or because of natural disasters; or (4) is a displaced homemaker. The secretary of Labor retains 20 percent of the dislocated worker funds in a national reserve account to be used for emergency grants, demonstrations, and technical assistance and allots the remaining funds to each of the 50 states, the District of Columbia, and Puerto Rico according to a specific formula. The formula, first adopted in 1982 under the Job Training Partnership Act, was grandfathered into the dislocated worker program under WIA.
According to the formula, of the total funds that Labor allots to the states, one-third is based on each of the following: (1) the number of unemployed in the state compared with the total number of unemployed in all states; (2) the number of excess unemployed in the state compared with the total number of excess unemployed in all states (i.e., the number of unemployed greater than 4.5 percent of the total civilian labor force in each state); and (3) the number of individuals unemployed for 15 weeks or more in the state compared with the number of individuals unemployed for 15 weeks or more in all states.

Upon receiving its allotment, each state can reserve no more than 25 percent of its dislocated worker funds to provide “rapid response” services to workers affected by layoffs and plant closings. The funds set aside by the states to provide rapid response services are intended to help dislocated workers transition quickly to new employment. In its regulations, Labor divides rapid response activities into the following three categories:

Required services. These include immediate and on-site contact with the employer experiencing layoffs as well as with employee representatives to assess the needs of affected workers and to provide information to the affected workers about unemployment insurance (UI) and other services.

Optional services. These include developing programs for layoff aversion and incumbent worker training and for analyzing economic dislocation data.

Additional assistance. This includes providing aid to local areas that are experiencing increased unemployment, to pay for direct services such as training.

Under WIA regulations, each state is required to have a rapid response unit with responsibility for rapid response services. The staff in these units may deliver services directly by providing orientations or workshops for dislocated workers, or they may supervise the provision of such services.
In the latter capacity, the state unit staff would assign the delivery of direct services to other personnel such as local area staff or private contractors. In addition to the dislocated worker funds that are set aside for rapid response, WIA allows states to set aside up to 15 percent of their dislocated worker allotment to support statewide activities other than rapid response. These may include a variety of activities that benefit adults, youths, and dislocated workers statewide, such as providing assistance in the establishment and operation of one-stop centers, developing or operating state or local management information systems, and disseminating lists of organizations that can provide training. WIA also permits states to combine the set-aside from the dislocated worker allotment with similar set-asides from their adult and youth allotments. After states set aside funds for rapid response and for other statewide activities, they allocate the remainder of the funds—at least 60 percent—to their local workforce areas. Approximately 600 local workforce areas exist throughout the nation to provide services to dislocated workers.

When the Congress passed WIA in 1998, the dislocated worker program was changed in ways that have important implications for dislocated workers. Unlike JTPA, WIA ensured that some job search and placement assistance is offered to anyone who seeks it, whether or not he or she is eligible for the dislocated worker program. WIA also created three sequential levels of service—core, intensive, and training. In order to move from the core level to the intensive level and from the intensive level to training, an individual must be unable to obtain a job that allows him or her to become self-sufficient.
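The fund flow laid out above (the three-factor state formula, the 20-percent national reserve, the 25-percent rapid response cap, the 15-percent statewide cap, and the minimum 60-percent local allocation) can be sketched in a short Python example. Only the percentages and the one-third formula weights come from this report; every dollar amount and state labor-market figure below is hypothetical, as are the variable names.

```python
# Sketch of the WIA dislocated worker fund flow. The percentages and the
# one-third formula weights are from the report; all dollar figures and
# state statistics below are hypothetical.

def state_allotment(funds_to_states, state, all_states):
    """Three-factor formula: one-third each on the state's share of
    (1) unemployed individuals, (2) excess unemployed (those above 4.5
    percent of the civilian labor force), and (3) individuals unemployed
    for 15 weeks or more."""
    def excess(s):
        return max(0.0, s["unemployed"] - 0.045 * s["labor_force"])

    share = (
        state["unemployed"] / sum(s["unemployed"] for s in all_states)
        + excess(state) / sum(excess(s) for s in all_states)
        + state["long_term"] / sum(s["long_term"] for s in all_states)
    ) / 3
    return funds_to_states * share

appropriation = 1_000_000_000            # hypothetical program-year funding
national_reserve = 0.20 * appropriation  # retained by the secretary of Labor
funds_to_states = appropriation - national_reserve

states = [
    {"name": "A", "unemployed": 90_000, "labor_force": 1_500_000, "long_term": 30_000},
    {"name": "B", "unemployed": 60_000, "labor_force": 2_000_000, "long_term": 20_000},
]

for s in states:
    allotment = state_allotment(funds_to_states, s, states)
    rapid_response_cap = 0.25 * allotment  # most a state may reserve for rapid response
    statewide_cap = 0.15 * allotment       # most for other statewide activities
    local_minimum = allotment - rapid_response_cap - statewide_cap  # at least 60% local
```

Note that in this made-up data, state B's unemployment rate is 3 percent, below the 4.5-percent threshold, so its "excess unemployed" factor is zero and its allotment rests entirely on the other two factors.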
Under WIA, the initial core services—including job search and placement assistance, the provision of labor market information, and preliminary assessment of skills and needs—are available to everyone, whether or not he or she is a dislocated worker. If a dislocated worker is determined to be unable to find a job or has a job that does not lead to self-sufficiency after core services, he or she can receive intensive services, which include comprehensive assessments, development of an individual employment plan, case management, and short-term prevocational services. A dislocated worker cannot receive intensive services until he or she is officially registered in the program. A dislocated worker who is determined to be unable to find a job leading to self-sufficiency after intensive services can move on to training. At this level, a dislocated worker can receive occupational skills training, on-the-job training, and customized training.

With the greater flexibility granted by WIA, local workforce areas are likely to offer services tailored to local needs and services that emphasize a quick return to employment. Many of the local areas that we visited tailored services or designed programs to meet the needs of dislocated workers in their areas. Some workforce areas had also adopted a work-first approach to their services and required individuals to dedicate a set amount of time or a specific number of tasks to finding employment before receiving additional services, such as training. This meant that more individuals returned to work before being registered in the dislocated worker program. Thus, fewer dislocated workers were registered in the program and fewer were enrolled in training. Although WIA was intended to provide local workforce officials with greater flexibility, it also increased their need for timely and accurate information concerning the provisions of the legislation that they are required to implement.
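The sequential core, intensive, and training structure just described can be reduced to a simple gating rule. The sketch below is ours, not Labor's: the function name is invented, and the self-sufficiency wage is a parameter because, as discussed later in the report, each state or local board sets its own standard.

```python
# Sketch of WIA's three sequential service levels for dislocated workers.
# Gating rule from the report: a worker advances only while unable to
# obtain a job leading to self-sufficiency, and must be registered in the
# program before receiving intensive services.

CORE, INTENSIVE, TRAINING = "core", "intensive", "training"

def next_service_level(current_level, best_wage_found, self_sufficiency_wage,
                       registered):
    """Return the next level of service, or None if the worker has found
    a job that meets the local self-sufficiency standard."""
    if best_wage_found is not None and best_wage_found >= self_sufficiency_wage:
        return None  # self-sufficient job found; no further services needed
    if current_level == CORE:
        # Intensive services require official registration in the program.
        return INTENSIVE if registered else CORE
    if current_level == INTENSIVE:
        return TRAINING  # occupational, on-the-job, or customized training
    return TRAINING

# The same hypothetical $16.00/hr job offer ends services under an $8.50
# local standard but lets the worker continue toward training under a
# $16.39 standard (both standards are cited later in the report).
under_low_standard = next_service_level(INTENSIVE, 16.00, 8.50, registered=True)
under_high_standard = next_service_level(INTENSIVE, 16.00, 16.39, registered=True)
```

The two calls at the end illustrate why the local definition of self-sufficiency, not just a worker's circumstances, determines who reaches training.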
Labor has provided guidance and technical assistance to help states transition from JTPA to WIA. Despite these efforts, state and local officials cite an ongoing need for guidance concerning basic program requirements and how to interpret them. Several of the local areas we visited tailored their services or designed programs to meet the particular needs of the dislocated workers in their areas. For example, staff at the one-stop centers that we visited provided general orientation about available services to all interested individuals. However, one local area in California designed an orientation program exclusively for dislocated workers. At this two-hour orientation, benefits and requirements specific to dislocated workers were described and counselors met one-on-one with interested workers for more in-depth needs assessments and strategy development. Unlike other local areas that we visited, this area had two staff members who were responsible for providing a range of services only to dislocated workers. Another local area in California established a separate career resource program to assist the area's dislocated professional workers and employers seeking qualified job applicants in areas such as software development, biotechnology, communications, and human resources. The program, tailored to professional and high-tech dislocated workers, provided the dislocated workers with their own one-stop center where job information and computers were available. In addition, regular meetings were held to share information on job leads and career fairs as well as for moral support. This program also had its own Web site where participating dislocated workers could post their résumés. Employers looking for qualified professional or high-tech applicants were able to search the Web site for potential candidates by means of key words, such as “Web design,” and obtain a list of all résumés containing those key words.
A local area in Maryland that we visited was administering a 3-year, $20 million dislocated worker demonstration grant tailored to local employer needs. The training programs consisted of customized training with extensive involvement from employers in designing the programs to train 3,000 people for high-tech jobs in a metropolitan area covering three states. The program focused on entry-level information technology and telecommunication jobs and, to date, has established training programs for Web developers and cable technicians. This same local area also developed a career transition workshop to help dislocated workers cope emotionally with being laid off and plan for the future. A local area in Louisiana facing a major plant closing tailored a program to meet the needs of the 1,300 workers being laid off. Workers in this plant were primarily from two adjacent workforce areas. Staff from these two areas joined together to establish a transition center on site at the employer’s location. Staff and computers were available around the clock to advise the workers of available services; provide job search and placement assistance, career counseling, and vocational assessments; and register workers into the dislocated worker program under WIA.

The emphasis placed by some local workforce areas on individuals finding a job and the availability of job search and placement assistance prior to enrolling in the dislocated worker program has reduced the number of people registering in the dislocated worker program in those areas. Some local officials have interpreted WIA’s requirements as supporting a work-first philosophy. In four of the local areas we visited, officials required individuals to spend a certain amount of time or perform a specific number of tasks related to finding employment before registering in the dislocated worker program and receiving additional services.
In its March 2001 Status of WIA Readiness Implementation Report, Labor acknowledged that many local areas have adopted some form of a work-first approach to the delivery of services that stresses the importance of a quick entry or reentry into the workforce. Officials from several of the local areas that we visited confirmed that they viewed WIA as a work-first program that emphasizes returning dislocated workers to the workforce. For example, a counselor from a local area in Massachusetts told us that if a client has a marketable skill, he or she must reenter the workforce regardless of any desire for training for a career change. Unlike JTPA, which required that an individual be enrolled as a participant before receiving any services, WIA requires the provision of core services to all adults who seek them, regardless of program eligibility. All of the one-stop centers that we visited had a resource area where individuals could access labor market information, review job openings, create résumés, and even attend some workshops, with topics such as interviewing techniques, without registering for the dislocated worker program. Some local program officials believe that many individuals found employment through these core services and that they therefore did not go on to seek other services that would have required program registration. Because program participation is not recorded before receipt of these preliminary services, the total number of people who used them and found employment is not known. Collectively, the 14 locations that we visited registered nearly 3,000 fewer dislocated workers during the first year of WIA than they had registered under JTPA during the previous program year (5,603 vs. 8,462). Of these locations, eight registered fewer dislocated workers under WIA and six registered more dislocated workers (see fig. 1).
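The registration figures above work out as follows; the two counts are from the 14 sites visited, while the percentage is our own arithmetic, not a figure reported in the text.

```python
# Registration totals for the 14 local areas visited, taken from the text.
wia_registered = 5_603   # first year under WIA
jtpa_registered = 8_462  # previous program year under JTPA

decline = jtpa_registered - wia_registered     # 2,859: "nearly 3,000 fewer"
pct_decline = 100 * decline / jtpa_registered  # roughly a 34 percent drop
```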
Officials from the local workforce areas that registered more dislocated workers under WIA than during the previous year under JTPA cited various reasons for the increase. For example, officials from two local workforce areas said that they had more dislocated worker funds available in program year 2000 and thus were able to provide services to more workers, while another official said that the local workforce area experienced several plant closings that resulted in more workers’ needing assistance. Under WIA, the 14 local workforce areas that we visited enrolled 52 percent fewer dislocated workers in training than they had enrolled under JTPA. Collectively, about 1,500 fewer dislocated workers were enrolled in training under WIA than were enrolled in training under JTPA (1,427 vs. 2,967). Of these areas, nine enrolled fewer dislocated workers and five enrolled an equal or greater number of dislocated workers in training under WIA (see fig. 2). The decrease in the percentage of dislocated workers entering training is tied to local requirements that dislocated workers spend a certain amount of time receiving services or complete a certain number of tasks before being enrolled in training. Although the act requires individuals to receive sequential services, Labor has not imposed a required minimum period of participation in the core or intensive services, leaving this decision instead to the discretion of local workforce boards. Four local areas have set requirements for the amount of time or the number of tasks that a dislocated worker must complete at each level of service before he or she can move to the next level. Officials in three of these areas required dislocated workers to spend at least three weeks searching for a job and documenting their attempts at finding employment.
Officials in the fourth local area required dislocated workers to complete a certain number of tasks, such as documenting 12 unsuccessful job applications or five case management appointments, before moving to the next level of service. The decrease in the percentage of dislocated workers being trained is also tied to the wages of the jobs they may be offered during the job search required before training. The receipt of future services—specifically, training—hinges on a dislocated worker’s ability to find a job leading to self-sufficiency. Only those who are unable to find such a job can continue to training. Among the locations we visited, self-sufficiency was defined differently. Because the definition, within certain parameters, is left to the discretion of state or local workforce boards, the dislocated workers who are allowed to continue to training vary from area to area. For example, a local area in Maryland defined self-sufficiency as having a job that pays $8.50 per hour, while a local area in Louisiana had recently increased its self-sufficiency standard to having a job that pays $16.39 per hour. Three other local areas we visited had no set standard at all. The lower the standard, the harder it is for a worker to qualify for training, because it is easier for the worker to find a job meeting the criterion. State and local workforce officials, uncertain as to the act’s new requirements or how to interpret them in a manner consistent with that of Labor’s Office of Inspector General, sought specific guidance from Labor to assist them in implementing the act. Several officials in the states and local workforce areas that we visited voiced a need for more guidance. They said that they felt uncertain about when individuals should be registered into the dislocated worker program, how to determine when training is an appropriate service strategy, and how to use rapid response funds to provide additional assistance to local workforce areas. 
For example, a rapid response official in the state of Maryland told us that he would like additional guidance from Labor concerning the extent to which a state could use rapid response funds to provide additional assistance to local workforce areas experiencing layoffs. Labor’s guidance, however, does not adequately address this issue. In addition, WIA created a new mindset for workforce development professionals and made substantial changes in how dislocated workers receive services. Unlike under JTPA’s more prescriptive dislocated worker program, state and local workforce officials must continually interpret WIA’s requirements in order to meet the constantly changing needs of the workers and employers they serve. However, not all local workforce officials were prepared to meet this challenge. For example, Labor’s February 2001 final interim report on the early state and local progress toward WIA implementation noted that state and local workforce officials would like to have more guidance on how to interpret the requirements of the act. Labor has provided guidance and technical assistance to aid state and local workforce officials in transitioning from JTPA to WIA, ranging from training sessions conducted by headquarters and regional office staff to the dissemination of guidance concerning WIA’s technical requirements. This guidance, in addition to information about best practices, is generally available via the Internet. According to some workforce officials, however, Labor’s guidance has generally been too broad for them to use when implementing WIA’s requirements, and the information available on the Internet is often outdated. According to Labor officials, the guidance that it has provided to state and local workforce officials on a range of WIA topics has been intentionally nonprescriptive to allow state and local workforce officials to use the flexibility that the act allows to design programs that will accomplish state and locally established goals. 
Despite Labor’s efforts to provide state and local workforce officials with program guidance, misunderstandings still exist concerning some of WIA’s dislocated worker program requirements. In its March 2001 Status of WIA Readiness Implementation Report, Labor found that some dislocated worker program requirements were being interpreted incorrectly. In particular, the report, which was based on Labor’s WIA Readiness Review of all states and 126 local workforce areas, identified the need for additional guidance in the areas of program eligibility and registration, the sequence of services, training, the eligible training provider list, and the consumer report system. States used the flexibility under WIA to decide how much of their set-aside funds to spend on rapid response for dislocated workers and how much to spend on other statewide activities. All states provided some rapid response services, but there was variation in the amount of dislocated worker funds they obligated for rapid response and in the services they provided. Most states, however, have not changed the way they provide rapid response services since implementing WIA. During program year 2000, state set-aside obligations for rapid response averaged 12 percent and ranged from less than 1 percent to the maximum allowable 25 percent. When providing rapid response, most states responded primarily to layoffs and plant closings affecting at least 50 workers and provided, at a minimum, basic informational services for affected workers. Many states also offered other services such as group workshops on job search and used a portion of their rapid response funds to provide additional assistance to local areas experiencing an increase in unemployment. In addition, as allowed by the act, most states combined funds from the 15-percent dislocated worker set-aside with set-aside funds from the adult and youth programs to support a variety of statewide activities and programs. 
Some activities, such as disseminating a list of eligible training providers, are required by the act, while others, such as conducting research and demonstration projects, are optional. States differed in how much of their dislocated worker funds they used for rapid response during program year 2000 and what services they funded with this money. Nearly a third of the 42 states that provided program year 2000 data in their survey responses said that they obligated 5 percent or less of their dislocated worker funds for rapid response activities. Overall, the amount obligated for rapid response in the 42 states ranged from less than 1 percent in Hawaii and Wyoming to the maximum allowable 25 percent in Georgia and Rhode Island (see fig. 3). On average, these states obligated about 12 percent of their dislocated worker funds for rapid response activities. Appendix II shows each state’s dislocated worker allotment and the amount obligated for rapid response activities. Any rapid response funds not used in program year 2000, up to the 25-percent ceiling, could be distributed to local areas or carried over to the following program year to conduct rapid response activities. For example, Maryland reallocated to its local workforce areas $1 million of the $4.2 million it had set aside for rapid response activities, and Louisiana carried over into the next program year $5.1 million of the $6.1 million it had set aside for rapid response. Rapid response services almost always include the provision of basic information for workers being laid off, and in many states, additional services such as group workshops are available. Forty-five of the 50 states that responded to our survey had a rapid response unit consisting of state employees who delivered at least some direct services to the workers being laid off. 
Almost all of these state units contacted employers experiencing layoffs to explain available rapid response services and provided orientations for workers being laid off. State staff often delivered these services in conjunction with local area staff. In the six states that we visited, orientation sessions provided information to workers on topics such as UI benefits, services available at the local one-stop centers, and training opportunities. In many states, services in addition to orientation are also available to dislocated workers. These services, including group workshops on topics such as job search and stress management and one-on-one meetings to discuss subjects such as financial planning, were provided usually by local area staff but sometimes in conjunction with the state unit or private contractors, such as unions (see fig. 4). Rapid response units in some states were more involved in providing direct services after a layoff than were units in other states. For example, in Florida, the state rapid response unit provided a broader range of services than did the units of most other states. The Florida unit directly provided workshops and one-on-one meetings in addition to general informational services. In Maryland, as in many states, the state unit played a more limited role. The Maryland unit contacted employers experiencing layoffs and participated, along with staff from local one-stop centers, in orientations for the affected workers. Any services beyond the orientations, including workshops and one-on-one meetings at the work sites, were provided exclusively by local staff. Louisiana had a state unit that met with employers and conducted orientations but provided no other direct services. To supplement the additional services provided by its local areas, Louisiana contracted with a private agency to provide workshops on topics such as résumé development and stress management for dislocated workers around the state. 
A state official explained that Louisiana hired this agency because some local areas that experience significant layoffs infrequently lack the experience to provide effective rapid response services. Five of the 50 states that responded to our survey delegated all responsibility for direct rapid response services to staff in the local workforce areas. For example, California had a state unit that informed local areas of impending layoffs but delivered no direct services. The state distributed a portion of its rapid response funds to the local areas to provide direct services. State officials believed that the state’s size and diversity made local flexibility more feasible than a single, uniform approach. Another advantage, according to a local official, was that referrals of workers from rapid response units to one-stop centers were smoother because rapid response staff were also local area staff. While the state stressed local flexibility, it also encouraged coordination among local areas that share a labor market. Ten local areas in northern California were collaborating to standardize their rapid response services, provide services jointly, and possibly contract with a private agency for all rapid response. New York was another state where local workforce area staff were generally responsible for delivering rapid response services. Unlike California, however, New York did not provide the local workforce areas with funding for these services. New York also had a $1 million contract with representatives of organized labor to provide rapid response assistance when their union members were affected by a layoff. Most states provided rapid response primarily for larger layoffs and plant closings affecting 50 or more workers. 
Responding to layoffs of 50 or more workers appears to be related to the Worker Adjustment and Retraining Notification (WARN) Act of 1988, which requires companies with 100 or more full-time employees to notify state dislocated-worker staff of layoffs and plant closures generally affecting 50 or more full-time workers. Of the 45 states using state staff to provide rapid response services, staff in 37 states generally provided rapid response services for layoffs affecting 50 or more workers, which, on average, represented 75 percent of the layoffs to which each state unit responded during program year 2000. Workers affected by dislocation events that are too small to trigger state unit involvement may nonetheless receive local rapid response services. In fact, almost all of the states that had a trigger for state rapid response said that local staff in their states may have provided rapid response services for layoffs and plant closings that were too small to trigger rapid response by the state unit. Illinois and Massachusetts illustrate different approaches to the use of a trigger for state unit response. Illinois obligated about $2.4 million for rapid response and had a unit of state employees that was responsible for rapid response services statewide. These employees provided direct services for all layoffs and closures affecting 50 or more workers and responded to 170 such events during program year 2000. Some local workforce areas provided rapid response services for dislocation events affecting fewer than 50 workers, but the state did not require them to serve these smaller events and did not distribute any rapid response funds to them for this purpose. On the other hand, Massachusetts obligated about $1.2 million for rapid response and had a unit of state employees that attempted to provide rapid response for all layoffs regardless of size. 
During program year 2000, the unit responded to 158 events affecting 50 or more workers and 149 events affecting fewer than 50 workers. In addition to providing direct rapid response services to workers affected by a layoff or plant closing, 32 states said that they used a portion of their rapid response set-aside funds to provide additional assistance to local areas that experienced an increase in unemployment owing to plant closings or mass layoffs. In the states for which data were available, more than half of the $129.6 million that these states set aside for rapid response was used to provide additional assistance to local workforce areas (see app. II). Of these 32 states, 15 said that they provided additional assistance to local areas only to help them address specific layoffs and required that local areas spend the funds exclusively on workers affected by those layoffs. For example, Maryland provided $250,000 in additional assistance to one local workforce area that intended to provide training to a small number of workers laid off from a bottled water plant, and Louisiana provided $72,531 in additional assistance to a local workforce area to set up a worker transition center at a clothing plant that was closing. Another eight states said that they provided additional assistance to local areas that experienced a general rise in unemployment and did not tie the use of the funds to specific layoffs. For example, California provided $3 million in additional assistance to a local workforce area to provide comprehensive services for its dislocated workers in a region with high job turnover. Nine other states said that they awarded funds for both rapid response and additional assistance during program year 2000. Thirty of the 50 states responding to our survey have not changed the way they provide rapid response since implementing WIA. 
The remaining 20 states reported making changes in the way they provide rapid response as a result of WIA, but few of these changes were significant and none were required by the act or the regulations. The more significant changes involved giving the state unit greater responsibility for direct services or developing new programs to distribute set-aside funds to local workforce areas. For example, Washington state and Kansas assigned state staff to each local workforce area to coordinate and deliver rapid response services. Also, Indiana developed a program to quickly distribute additional funds within one or two days to local workforce areas experiencing mass layoffs to help them provide rapid response services. Other changes included increasing coordination between the state rapid response unit and other workforce partners, changing the focus of orientations from training benefits to available job search services, and shifting state units from one state department to another. (See table 1.) During program year 2000, most states took advantage of the flexibility under WIA and combined dislocated worker set-aside funds with set-aside funds from the adult and youth programs to support a variety of statewide activities. Some activities, such as developing or operating a statewide management information system, benefited dislocated workers along with other types of workers such as adults and youths; other activities, such as career training for at-risk youths, benefited a specific segment of the population who were not dislocated workers. During program year 2000, states used their set-aside funds for statewide activities for various purposes. Under WIA, states can set aside up to 15 percent of their dislocated worker allotment to support some required statewide workforce investment activities. 
These activities include providing additional assistance to local areas that have high concentrations of eligible youths, assisting in the establishment and operation of one-stop center systems, disseminating a list of eligible providers of training services, and operating a management information system. In addition, the act allows states to use the funds for other allowable activities such as state administration, research and demonstration projects, and innovative incumbent worker training programs (i.e., programs to improve the skills of employed workers). Of the 50 states responding to our survey, 43 said that they combined set-aside funds for statewide activities from the dislocated worker allotment with similar funds from the adult and youth programs. Appendix III lists each state’s allotment for its adult, youth, and dislocated worker programs and identifies the maximum amount of funds that could be set aside to support statewide activities. As allowed by the act, these states combined the funds and used them for a variety of purposes. For example, 41 states reported that they spent, on average, 25.7 percent of the combined set-aside funds on carrying out general state-level administrative activities, while 37 states reported spending, on average, 14.8 percent on assisting the establishment and operation of one-stop centers (see table 2). Several states are using the flexibility that WIA provides by spending the majority of their combined set-aside funds on a single activity. For example, Virginia spent over half of its $5.8 million combined set-aside funds to operate a fiscal and management accountability information system. Missouri used over half of its $6.7 million combined set-aside funds to assist in the establishment and operation of one-stop centers. Iowa used nearly two-thirds of approximately $900,000 of its combined set-aside funds to carry out general state-level administrative activities. 
Appendix IV shows the percentage of combined set-aside funds that the 43 states dedicated to each activity listed in table 2. In addition to funding the required and optional activities identified in the act, 30 states funded other activities. Many of these activities were directed to programs that benefit a specific group. For example, Arizona used about 4 percent of its $6.6 million combined set-aside funds for older worker training and support, Kentucky used about 19 percent of its $6.4 million combined set-aside funds for statewide youth programs, and Montana used about 5 percent of its $2.2 million combined set-aside funds for adult literacy and education. Several of the states that we visited used the flexibility provided by the act to fund projects that the states determined were most in need of additional funding. In many instances, these projects were targeted to specific groups. For example, of its $63 million combined set-aside, California spent $6 million on a project to train veterans, $15 million on a project to train caregivers who work with the aging and disabled population, and $20 million to provide job training to targeted groups including at-risk pregnant teens, homeless individuals, noncustodial parents, and farm workers. Similarly, Illinois spent $1.3 million of its $13 million set-aside to help individuals obtain their high school general equivalency diploma over the Internet; Louisiana spent $1.5 million of its $10 million set-aside on services for UI claimants who were projected to exhaust their UI benefits (the projection is known as UI profiling); Maryland spent $330,000 of its $6 million set-aside to train at-risk youths for a career in the merchant marine service. The dislocated worker funding formula distributes funds that vary dramatically from year to year and that do not recognize fluctuations in state dislocated worker populations. 
State and local officials said that the volatility in the allotment of formula funds could limit the ability of some states to provide basic program services to dislocated workers. Without stable funding levels that are tied to the number of dislocated workers, states are unable to conduct the meaningful long- or short-term financial planning that is necessary to develop and deliver high-quality services for dislocated workers. Information obtained from Labor on state allotments between program years 1997 and 2001 also raises concerns about the performance of the current funding formula (see app. V for a detailed listing of the dislocated worker funding formula allotments by state). Many states have experienced very substantial changes in funding from one year to the next over this time period. For example, Mississippi’s funding for program year 2001 increased nearly 130 percent over that for program year 2000 (from $13.4 million to $30.7 million), while Arkansas’s funding dropped by more than 40 percent (from $12.4 million to $7.1 million). Figure 5 displays the ten states with the largest percentage changes in dislocated worker funding allotments between program years 2000 and 2001. Such changes, which do not seem to be in proportion to the number of dislocated workers in a state, appear to corroborate concerns raised by state officials regarding the volatility of the current formula. The dislocated worker funding formula consists of three factors, each of which determines one-third of the allotment given to a state. None of the three factors is directly related to the dislocation activity in a state. Two parts of this funding formula, however, contribute to the fluctuations in state funding of the dislocated worker program. 
An analysis of the funding formula reveals that the primary cause of funding fluctuations is the two parts of the formula that incorporate the number of excess unemployed (exceeding 4.5 percent of the total labor force) and the number of long-term unemployed. The number of excess unemployed displayed an extremely high degree of volatility during the 1997 to 2001 time period. For example, in program year 1997, 36 states had unemployment rates above 4.5 percent and therefore qualified for funding under this part of the formula. By program year 2001, only 13 states continued to receive funding under this part of the formula. Thus, as economic conditions improve, the number of states receiving funding under this part of the formula decreases (see fig. 6). The decline in the number of states that received funding under this part of the formula, in combination with increased funding during this period, resulted in more funding for each state that remained eligible; states falling below the 4.5 percent threshold saw their allotments reduced substantially. In program year 1997, $345 million was allotted among 36 states, for an average of $9.5 million per state. By program year 2001, $424 million was allotted to 13 states, resulting in an average allotment of $32.6 million per state. The nearly 130 percent increase in funding between program years 2000 and 2001, reported for Mississippi in figure 5, was largely the result of a two-thirds reduction in the number of states that received funding under this criterion. This volatility in funding will likely persist as unemployment rates rise in response to the current economic slowdown. Rising unemployment in the future means that more states will again qualify for funding based on the excess unemployment criterion and that even as their own unemployment increases, the 13 states will likely experience substantial funding losses as more states become eligible for funding based on this criterion. 
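The concentration effect described above can be sketched in a few lines of Python. The dollar amounts, state counts, and the 4.5 percent threshold come from the report; the function and variable names are illustrative only, not part of Labor's actual allotment methodology.

```python
# Illustrative sketch of how the excess-unemployment part of the formula
# concentrates funds as fewer states qualify. Dollar figures and state
# counts are taken from the report; everything else here is hypothetical.

def average_allotment(total_dollars, qualifying_states):
    """Average per-state allotment for one part of the formula."""
    return total_dollars / qualifying_states

# Program year 1997: $345 million split among the 36 states whose
# unemployment rates exceeded the 4.5 percent threshold
# (about $9.5 million per state, as reported).
avg_1997 = average_allotment(345_000_000, 36)

# Program year 2001: $424 million split among only 13 qualifying states
# (about $32.6 million per state).
avg_2001 = average_allotment(424_000_000, 13)

# The average allotment per qualifying state more than tripled.
print(f"{avg_2001 / avg_1997:.1f}x")  # prints "3.4x"
```

The sketch also makes the report's forward-looking point concrete: if rising unemployment pushes more states back over the 4.5 percent threshold, the same pool is divided among more states and each remaining state's share falls, even while its own unemployment is increasing.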
In addition to the number of excess unemployed, the number of long-term unemployed also contributed to the fluctuations in program funding for individual states. For example, the allotments for long-term unemployed in Minnesota declined by more than 20 percent in program year 2000 and increased by more than 50 percent the following year. In New Hampshire, the pattern was the opposite: an increase of more than 85 percent was followed by a decline of nearly 45 percent (see fig. 7). The funding fluctuation introduced by the number of long-term unemployed is particularly problematic in that the number of long-term unemployed is not necessarily indicative of the number of dislocated workers in a state, because individuals can be unemployed for 15 weeks or more and not have been laid off. Furthermore, the long-term unemployed are no longer included under the definition of a dislocated worker and are therefore not automatically eligible for the dislocated worker program. The high degree of volatility in formula allotments has resulted in increasingly wide disparities in funding across states. In program year 1997, both Texas and Mississippi received the same funding per unemployed resident. However, because Texas became ineligible for funding based on excess unemployment in 2001, its funding per unemployed resident dropped slightly, while Mississippi (one of the 13 states still eligible) saw its funding jump more than three-fold. As shown in figure 8, the program year 2001 funding per unemployed individual in Mississippi was three times higher than in Texas, even though in program year 1997, the funding per unemployed individual was nearly identical. (See table 11 in app. V for a complete listing of each state’s funding per unemployed resident for program years 1997 through 2001.) When the Congress passed WIA in 1998, it mandated that the secretary of Labor undertake a study to improve the formula for the adult program. 
This mandate includes the study of the formula used to allot adult program funds to the states and of the formula used to allocate these funds within the states. The study has been completed but has not yet been released. The mandate did not address the formula for allocating dislocated worker program funds. WIA was passed with the intention of providing greater flexibility to states and local workforce areas, but more detailed guidance could enable local workforce areas to better use the act’s flexibility. Clearly, WIA intends to provide state and local areas with the flexibility to design programs that meet the specific needs of dislocated workers in their areas. Given the early stage of implementation, it is not surprising that some state and local officials remain confused about how to put into practice some of the act’s new requirements, such as when to register individuals in the dislocated worker program. Although Labor has provided broad guidance and technical assistance to aid the transition from JTPA to WIA, some workforce officials have stated that the guidance does not address specific implementation concerns. Efforts to design flexible programs that meet local needs could be enhanced if Labor addressed the concerns of workforce officials with specific guidance regarding the act’s implementation and disseminated information on best practices in a timely manner. Some states have trouble meeting the needs of their dislocated workers, because the amount of dislocated worker funds they receive varies dramatically from year to year and is not directly related to the states’ dislocated worker populations. The fluctuation in funding is caused by a three-part funding formula that incorporates factors that are no longer relevant to the dislocated worker program, that are highly volatile from year to year, and that do not reflect the number of dislocated workers in a state. 
A dislocated worker formula that incorporates factors more accurately approximating a state’s dislocated worker population would provide states with a more relevant level of funding for services to their dislocated workers. We recommend that the secretary of Labor provide local workforce areas with additional guidance on implementation issues and information on best practices to facilitate implementation of the dislocated worker program under WIA and to assist local workforce officials in using the greater flexibility afforded by the act to design programs and services. Such guidance would help the local areas further define their policies and procedures to meet the needs of their dislocated workers. We also recommend that the secretary identify strategies for disseminating this information in a timely manner. In particular, Labor should proactively identify areas that emerge as requiring additional guidance to help state and local areas implement the dislocated worker program; disseminate guidance that is more responsive to the concerns of workforce officials responsible for implementing the act’s requirements, including when to register individuals into the dislocated worker program and how to provide additional assistance to local areas using rapid response funds; and disseminate timely information on best practices being developed by local areas to meet the needs of their dislocated workers. We suggest that the Congress consider modifying the existing dislocated worker funding formula to minimize funding volatility and to ensure that dislocated worker funds are better distributed to states in relation to their dislocated worker population. The Congress may wish to direct Labor to undertake a study of the dislocated worker funding formula to identify factors that would enable better distribution of program funds to states in relation to their dislocated worker population. We provided a draft of this report to Labor for review and comment. 
Labor noted that the report provided an informative review of how states have responded to the challenges presented by the implementation of WIA. Labor generally agreed with our recommendations and identified steps that it is taking to address them. Labor commented that the report provided the agency’s first opportunity to review many of the issues regarding the use of state set-aside funds for rapid response and other statewide activities and said that analysis of this data will be used to determine areas requiring more technical assistance and guidance. Labor also provided technical comments that we incorporated where appropriate. Labor’s entire comments are reproduced in appendix VI. Regarding our recommendation that Labor proactively identify areas requiring additional guidance, Labor generally agreed, pointing out that it had organized four WIA readiness workgroups consisting of local, state, and federal representatives that had identified several potential areas for additional federal guidance. However, Labor said that it did not want to interfere with the flexibility that WIA provides to states and localities. We acknowledge Labor’s efforts and encourage Labor to continue to monitor emerging issues by facilitating discussions between local, state, and federal officials on an ongoing basis. Regarding our recommendation that Labor disseminate more guidance on issues such as point of registration and use of rapid response funds for additional assistance, Labor agreed, saying that it plans to issue additional guidance on establishing the point of registration and believes that a common point of registration is an integral component of a nationwide system of performance accountability. Labor also recognizes that registration guidance cannot be developed in isolation and must reflect the complexities of WIA’s performance accountability system. 
Regarding the issue of guidance on the use of rapid response funds for additional assistance, Labor said that a lack of guidance on this subject had not been identified previously in its implementation assessments or by its readiness workgroups. Labor noted that the information in our report would allow further exploration of this issue and a determination of whether federal guidance is necessary on this topic. We concur with Labor’s assessment and agree that such guidance should be developed with input from those officials responsible for implementing WIA at the local level and should be consistent with the accountability system established under WIA. Regarding our recommendation that Labor disseminate timely information on best practices, the agency stated that it has a contract with the state of Illinois to develop a Web site to display promising practices. We applaud Labor’s efforts in this regard, agreeing that a Web site is an excellent vehicle for providing information to a wide audience. We strongly encourage Labor to monitor the site’s implementation to ensure that the information posted to the Web site is kept current. Finally, regarding our suggestion that the Congress consider modifying the dislocated worker funding formula, Labor replied that it has been aware of the severe funding fluctuations and the difficulties such fluctuations present to states. It believes that resource allocation practices should ensure that funds are distributed in a manner that puts resources where they are most needed, and it acknowledged that because worker dislocations take place after formula funds are allocated, available resources do not always match need. Labor noted that it has initiated a review of the WIA dislocated worker funding formula. While we support Labor’s efforts to review this formula, we believe that it is imperative that such an initiative be congressionally mandated. We are sending copies of this report to the Honorable Elaine L. 
Chao, secretary of Labor; relevant congressional committees; and others who are interested. Copies will be made available to others upon request. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix VII. We were asked to determine (1) how the implementation of WIA has affected the services provided to dislocated workers at the local level, (2) how funds set aside for rapid response and other statewide activities are used to assist dislocated workers under WIA, and (3) whether the distribution of dislocated worker funds is appropriately targeted to states in relation to their dislocated worker population. To determine how services are provided to dislocated workers, we visited 14 local workforce areas located in 6 states and distributed surveys to all 50 states, the District of Columbia, and Puerto Rico concerning the use of state set-aside funds for rapid response activities and for other statewide activities. We also interviewed officials from the U.S. Department of Labor (Labor), the National Alliance of Business, the National Governor's Association, and the National Association of Workforce Boards. In selecting which states to visit, we categorized them according to the size of each state's allotment for program year 2000, the number of mass layoff events during the previous year, and the number of workers affected by those events. We used three categories for the size of the dislocated worker allotment: large—more than $50 million; medium—$15 million to $50 million; and small—less than $15 million. Similarly, we used three categories for layoff activity: large—more than 200 mass layoff events or more than 30,000 workers laid off; medium—75 to 200 mass layoff events or 10,000 to 30,000 workers laid off; and small—fewer than 75 mass layoff events or fewer than 10,000 workers laid off.
We obtained the funding information from Labor's Employment and Training Administration and the mass layoff data from the U.S. Bureau of Labor Statistics. We then chose states from the different groups to provide variety in terms of funding size, dislocation activity, and location (see table 3). Within each state, we picked two local workforce areas, except in California, where we picked four areas. We judgmentally selected these workforce areas to provide a range of funding sizes and types of areas—specifically, urban versus rural (see table 4 for a list of the selected local workforce areas). At each of these locations, we interviewed officials representing the local workforce area and local workforce board, and we toured one or more one-stop centers. In some of the locations, we also attended orientation meetings and met with one-stop center staff. We distributed two surveys to the 50 states, the District of Columbia, and Puerto Rico. One survey was designed to obtain information on how states used their set-aside funds for other statewide activities, and the other was designed to obtain information on how states used their set-aside funds for rapid response. We sent the survey on other statewide activities to the 52 state agencies responsible for WIA implementation and sent the survey on rapid response to the 52 state units responsible for rapid response activities. As of September 27, 2001, we had received 50 responses (96 percent) to the survey on statewide activities and 50 responses (96 percent) to the survey on rapid response. Ohio and Pennsylvania did not respond to the survey on other statewide activities, and Maine and New Hampshire did not respond to the survey on rapid response. Fifty states responded to our survey on states' rapid response programs. Of the 50 respondents, 42 provided program year 2000 financial data that do not include program year 1999 carryover funds.
Table 5 shows, for each of these 42 states, the total amount of dislocated worker funds set aside for rapid response activities. Funds obligated for rapid response activities are further broken down into two categories of obligations: rapid response services and additional assistance to local areas. The Workforce Investment Act (WIA) permits states to set aside up to 15 percent of the allotments for their adult, dislocated worker, and youth programs. In addition, the act allows the states to combine these funds to support a variety of statewide activities. Table 6 lists, for all 50 states, the District of Columbia, and Puerto Rico, the program year 2000 WIA adult, dislocated worker, and youth allotments and the maximum allowable combined set-aside for statewide activities. Forty-three of the 50 states responding to our survey on the use of set-aside funds for statewide activities indicated that they combined set-aside funds from their adult, youth, and dislocated worker allotments, as allowed by the act. The following graphs identify the percentage of statewide set-aside funds that these states spent on various activities. In some instances, the upper limit (greater than 10 percent) included a wide range. Accordingly, we have provided more information in the text on expenditures by states for this category. One state (North Dakota) spent 30 percent of its set-aside funds on disseminating a state list of training providers. Five states spent between 10 percent and 18 percent of their combined set-aside funds on conducting evaluations of programs or activities. In addition, South Dakota and Arizona spent about 25 percent on this activity. Seven states spent between 10 percent and 17 percent of their combined set-aside funds on providing incentive grants to local areas. In addition, Illinois spent over 21 percent, Wisconsin spent almost 29 percent, and Nebraska spent about 38 percent on this activity.
Four states spent between 10 percent and 15 percent of their combined set-aside funds on providing technical assistance to local areas. In addition, Mississippi and South Dakota spent 25 percent on this activity. Seven states spent between 11 percent and 20 percent of their combined set-aside funds on assisting in the establishment and operation of one-stop center systems; another nine states spent between 22 percent and 37 percent of their funds on this activity. In addition, Connecticut spent about 41 percent and Missouri spent about 52 percent. Four states spent between 12 percent and 15 percent of their set-aside funds on additional assistance for local areas with a high concentration of eligible youths. Four states spent between 10 percent and 20 percent of their set-aside funds on operating fiscal and management accountability information systems; another eight states spent from 20 percent to 30 percent on this activity. In addition, Arkansas spent almost 39 percent, Idaho spent about 47 percent, and Virginia spent about 51 percent on this activity. Nine states spent between 10 percent and 20 percent of their combined set-aside funds on carrying out general state-level administrative activities, six states spent between 20 percent and 30 percent on this activity, and 18 states spent between 30 percent and 35 percent. In addition, Texas spent about 38 percent, Nebraska spent about 45 percent, and Iowa spent about 64 percent of their combined set-aside funds on this activity. Five states spent between 10 percent and 19 percent of their combined set-aside funds on providing capacity building to local areas through training of staff, development of exemplary program activities, or both. Idaho spent 13 percent, Florida spent about 29 percent, and New Hampshire spent about 35 percent of their combined set-aside funds on conducting research and demonstration projects.
Four states spent between 10 percent and 20 percent of their combined set-aside funds on implementing incumbent worker training. In addition, Vermont spent 30 percent, Florida spent 34 percent, and Indiana spent 37 percent of their combined set-aside funds on this activity. Virginia spent 19 percent of its set-aside funds on implementing programs for displaced homemakers. Vermont spent about 11 percent of its set-aside funds on implementing training programs for nontraditional employment positions. Seven states spent between 10 percent and 18 percent of their set-aside funds on other activities, six states spent between 19 percent and 31 percent, and two states spent between 31 percent and 40 percent. In addition, Alabama and West Virginia both spent about 47 percent of their set-aside funds on these activities, while Nevada spent about 73 percent and California spent about 76 percent. Appendix V presents detailed results of our analysis of the federal funding formula for dislocated workers and its impact on 50 states, the District of Columbia, and Puerto Rico (see tables 7 through 11). We obtained information for the analysis of the funding formula and the state dislocated worker allotments between program years 1997 and 2001 from the Department of Labor. Arthur Merriam, Joseph Evans, and Lorin Obler made significant contributions to this report, in all aspects of the work throughout the review. In addition, Jerry Fastrup and Richard Horte led the analysis of the dislocated worker funding formula, James Wright and John Smale assisted in the design of the two national surveys, Jessica Botsford and Richard Burkard provided legal support, and Corinna Nicolaou assisted in the message and report development. Workforce Investment Act: Improvements Needed in Performance Standards to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. 
Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001. Trade Adjustment Assistance: Experiences of Six Trade-Impacted Communities. GAO-01-838. Washington, D.C.: August 24, 2001. Veterans' Employment and Training Service: Proposed Performance Measurement System Improved, But Further Changes Needed. GAO-01-580. Washington, D.C.: May 15, 2001. Trade Adjustment Assistance: Trends, Outcomes, and Management Issues in Dislocated Worker Programs. GAO-01-59. Washington, D.C.: October 13, 2000. Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000. | Under the Workforce Investment Act, local workforce areas are likely to offer dislocated workers services that are tailored to local needs and that emphasize a quick return to employment. Nine of the local workforce areas that GAO visited emphasized a quick return to work and enrolled fewer dislocated workers into training than were enrolled under the Job Training Partnership Act (JTPA). Five local areas enrolled into training an equal or greater number of dislocated workers than were enrolled under JTPA. States used the act's flexibility to decide how much of their set-aside funds to spend on rapid response for dislocated workers and how much to spend on other statewide activities. Most of the 50 states that responded to a GAO survey on rapid response activities said that their state unit provided services when layoffs and plant closings involved 50 or more workers and that the state generally relied on local workforce area officials to provide rapid response services for layoffs affecting fewer workers. Workforce officials in several states expressed concern that the act's dislocated worker funding formula causes dramatic fluctuations in funding that are unrelated to the number of dislocated workers in the state.
The military services and defense agencies, such as the National Security Agency and the National Imagery and Mapping Agency, collect and use intelligence data—either in the form of photographic, radar, or infrared images or electronic signals—to better understand and react to an adversary's actions and intentions. This data can come from aircraft like the U-2 or Global Hawk, from satellites, or from other ground-, air-, sea-, or space-based equipment. The sensors that collect this data are linked to ground-surface-based processing systems that collect, analyze, and disseminate it to other intelligence processing facilities and to combat forces. (See figures 1 and 2.) These systems can be large or small, and fixed, mobile, or transportable. For example, the Air Force operates several large, fixed systems that provide extensive analysis capability well beyond combat activities. By contrast, the Army and Marine Corps operate smaller, mobile intelligence systems that travel with and operate near combat forces. A key problem facing DOD is that these systems do not always work together effectively, slowing the collection, analysis, and dissemination of data, sometimes by hours or even days, though DOD reports that timing has improved in more recent military operations. At times, some systems cannot easily exchange information because they were not designed to be compatible and must work through technical patches to transmit and receive data. In other cases, the systems are not connected at all. Compounding this problem is the fact that each service has its own command, control, and communications structure that presents barriers to interoperability. Among the efforts DOD has underway to improve interoperability is the migration to a family of overarching ground-surface systems, based on the best systems already deployed and on future systems.
DCGS will not only connect individual systems but also enable these systems to merge intelligence information from multiple sources. The first phase of the migration effort will focus on connecting existing systems belonging to the military services—so that each service has an interoperable "family" of systems. The second phase will focus on interconnecting the families of systems so that joint and combined forces can have an unprecedented, common view of the battlefield. DOD's Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence is leading this effort. Successfully building a compatible ground-surface system is extremely challenging. First, DOD faces a significant technical challenge. The ground-surface-based systems must have not only compatible electronic connections but also compatible data transfer rates, data formats, and vocabularies. At the same time it modifies systems, DOD must protect sensitive and classified data and be able to make fixes to one system without negatively affecting others. All of these tasks will be difficult to achieve given that the systems currently in operation were designed by the individual services with their own requirements in mind and that the services still own them. Second, sufficient communications capacity (e.g., bandwidth) must exist to transmit large amounts of data. DOD is still in the early stages of adding this capacity through its bandwidth expansion program. Third, DOD must have enough qualified people to analyze and exploit the large volumes of data modern sensors are capable of collecting. Lastly, DOD must still address interoperability barriers that stretch well beyond technical and human capital enhancements. For example, the services may have operating procedures and processes that simply preclude them from sharing data with other services and components, or they may have inconsistent security procedures.
Formulating and following common processes and procedures will be difficult since the services have historically been reluctant to do so. Given the multi-billion-dollar commitment and the many technical and operational challenges of the migration initiative, it is critical that DOD have effective plans to guide and manage system development. These would include such things as a comprehensive architecture, a migration plan, and an investment strategy. However, even though it initiated DCGS in 1998 and is fielding new intelligence systems, DOD is still in the beginning stages of this planning. It is now working on an enterprise architecture, a high-level concept of operations for the processing of intelligence information, and an overarching test plan, and it expects these to be done by July 2003. DOD has not yet focused on an investment strategy or on a migration plan that would set a target date for completing the migration and outline activities for meeting that date. By fielding systems without completing these plans, DOD is increasing the risk that DCGS systems will not share data as quickly as needed by the warfighter. Successfully moving toward an interoperable family of ground-surface-based processing systems for intelligence data is a difficult endeavor for DOD. The systems now in place are managed by many different entities within DOD. They are involved in a wide range of military operations and installed on a broad array of equipment. At the same time, they need to be made compatible and interoperable. DOD's migration must also fit in with long-term goals for achieving information superiority over the enemy. Several elements are particularly critical to successfully addressing these challenges.
They include an enterprise architecture, or blueprint, to define the current and target environment for ground-based processing systems; a road map, or migration plan to define how DOD will get to the target environment and track its progress in doing so; and an investment strategy to ensure adequate resources are provided toward the migration. Each of these elements is described in the following discussions. Enterprise architecture. Enterprise architectures systematically and completely define an organization’s current (baseline) or desired (target) environment. They do so by providing a clear and comprehensive picture of a mission area—both in logical (e.g., operations, functions, and information flows) terms and technical (e.g., software, hardware, and communications) terms. If defined properly, enterprise architectures can assist in optimizing interdependencies and interrelationships among an organization’s operations and the underlying technology supporting these operations. Our experience with federal agencies has shown that attempting to define and build systems without first completing an architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface, and do not optimize mission performance. DOD also recognizes the importance of enterprise architectures and developed a framework known as the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) Architecture Framework for its components to use in guiding efforts similar to DCGS. DOD’s acquisition guidance also requires the use of architectures to characterize interrelationships and interactions between U.S., allied, and coalition systems. Migration plan or road map. Given the size and complexity of DCGS, it is important that the migration be planned in convenient, manageable increments to accommodate DOD’s capacity to handle change. 
At a minimum, a plan would lay out current system capabilities, desired capabilities, and the specific initiatives, programs, projects, and schedules intended to get DOD and the services to that vision. It would also define measures for tracking progress, such as testing timeliness and the status of modifications; roles and responsibilities for key activities; and mechanisms for enforcing compliance with the migration plan and ensuring that systems conform to the technical and data standards defined by the architecture. Such plans, or road maps, are often developed as part of an enterprise architecture. Investment strategy. To ensure the migration is successfully implemented, it is important to know what funds are available—for the initial phases of migration, for interoperability testing, and for transition to the target architecture. It is important as well to know what constraints or gaps need to be addressed. By achieving better visibility over resources, DOD can take the steps needed to analyze its migration investment as well as funding alternatives. DOD is in the process of developing an architecture for DCGS. It expects the architecture to be completed by July 2003. As recommended by DOD's C4ISR Architecture Framework, the architecture will include (1) a baseline, or as-is, architecture and (2) a target, or to-be, architecture. The architecture will also include a high-level concept of operations. The architecture will also reflect DOD's future plans to develop a Web-based intelligence information network. This network would substantially change how intelligence information is collected and analyzed and could therefore substantially change DOD's requirements for DCGS. Currently, ground-surface-based systems process intelligence data and then disseminate the processed data to select users. Under the new approach, unprocessed data would be posted on a Web-based network, leaving a larger range of users to decide which data they want to process and use.
DOD has started implementing its plans for this new network but does not envision fully implementing it until 2010-2015. In addition, DOD has created a DCGS Council, composed of integrated product teams, to oversee the migration. Teams exist for each type of intelligence (imagery, signals, and measurement and signature) and for test and evaluation and infrastructure, and working groups study specific issues. In tandem with the architecture, DOD has also issued a capstone requirements document for the migration effort. This document references top-level requirements and standards, such as the Joint Technical Architecture, with which all systems must comply. DOD is also developing an overarching test plan, called the Capstone Test and Evaluation Master Plan, which will define the standards, test processes, test resources, and responsibilities of the services for demonstrating that the systems can work together, as well as an operational concept for processing intelligence information. An enterprise architecture and overarching test plan should help ensure that the ground-surface-based processing systems selected for migration will be interoperable and that they will help to achieve DOD's broader goals for its intelligence operations. But there are gaps in DOD's planning that raise risks that the migration will not be adequately funded and managed. First, the planning process itself has been slower than DOD officials anticipated. By the time DOD expects to complete its architecture and testing plan, it will have been proceeding with its migration initiative for 4 years. This delay has hampered DOD's ability to ensure interoperability in the systems now being developed and deployed.
Second, DOD still lacks a detailed migration plan that identifies which systems will be retained for migration; which will be phased out; when systems will be modified and integrated into the target system; how the transition will take place and how efforts will be prioritized; and how progress in implementing the migration plan and architecture will be enforced and tracked. Until DOD puts this in place, it will lack a mechanism to drive its migration. Moreover, the DCGS Council will lack a specific plan and tools for executing its oversight. Third, DOD has not yet developed an integrated investment strategy for its migration effort that would contemplate what resources are available for acquisitions, modifications, and interoperability testing and how gaps in those resources could be addressed. More fundamentally, DOD still lacks visibility over spending on its intelligence systems since spending is spread among the budgets of DOD's services and components. As a result, DOD does not fully know what has already been spent on the migration effort, nor does it have a means for making sure the investments the services make in their intelligence systems support its overall goals or, if they do not, for determining what other options can be employed to make sure spending is on target. DOD officials agreed that both a migration plan and an investment strategy were needed but said they were concentrating first on completing the architecture, test plan, and operational concept. DOD has a process in place to test and certify that systems are interoperable, but it is not working effectively for ground-surface-based intelligence processing systems. In fact, at the time of our review, only 2 of 26 DCGS systems had been certified as interoperable.
The certification process is important because it considers such things as whether systems can work with systems belonging to other military services without unacceptable workarounds or special interfaces, whether they are using standard data formats, and whether they conform to broader architectures designed to facilitate interoperability across DOD. DOD has placed great importance on making intelligence processing systems interoperable and requires that all new (and many existing) systems demonstrate that they are interoperable with other systems and be certified as interoperable before they are fielded. DOD relies on the Joint Interoperability Test Command (JITC, part of the Defense Information Systems Agency) to certify systems. In conducting this certification, JITC assesses the ability of systems to interoperate without degrading other systems or networks or being degraded by them; to exchange information; to interoperate in joint environments without the use of unacceptable workaround procedures or special technical interfaces; and to interoperate while maintaining system confidentiality and integrity. In doing so, JITC reviews testing already conducted as well as assessments prepared by independent testing organizations. It may also conduct some of its own testing. The results are then submitted to the Joint Staff, who validate the system's certification. Systems are generally certified for 3 years—after which they must be re-certified. The certification is funded by the system owner, whether it is a service or a DOD agency. The cost depends on the size and complexity of a system and generally requires 10 percent of the funding designated for testing and evaluation. Generally, certification costs are small relative to the total cost of a system. The cost to certify the Army's $95 million Common Ground Station, for example, was $388,000.
To help enforce the certification process, DOD asked 4 key officials (the Under Secretary of Defense for Acquisition, Technology and Logistics; the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence; the Director of Operational Test and Evaluation; and the Director, Joint Staff) in December 2000 to periodically review systems and to place those with interoperability deficiencies on a "watch list." This designation would trigger a series of progress reviews and updates by the program manager, the responsible testing organization, and JITC, until the system is taken off the list. Other DOD forums are also charged with identifying systems that need to be put on the list, including DOD's Interoperability Senior Review Panel, which is composed of senior leaders from the offices of the Under Secretary of Defense for Acquisition, Technology and Logistics; the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence; the Joint Staff; the Director for Programs, Analysis, and Evaluation; the Director, Operational Test and Evaluation; and U.S. Joint Forces Command. At the time of our review, only 2 of 26 DCGS systems had been certified by JITC. Of the remaining 24 systems, 3 were in the process of being certified, 14 had plans for certification, and 7 had no plans. (See table 1.) Because the 21 systems that have not been certified have already been fielded, there is greater risk that the systems cannot share data as quickly as needed. Some of the systems in this category are critical to the success of other intelligence systems. For example, software modules contained in the Army's tactical exploitation system are to be used to build systems for the Navy, Marine Corps, and Air Force. DOD officials responsible for developing intelligence systems as well as for testing them pointed to several reasons for noncompliance, including the following.
Our previous work in this area has identified the following similar reasons:

- Some system managers are unaware of the requirement for certification.
- Some system managers do not believe that their design, although fielded, was mature enough for testing.
- Some system managers are concerned that the certification process itself would raise the need for expensive system modifications.
- DOD officials do not always budget the resources needed for interoperability testing.
- The military services sometimes allow service-unique requirements to take precedence over satisfying joint interoperability requirements.
- Various approval authorities allow some new systems to be fielded without verifying their certification status.

DOD's interoperability watch list was implemented after our 1998 report to provide better oversight of the interoperability certification process. In January 2003, after considering our findings, DOD's Interoperability Senior Review Panel evaluated DCGS's progress toward interoperability certification and added the program to the interoperability watch list. Making its intelligence systems interoperable and enhancing their capability is a critical first step in DOD's effort to drive down the time needed to identify and hit targets and otherwise enhance joint military operations. But DOD has been slow to plan for this initiative, and it has not addressed important questions such as how and when systems will be pared down and modified and how the initiative will be funded. Moreover, DOD is fielding new systems and new versions of old systems without following its own certification process. If both problems are not promptly addressed, data sharing problems may persist, precluding DOD from achieving its goals for quicker intelligence dissemination. Even for the DCGS systems, which are supposed to be interconnected over time, noncompliance with interoperability requirements persists.
We believe DOD should take a fresh look at the reasons for noncompliance and consider what mix of controls and incentives, including innovative funding mechanisms, is needed to ensure the interoperability of DCGS systems. To ensure that an effective Distributed Common Ground-Surface System is adequately planned and funded, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence to expand the planning efforts for DCGS to include a migration plan, or road map, that at a minimum lays out (1) current system capabilities and desired capabilities; (2) specific initiatives, programs, projects, and schedules to get DOD and the services to their goal; (3) measures to gauge success in implementing the migration plan as well as the enterprise architecture; and (4) mechanisms for ensuring that the plan is followed. We also recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence to develop an investment strategy to identify what funds are available, both for the initial phases of the DCGS migration and for the transition to the target architecture, and whether there are gaps or constraints that need to be addressed. To ensure that systems critical to an effective DCGS are interoperable, we recommend that the Secretary of Defense take the steps needed to enforce its certification process, including directing the service secretaries, in collaboration with the Joint Staff, Acquisition Executives, and the Joint Interoperability Test Command, to (1) examine the reasons the services are slow to comply with the certification requirement and (2) identify mechanisms that can be implemented to instill better discipline in adhering to that requirement. If lack of funding is found to be a significant barrier, we recommend that the Secretary of Defense consider centrally funding the DCGS certification process as a pilot program.
In commenting on a draft of this report, DOD concurred with our recommendations to expand the planning efforts for DCGS to include a migration plan and an investment strategy. It stated that it has already funded both projects. DOD also strongly supported our recommendation to take additional steps to enforce its certification process and described recent actions it has taken to do so. DOD partially concurred with our last recommendation to consider centrally funding the certification process if funding is found to be a significant barrier. While DOD supported this step if it is warranted, DOD believed it was premature to identify a solution without further definition of the problem. We agree that DOD needs to first examine the reasons for noncompliance and consider what mix of controls and incentives is needed to make the certification process work. At the same time, because funding has already been raised as a barrier, DOD should include an analysis of innovative funding mechanisms in its review. To achieve our objectives, we examined Department of Defense regulations, directives, and instructions, as well as the implementing instructions of the Chairman, Joint Chiefs of Staff, regarding interoperability and the certification process. We visited the Joint Interoperability Test Command in Fort Huachuca, Arizona, and obtained detailed briefings on the extent to which intelligence, surveillance, and reconnaissance systems, including DCGS systems, have been certified. We visited and obtained detailed briefings on the interoperability issues facing the Combatant Commanders at Joint Forces Command in Norfolk, Virginia; Central Command in Tampa, Florida; and Pacific Command in Honolulu, Hawaii, including a videoconference with U.S. Forces Korea officials. 
We discussed the interoperability certification process and its implementation with officials in the Office of the Director, Operational Test and Evaluation; the Under Secretary of Defense for Acquisition, Technology and Logistics; and the Assistant Secretary of Defense for Command, Control, Communications and Intelligence. During these visits and additional visits to the intelligence and acquisition offices of the services, the National Imagery and Mapping Agency, and the National Security Agency, we obtained detailed briefings and examined documents such as the capstone requirements document involving the status and plan to implement the ground systems strategy. We conducted our review from December 2001 through February 2003 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 7 days from the date of this report. At that time, we will send copies of this report to the other congressional defense committees and the Secretary of Defense. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Key contributors to this report were Keith Rhodes, Cristina Chaplain, Richard Strittmatter, and Matthew Mongin. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. 
| Making sure systems can work effectively together (interoperability) has been a key problem for the Department of Defense (DOD) yet integral to its goals for enhancing joint operations. Given the importance of being able to share intelligence data quickly, we were asked to assess DOD’s initiative to develop a common ground-surface-based intelligence system and to particularly examine (1) whether DOD has adequately planned this initiative and (2) whether its process for testing and certifying the interoperability of new systems is working effectively. DOD relies on a broad array of intelligence systems to study the battlefield and identify and hit enemy targets. These systems include reconnaissance aircraft, satellites, and ground-surface stations that receive, analyze, and disseminate intelligence data. At times, these systems are not interoperable--whether for technical reasons (such as incompatible data formats), operational reasons, or both. Such problems can considerably slow down the time to identify and analyze a potential target and decide whether to attack it. 
One multibillion-dollar initiative DOD has underway to address this problem is to pare down the number of ground-surface systems that process intelligence data and upgrade them to enhance their functionality and ensure that they can work with other DOD systems. The eventual goal is an overarching family of interconnected systems, known as the Distributed Common Ground-Surface System (DCGS). To date, planning for this initiative has been slow and incomplete. DOD is developing an architecture, or blueprint, for the new systems as well as an overarching test plan and an operational concept. Although DCGS was started in 1998, DOD has not yet formally identified which systems are going to be involved in DCGS; what the time frames will be for making selections and modifications, conducting interoperability tests, and integrating systems into the overarching system; how transitions will be funded; and how the progress of the initiative will be tracked. Moreover, DOD's process for testing and certifying that systems will be interoperable is not working effectively. In fact, only 2 of 26 DCGS systems have been certified as interoperable. Because 21 of the systems that have not been certified have already been fielded, DOD has a greater risk that the new systems will not be able to share intelligence data as quickly as needed. Certifications are important because they consider such things as whether a system can work with systems belonging to other military services without unacceptable workarounds and whether individual systems conform to broader architectures designed to facilitate interoperability across DOD. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Vanuatu consists of 83 islands spread over hundreds of miles of ocean in the South Pacific, 1,300 miles northeast of Sydney, Australia. About 39 percent of the population is concentrated on the islands of Santo and Efate. Vanuatu’s capital, Port Vila, is on Efate, and Vanuatu’s only other urban center, Luganville, is on Santo. In the past decade, Vanuatu’s real GDP growth averaged 2 percent, although more rapid population growth led to a decline in per capita GDP over the same period. Average growth of real GDP per capita was negative from 1993 to 2005. An estimated 40 percent of Vanuatu’s population of about 207,000 has an income below the international poverty line of $1 per day. Agriculture and tourism are the principal productive sectors of Vanuatu’s economy, contributing approximately 15 percent and 19 percent to GDP, respectively. Although agriculture represents a relatively small share of Vanuatu’s overall economy, approximately 80 percent of Vanuatu’s residents live in rural areas and depend on subsistence agriculture for food and shelter. The tourism sector is dominated by expatriates of foreign countries living in Vanuatu, who also predominate in other formal sectors of the economy such as plantation agriculture and retail trade. On May 6, 2004, MCC determined that Vanuatu was eligible to submit a compact proposal for Millennium Challenge Account funding. Vanuatu’s proposal identified transportation infrastructure as a key constraint to private-sector development. The timeline in figure 1 shows the development and implementation of the Vanuatu proposal and compact. The $65.7 million Vanuatu compact includes $54.5 million for the rehabilitation or construction of 11 transportation infrastructure assets on 8 of Vanuatu’s 83 islands, including roads, wharves, an airstrip, and warehouses (see fig. 2). 
The compact also includes $6.2 million for an institutional strengthening program to increase the capacity of the Vanuatu Public Works Department (PWD) to maintain transportation infrastructure. The remaining $5 million is for program management and monitoring and evaluation. More than half of the compact, $37 million, is budgeted for three road projects on Santo and Efate islands. The compact provides for upgrading existing roads on both islands; the compact also includes five new bridges for an existing road on Santo. MCC’s compact with Vanuatu and congressional notification state that the compact will have a transformational impact on Vanuatu’s economic development, increasing average per capita income by approximately $200—15 percent—by 2010 and increasing total GDP by “an additional 3 percent a year.” MCC’s investment memo further quantifies the per capita income increase as $488—37 percent—by 2015. The compact and the congressional notification also state that the compact will provide benefits to approximately 65,000 poor, rural inhabitants (see fig. 3). In projecting the impact of the Vanuatu compact, MCC estimated the benefits and costs of the proposed infrastructure improvements. MCC also estimated the number of beneficiaries within a defined catchment area— that is, the geographic area in which benefits may be expected to accrue. MCC used the estimated benefits and costs to calculate the compact’s ERR and impact on Vanuatu’s GDP and per capita income. MCC’s analysis determined that the compact will reduce transportation costs and improve the reliability of access to transportation services for poor, rural agricultural producers and providers of tourism-related goods and services and that these benefits will, in turn, lead to increases in per capita income and GDP and reduction in poverty. 
MCC projects several direct and induced benefits from the compact’s infrastructure improvement projects over a 20-year period, beginning in full in 2008 or 2009 and increasing by at least 3 percent every year. Direct benefits. MCC projects that direct benefits will include, for example, construction spending, reduced transportation costs, and time saved in transit on the improved roads. Induced benefits. MCC projects that induced benefits from tourism and agriculture will include, for example, increased growth in Vanuatu tourism, tourist spending, and hotel occupancy and increased crop, livestock, and fisheries production. Figure 4 illustrates MCC’s logic in projecting the compact’s impact. MCC expects compact benefits to flow from different sources, depending on the project and its location. In Efate, the Ring Road is expected to provide direct benefits from decreased road user costs and induced benefits through tourism and foreign resident spending. In Santo, MCC anticipates similar benefits as well as the induced benefit of increased agricultural production. On other islands, where tourism is not as developed, MCC expects benefits to derive primarily from user cost savings and increased agriculture. To calculate construction and maintenance costs for the transportation infrastructure projects, MCC used existing cost estimates prepared for the government of Vanuatu and for another donor as well as data from the Vanuatu PWD. To estimate the number of poor, rural beneficiaries, MCC used Vanuatu maps to identify villages in the catchment area and used the 1999 Vanuatu National Population and Housing Census to determine the number of persons living in those villages. In all, MCC calculated that approximately 65,000 poor, rural people on the eight islands would benefit from MCC projects. 
On the basis of the costs and benefits projected over a 20-year period, MCC calculated three summaries of the compact’s impact: its ERR, effect on per capita income, and effect on GDP. MCC projected an overall compact ERR of 24.7 percent over 20 years. In projecting the compact’s impact on Vanuatu’s per capita income, MCC used a baseline per capita income of $1,326 for 2005. MCC also prepared a sensitivity analysis to assess how a range of possible outcomes would affect compact results. MCC’s tests included a 1-year delay of the start date for accrued benefits; a 20 percent increase of all costs; a 20 percent decrease of all benefits; and a “stress test,” with a 20 percent increase of all costs and a 20 percent decrease of all benefits. MCC calculated a best-case compact ERR of 30.2 percent and a worst-case compact ERR of 13.9 percent. MCC’s public portrayal of the Vanuatu compact’s projected effects on per capita income and on GDP suggest greater impact than its analysis supports. In addition, MCC’s portrayal of the compact’s projected impact on poverty does not identify the proportion of benefits that will accrue to the rural poor. Impact on per capita income. In the compact and the congressional notification, MCC states that the transportation infrastructure project is expected to increase “average income per capita (in real terms) by approximately $200, or 15 percent of current income per capita, by 2010.” MCC’s investment memo states that the compact will cause per capita income to increase by $488, or 37 percent, by 2015. These statements suggest that as a result of the program, average incomes in Vanuatu will be 15 percent higher in 2010 and 37 percent higher in 2015 than they would be without the compact. However, MCC’s underlying data show that these percentages represent the sum of increases from per capita income in 2005 that MCC projects for each year. 
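The gap between MCC’s summed percentages and an actual year-on-year comparison can be sketched with a toy calculation. The yearly gains below are assumed illustrative values in the roughly 2 to 4 percent per-year range that MCC’s data show, not MCC’s actual projections:

```python
# Toy sketch of the two ways to report the per capita income gain.
# The yearly gains over the 2005 baseline are assumed illustrative
# values (not MCC's actual figures), each in the ~2-4 percent range.
baseline = 1326.0  # MCC's 2005 per capita income baseline, in dollars

gain_pct = {2006: 2.0, 2007: 2.5, 2008: 3.0, 2009: 3.5, 2010: 4.0}

# MCC-style figure: sum each year's increase over the 2005 baseline.
summed_pct = sum(gain_pct.values())   # 15.0

# Year-on-year figure: the actual 2010 level compared with 2005.
actual_2010_pct = gain_pct[2010]      # 4.0
income_2010 = baseline * (1 + actual_2010_pct / 100)

print(f"Summed across 2006-2010: {summed_pct:.1f}%")
print(f"Actual 2010 gain vs 2005: {actual_2010_pct:.1f}% (income ${income_2010:,.2f})")
```

The $200 (15 percent) headline figure versus the $51 (3.9 percent) actual gain described in this testimony follows the same pattern: individual years’ increases over the 2005 baseline are added together rather than compared year to year.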
For example, according to MCC’s data, Vanuatu’s per capita income in a given year between 2006 and 2010 will range from about 2 percent to almost 4 percent higher than in 2005; in its statements, MCC sums these percentages as 15 percent without stating that this percentage is a cumulative increase from 2005. Our analysis of MCC’s data shows that actual gains in per capita income, relative to income in 2005, would be $51, or 3.9 percent, in 2010 and $61, or 4.6 percent, in 2015 (see fig. 5). Figure 6 further illustrates MCC’s methodology in projecting the compact’s impact on per capita income levels for 2010 and 2015. Impact on GDP. Like its portrayal of the projected impact on per capita income, MCC’s portrayal of the projected impact on GDP is not supported by the underlying data. In the compact and the 2006 congressional notification, MCC states that the compact will have a transformational effect on Vanuatu’s economy, causing GDP to “increase by an additional 3 percent a year.” Given the GDP growth rate of about 3 percent that MCC expects in Vanuatu without the compact, MCC’s statement of a transformational effect suggests that the GDP growth rate will rise to about 6 percent. However, MCC’s underlying data show that although Vanuatu’s GDP growth rate will rise to about 6 percent in 2007, in subsequent years the GDP growth rate will revert to roughly the rate MCC assumes would occur without the compact, approximately 3 percent (see fig. 7). Although MCC’s data show that the compact will result in a higher level (i.e., dollar value) of GDP, the data do not show a transformational increase to the GDP growth rate. Impact on poverty. MCC’s portrayal of the compact’s projected impact on poverty does not identify the proportion of the financial benefits that will accrue to the rural poor. 
In the compact and the congressional notification, MCC states that the program is expected to benefit “approximately 65,000 poor, rural inhabitants living nearby and using the roads to access markets and social services.” In its underlying documentation, MCC expects 57 percent of the monetary benefits to accrue to other beneficiaries, including expatriate tourism services providers, transport providers, government, and local businesses; 43 percent is expected to go to the local population, which MCC defines as “local producers, local consumers and inhabitants of remote communities” (see fig. 8). However, MCC does not establish the proportion of local-population benefits that will go to the 65,000 poor, rural beneficiaries. Our analysis shows that risks related to construction costs, timing of benefits, project maintenance, induced benefits, and efficiency gains may lessen the Vanuatu compact’s projected impact on poverty reduction and economic growth. Accounting for these risks could reduce the overall compact ERR. Construction costs. Although MCC considered the risk of construction cost increases, the contingencies used in its calculations may not be sufficient to cover actual construction costs. Cost estimate documentation for 5 of MCC’s 11 construction projects shows that these estimates include design contingencies of 20 percent. However, cost overruns of more than 20 percent occur in many transportation projects, and as MCC’s analysis notes, the risk of excessive cost overruns is significant in a small country such as Vanuatu. Any construction cost overrun must be made up within the Vanuatu compact budget by reducing the scope, and therefore the benefits, of the compact projects; reduced project benefits would in turn reduce the compact’s ERR and effects on per capita income and GDP. Timing of benefits. 
Although MCC’s analysis assumes compact benefits from 2008 or 2009—shortly after the end of project construction—we found that benefits are likely to accrue more slowly. Our document review and discussions with tourism services providers and agricultural and timber producers suggest that these businesses will likely react gradually to any increased market opportunities resulting from MCC’s projects, in part because of constraints to expanding economic activity. In addition, MCC assumes that all construction spending will occur in the first year, instead of phasing the benefits from this spending over the multiyear construction schedule. Project maintenance. Uncertainty about the maintenance of completed transportation infrastructure projects after 2011 may affect the compact’s projected benefits. Vanuatu’s record of road maintenance is poor. According to World Bank and Asian Development Bank officials, continuing donor involvement is needed to ensure the maintenance and sustainability of completed projects. However, although MCC has budgeted $6.2 million for institutional strengthening of the Vanuatu PWD, MCC has no means of ensuring the maintenance of completed projects after the compact expires in 2011; the Millennium Challenge Act limits compacts to 5 years. Poor maintenance performance will reduce the benefits projected in the MCC compact. Induced benefits. The compact’s induced benefits depend on the response of Vanuatu tourism providers and agricultural producers. However, constraints affecting these economic sectors may prevent these sectors from expanding as MCC projects. Limited response to the compact by tourism providers and agricultural producers would have a significant impact on compact benefits. Efficiency gains. MCC counts efficiency gains—such as time saved because of better roads—as compact benefits. 
However, although efficiency gains could improve social welfare, they may not lead to changes in per capita income or GDP or be directly measurable as net additions to the economy. Accounting for these risks could reduce the overall compact ERR from 24.2 percent, as projected by MCC, to between 5.5 percent and 16.5 percent (see table 1). MCC’s public portrayal of the Vanuatu compact’s projected benefits— particularly the effect on per capita income—suggests a greater impact than MCC’s underlying data and analysis support and can be understood only by reviewing source documents and spreadsheets that are not publicly available. As a result, MCC’s statements may foster unrealistic expectations of the compact’s impact in Vanuatu. For example, by suggesting that per capita incomes will increase so quickly, MCC suggests that its compact will produce sustainable growth that other donors to Vanuatu have not been able to achieve. The gaps between MCC’s statements about, and underlying analysis of, the Vanuatu compact also raise questions about other MCC compacts’ projections of a transformational impact on country economies or economic sectors. Without accurate portrayals of its compacts’ projected benefits, the extent to which MCC’s compacts are likely to further its goals of poverty reduction and economic growth cannot be accurately evaluated. In addition, the economic analysis underlying MCC’s statements does not reflect the time required to improve Vanuatu’s transportation infrastructure and for the economy to respond and does not fully account for other risks that could substantially reduce compact benefits. 
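An economic rate of return of the kind quoted above is the discount rate at which a project’s net present value is zero. The sketch below is a minimal illustration with assumed cash flows loosely scaled to the compact (a $65.7 million up-front cost and flat annual benefits over 20 years); it is not MCC’s model, but it shows how a cost overrun combined with a benefit shortfall compresses the rate:

```python
# Minimal internal-rate-of-return sketch for a project cash-flow stream.
# Cash flows are assumptions for illustration, not MCC's projections.

def npv(rate, flows):
    """Net present value of flows, where flows[t] occurs in year t."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-8):
    """Bisection for the rate where NPV = 0 (NPV falls as the rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
    return (lo + hi) / 2

cost = 65.7             # assumed up-front cost, $ millions
benefits = [16.0] * 20  # assumed flat annual benefits, $ millions

base_err = irr([-cost] + benefits)
# In the spirit of MCC's "stress test": costs up 20%, benefits down 20%.
stress_err = irr([-cost * 1.2] + [b * 0.8 for b in benefits])

print(f"Base-case ERR:   {base_err:.1%}")
print(f"Stress-case ERR: {stress_err:.1%}")
```

Delaying when benefits begin, phasing construction spending over several years, or dropping non-monetized efficiency gains would each shrink or push back the positive flows, lowering the computed rate in the same way.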
In our report, we recommend that the CEO of MCC take the following actions: revise the public reporting of the Vanuatu compact’s projected impact to clearly represent the underlying data and analysis; assess whether similar statements in other compacts accurately reflect the underlying data and analysis; and improve its economic analysis by phasing the costs and benefits in compact ERR calculations and by more fully accounting for risks such as those related to continuing maintenance, induced benefits, and monetized efficiency gains as part of sensitivity analysis. In comments on a draft of our report, MCC did not directly acknowledge our recommendations. MCC acknowledged that its use of projected cumulative compact impact on income and growth was misleading but asserted that it had no intention to mislead and that its portrayal of projected compact benefits was factually correct. MCC questioned our finding that its underlying data and analysis do not support its portrayal of compact benefits and our characterization of the program’s risks. (See app. VI of our report for MCC comments and our response.) Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the person named above, Emil Friberg, Jr. (Assistant Director), Gergana Danailova-Trainor, Reid Lowe, Angie Nichols-Friedman, Michael Simon, and Seyda Wentworth made key contributions to this statement. Also, David Dornisch, Etana Finkler, Ernie Jackson, and Tom McCool provided technical assistance. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
| In January 2004, Congress established the Millennium Challenge Corporation (MCC) for foreign assistance. Congress has appropriated almost $6 billion to MCC. As of March 2007, MCC had signed almost $3 billion in compacts with 11 countries, including a 5-year, $65.7 million compact with Vanuatu. MCC states that the Vanuatu compact will have a transformational effect on the country’s economy, increasing per capita income and GDP and benefiting 65,000 poor, rural people. This testimony summarizes a July 2007 report (GAO-07-909) examining (1) MCC’s methods of projecting economic benefits, (2) MCC’s portrayal and analysis of the projected benefits, and (3) risks that may affect the compact’s impact. To address these objectives, GAO reviewed MCC’s analyses and met with officials and business owners in Vanuatu as well as with other donors. In its July 2007 report, GAO recommended that the Chief Executive Officer of MCC revise the public reporting of the Vanuatu compact’s projected impact; assess whether similar reporting in other compacts accurately reflects underlying analyses; and improve its economic analyses by more fully accounting for risks to project benefits. MCC did not directly address GAO’s recommendations but commented that it had not intended to make misleading statements and that its portrayal of projected results was factual and consistent with underlying data. MCC projects that the Vanuatu compact’s transportation infrastructure projects will provide direct benefits such as reduced transportation costs and induced benefits from growth in tourism and agriculture. 
MCC estimated the costs and benefits over 20 years, with benefits beginning in full in 2008 or 2009 and growing each year, and it counted poor, rural beneficiaries by defining the area where benefits were likely to accrue. Using projected benefits and costs, MCC calculated the compact’s economic rate of return (ERR) and its effects on Vanuatu’s gross domestic product (GDP) and per capita income. MCC’s portrayal of the projected impact does not reflect its underlying data. MCC states that per capita income will increase by approximately $200, or 15 percent, by 2010 and by $488, or 37 percent, by 2015. However, MCC’s underlying data show that these figures represent the sum of individual years’ gains in per capita income relative to 2005 and that actual gains will be $51, or 3.9 percent, in 2010 and $61, or 4.6 percent, in 2015. MCC also states that GDP will increase by an additional 3 percent a year, but its data show that after GDP growth of 6 percent in 2007, the economy’s growth will continue at about 3 percent, as it would without the compact. MCC states that the compact will benefit approximately 65,000 poor, rural inhabitants, but this statement does not identify the financial benefits that accrue to the rural poor or reflect its own analysis that 57 percent of benefits go to others. We identified five key risks that could affect the compact’s projected impacts. (1) Cost estimate contingencies may not be sufficient to cover project overruns. (2) Compact benefits will likely accrue more slowly than MCC projected. (3) Benefit estimates assume continued maintenance, but MCC’s ability to ensure maintenance will end in 2011, and Vanuatu’s maintenance record is poor. (4) Induced benefits depend on businesses’ and residents’ response to new opportunities. (5) Efficiency gains, such as time saved in transit, may not increase per capita income. 
Our analysis of these areas of risk illustrates the extent that MCC's projections are dependent on assumptions of immediate realization of benefits, long-term maintenance, realization of induced benefits, and benefits from efficiency gains. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
In recent years, the Congress heard and expressed concerns about the ability of federal land management agencies to provide high-quality recreational opportunities. These concerns focused on declines in visitor services, extensive needs for repairs and maintenance at the facilities and infrastructure that support recreation, and a lack of information on the condition of natural and cultural resources and the trends affecting them. In addressing these concerns, the Congress faced a dilemma: While the needs of federal recreation areas and the rate of visitation to these areas were increasing, the funding for addressing these needs and providing visitor services was growing tighter. As a result, the Congress was looking for means, other than appropriations, to provide additional resources to these areas. The recreational fee demonstration program was one such means. Authorized by the Congress in 1996 as a 3-year pilot program, the recreational fee demonstration program allows the Park Service, the Forest Service, the Bureau of Land Management (BLM), and the Fish and Wildlife Service to experiment with new or increased fees at up to 100 demonstration sites per agency. The program aims to bring additional resources to recreation lands by generating recreational fee revenues and spending most of the fee revenues at the sites where the fees are collected to increase the quality of the visitors’ experience and to enhance the protection of the sites’ resources. In addition, in carrying out the program, the agencies are to (1) be creative and innovative in designing and testing the collection of fees, (2) develop partnerships with federal agencies and with state and local agencies, (3) provide higher levels of service to the public, and (4) assess the public’s satisfaction with the program. 
The conference report on the program’s original legislation requested that the Secretary of the Interior and the Secretary of Agriculture each prepare a report that evaluates the demonstration program, including recommendations for further legislation, by March 31, 1999. The program is currently authorized through fiscal year 2001. The agencies have until the end of fiscal year 2004 to spend money generated under the program. Each of the four federal land management agencies included in the program provides a variety of recreational opportunities to the visiting public. Together, these agencies manage over 630 million acres of land—over one-quarter of the land in the United States. In 1997, they received over 1.2 billion visits. Table 1.1 provides information on the acreage, visitation, and lands managed by the four agencies. The fee demonstration program was established to test ways to address deteriorating conditions at many federal recreation areas, particularly those managed by the Park Service, which collects the most fee revenues, and the Forest Service, which hosts the most recreational visitors. Our prior work has detailed significant needs, including the following: The federal land management agencies have accumulated a multibillion-dollar backlog of maintenance, infrastructure, and development needs. The quality and the scope of visitor services at federal recreation sites have been declining. Some sites have closed facilities, while others have reduced their hours of operation or are providing fewer services. The condition of many key natural and cultural resources in the national park system is deteriorating, and the condition of many others is not known. Despite annual increases in federal appropriations for operating the national park system, the financial resources available have not been sufficient to stem the deterioration of the resources, services, and recreational opportunities managed by the agency. 
One way of addressing these needs was providing additional financial resources to these agencies through new or increased recreational fees. But while new or increased fees could have increased the federal land management agencies’ revenues, generally these additional fees did not directly benefit the agencies’ field units until the fee demonstration program was established. The Land and Water Conservation Act of 1965, as amended, limited the amount of revenues that could be raised through collecting recreational fees and required that the funds be deposited in a special U.S. Treasury account. The funds in the special Treasury account could only be used for certain purposes, including resource protection and maintenance activities, and only became available through congressional appropriations. These amounts were generally treated as a part of, rather than a supplement to, the agencies’ regular appropriations, and were included under the spending limits imposed by the Budget Enforcement Act. In the context of the Budget Enforcement Act’s limits, in order for the agencies to address deteriorating conditions at recreation areas through fee revenues, the Congress had to provide authority for the agencies to retain the fees. In 1996, the Congress authorized the fee demonstration program to test recreational fees as a source of additional financial resources for the federal land management agencies. The Congress directed that at least 80 percent of the revenues collected under the program be spent at the units collecting the fees; the remaining 20 percent could be spent at the discretion of each agency. By allowing the local units to retain such a large percentage of the fees they collected, the Congress created a powerful incentive for unit managers to emphasize fee collections. In essence, the more revenues that field units could generate through fees, the more they would have to spend on improving conditions in the areas they managed. 
In addition, the program’s legislative history reflected the congressional belief that allowing the local units to retain most of the revenues they collected would be likely to improve the public’s acceptance of the fees. This belief was consistent with past studies of visitors to recreation areas that indicated that most visitors would support increases in fees if the fees remained at the local units. Under the legislation, the program’s expenditures were to be used to increase the quality of visitors’ experiences at public recreation areas and to enhance the protection of resources. Specifically, authorized expenditures were to address backlogged repair and maintenance projects; enhancements to interpretation, signage, habitats, or facilities; and resource preservation, annual operations (including fee collections), maintenance, and law enforcement relating to public use. In broad terms, these authorized expenditures cover the principal aspects of managing recreation areas on federal lands. The legislation also provided an opportunity for the agencies to be creative and innovative in developing and testing fees by giving them the flexibility to develop a wide variety of fee proposals, including some that were nontraditional as well as others that simply increased previously existing fees. During the demonstration period, the agencies were to experiment with (1) various types of fees to determine what does and does not work and (2) various methods of collecting fees to make payment easier and more convenient for the visiting public. In addition, according to the program’s legislative history, the agencies were expected to coordinate with each other, as well as with state and local recreation areas, so that visitors did not face numerous fees from several agencies in the same geographic area. Coordination among the agencies could yield better service to the public, thereby potentially improving the program’s chances of success. 
Federal land management agencies have traditionally charged several types of fees to visitors, all of which may still be charged under the fee demonstration program. Most of these fees can be categorized generally as either entrance or user fees. Entrance fees are generally charged for short-term access to federal recreation sites. Most are charged on a per-vehicle basis, but some are charged to individuals hiking or cycling into a recreation area. The entrance fee gives the visitor access to the key features of the area. For example, visitors pay $10 per car to enter Zion National Park in Utah; this fee covers everyone in the vehicle and is good for up to a week. Another example of an entrance fee is collected within the Wasatch-Cache National Forest in Utah, where visitors to the Mirror Lake area pay an entrance fee of either $3 per vehicle for a day or $6 per vehicle for a week. Annual passes allow entrance or use of a site for the next 12 months, benefiting frequent visitors to a single recreation area, such as a park or forest. For example, instead of paying a $10 entrance fee every time they drive into Shenandoah National Park in Virginia, frequent visitors can purchase an annual pass for $20, which will give them unlimited access to the park during the next year. Similarly, in the White Mountain National Forest in New Hampshire, visitors can pay $20 for an annual pass rather than pay $5 for a daily vehicle pass. The Golden Eagle Passport provides unlimited entry for a year to most national parks, Fish and Wildlife Service sites where entrance fees are charged, and several Forest Service and BLM sites. Costing $50 for the purchaser and his or her passengers in a privately owned vehicle, the passport can be economical when people are planning to visit a number of sites that charge entrance fees within a single year. 
While the Golden Eagle Passport covers entrance fees, it does not cover most user fees; hence, passport holders pay separately for activities such as boat launching, camping, parking, or going on an interpretive tour. User fees are generally charged to individuals or groups for engaging in specific activities of this kind. For example, individuals pay $3 for a guided interpretive tour of the Frederick Douglass home at the Frederick Douglass National Historic Site in Washington, D.C. Another example of a user fee is at Paria Canyon, a BLM demonstration site in Utah, where visitors pay $5 per day for hiking or backpacking. Individual sites may charge several types of fees for entry and other activities. For example, a demonstration site may have a $10 entrance fee, good for 7 days, and a $20 annual pass. In addition, visitors to the site may pay user fees for a variety of specific activities, such as backcountry hiking, camping, interpretive tours, or disposing of waste from a recreational vehicle. Our review included fee demonstration sites in the Park Service, the Forest Service, BLM, and the Fish and Wildlife Service. At each of these agencies, we contacted staff from headquarters and at least two regional offices. In addition, we visited 15 judgmentally selected sites operated by the four agencies. More of the selected sites were operated by the Park Service than by any other agency because the Park Service (1) had the most sites in the program and (2) generates considerably more fee revenues than any of the other agencies. The 15 selected sites were both large and small and were located throughout the country in eight different states and the District of Columbia. Table 1.2 lists the sites, by agency. We collected information on revenues, expenditures, and visitation from the headquarters offices of the four agencies and the 15 sites we visited.
For each agency’s revenues and expenditures, we collected actual data for fiscal year 1997 and the agency’s estimates for fiscal year 1998. At each of the 15 sites, we collected more detailed information on revenues, such as the types of fees and the methods used to collect fees. We also compared actual with planned expenditures and classified the expenditures, using the broad purposes authorized in the program’s legislation. To determine the extent to which the agencies had adopted innovative or coordinated approaches to the fee program, we used the information we collected to accomplish our first two objectives. Various agency officials, agency task forces, and officials from the industry and user groups we contacted provided comments and ideas on innovative or coordinated approaches available to the agencies—including identifying practices employed by the private sector. To prepare for our review of the implementation of the demonstration program to date, we reviewed prior fee legislation, the program’s authorizing legislation, and its legislative history. To determine what, if any, impact the fee demonstration program had on visitation, we attempted to compare data on visitation during the demonstration period with baseline information on visitation developed since 1993. Since visitation at the Park Service’s sites accounted for over three-fourths of total visitation among all fee demonstrations at the four agencies, we compared trends in visitation at their demonstration sites with nondemonstration sites for the 1993-97 period. To conduct this analysis, we obtained visitation data from the Park Service’s Public Use Statistics Office. For each of the agencies, we collected anecdotal information on trends in visitation from officials at agency headquarters and at the sites we visited as well as officials from each of the affected industry and user groups we contacted. 
We also contacted six experts who either had conducted surveys of visitors concerning the recreational fee demonstration program or had prior experience with recreational fees on federal lands. These individuals were Dr. Deborah J. Chavez, Research Social Scientist, Pacific Southwest Research Station, U.S. Forest Service; Dr. Sam H. Ham, College of Forestry, Wildlife and Range Sciences, University of Idaho; Dr. David W. Lime, Senior Research Associate, University of Minnesota, Department of Forest Resources; Dr. Gary E. Machlis, Visiting Chief Social Scientist, Park Service; Mr. Jim Ridenour, Director, The Eppley Institute for Parks and Public Lands, Department of Recreation and Park Administration, Indiana University, and former Director, National Park Service; and Dr. Alan E. Watson, Aldo Leopold Wilderness Research Institute, U.S. Department of Agriculture and Department of the Interior, Missoula, Montana. During our review, we contacted various industry and user groups that might be affected by the fee demonstration program. We spoke with these groups to obtain their views on the agencies’ management of the program. We selected these groups because they (1) had participated in congressional hearings on the demonstration fee authority, (2) had been identified as affected parties by agency officials or officials from other industry or user groups, or (3) were widely known to be involved with recreation on federal lands. Table 1.3 provides the names and a brief description of each group we contacted. In addition to contacting these industry and user groups, we reviewed the testimonies of several other affected groups that participated in congressional hearings on the fee demonstration program.
These included industry groups, such as Kampgrounds of America, the Outdoor Recreation Coalition of America, and the National Tour Association, and user groups, such as the American Hiking Society, the American Motorcyclist Association, and the Grand Canyon Private Boaters Association. We did not independently verify the reliability of the financial or visitation data provided, nor did we trace the data to the systems from which they came. In some cases, data were not available at headquarters and could be collected only at the local site. We conducted our review from September 1997 through November 1998 in accordance with generally accepted government auditing standards. Among the four agencies, the pace and the approach used to implement the recreational fee demonstration program have differed. Some of the agencies had more demonstration sites operational earlier than others. This difference is a result of the agencies’ experiences in charging fees prior to the demonstration. Nonetheless, there have been substantial increases in the amount of fees collected. Each agency estimated that it has generated at least 70 percent more in fee revenues than it did prior to the demonstration program, and the combined estimated revenues for the four agencies have nearly doubled since fiscal year 1996. According to estimates for fiscal year 1998, the Park Service has collected the most revenues under the program, generating about 85 percent of all the revenues collected at demonstration sites by the four agencies. Since getting the authority to begin testing the collection of new and increased fees, each of the agencies has taken different approaches. The agencies’ approaches have largely been influenced by (1) their respective traditions and experiences in collecting fees, (2) the geographic characteristics of the lands they manage, and (3) a recent amendment to the law authorizing the demonstration program that increased incentives to the agencies. 
As a result of these differing approaches, the pace of implementation among the agencies has varied. Fees are not new to the four agencies in the demonstration program. Prior to the program, each of the agencies collected fees from visitors at recreation areas. However, the agencies’ experiences with fees have differed. For example, prior to the demonstration, the Park Service collected entrance fees at about one-third of its park units. The Forest Service and BLM collected user fees at many of their more developed recreation areas—predominantly for camping—and the Fish and Wildlife Service charged a mix of entrance and user fees at about 65 of its sites. Not only did their past experiences with fees differ, but the geographical characteristics of the lands they were managing also were different, making fee collection easier in some areas and more difficult in others. For example, many sites in the Park Service have only a few roads that provide access to them. With limited access, collecting fees at an entrance station is very practical. In contrast, many Forest Service, BLM, and Fish and Wildlife Service sites have multiple roads accessing areas they manage. Multiple roads make it difficult for an agency to control access to an area, thus making it difficult to charge entrance fees. As a result, most Forest Service, BLM, and Fish and Wildlife Service sites have not charged entrance fees but instead charged user fees for specific activities. Figures 2.1 and 2.2 further illustrate the varying characteristics of federal lands. As an example, figure 2.1 shows the relatively few access points to Arches National Park in Utah. This park has only one paved road going in and out of the park. In comparison, figure 2.2 shows the multiple access points that exist along the many roads that go through the White Mountain National Forest in New Hampshire and Maine. 
Many of the traditions in collecting fees have influenced the agencies in both their pace of implementation and the types of fees they charge. Because many sites in the Park Service previously charged entrance fees, the agency was quickly able to bring a large number of sites into the demonstration program by increasing the entrance fees that existed prior to the demonstration. For the Park Service, of the 96 demonstration sites in the first year of the program, 57 of them increased existing entrance fees. According to officials in several of the agencies, it is generally easier for the agencies to increase existing fees than to implement new fees because (1) the fee-collection infrastructure is already in place and (2) the public is already accustomed to paying a fee. The three other agencies were collecting predominantly new fees at their demonstration sites during fiscal year 1997, the first year of the program, including all 10 of BLM’s sites, 29 of 39 Forest Service sites, and 35 of 61 Fish and Wildlife Service sites. Compared with increases in existing fees, new fees are generally more difficult to implement because the agencies need to (1) develop an infrastructure for collecting fees and (2) inform the public and gain acceptance for the new fees. This infrastructure could include new facilities; new signs; new collection equipment, such as cash registers and safes; and new processes, such as implementing internal control standards for handling cash. Figures 2.3 and 2.4 show examples of new facilities that were constructed or put in place during the demonstration period to collect new fees. Through the first half of fiscal year 1998—that is, as of March 31, 1998—each of the four agencies added sites to the program. Through March 1998, the four agencies had 284 sites in the program, compared with 206 sites through fiscal year 1997. The Park Service added 4 sites through March 1998 and has a total of 100 sites in the program—the maximum allowed by law.
Each of the other three agencies has added sites to the program, with the majority of new sites coming from BLM—the agency that had the fewest sites in fiscal year 1997. For the second half of fiscal year 1998 and fiscal year 1999, the Forest Service plans to add as many as 38 to 45 sites to the program. Officials from BLM indicated they plan to add 15 to 20 sites to the program. The Fish and Wildlife Service has added six sites during the last half of fiscal year 1998 but does not plan to add any further sites unless the demonstration program is extended beyond fiscal year 1999. Table 2.1 lists the number of fee demonstration sites, by agency, for fiscal year 1997 and through the first half of fiscal year 1998. An amendment to the law authorizing the demonstration program was one of the factors contributing to the addition of sites to the program. The law originally authorized each agency to retain the fee revenues that exceeded the revenues generated prior to the demonstration. As a result, the agencies could only retain the portion of the fee revenues that were in addition to existing fees. In November 1997, the law was amended to permit the agencies to retain all fee revenues generated by demonstration sites. This amendment created additional incentives for agencies to add existing fee sites to the program because the agency could retain all of the fee revenues generated at the site. While the approach and pace of implementation have varied, the four agencies have each been successful in raising substantial new revenues through the fee demonstration program. Before the program was authorized, each of the agencies collected fees at many recreation sites. But since the implementation of the program, each of the agencies has estimated that it has increased its fee collections by more than 70 percent above fiscal year 1996 levels—the last year before the program began. 
On the basis of estimates for fiscal year 1998, the Park Service has brought in significantly more in fee revenues than the other agencies. The estimated revenues of the Park Service account for about 85 percent of the revenues generated by the four agencies at demonstration sites. As shown in figure 2.5, as a result of the demonstration program, the four agencies have nearly doubled total combined fee collections since fiscal year 1996, according to the agencies’ estimates. In addition, each of the four agencies estimated that its fees increased under the demonstration by over 70 percent above fiscal year 1996 levels. Revenues under the fee demonstration program have come from a mix of new fees and increases to fees that existed before the program was authorized. In fiscal year 1996, the last year before the demonstration program was implemented, the four agencies collected a total of about $93.3 million in fees from visitors. In fiscal year 1997, the four agencies generated a total of about $144.6 million in fee revenues, of which about $123.8 million was attributed to fees at demonstration sites. For fiscal year 1998, the agencies estimate that total fee revenues will increase to about $179.3 million, with about $159.8 million in revenues from demonstration sites. (App. I contains information on each agency’s gross fee revenues for fiscal years 1996 through 1998.) Three of the four agencies have not developed formal estimates for fiscal year 1999. The one agency with fiscal year 1999 estimates—the Park Service—predicts only modest increases in revenues since the agency has already implemented the maximum number of demonstration sites authorized under the program. However, officials at each of the other three agencies estimated that as more sites become part of the demonstration program, revenues will increase.
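As a rough check on the growth figures above (all amounts in millions of dollars, taken from the agencies' reported totals and estimates):

```python
fy1996_total = 93.3    # last pre-program year
fy1997_total = 144.6   # of which about 123.8 came from demonstration sites
fy1998_total = 179.3   # agency estimate; about 159.8 from demonstration sites

growth = (fy1998_total - fy1996_total) / fy1996_total
print(f"{growth:.0%}")  # 92% -- consistent with "nearly doubled"
```

The demonstration-site share of the fiscal year 1998 estimate (159.8 of 179.3, about 89 percent) also shows how thoroughly preexisting fee sites were folded into the program.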
Each agency collected fees prior to the demonstration program, and as sites with existing fees were converted to demonstration sites, much of the agencies’ fee revenue has now been included in the demonstration. As a result, much of the demonstration fee revenue collected in fiscal year 1997 and beyond comes from sites where fees were collected prior to the demonstration. Of the four agencies, the Park Service has generated about 85 percent of the $159.8 million in total estimated fee demonstration revenues for fiscal year 1998. The agency with the second largest revenues is the Forest Service, which estimated that it generated about 11 percent of the total fee demonstration revenues. The relative size of each agency’s revenues compared with the total revenues of the four-agency program is depicted in figure 2.6. The substantially higher revenues of the Park Service are mostly due to the agency’s large number of high-revenue sites. For fiscal year 1997, 28 Park Service sites each generated more than $1 million in fee revenues, and 2 of these sites—the Grand Canyon and Yosemite National Parks—each generated more than $10 million. Nearly all of these 28 sites attract high numbers of visitors and had histories of charging entrance fees prior to the demonstration program. In addition to the high-revenue sites of the Park Service, the Forest Service has two sites with revenues above $1 million. In contrast, in fiscal year 1997, the Fish and Wildlife Service and BLM did not have any sites with revenues above $1 million. During the first year and a half of the recreational fee demonstration program, overall expenditures at individual demonstration sites have been limited in comparison to revenues collected. So far, only about 24 percent of the revenues collected has been expended. Most of the expenditures have gone toward repair and maintenance, the costs of collection, and routine operations at the respective sites.
At the sites we visited, we found that the agencies’ expenditures appeared to be consistent with the purposes authorized in the legislation establishing the program. The amount of collections varied considerably among the agencies and the individual sites within each agency, more than doubling operating budgets at some sites, while providing little revenue at others. As a result, assuming appropriations remain stable and that the program is extended beyond fiscal year 1999, many sites in the program will, in time, have sufficient revenues to address all of their needs—regardless of their relative priority within the agency. At the same time, other sites within an agency may not have enough to meet their most critical needs. Over the long term, this condition raises questions about the appropriateness of the high-revenue sites retaining 80 percent or more of their revenues as currently required by law. The four agencies have spent about 24 percent of the revenues available under the fee demonstration program through March 1998. Under the program’s original authority, not all of the revenues generated during fiscal year 1997 were available for expenditure. As a result, of the $123.8 million generated at demonstration sites in fiscal year 1997, $55 million was available to the agencies. For fiscal year 1998, the Congress amended the law authorizing the program to permit the agencies to retain all of the fee revenues generated under the program. As a result, the agencies have the full amount of the fee revenues generated at their demonstration sites in fiscal year 1998 available for expenditure. Through the first half of fiscal year 1998, the four agencies had generated about $36 million in fee revenues. Thus, the total amount available to the agencies for expenditure under the demonstration program through March 1998 was about $91 million. 
On a national basis, the four agencies estimated that of the $91 million available for expenditure through March 1998, about $22 million had been spent. Under the demonstration program’s current authorization, the participating agencies have until the end of fiscal year 2004 to spend the revenues raised by the program. Table 3.1 provides information comparing the fee revenues available for expenditure with actual expenditures through March 1998 for each of the four agencies. According to the managers in the participating agencies, the reasons that only 24 percent of the revenues available have been spent included (1) the approval of the authorizing legislation occurring in mid fiscal year 1996, (2) the delays in setting up accounting systems to track collections and return the funds to the sites, (3) the time needed to set up internal processes for headquarters’ approval of site expenditure plans, (4) the time needed to plan and implement expenditure projects, (5) the need to use funds during fair weather construction seasons, and (6) the fiscal year 1997 requirement for expenditures to exceed the base year amount before funds could be spent on the collecting site. The legislation authorizing the fee demonstration program permits the agencies to fund a broad array of activities from fee revenues, including the cost of fee collection, health and safety items, interpretation and signage, habitat enhancement, facility enhancement, resource preservation, annual operations, and law enforcement. The legislative history of the program further emphasized that fees were to be a new source of revenues to address backlogged repairs and maintenance. The law also states that at the discretion of agency heads, 20 percent of the fee revenues may be set aside for agencywide use for the same purposes. Of the $21.6 million in expenditures by the four participating agencies as of March 31, 1998, most have been for repairs and maintenance, the cost of collection, and operations. 
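The roughly 24-percent spending rate follows directly from the amounts reported above (in millions of dollars; the variable names are ours):

```python
fy1997_available = 55.0   # portion of FY1997 demo revenues available for expenditure
fy1998_first_half = 36.0  # demo revenues generated through March 1998, fully retainable
available = fy1997_available + fy1998_first_half  # about $91 million
spent = 22.0              # estimated expenditures through March 1998

print(f"{spent / available:.0%}")  # 24%
```

Viewed this way, the low spending rate reflects the denominator as much as the numerator: the November 1997 amendment made far more revenue available for expenditure than the agencies had planned for.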
Figure 3.1 displays the relative size of three agencies’ expenditures by the categories authorized by the program’s legislation. As of March 31, 1998, the Park Service’s actual expenditures were mainly for the costs of repairs and maintenance, the cost of fee collection, resource preservation, and annual operations. Expenditures at the Forest Service’s demonstration sites were predominantly for annual operations, the cost of fee collection, repairs and maintenance, and interpretation. At the Fish and Wildlife Service’s sites, the cost of collection, repairs and maintenance, health and safety, and facility enhancement were the top expenditure categories. BLM did not have a national breakdown available. At the sites we visited, we found that the agencies’ expenditures appeared to be consistent with the purposes authorized in the legislation establishing the program. The top expenditures among the 15 sites visited were for the cost of fee collection, followed by annual operations and repairs and maintenance. Agency officials said that cost of fee collection is among the top categories of expenditure because of the necessary start-up costs for the demonstration program. The program’s authorization allows the agencies to spend their revenues on the actual cost of collection rather than funding the activity from other sources, such as appropriated funds. Since few expenditures had been made overall as of March 31, 1998, agency officials said the cost of collection makes up a disproportionately large part of the actual expenditures through that date. Each of the four agencies has developed its own approach for using the fees collected through the demonstration program. Each has exercised a different amount of direction and oversight over its demonstration sites’ expenditures. 
As a result, the agencies’ priorities and criteria for spending the fee revenues, their decisions on spending the 20 percent of the revenues not required to remain with the collecting sites, and their procedures for approving projects funded with fee revenues vary considerably. The following sections provide information about each agency’s overall expenditures. More detailed information on each agency’s expenditures for legislatively authorized purposes at the sites we visited appears in appendixes II through V. The Park Service has developed the most detailed criteria for spending fee revenues. After using the fees to cover the cost of their collection, the Park Service has given the highest priority to reducing its repair and maintenance backlog. The Park Service has required both headquarters and regional reviews of the demonstration site managers’ expenditure proposals. In addition, an Interior Department-level work group, including Park Service representatives, was commissioned by the Assistant Secretary for Policy, Management, and Budget to review the proposals. The Park Service’s headquarters had intended to have regional offices approve the expenditure of fee demonstration funds but found, after reviewing region-approved projects, that some did not meet the established criteria. The Park Service is addressing its spending priorities with both the 80 percent of the fee revenues that stay at the collecting sites and with the 20 percent of the funds that are put into an agencywide account for distribution primarily to nondemonstration sites. The Park Service spent $12.8 million on projects at its demonstration sites through March 31, 1998. This amounts to about 17 percent of its $75.2 million in fee revenues available for park use through that date. 
Park Service officials said that the amount of funds expended was small because the amendment to the authorizing legislation in November 1997 made significantly more revenues available to the agencies for expenditure than they had expected to be allowed to spend. Furthermore, the Park Service recreational fee program coordinator and the Park Service comptroller’s staff reported that because accounts and allocation procedures took time to establish, the first release of funds to the collecting sites for expenditure came in mid fiscal year 1997. Another factor affecting the start-up of the Park Service’s expenditures under the demonstration has been the time needed for the extensive reviews of proposed projects. On a national basis, the Park Service’s demonstration sites’ expenditures were in the categories displayed in figure 3.2. The Forest Service permits demonstration sites to retain 95 percent of their fee collections and to use them as allowed by the program’s authorizing legislation—with the remaining 5 percent to be spent at the discretion of each site’s regional office. Accordingly, the Forest Service has instructed their demonstration sites to use their fee revenues for any of the broad purposes set forth in the legislation. At the same time, the agency has emphasized the need to use the revenues in ways that visibly benefit visitors. Forest Service headquarters officials said the determination of the program’s expenditures is driven by the project managers at the demonstration sites. The ranger districts and forests involved develop lists of projects and set priorities among them. The fee demonstration sites have typically sought public input on what projects should be done, along with meeting other requirements. The Forest Service began to use the funds raised by the recreational fees at 40 demonstration sites in fiscal year 1997 to address the deferred maintenance backlog, visitor services, and maintenance enhancements. 
Of the $13 million in demonstration fee revenues through March 31, 1998, the demonstration sites have expended $7.8 million, or about 60 percent, according to data collected by Forest Service headquarters recreation staff. Headquarters officials noted that in fiscal year 1997, most sites had not been able to spend all the revenues they collected because fee collection started in the middle of the fiscal year, time was needed to make the fee deposits available to the sites for expenditure, and time was needed to plan and contract for the projects to be funded with fee revenues. On a national basis, as of March 31, 1998, the Forest Service’s demonstration sites have expended the greatest amount of fee revenues in the following categories: operations, the cost of fee collection, repairs and maintenance, and interpretation and signage (see fig. 3.3). Details on the expenditures at the Forest Service’s sites we visited are in appendix III. The Fish and Wildlife Service decided to allow its demonstration sites to use their fee revenues to maintain or improve recreation opportunities and enhance visitors’ experiences. Fish and Wildlife Service headquarters reviews the demonstration sites’ expenditure of the funds after the fact, using the agency’s overall criteria and specific guidance. The Fish and Wildlife Service has allowed its regional directors to determine where to use the 20 percent of the fee revenues that does not have to be spent at the collecting sites. This has resulted in collecting sites in four of the Fish and Wildlife Service’s seven regions being permitted to retain all of the fee revenues they generate. Directors of the three other regions have decided to require that 20 percent of the fee revenues from their demonstration sites be submitted to a central account for use as seed money to initiate fee programs at other sites, for improvements to visitor services, or for backlogged maintenance projects at other sites in the region. 
In the first year and a half of the program, the Fish and Wildlife Service’s demonstration sites have spent about one-quarter of the fee revenues they generated. Of the $2 million in fee revenues through March 31, 1998, the demonstration sites had expended $500,949, or 25 percent, according to data provided by Fish and Wildlife Service headquarters staff. According to the Fish and Wildlife Service, of the $500,949 spent nationally on projects during the first year and a half of the program, 71 percent was for the cost of collection, including start-up costs, with the remainder spent on repairs and maintenance, health and safety, facility enhancement, and interpretation projects (see fig. 3.4). Details on expenditures at the Fish and Wildlife Service sites we visited are included in appendix IV of this report. BLM headquarters decided to allow demonstration sites to spend funds for any of the purposes in the authorizing legislation and permitted the following uses for the demonstration funds: operations; maintenance; and improvements and interpretation to enhance recreational opportunities and visitors’ experiences. Site managers and their state offices decide on expenditures but are required to report the expenditures to the public and headquarters after each fiscal year. BLM headquarters decided to allow 100 percent of the revenues to be retained at the collecting sites, rather than requiring 20 percent of them to be submitted to a central fund for distribution. BLM’s demonstration sites have expended $572,034, or 56 percent, of the $1.0 million in fee revenues they collected through March 31, 1998, according to data provided by BLM headquarters staff. According to BLM headquarters staff, no breakdown by category of the actual expenditures as of March 31, 1998, was available for all of the agency’s sites.
BLM’s fee demonstration program has expanded significantly in fiscal year 1998, from 10 active sites in fiscal year 1997 to a total of 63 approved sites as of March 31. Not all 53 new sites had begun collections or expenditures as of March 31, however. Details on expenditures at the BLM sites we visited are in appendix V. For many sites in the demonstration program—particularly the Park Service’s sites—the increased fee revenues equal 20 percent or more of the sites’ annual operating budgets. For the purposes of this report, we refer to these sites as high-revenue sites. At sites with backlogs of needs for maintenance, resource preservation and protection, and visitor services, this level of additional revenues will be sufficient to eliminate the backlogs over several years—assuming the program is extended and that existing appropriations remain stable. And, at sites with small or no backlogs, the additional revenues will support further site development and enhancement. However, the agencies selected demonstration sites not necessarily because of their extent of unmet needs for repairs, maintenance, or resource preservation, but rather because of their potential to generate fee revenues. At sites outside the demonstration program or sites that do not collect much fee revenues, the backlog of needs may remain or further development of the site may not occur. As a result, some of the agencies’ highest-priority needs may not be addressed. This potential for inequity among sites raises questions about the desirability of the current legislative requirement that at least 80 percent of the fee revenues be expended at the collecting site. Under the recreational fee demonstration program, 44 park units included in the Park Service’s 100 demonstration sites retained fees that exceeded 20 percent of their annual operating budgets in fiscal year 1998. 
Of these 44 sites, 13 retained fees exceeding 50 percent of their annual operating budgets, and 4 retained fees equaling or exceeding their operating budgets. For example, Arches National Park expects to supplement its fiscal year 1998 operating budget of $0.9 million with fees of $1.4 million—an effective increase of 160 percent in funds available on site. Castillo de San Marcos National Monument is expected to retain $1.3 million in fees, which is 110 percent of its operating budget of $1.2 million. Bryce Canyon National Park is expected to retain $2.3 million in fees, which is 110 percent of its operating budget. Such substantial increases in the financial resources available to these sites should improve their ability to address their outstanding needs. Table 3.2 provides data on the fees retained by the 44 parks. Of the seven Park Service sites we visited during our review, four—Zion National Park, Timpanogos Cave National Monument, Carlsbad Caverns National Park, and Shenandoah National Park—were among those with fee revenues exceeding 20 percent of their operating budgets. Except for Timpanogos Cave, each of these sites had a list of backlogged repair and maintenance needs to be addressed. Managers at each of the three sites told us that the additional fee revenues would allow them to address these needs in a relatively short time. For example, Zion National Park officials told us that the park expected to receive so much new fee revenue in fiscal year 1998—about $4.5 million, a doubling of its operating budget—that they might have difficulty preparing and implementing enough projects to use the available funds if a major new $20 million alternative transportation system was not begun in the park. Without this major project, they probably would not be able to spend all of the money available to them in ways that were consistent with the demonstration program’s objectives, they said.
The new transportation system is being initiated to eliminate car traffic from the most popular area of the park. Similarly, managers at Shenandoah National Park told us that the fee demonstration program revenues they expect to receive will be very useful in addressing unmet needs. The $2.9 million in revenues expected in fiscal year 1998 equals about 32 percent of the park’s operating budget. If the park continues to receive this level of fee revenues, the park superintendent said it should be able to eliminate its estimated $15 million repair and maintenance backlog in relatively few years. Unlike Zion and Shenandoah, Timpanogos Cave National Monument in Utah is a smaller park and does not have a backlog of repair and maintenance needs. According to managers, appropriated funds have been sufficient to keep up with the monument’s repairs and maintenance. Consequently, the managers plan to use the fee revenues they retain—$318,000 in fiscal year 1998, or about 61 percent of the monument’s annual operating budget—to enhance visitor services, such as by providing more cave tours. Park Service and Interior officials have recognized that certain sites with high fee revenues and small or nonexistent backlogs of needs will have difficulty spending their new revenues for projects that meet the demonstration program’s criteria. For example, the Comptroller of the Park Service said that some sites would run out of backlogged repair and maintenance needs to address with their fee revenues. In his view, an exemption from the requirement to retain 80 percent of the collected fees at the collecting sites and the authority to transfer more than 20 percent to a central fund for distribution to other sites would be among the options to consider.
In addition, the Assistant Secretary of the Interior for Policy, Management, and Budget has testified that setting aside some of the fee revenues for broader agency priorities is important and has cautioned that permanent legislation giving collecting sites a high percentage of the revenues could “create undesirable inequities” within an agency. Similarly, some managers at higher-revenue sites we visited supported more flexibility in splitting revenues between high-revenue sites and other locations that have little or no fee revenues or that have large maintenance needs or both. Some sites participating in the demonstration program and many nonparticipating sites have repair and maintenance backlogs or health and safety needs but little or no fee revenues to address them. Under the demonstration program’s current 80-20 percent split of the revenues, the Park Service’s park units will stand to receive very uneven shares of the program’s $136 million in estimated fee revenues for fiscal year 1998: Of the 100 fee-collecting sites (which actually include 116 park units), the top 44 units in terms of revenues are expected to retain $93 million, or 68 percent of the total, while the remaining collecting sites are expected to retain $13 million, or 10 percent of the total, leaving $30 million, or 22 percent of the total, for 260 nonparticipating sites. These sites include heavily visited locations such as the Statue of Liberty National Monument in New York and some of the less visited sites such as Hopewell Furnace National Historic Site in Pennsylvania. At the other three agencies, particularly the Forest Service, there are also many sites that have as high a level of fee revenues as that realized by many of the Park Service’s sites. At least 33 of the Forest Service’s 39 demonstration sites operating in fiscal year 1997 had fee revenues over 20 percent of their estimated operating budgets.
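The revenue split described above can be checked arithmetically. The following short calculation uses the report's dollar estimates for fiscal year 1998 and rounds the shares to whole percentages, reproducing the figures cited:

```python
# Park Service estimated fee revenues for fiscal year 1998, in millions
# of dollars, as cited in the report. Percentages are rounded.
total = 136.0            # total estimated fee revenues
top_44 = 93.0            # retained by the top 44 collecting units
other_collecting = 13.0  # retained by the remaining collecting sites
central = 30.0           # left for the 260 nonparticipating sites

# The three pieces account for the full $136 million.
assert top_44 + other_collecting + central == total

for label, amount in [("top 44 units", top_44),
                      ("other collecting sites", other_collecting),
                      ("nonparticipating sites", central)]:
    print(f"{label}: ${amount:.0f} million, {amount / total:.0%} of the total")
```

Run as written, this prints shares of 68, 10, and 22 percent, matching the report's figures.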
Under the agency’s policy, the demonstration sites are retaining 95 percent of the fees for their own use, and the remaining 5 percent is spent at the discretion of the sites’ regional offices. As shown in table 3.3, for fiscal year 1997, of these 33 sites, 21 had fee revenues exceeding 50 percent of their operating budgets, and 8 of the sites had fee revenues equaling or exceeding their operating budgets. Data for the first half of fiscal year 1998 indicate that an even higher number of the collecting sites will generate revenues amounting to 20 percent or more of their operating budgets by year end. The Forest Service’s high-revenue sites include the Salt and Verde Rivers Recreation Complex in Tonto National Forest, Arizona, where fee collections in fiscal year 1997 were 279 percent of the fiscal year 1997 operating budget. For fiscal year 1998, the complex expects to collect $2.5 million, or about 435 percent of its operating budget, in fees. Similarly, at Mount St. Helens National Volcanic Monument in the Gifford Pinchot National Forest in Washington, $2 million in fees was collected in fiscal year 1997, which was 94 percent of the operating budget. For fiscal year 1998, about 102 percent of the monument’s operating budget is expected to be collected in fees. A two- to four-fold increase in funds available compared with the sites’ annual operating budgets amounts to a tremendous boost in available resources. While absorbing this level of additional funding for the needs of these sites is possible, the extent of sites’ unmet needs was not the principal factor in selecting them for participation in the program. Under these circumstances, it is likely that other higher-priority needs within the agency will go unaddressed at sites within the national forest system that do not have a high level of revenues or that are not participating in the demonstration program. 
Accordingly, keeping all of the revenues at the demonstration sites that collect substantial amounts of fees may not be in the best interests of the agency as a whole. Data on the revenues and operating budgets for the Fish and Wildlife Service’s and BLM’s demonstration sites were more limited. As a result, we did not do analyses that were comparable to those we did on the Park Service’s and Forest Service’s sites. However, since visitation at the Fish and Wildlife Service’s and BLM’s sites is generally less than at park or forest sites, it is likely that these agencies do not have a high proportion of high-revenue sites. Among the sites we visited, one of the Fish and Wildlife Service’s sites and one of BLM’s sites realized fee revenues through the demonstration program that were high in relation to their operating budgets. BLM has allowed its demonstration sites to retain 100 percent of the fee revenues they collect to address their own needs. However, it is likely that only a few sites have or will generate high levels of revenues relative to their operating budgets, according to BLM headquarters staff. We could not determine specifically how many BLM demonstration sites have or will generate fee revenues equal to 20 percent or more of their operating budgets because this information was not available at BLM headquarters. Moreover, only 10 sites were operational in fiscal year 1997, and the 53 additional sites approved as of March 31, 1998, were only beginning to collect fees during fiscal year 1998. Among all of BLM’s demonstration sites, the Red Rock Canyon National Conservation Area that we visited in Nevada is the highest-revenue site, according to BLM staff. At Red Rock, the annual operating budget is estimated to be $1.2 million, while estimated gross revenues from the demonstration program for fiscal year 1998 are $0.9 million, or 75 percent of the operating budget.
Another of BLM’s demonstration sites with relatively high revenues is the Lower Deschutes Wild and Scenic River in central Oregon where boater use and campsite fees generated $326,088 in fiscal year 1997, which is 53 percent of the recreation site’s annual operating budget of $617,000. As with BLM, data on how many of the Fish and Wildlife Service’s demonstration sites are generating fee revenues amounting to 20 percent or more of their operating budgets were limited. However, agency staff have reported few sites generating revenues that might amount to 20 percent or more of their operating budgets. Of the three Fish and Wildlife Service sites we visited, only one—Chincoteague National Wildlife Refuge in Virginia—had relatively high recreational fee revenues. There, about $300,000 was expected in fee revenues in fiscal year 1998, or about 17 percent of the refuge’s annual operating budget of $1.8 million. If this level of revenue continues, and appropriations remain stable, then managers at the refuge thought that the entire repair and maintenance backlog could be addressed with the program’s revenues. The fee demonstration program has created a significant new revenue source, particularly for the Park Service and the Forest Service, during a period of tight budgets. However, at high-revenue sites, there is no assurance that the needs being addressed are among those having the highest priority within an agency—raising questions about the desirability of the legislative requirement that at least 80 percent of the revenues remain at the collecting site. Using the revenues created by the fee demonstration program on projects that may not have the highest priority is inefficient and restricts the agencies from maximizing the potential benefits of the program. 
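The fee-to-budget ratios cited for these BLM and Fish and Wildlife Service sites follow directly from the reported amounts; a brief check (dollar figures taken from the report, ratios rounded to whole percentages):

```python
# Fee revenues versus annual operating budgets for the sites discussed
# above; all amounts are the report's figures, in dollars.
sites = {
    "Red Rock Canyon (BLM)":  (900_000, 1_200_000),    # FY 1998 estimates
    "Lower Deschutes (BLM)":  (326_088, 617_000),      # FY 1997 actuals
    "Chincoteague (FWS)":     (300_000, 1_800_000),    # FY 1998 estimates
}

for name, (fees, budget) in sites.items():
    print(f"{name}: fees are {fees / budget:.0%} of the operating budget")
```

The computed ratios, about 75, 53, and 17 percent, match those reported in the text.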
While giving recreation site managers a significant financial incentive to establish and operate fee-collection programs, the current legislation may not provide the agencies with enough flexibility to address high-priority needs outside of high-revenue sites. Factors such as the benefit to visitors, the size of a site’s resource and infrastructure needs, the site’s fee revenues, and the most pressing needs of the agency as a whole are important to consider in deciding where to spend the funds collected. Even if the demonstration program is made permanent and all recreation sites are permitted to collect fees, inequities between sites will continue. As the Congress decides on the future of the fee demonstration program, it may wish to consider whether to modify the current requirement that at least 80 percent of all fee revenues remain in the units generating these revenues. Permitting some further flexibility in where fee revenues could be spent, particularly the fees from high-revenue sites, would provide greater opportunities to address the highest-priority needs of the agencies. However, any change to the 80-percent requirement would have to be balanced against the need to maintain incentives at fee-collecting units and to maintain the support of the visitors. Two agencies within the Department of the Interior raised concerns about this chapter. In general, the Park Service agreed with the findings of the report. However, the Park Service commented on the abilities of some park units to address their backlogged repair and maintenance needs through fee revenues. Specifically, the Park Service said that our portrayal of this issue paints a false picture as the report does not address backlogged resource management needs in addition to repair and maintenance needs. We disagree with the Park Service’s comment on this point. 
We acknowledge that regardless of what happens to the repair and maintenance backlog, there may continue to be needs related to the natural and cultural resources at the parks we reviewed and at other sites. However, early in its implementation of the demonstration program, the Park Service directed its demonstration sites to focus program expenditures on addressing backlogged repair and maintenance items. Because of this Park Service emphasis, we sought to determine to what extent the new fee revenues would be able to address these items. We found that managers at several of the parks we visited, such as Zion and Shenandoah, indicated that they could address their existing repair and maintenance backlog in a few years (5 years or less) through revenues from the demonstration program. In our view, this belief that individual park units may be able to eliminate their repair and maintenance backlog is not consistent with the Park Service’s past portrayal of a large repair and maintenance backlog, especially since the backlog, and not resource needs, is the agency’s stated focus for new revenues. The Fish and Wildlife Service disagreed with what it viewed as “an inference in the draft report that the practice of retaining 80 percent of the revenues at the station where fees are collected may not be a good practice.” In fact, the 80-percent requirement is appropriate in some cases; however, providing the agencies with greater flexibility may enable them to better address their highest-priority needs. The matter for congressional consideration that we have offered on providing additional flexibility to the agencies is directed primarily at high-revenue sites.
Furthermore, our comments on this issue are consistent with the testimony of the Assistant Secretary of the Interior for Policy, Management, and Budget, who said that setting aside some of the fee revenues for broader agency priorities is important and cautioned that giving the collecting sites a high percentage of the revenues could create undesirable inequities within an agency. The Department of Agriculture’s Forest Service agreed with our matter for congressional consideration that the 80-percent requirement be changed to permit greater flexibility. It noted that the emphasis on this point should remain on high-revenue sites and that any change to the 80-percent requirement would have to be balanced against the need to maintain incentives at fee-collecting units and to maintain the support of the visitors. Each of the agencies can point to a number of success stories and positive impacts that the fee demonstration program has had so far. Among the four agencies, a number of examples exist in which a new or innovative approach to collecting fees has resulted in greater convenience for the visitors and has improved efficiency for the agency. In addition, several of the agencies have tried innovative approaches to pricing that have resulted in greater equity in fees. However, some agencies could do more in this area. For example, while the Park Service has been innovative in looking for new ways to collect fees, it has been reluctant to experiment with different pricing approaches. As a result, the agency has not taken full advantage of the opportunity presented by the demonstration program. Greater innovation, including more business-like practices such as peak-period pricing, could help address visitors’ and resource management needs. In addition, although the Congress envisioned that the agencies would work with one another in implementing this program, the coordination and the cooperation among the agencies have, on the whole, been erratic.
More effective coordination and cooperation among the agencies would better serve visitors by making the payment of fees more convenient and equitable and, at the same time, reduce visitors’ confusion about similar or multiple fees being charged at nearby or adjacent federal recreation sites. One of the key legislative objectives of the demonstration program is for the agencies to be creative and innovative in implementing their fee programs. The program offers an opportunity to try new things and to learn lessons on what worked well and what did not. Among the four agencies, numerous examples can be found of innovation in developing new methods for collecting fees. In addition, the Forest Service and BLM have also experimented with new pricing structures that have resulted in greater equity in fees. However, the Park Service and the Fish and Wildlife Service have generally maintained the traditional pricing practices they used prior to the demonstration program. Accordingly, the Park Service and the Fish and Wildlife Service can do more in this area. Furthermore, greater experimentation would better meet the objective of the demonstration program as agencies could further their understanding of ways to make fees more convenient, equitable, and potentially useful as tools to influence visitation patterns and to protect resources. Examples of innovations in fee programs are differential pricing and vendor sales, which have been widely used by commercial recreation enterprises for many years. For instance, golf courses and ski areas frequently charge higher prices on the weekend than they do midweek, and amusement parks often sell entrance passes through many vendors. These concepts had rarely, if ever, been part of the four agencies’ fee programs prior to the demonstration. The Park Service, the Forest Service, and BLM are trying new ways of collecting fees that may prove more convenient for visitors. 
For example, the Park Service is now using automated fee-collecting machines at over 30 of its demonstration sites. These machines are similar to automated teller machines (ATMs): Visitors can pay their fees with cash or credit cards, and the machine issues receipts showing the fees were paid. Grand Canyon National Park, for instance, sells entrance passes at machines located in several areas outside the park, including in the towns of Flagstaff and Williams, Arizona, which are both along frequently used routes to the park and more than 50 miles from the park’s south entrance. The park has dedicated one of the four lanes at its entrance station for visitors who have already purchased their entrance passes. Thus, visitors who use the machines outside the park can avoid lines of cars waiting to pay fees at the park’s entrance station. At other demonstration sites within the Park Service, visitors can use automated fee-collection machines to pay for entrance fees, annual passes, or boat launch fees. As part of the demonstration program, the Forest Service is looking for ways to make paying fees more convenient for the visitor and more efficient for the agency. In some instances, paying fees at a location inside a forest may not always be convenient for visitors—particularly if that location is not near where visitors enter the forest, according to a Forest Service headquarters official. Some sites have experimented with having businesses and other groups outside of the forest collect entrance and user fees from visitors before they come into the forest. The vendors of the entrance and user permits are frequently small businesses, such as gas stations, grocery stores, or fish and tackle stores, that are located near the forest. For example, 350 vendors sell passes to visitors for recreation on any of four national forests in southern California.
By having vendors sell entrance and user permits, a forest can increase the number of locations where visitors can pay fees and can thereby make paying fees more convenient. At Paria Canyon-Coyote Buttes in Arizona, one of BLM’s demonstration sites, the agency is experimenting with selling hiking and camping permits via the Internet. Permits are required for overnight camping by up to a total of 20 persons per day in the Paria Canyon area and for hiking by up to a total of 20 persons per day in the Coyote Buttes area. BLM, working in cooperation with Northern Arizona University and the Arizona Strip Interpretive Association, has developed a website that allows visitors to obtain information on the area, check on the availability of permits for future dates, make reservations, fill out and submit detailed application forms, or print out the application forms for mailing. In addition, visitors can pay for permits over the Internet using credit cards, although the agency is still in the process of developing the security protocols that are needed to properly protect the transactions. Visitors can also fax credit card payments or send payments through the mail. Besides innovating and experimenting to make paying fees more convenient for visitors, two of the agencies are also experimenting with various pricing strategies at demonstration sites. Pricing strategies being tried by the Forest Service and BLM are focused on charging fees that vary based on the extent of use or on whether the visit is made during a peak period—such as a weekend—or during an off-peak period. This concept is generally referred to as differential pricing and has resulted in greater equity in pricing at the sites where it has been tried. For example, in Utah, Uinta National Forest and Wasatch-Cache National Forest have both experimented with differential pricing. 
At American Fork Canyon/Alpine Loop Recreation Area, within the Uinta National Forest, the forest began charging a new entrance fee under the demonstration program of $3 per car for a 3-day visit and $10 for a 2-week visit. Similarly, at the Mirror Lake area within the Wasatch-Cache National Forest, visitors pay a new entrance fee of either $3 per vehicle for a day or $10 per vehicle for a week. Thus visitors to both the Uinta and Wasatch-Cache National Forests pay fees that vary with the extent of use. Fees that vary with use are more equitable than a single fee for all visitors regardless of use, as has been the traditional practice at many federal recreation sites. The Forest Service and BLM have also experimented with charging fees that differ based on peak and off-peak periods. For example, at Tonto National Forest in Arizona, the entrance fees vary depending on the day of the week. The forest sells two annual passes for day use, including use of the boat launch facilities, at six lakes within the forest. One pass sells for $90 per year and is valid 7 days a week. The other pass sells for $60 per year and is valid only Monday through Thursday, the forest’s off-peak period. Another example of peak pricing is at the Lower Deschutes Wild and Scenic River in Oregon, one of BLM’s sites, where as part of the demonstration program, the agency charges a camping fee of $10 per site per day on weekends in the summer and a $5 per site per day fee midweek and during weekends in the off-season. By charging lower fees for off-peak use, these agencies are using fees as a management tool to encourage greater use when sites have fewer visitors. This practice can help to mitigate the impact of users on resources during what would normally be the sites’ busiest periods. 
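The peak and off-peak schedules described above reduce to a simple lookup keyed to day of week and season. As an illustration only, the following sketch models the Lower Deschutes camping fee; the function name and the June-through-August definition of "summer" are our assumptions for the example, not BLM's actual rules:

```python
from datetime import date

def deschutes_camping_fee(visit: date) -> int:
    """Camping fee per site per day under the differential schedule
    described in the report: $10 on weekends in the summer, $5 midweek
    and on weekends in the off-season. The June-August 'summer'
    boundary is an assumption made for this illustration."""
    is_weekend = visit.weekday() >= 5      # Saturday=5, Sunday=6
    is_summer = visit.month in (6, 7, 8)   # assumed season, not BLM's rule
    return 10 if (is_weekend and is_summer) else 5

print(deschutes_camping_fee(date(1998, 7, 4)))   # Saturday in summer -> 10
print(deschutes_camping_fee(date(1998, 7, 8)))   # midweek in summer -> 5
print(deschutes_camping_fee(date(1998, 1, 3)))   # weekend, off-season -> 5
```

Tonto National Forest's pass structure works the same way: the $60 annual pass is simply the $90 pass restricted to the Monday-through-Thursday off-peak days.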
While the Park Service has tried new methods for collecting fees, opportunities remain for the agency to further the goals of the demonstration program by being more innovative and experimental in its pricing strategies. While the agency certainly does not need to retool its program or use differential pricing arrangements at each of its sites, the Park Service could build on what it has already done. Specifically, it could look for ways, where appropriate, to provide greater equity in fees to give visitors incentives to use parks during less busy periods, thus reducing demand on park facilities and resources during the busiest times. Because of the large numbers of visitors and the large amount of fee revenues generated, the Park Service has an opportunity to improve its pricing strategies. For the types of areas managed by the Park Service, entrance fees have worked well for the agency and are convenient for most visitors to pay. However, visitors to units of the national park system having entrance fees (about one-third of the 376 units) generally pay the same fee whether they are visiting during a peak period, such as a weekend in the summer, or an off-peak period, such as midweek during the winter, and whether they are staying for several hours or several days. A more innovative fee system would make fees more equitable for visitors and may change visitation patterns somewhat to enhance economic efficiency and reduce overcrowding and its effects on parks’ resources. For example, managers at several of the parks we visited, including Assateague Island National Seashore and Shenandoah National Park, discussed how during peak visitation periods, such as summer weekends, long lines of cars frequently form at entrance stations, with visitors waiting to pay the fee to enter the parks. The lines are an inconvenience to the visitors and the emissions from idling cars could affect the sites’ resources. 
By experimenting with pricing structures that have higher fees for peak periods and lower fees for off-peak periods, sites might be able to shift more visitation away from high-use periods. Our past work has found that increased visitation has eroded many parks’ ability to keep up with visitors’ and resource needs. Innovative pricing structures that result in less crowding in popular areas would also improve the recreational experience of many park visitors. Furthermore, according to the four agencies, reducing visitation during peak periods can lower the costs of operating recreation sites by reducing (1) the staff needed to operate a site, (2) the size of facilities, (3) the need for maintenance and future capital investments, and (4) the extent of damage to a site’s resources. As we already pointed out, the private sector uses such pricing strategies as a matter of routine—including when the private sector operates within parks. The private sector concessioner that operates the lodging facilities in Yosemite National Park in California, for example, employs peak pricing practices. Lodging rates are higher during the peak summer months and lower during the months when the park attracts fewer visitors. Furthermore, most parks with entrance fees charge the same fee regardless of the extent of use. For example, Zion and Olympic National Parks both charge an entrance fee of $10 per vehicle for a visit of up to 1 week. This fee is the same whether visitors are enjoying these areas for several hours, a day, several days, or the full week. This one-size-fits-all approach is convenient for the agency but may not be equitable or efficient because visitors staying longer enjoy more benefits from a site. At one park, the lack of an alternative to the 7-day entrance fee has contributed to the formation of a “black market” in entrance passes. 
According to recent media reports, some visitors to Yellowstone National Park are reselling their $20 1-week entrance passes—after staying only a few days or less at the park—to other visitors planning to enter the park. Since the passes are valid for 7 days, a family could sell its pass to another carload of park visitors for perhaps half price and reduce the cost of visiting the park for both parties. Even though the entrance pass is nontransferable and selling a pass is illegal and subject to a $100 fine, the park does not have an estimate of the extent of the situation. The park has not experimented with an entrance fee for visits of less than 7 days, a pricing option that would be likely to address the illegal resale of passes. Park Service headquarters officials indicated that the agency had not tried differential pricing at demonstration sites because, in their view, it (1) would be difficult to conduct sufficient enforcement activities to ensure compliance, (2) would increase the costs of fee collection, and (3) may result in a decrease in fee revenues. While we acknowledge that it may be simpler to charge only one rate to visitors at demonstration sites, the agencies that are currently using differential pricing—the Forest Service and BLM—have been able to address the concerns raised by the Park Service. Given the potential benefits of differential pricing to both the agency and the visitors, an opportunity exists for the Park Service to experiment with such pricing at a small sample of demonstration sites. The four agencies have implemented a number of multiple-agency fee demonstration projects. Although these efforts are few in comparison to the more than 200 fee projects that have begun so far, they demonstrate that multiple agencies with somewhat varying missions can form successful partnerships when conditions, such as geographical proximity, present the opportunity. 
While we found several examples of successful, multiple-agency fee demonstration projects, more could be done. At several of the sites we visited, opportunities existed for improving the cooperation and coordination among the agencies that would increase the quality of service provided to visitors. The legislative history of the fee demonstration program includes an emphasis on the participating agencies’ working together to minimize or eliminate confusion for visitors where multiple fees could be charged by recreation sites in the same area. In several areas, agencies are now working together to accomplish this goal. For example, a joint project was developed in 1997 at the American Fork Canyon/Alpine Loop Recreation Area in Utah between the Forest Service’s Uinta National Forest and the Park Service’s Timpanogos Cave National Monument. The monument is surrounded by Forest Service land, and the same roads provide access to both areas. Because of this configuration, the agencies generally share the same visitors and charge one fee for entrance to both areas. The sites also have similar public service and resource management goals. Fee-collection responsibilities are shared between the two agencies, and expenditures are decided upon by representatives from both agencies as well as from two other partners in the project—the State of Utah Department of Transportation and the county government. Figure 4.1 shows the partnership’s entrance station for the area. Since 1997, fee revenues from the project have paid for the rehabilitation of several bridges in popular picnic areas (see fig. 4.2). Future fee revenues will fund the staffing and maintenance of entrance stations where fees are collected; the repair and maintenance of camping areas, trails, and parking areas; additional law enforcement services; and resource management projects. 
Agencies—federal and nonfederal—have worked together to improve visitor services and reduce visitor confusion as part of the fee demonstration program in other areas as well. Examples include (1) the Tent Rocks area in northern New Mexico (BLM and an Indian reservation); (2) recreation sites along the South Fork of the Snake River in Idaho (the Forest Service, BLM, state agencies, and county governments); (3) recreation sites in the Paria Canyon-Coyote Buttes area in Arizona (BLM, the Arizona Strip Interpretive Association, and Northern Arizona University); (4) the Pack Creek bear-viewing area in southeast Alaska (the Forest Service and the Alaska Department of Fish and Game); and (5) the proposed Oregon Coastal Access Pass (the Park Service, BLM, the Forest Service, and Oregon state parks). Through the partnership at the Tent Rocks area in north-central New Mexico between Albuquerque and Santa Fe, visitors get access to a unique geological area that BLM administers via a 3-mile access road across Pueblo de Cochiti, an Indian reservation. BLM’s site, known as the Tent Rocks Area of Critical Environmental Concern and National Recreation Trail, features large, tent-shaped rocks that hug steep canyon walls. The area is surrounded by two Indian reservations. The only access road for vehicles to Tent Rocks crosses land owned by Pueblo de Cochiti. In 1998, a cooperative partnership agreement gave visitors access to Tent Rocks, while specifying prohibited activities to preserve the tranquility of the pueblo community. The agreement also specifies resource preservation measures to protect the Tent Rocks area. Annually, Tent Rocks is visited by about 100,000 people. Under the terms of the agreement, BLM is responsible for collecting fees and shares $1 of the $5 vehicle fee with Pueblo de Cochiti. The pueblo provides interpretive talks, trash pickup, and road maintenance. 
As of July 1998, this interorganizational demonstration project was working satisfactorily, according to BLM officials. The Oregon Coastal Access Pass has been proposed for visitors to enter several adjacent federal and state recreation sites, each of which now charges a separate entrance fee. These include the Park Service’s Fort Clatsop National Memorial, BLM’s Yaquina Head Outstanding Natural Area, the Forest Service’s Oregon Dunes National Recreation Area, and sites managed by the state of Oregon’s Department of Parks and Recreation. All of these sites currently charge separate fees, ranging from several dollars per person to over $10. For a number of years, visitors to these sites have commented on the lack of government coordination over the numerous entrance and user fees these facilities charge. During the last 2 years, representatives from the federal and state agencies involved have held meetings to develop an Oregon Coastal Access Pass, which would be valid for entrance and use at all participating federal and state sites along the Oregon coast. According to a Forest Service official, two issues need to be resolved before implementing the pass: (1) the estimation of the revenues from each of the facilities to determine the amount of anticipated revenues to be shared and (2) the development of and agreement on an equitable formula to share fee revenues among the federal and state sites. The pass could be implemented in 1999, according to a Forest Service official participating in this project. While some progress is being made to increase coordination among agencies, our work shows that there are still opportunities for improvement that would benefit both the federal government and visitors. Further coordination among the agencies participating in the fee demonstration program could reduce confusion for the visitors as well as increase the revenues available for maintenance, infrastructure repairs, or visitor services. 
Even at the few participating sites we visited, we identified three areas where better interagency coordination would provide improved services and other benefits to the visiting public, while at the same time generating increased fee revenues. For example, in New Mexico, BLM administers a 263,000-acre parcel called El Malpais National Conservation Area. Within the BLM boundaries of this site is the El Malpais National Monument created in 1987 and managed by the Park Service (see fig. 4.3). Adjoining several sides of the agencies’ lands are two Indian reservations. Interstate, state, and county roads cross and border the BLM and Park Service lands. Presently, neither parcel has an entrance or user fee. In 1997, as part of the fee demonstration program, BLM proposed a $3 daily fee for the site. According to a BLM official, the proposed demonstration site was to be managed as a joint fee demonstration project with the Park Service, with the fee applicable to both areas. According to BLM, a demonstration project would not only increase revenues to pay for work needed at the site but also increase the presence of agencies’ officials at the site, which would help deter vandalism and other resource-related crimes. Because it is difficult for visitors to distinguish between the two sites, a unified and coordinated approach to fee collection made good management sense and would avoid confusion among fee-paying visitors to the sites. The surrounding communities endorsed BLM’s proposal, but Park Service officials at the site did not. They told us that they believed that there would be low compliance with any fee requirements because of the multiple access roads to the site, that potentially delicate situations would arise with Native Americans using the land for ceremonial purposes, and that theft and vandalism would increase because of the proposed project’s unstaffed fee-collection tubes. 
A local BLM official, however, said that the site could generate significant revenues (over $100,000 annually), that fee exemption cards could be developed for Native Americans using the land for traditional purposes, and that past experience in the southwest has not shown extensive damage to unstaffed fee-collection devices like those proposed for use at this site. As a result of the differing views between BLM and Park Service officials at this site, no coordinated approach has been developed. However, our work at the site indicated that experimenting with a new fee at the location would be entirely consistent with the objective of the demonstration program. As of August 1998, neither agency had documented its analysis of the situation, and BLM was considering deleting the site as a potential fee demonstration project. In the state of Washington, we found another opportunity for interagency coordination. Olympic National Park and the Olympic National Forest share a common border for hundreds of miles and are both frequently used by backcountry hikers. For backcountry use, hikers are subject to two separate fees at Olympic National Park—a $5 backcountry hiking permit and a $2 per night fee for overnight stays in the park. In contrast, Olympic National Forest does not have an entry fee, a backcountry permit fee, or any overnight fee in areas that are not specifically designated as campsites. However, the forest does have a trailhead parking fee of $3 per day per vehicle or $25 annually per vehicle. As a result, backcountry users who hike trails that cross back and forth over each agency’s lands are faced with multiple and confusing fees. Figure 4.4 shows an example of a backcountry hike from Lena Creek (Olympic National Forest land) to Upper Lena Lake (Olympic National Park land)—14 miles round-trip—where backcountry users would face such multiple fees. Table 4.1 lists the fees involved for the hike. 
We discussed this situation with on-site managers from both agencies. They agreed that they should better coordinate their respective fees to reduce the confusion and multiplicity of fees for backcountry users. However, so far, neither agency has taken the initiative to make this happen. At the time of our review, no one at the departmental or agency headquarters level routinely got involved in these kinds of decisions. Instead, the decisions were left to the discretion of the site managers. A third example of where greater coordination and cooperation would lead to operational efficiencies and less visitor confusion is in Virginia and Maryland at the Chincoteague National Wildlife Refuge, administered by the Fish and Wildlife Service, and the Assateague Island National Seashore, administered by the Park Service. Although the sites adjoin each other on the same island (see fig. 4.5), they are not a joint project in the fee demonstration program—each site is a separate fee demonstration project. During our review, we found many similarities between these two sites that offer the possibility of testing a single entrance fee for both sites. Both sites charge a daily entrance fee ($5 per vehicle), cooperate on law enforcement matters, and run a joint permit program for off-road vehicles. In 1997, according to Park Service officials, the two agencies together issued 5,000 annual off-road vehicle permits at $60 each. By agreement between the two agencies, the permit revenues are shared, with one-third going to the refuge and two-thirds going to the Park Service. The Park Service already provides staff to operate and maintain a ranger station and bathing facilities on refuge land. Despite these overlapping programs and similarities, the units still maintain separate, nonreciprocal entrance fee programs. 
This situation is continuing even though officials at the refuge told us that visitors are sometimes confused by separate agencies managing adjoining lands without any reciprocity of entrance fees. For example, during a 7-day period in July 1998, refuge officials counted 71 of 4,431 visitor vehicles as wishing to use their vehicle entrance passes for Assateague to gain admittance to Chincoteague. Similarly, during the 7-day period of July 31 through August 6, 1998, Assateague officials counted 40 of 4,056 visitor vehicles as presenting Chincoteague entrance passes to gain admittance to Assateague. In both instances, visitors needed explanations about the entrance fee policies and practices of the two sites. Refuge and seashore officials have discussed this issue, but the matter remains unresolved. While there are many notable examples of innovation and experimentation in setting and collecting fees at demonstration sites, further opportunities remain in this area. Innovation and experimentation were one of the objectives under the demonstration program’s authority and could result in fees that are more equitable, efficient, and convenient and could also work toward helping the agencies accomplish their resource management goals. Congressional interest in encouraging more interagency coordination and cooperation was focused not only on seeking additional revenues but also on developing ways to lessen the burden of multiple, similar fees being paid by visitors to adjoining or nearby recreation sites offering similar activities. Successful experiences with interagency coordination and cooperation have produced noteworthy benefits to the agencies and to visitors. Additional coordination and cooperation efforts should be tested at other locations to get a better understanding of the full impact and potential of the program. 
We recommend that the Secretary of the Interior require that the heads of the Park Service and the Fish and Wildlife Service take advantage of the remaining time under the fee demonstration authority to look for opportunities to experiment with peak-period pricing and with fees that vary with the length of stay or extent of use at individual sites. We also recommend that the Secretaries of the Interior and Agriculture direct the heads of the participating agencies to improve their services to visitors by better coordinating their fee-collection activities under the recreational fee demonstration program. To address this issue, each agency should perform a review of each of its demonstration sites to identify other federal recreation areas that are nearby. Once identified, each situation should be reviewed to determine whether a coordinated approach, such as a reciprocal fee arrangement, would better serve the visiting public. Two agencies within the Department of the Interior commented on this chapter. The Park Service raised concerns about experimenting with differential or peak-period pricing. The agency said that experimenting with fees could result in complex fee schedules, increased processing times at entrance stations, confused visitors, and more difficult enforcement. In addition, the agency took exception to the draft report’s comparisons to the differential pricing practices used at amusement parks, golf courses, and ski areas, noting that the agency’s purpose is different from the purposes of such operations. However, we disagree that these concerns are reasons not to implement different pricing policies at some parks. We recognize that the Park Service’s current fee schedule has been successful but question whether the agency has responded sufficiently to one of the intents of the recreational fee demonstration program: that agencies experiment with innovative pricing structures. 
If done well, experimenting with differential pricing at Park Service demonstration sites need not result in complex fee schedules, delays at entrance stations, confused visitors, or significant increases to the cost of collection. It is in this context that we provided the examples of golf courses, amusement parks, and ski areas—recreation activities that routinely use differential pricing and to which the public is already accustomed. In many cases, these fee systems are equitable, easily understood by the public, and do not cause delay or confusion. Furthermore, the Park Service’s comments on this point are not consistent with the January 1998 report to the Congress on the status of the fee demonstration program, which was jointly prepared by the Park Service, the Forest Service, BLM, and the Fish and Wildlife Service and transmitted by the Undersecretary of the Department of Agriculture and an Assistant Secretary of the Department of the Interior. In that report, the four agencies noted that among the lessons learned up to that point was that differential pricing could be used to maximize resource protection or to minimize infrastructure investment. The report states that “higher fees on weekends, summer months, or other traditionally-high recreation use, might reduce the peak loads on resources and facilities . . . . Reductions in peak loads can directly reduce the cost to taxpayers associated with operating the recreation sites, providing services to these sites, and any attendant damage to the resource.” The Park Service also raised concerns about the draft report’s discussion of the potential for a joint fee demonstration site between the Park Service and BLM at El Malpais National Monument and El Malpais National Conservation Area. (BLM did not comment on this point.) The Park Service said that (1) a cost-benefit analysis showed it was not worth collecting fees and (2) collecting fees would affect the use of the area by five neighboring Native American tribes. 
It was clear from our work that there was disagreement among Park Service and BLM officials over whether El Malpais was a suitable site for inclusion in the demonstration program and that this disagreement continues. The boundaries of the agencies’ land make it unlikely that the project could succeed without a joint effort. We disagree with the concerns the Park Service raised on this point and question their accuracy because the analysis it cited, which purportedly showed that fee revenues would be low, has not been completed. We obtained a draft of that analysis, which, according to Park Service staff at El Malpais National Monument, was the most recent analysis available as of October 15, 1998. The draft analysis contains no information on anticipated costs or revenues from charging fees at this site. Furthermore, we disagree with the Park Service’s assertion that fees would affect Native American use of the site. According to the Park Service regional fee demonstration coordinator, at park units where similar situations existed, local managers were able to resolve cultural issues with the Native Americans using the sites. The Fish and Wildlife Service commented that there may be opportunities for the agency to experiment with off-peak pricing, but such opportunities would be limited to those sites where there is sufficient visitation to create crowding and provide an incentive for off-peak use. We agree. In fact, crowded parking at one refuge was a big enough concern that managers were considering measures to better handle visitation during peak periods. The Fish and Wildlife Service also commented on the need for greater coordination among the agencies. The agency noted that cooperative fees have been tried in many instances where they are appropriate and that some of these have resulted in moderate success. 
We encourage the agency to continue to look for opportunities to coordinate since it would generally increase the level of service provided to the visiting public. The Department of Agriculture’s Forest Service agreed with the recommendation for the agencies to look for opportunities to coordinate their fee programs. Data from recreational fee demonstration sites participating in 1997 suggest that the new or increased fees have had no overall adverse effect on visitation, although visitation did decline at a number of sites. Such data, however, are based on only 1 year’s experience, so the full impact of fees on visitation will not be known until completion of the program. Early research on visitors’ opinions of the new fees has shown that visitors generally support the need for, and the amount of, new fees. However, these conclusions are based on limited analysis in that only two of the four agencies—the Park Service and the Forest Service—have completed visitor surveys at a small number of sites participating in the demonstration program. Accordingly, the survey results may not represent visitors’ opinions at all participating sites or represent views of nonvisitors. Each participating agency planned to conduct additional visitor surveys in 1998 and 1999 to more fully assess the impact of fees on visitation. However, some interest groups and recreation fee experts have identified some research gaps, such as potential visitors who do not come to recreation sites or who do go to sites but drive off because of the new or increased fees and fail to participate in the survey. A number of interest groups we contacted were generally supportive of the program. However, some had concerns about the program and how it was being implemented. 
Although data for more years will be needed to fully assess the effect of increased recreational fees on visitation, 1997 data from the 206 sites participating in the demonstration program preliminarily suggest that the increased fees have had no major adverse effect on visitation. Except for BLM, each agency reported that, overall, visitation increased from 1996 to 1997 at its sites, even though some individual sites experienced declines in visitation, especially when new fees were charged. Data from 1997 are the first available to assess the impact of the fee demonstration program on visitation, since the four agencies spent 1996 designing the program and selecting the sites. Overall, of the 206 demonstration sites operated by the four agencies, visitation during 1997 increased by 4.6 percent from 1996. Visitation increased at three agencies’ sites, with the Park Service sites showing the largest increase, while BLM reported an overall decline in visitation of 10.4 percent (see table 5.1). Among the 206 sites, visitation increased at 120 sites, decreased at 84 sites, and was unchanged at 2 sites (see table 5.2). Because these data represent only the change in 1 year and many factors besides fees can affect visitation levels, several agency officials told us that the 1996 to 1997 visitation changes provide only a preliminary indicator of the impact of increasing or imposing fees at the demonstration sites. In addition, visitation can be affected by a variety of factors, such as weather patterns, the overall state of the economy, gasoline prices, currency exchange rates, and historical celebrations. Accordingly, changes in fee levels or instituting new fees, by themselves, do not fully account for changes in visitation levels. 
Nonetheless, on the basis of the data currently available, a report by the four participating agencies to the Congress states, “Visitation to the fee demonstration sites does not appear to have been significantly affected, either positively or negatively, by the new fees.” While overall visitation increased 4.6 percent among all agencies in 1997, visitation levels varied among agencies and among sites within the same agency. During the period, visitation at nondemonstration sites among the agencies increased 3.6 percent. Changes in visitation to sites participating in the recreational fee demonstration program are summarized below for each of the four participating agencies. Annual visitation at the Park Service’s 96 sites participating in the recreational fee demonstration program in 1997 increased 5.6 percent over 1996—from 141.1 million to 149.0 million visitors. Visitation increased at 50 sites, decreased at 45 sites, and remained unchanged at 1 site. Some sites that raised existing fees in 1997 experienced significantly higher rates of visitation after the increased or new fees went into effect. For example, at one site we visited, Timpanogos Cave National Monument in Utah, a new entrance fee plus increased fees for cave tours allowed the park to hire additional cave interpreters, which lengthened the season for cave tours by 3 months. As a result, visitation increased 16 percent, and about 16,000 more visitors were able to tour the site in 1997 than in 1996. In contrast, at another site we visited, Frederick Douglass National Historic Site in Washington, D.C., visitation declined 24 percent from 45,000 in 1996 to 34,000 in 1997. In 1997, the site instituted a new $3 per person entrance fee, whereas in 1996, entrance was free. According to a Park Service official, the new fees probably played a role in the decline in visitation. 
In commenting on a draft of this report, the Park Service stated that the closure of a nearby museum and several major road projects may have also influenced visitation at the site. Because visitation at the Park Service’s sites represents about three-quarters of total 1997 visitation at all of the demonstration program sites, we asked the Park Service for data on historical visitation levels at both its demonstration and nondemonstration sites. These data show that visitation at nondemonstration sites rose faster from 1996 to 1997, 7.0 percent compared with 5.6 percent for demonstration sites. The higher fees might be one factor accounting for a smaller percentage increase in visitation at the demonstration sites, but other factors might be more important. We found that the larger percentage increase at the nondemonstration sites in 1997 was consistent with changes in visitation over the last few years (1993-97) and, therefore, might have occurred even if fees had not been increased at the demonstration sites. Since 1994, there has been a steady trend in which visitation at nondemonstration sites has grown relative to visitation at demonstration sites. In fact, there was a much more substantial difference between the two groups in the changes in visitation from 1995 to 1996 before fees were increased at any of the sites. During that period, visitation increased by 0.9 percent at the nondemonstration sites but fell by 4.1 percent at the demonstration sites. Of the Forest Service’s 39 fee demonstration sites operating in 1997, visitation totaled 35.2 million—an increase of 724,000 recreation visits or a 2-percent increase over 1996. Visitation increased at 25 sites and decreased at 14 sites. At some sites where new fees were charged or where fees were paid only for entrance to a visitor center, visitation generally declined, according to a Forest Service official. 
For example, after Mono Lake in the Inyo National Forest in northern California instituted a $2 fee per person for day use or entry to a section of the visitor center (an exhibit room and movie theater), visitation declined 10 percent from the prior year, according to a Forest Service official. At other Forest Service sites, visitation increased despite new fees. At one site we visited, the Mount St. Helens National Volcanic Monument in Washington State, 1997 visitation rose to 3.1 million—a 15-percent increase over 1996. This increase occurred even though the site implemented two new fees: a user fee of $8 for a 3-day pass to the visitor centers and other developed sites and a climbing fee of $15. In 1997, the site also opened an additional visitor center and deployed snow plows earlier than in prior years, further increasing visitation. Visitation at the Fish and Wildlife Service’s 61 sites participating in the program increased from 9.4 million in 1996 to 9.5 million in 1997, or slightly over 1 percent. In 1997, visitation decreased at 17 sites, increased at 43 sites, and was unchanged at 1 unit compared with visitation in 1996. At the 30 refuges charging fees for the first time as well as at the 31 refuges that increased existing fees, there was little or no change in the level of visitation or participation in activities. The three sites we visited reflected these national visitation patterns. At Nisqually National Wildlife Refuge in Washington State, the entrance fee was increased from $2 to $3, and visitation increased by 41 percent, from about 45,000 in 1996 to 63,000 in 1997. At Chincoteague National Wildlife Refuge in Virginia, the entrance fee increased from $4 to $5, and visitation increased 7 percent, from 1.3 million visitors in 1996 to 1.4 million visitors in 1997. 
At another site we visited, Bosque del Apache National Wildlife Refuge in New Mexico, the entrance fee increased from $2 to $3, and visitation declined 10 percent from 132,000 in 1996 to 119,000 in 1997. Overall visitation at BLM’s 10 demonstration sites dropped by 10.4 percent from 1996 to 1997. This drop reflected decreases at eight sites and increases at two other sites. According to BLM, factors affecting visitation in 1997 included (1) inclement weather and flooding that limited access to recreation sites such as Paria Canyon-Coyote Buttes in Arizona and Utah, where visitation declined 16 percent between 1996 and 1997; (2) construction projects that interfered with visitors’ use of several sites such as the Kipp Recreation Area in Montana; and (3) new fees, such as at Anasazi Heritage Center in Colorado, where visitation declined 22 percent, in part because of resistance to new fees. At one BLM site we visited, Red Rock Canyon National Conservation Area west of Las Vegas, Nevada, a new entrance fee of $5 was implemented in 1997, but visitation increased from about 1 million in 1996 to about 1.14 million in 1997. At another BLM site we visited, Yaquina Head Outstanding Natural Area on the central Oregon coast, site visitation declined 10 percent, from about 540,000 in 1996 to about 486,000 in 1997. Visits to the interpretive center declined 27 percent when fees were introduced, and at the lighthouse, visits dropped from 531 walk-in visitors a day to 65—an 88-percent decrease. Subsequent changes in the lighthouse fee raised the average daily attendance to 425 in July 1998. Surveys completed by the Park Service and the Forest Service show that visitors generally support the need for, and the amount of, new or increased entrance or user fees. However, these surveys are limited to only a few sites and do not cover visitors to the sites of the Fish and Wildlife Service and BLM. 
Both the Park Service and the Forest Service are planning additional surveys for 1998 and 1999 that will probe more deeply into visitation issues. In addition, some representatives of interest groups and recreation fee researchers identified several areas needing further research to fully assess the impact of the fee demonstration program. Agency officials agreed that additional research is needed in a number of areas. All four agencies have research planned to address several of the research topics. Research by both the Park Service and the Forest Service on the actual impact of the fee demonstration program shows that most visitors support the need for fees and believe that the fees are set at about the right level. A Park Service survey in 11 national park units taken during summer 1997 showed that 83 percent of the respondents were either satisfied with the fees they paid or thought the fees were too low; 17 percent thought the fees were too high. According to 96 percent of respondents, the fees would not affect their current visit or future plans to visit the park. Visitors supported the new fees in large part because they wanted all or most of the fee revenues to remain in the park where they were collected or with the Park Service so that the funds could be used to improve visitor services or protect resources, rather than be returned to the U.S. Treasury. Three surveys at fee demonstration sites administered by the Forest Service found general support for the program. A survey of over 400 visitors at the Mount St. Helens National Volcanic Monument in Washington State in 1997 found 68 percent of those surveyed said their visitor experience was worth the fee they paid. Although over 50 percent of those surveyed were not aware of the new fees prior to coming to Mount St. Helens, 69 percent said their visitation plans did not change as a result of the new fees. 
Overall, 92 percent of those surveyed were either very satisfied or satisfied with their experience at the site. A June 1997 to May 1998 survey of 1,392 backpackers and hikers at Desolation Wilderness, Eldorado National Forest, in California found that a majority accepted the concept of wilderness use fees and considered the amount charged to be about right. However, day-use fees were less acceptable than overnight camping fees—about 33 percent of those who were surveyed disliked day-use fees compared with 20 percent who disliked camping fees. Starting in 1997, visitors to all 39 of the Forest Service’s fee demonstration sites were given the opportunity to respond to a customer “comment card” when they purchased a permit. As of March 1998, 528 cards had been received from visitors to 45 individual national forests participating in the fee demonstration program. About 57 percent of the respondents either agreed or strongly agreed with the statement that the opportunities and services they experienced during their visits were at least equal to the fee they paid. Because only two of the four agencies participating in the recreational fee demonstration program have completed visitor surveys, additional research is planned for 1998 and 1999 to more fully assess visitors’ views on new or increased recreational fees. In 1998, both BLM and the Fish and Wildlife Service began their initial evaluations of the impact of the fee demonstration program on visitors. These surveys will be included as part of the final evaluation report of the demonstration program, which is intended to be a comprehensive evaluation on the impact of fees on visitation by each of the four agencies. Additional research by all four agencies, when completed, should more fully illustrate public acceptance and reaction to new or increased fees. 
Surveys on the impact of fees on visitation and other issues planned for 1998 and 1999 include the following: The Park Service plans additional research on visitation in 1998 that will (1) survey the managers at all 100 recreational fee demonstration sites concerning visitation and obtain their perceptions of the equity, the efficiency, and the quality of visitors’ experiences resulting from the fee demonstration program; and (2) conduct detailed case study evaluations at 13 fee demonstration sites, including a detailed visitor survey at each site. The case study sites will explore such questions as whether fees affected the mix of sites’ visitors and how fees and changes in fee levels have affected the visitors’ experience at the sites, among other questions. The surveys are being administered for the Park Service by the University of Idaho with assistance from the University of Montana and Pennsylvania State University. Survey results are expected by April 1999. The Forest Service plans to survey visitors at several national forests in 1998 to assess their views on new or increased fees under the demonstration program. Several visitor surveys will be completed at the national forests in Southern California as part of the fee demonstration project. The primary objectives of the surveys are to assess visitors’ responses to new recreational fees and the effects of the new fees on visitation patterns and to complete a follow-up survey of users who visited the demonstration sites before the new fees were in place. The surveys are being done by the Pacific Southwest Research Station in Riverside and by California State University, San Bernardino, and should be completed in 1999. In addition, a follow-up to a 1997 visitor survey is planned to assess the opinions of campers on new fee charges at the Boundary Waters Canoe Area Wilderness in Minnesota. 
The survey is being done by the College of Natural Resources, University of Minnesota, and should be completed by November 1998. A 1998 survey of a total of 2,600 visitors is planned at nine of the Fish and Wildlife Service’s wildlife refuges, according to an agency official. The survey objectives are to obtain visitors’ opinions on the fairness and equity of the fee being charged, alternative fee-collection methods, and the use of revenues from fee collections, among other topics. The nine sites selected will include those charging both entrance and user fees as well as sites with new fees and those that changed existing fees. The study is being completed for the Service by a contractor to the Department of the Interior’s National Biological Survey with assistance from Colorado State University. Survey results will be available by the end of 1998. During September 1998, BLM plans to survey a total of 800 people who visited eight different demonstration sites to assess their views on the program. The specific objectives of the survey are to determine the appropriateness of the fees charged, how revenues from fees should be used, and how fees will affect future visitation, among other topics. The sites selected will represent a cross-section of both dispersed and developed recreation sites. The survey is being done with assistance from the University of Virginia Survey Research Center and should be completed by December 1998. While much of the completed research on visitors’ opinions about recreational fees shows general support for the demonstration program, recreation fee experts and some interest groups we contacted raised concerns about some effects that completed or planned visitation research, generally, does not address. The concerns fell into three areas: the impact of new or increased fees on those not visiting recreation sites, backcountry users, and low-income users. 
First, almost all completed and planned visitation surveys concerning the recreational fee demonstration program have assessed or will assess visitors who have paid a user or entrance fee at the recreation site. This practice is consistent with the agencies’ evaluation approach of assessing visitors’ reactions to paying new or increased fees. However, potential visitors who do not come to the recreation site or who come to the site but leave because of new or increased fees have not been included in the surveys. For example, at Glacier National Park in 1997-98 a fee was collected at the park’s western entrance on certain winter weekends. According to reports in the media, during this period, passengers in a number of cars refused to pay the fee and canceled their visit to the park. It is because of situations like this that several recreation fee researchers we contacted said further research is needed to determine whether recreational fees are precluding potential recreation users from visiting the sites in the demonstration program. Representatives from two of the four agencies participating in the fee demonstration program agreed this was an important research concern that completed or planned visitation research will not address. The Forest Service plans a national recreation survey in 1998-99 that, among other topics, will address the general public’s reaction to new or increased fees. In commenting on this report, the Park Service said it plans to conduct a survey of the general public to determine the impact of new or increased fees on visitation. This survey should be completed by December 1999. Fish and Wildlife Service officials said they had not planned such research because (1) this type of research was expensive to conduct and (2) it was not yet a high enough priority among competing research needs within the agency. 
Officials from BLM said that if fee increases appeared to be a factor in causing a decline in 1998 visitation figures, the agency would be likely to conduct research on this topic. Second, limited visitation surveys have been completed or are planned on the impact of new or increased fees on backcountry recreation. Only one of the completed surveys and one survey planned for 1998 has focused or will focus exclusively on backcountry recreation: the Forest Service’s 1997-98 survey of Desolation Wilderness in northern California and its summer 1998 survey of visitors to the Boundary Waters Canoe Area Wilderness in Minnesota. Furthermore, only 1 of the 11 national park units included in the Park Service’s 1997 visitation survey had instituted fees for backcountry use. One interest group contacted, Outward Bound USA, suggested that visitors’ acceptance of new or increased fees was greater in developed recreation areas and that backcountry users were less enthusiastic about the program because agencies charge multiple fees for backcountry activities in the same area and many backcountry fees are new fees rather than increases in existing fees. Several recreation fee researchers contacted said that since many backcountry use fees were new, additional research was needed to determine if fees were affecting backcountry visitation patterns. While representatives from the Park Service and the Forest Service agreed this was an important research concern, Fish and Wildlife Service officials did not, since their recreation sites do not involve nearly as much dispersed backcountry recreation as the Park Service’s and the Forest Service’s. A BLM official acknowledged this was an important issue, but said the agency’s visitation survey would only be administered at a small number of sites with dispersed backcountry recreation. 
In commenting on a draft of this report, the Park Service said that it plans to conduct a survey of backcountry/winter recreation users, to be completed by December 1999, to determine the impact of new or increased fees on visitation. A Forest Service official said the agency’s two surveys would shed some light on the impact of fees on backcountry use but believed more research was needed to fully assess the impact of fees on the Forest Service’s many sites with backcountry use. The Forest Service official favored more emphasis on such research but said that funding it would have to be balanced with other research priorities. Third, concerns have been expressed about the effect of new or increased fees on low-income visitors to federal recreation sites participating in the fee demonstration program. While BLM and the Fish and Wildlife Service plan surveys to address this issue, neither the Park Service nor the Forest Service has completed or plans research sufficient to address this topic at a number of sites participating in the demonstration program. Two groups we contacted, the National Parks and Conservation Association and Outward Bound USA, emphasized that although recreational fees are becoming more common, at some point fee increases will affect the demographics of recreation users, particularly those with limited means. In commenting on a draft of this report, the Forest Service stated that it is considering requiring fee demonstration sites to (1) collect data on the impact of fees on low-income and ethnic populations and (2) offer proposals to mitigate any impacts. Prior recreation fee research has also raised concerns about the impact of fees on the visitation patterns of low- and moderate-income users. 
For example, a study of the impact of fees on recreational day use at Army Corps of Engineers recreation facilities suggests that a larger proportion of low-income users would stop visiting a site if fees were charged and, since low-income users are more sensitive to the magnitude of fees charged, that higher fees would displace a higher proportion of low-income users. In addition, a 1997 survey of 1,260 visitors to 11 national park units found that 17 percent thought the fees charged were too high and that the lower the respondent’s income, the greater the tendency to think the fees charged were too high. Several recreation fee researchers contacted said that while some completed research has shown support for new fees among users of all income levels, further research is needed to understand how new fees and fee levels affect visitation of low-income users at federal recreation sites. A number of interest groups we contacted, while generally supportive of the program, had some concerns about how the program was being implemented and were withholding a strong endorsement until more tangible results of the program were available. Some groups were concerned that recreational fee increases represented an unfair burden on commercial recreation providers and that public acceptance of fee increases may diminish if fee increases go much higher. Also, some users were concerned that fees were too high and amounted to double taxation. All nine of the interest groups we contacted supported the recreational fee demonstration program, but some had concerns about how the program was being implemented. For example, the American Recreation Coalition supports the program because fees have generated funds to preserve aging agency facilities, provide new interpretative services, or experiment with new or innovative fee-collection initiatives, such as a regional trail pass program. 
However, the coalition was concerned that, in some cases, new or increased fees were being added to permit fees already paid by commercial recreation providers to the agencies, which represented an unfair and costly burden to their operations. The National Parks and Conservation Association told us it supports the fee demonstration program because fees are retained at the sites where they are collected and are used to reduce maintenance backlogs. At the same time, however, the association was concerned that at some point the public’s acceptance of fee increases may erode. For example, according to the association, excessive use fees for private boaters along the Colorado River and a doubling or tripling of entrance fees at certain popular national parks such as Yosemite are actions that are likely to stretch the limit of public acceptance of new recreational fees. Another group from Washington State, the Mountaineers, told us that while the public has initially accepted the program, the group was withholding a strong endorsement of it until it could see the results from the agencies’ spending on increased maintenance, enhanced visitor services, or interpretative programs and the results of visitor surveys. Some visitors to federal recreation sites under the demonstration program have voiced opposition to new or increased fees. For example, a Forest Service analysis of 528 comment cards found that about 26 percent disagreed or strongly disagreed with the statement that the value of the recreation opportunities and services the visitors had experienced was at least equal to the fee they paid. 
In addition, 43 percent of the 420 people providing written comments on the cards made negative statements about the recreational fees, such as “the price is too high,” “this is double taxation,” or “I oppose the fees.” Similarly, an analysis of 484 pieces of correspondence received by the Park Service between July 1996 and September 1997 showed that 67 percent of respondents expressed some opposition to new fees. According to Park Service and Forest Service officials, the surveys were not based on statistical sampling and, therefore, are not representative of all users. Comment cards and correspondence are more likely to be completed by those having a strong opinion on fees, especially those who are opposed to fees. | Pursuant to a congressional request, GAO reviewed the implementation of the recreational fee demonstration program by the National Park Service (NPS), the Forest Service, the Bureau of Land Management (BLM), and the Fish and Wildlife Service (FWS), focusing on the: (1) implementation of the program and the fee revenues generated; (2) program's expenditures; (3) extent to which the agencies have used innovative or coordinated approaches to fee collection; and (4) program's effects, if any, on visitation. 
GAO noted that: (1) among the four agencies, the pace and the approach used to implement the recreational fee demonstration program have differed; (2) this difference reflects the extent of the agencies' experiences in charging fees prior to the demonstration; (3) nonetheless, each agency has been successful in increasing fee revenues; (4) the four agencies estimated that their combined recreational fee revenues have nearly doubled from about $93 million in fiscal year (FY) 1996 to about $179 million in FY 1998; (5) of the four agencies, NPS is generating the most fee revenues; (6) for FY 1998, NPS estimates that its fee revenues will be about 85 percent of the total estimated revenues collected by the four agencies at demonstration sites; (7) about 76 percent of the funds available under the program had not been spent through March 1998; (8) thus far, most expenditures have been for repairs and maintenance and the cost of fee collection; (9) the agencies expect to make significant expenditures in the latter part of FY 1998 and in FY 1999; (10) in the longer term, because some sites may have a much greater potential than others for raising revenues, the requirement that at least 80 percent of the fees be retained at the location where they were collected may lead to substantial inequities between sites; (11) some sites may reach the point where they have more revenues than they need for their projects, while other sites still do not have enough; (12) opportunities remain for the agencies to be more innovative and cooperative in designing, setting, and collecting fees; (13) among the agencies, several notable examples of innovation exist at demonstration sites of the Forest Service and the BLM; (14) these innovations have resulted in either more equitable pricing for the visitors, or greater convenience for visitors in how they pay fees; (15) while NPS has been innovative in making fees more convenient for visitors to pay, it has not experimented with different 
pricing structures to make fees more equitable; (16) coordination of fees among agencies has been erratic; (17) overall, preliminary data suggest the increased or new fees have had no major adverse effect on visitation to the fee demonstration sites; (18) with data from just 1 year, however, it is difficult to accurately assess the fees' impact on visitation; (19) the agencies' surveys indicate that visitors generally support the purpose of the program and the level of the fees implemented; and (20) each agency is planning additional visitor surveys and research in 1998 and 1999. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Each year, we issue well over 1,000 audit and evaluation products to assist the Congress in its decision making and oversight responsibilities. As one indicator of the degree to which the Congress relies on us for information and analysis, GAO officials were called to testify 151 times before committees of the Congress in fiscal year 2001. Our audit and evaluation products issued in fiscal year 2001 contained over 1,560 new recommendations targeting improvements in the economy, efficiency, and effectiveness of federal operations and programs that could yield significant financial and other benefits in the future. History tells us that many of these recommendations will contribute to important improvements. At the end of fiscal year 2001, 79 percent of the recommendations we made 4 years ago had been implemented. We use a 4-year interval because our historical data show that agencies often need this length of time to complete action on our recommendations. Actions on the recommendations in our products have a demonstrable effect on the workings of the federal government. During fiscal year 2001, we recorded hundreds of accomplishments providing financial and other benefits that were achieved based on actions taken by the Congress and federal agencies, and we made numerous other contributions that provided information or recommendations aiding congressional decision making or informing the public debate to a significant extent. For example, our findings and recommendations to improve government operations and reduce costs contributed to legislative and executive actions that yielded over $26.4 billion in measurable financial benefits. We achieve financial benefits when our findings and recommendations are used to make government services more efficient, improve the budgeting and spending of tax dollars, or strengthen the management of federal resources. Not all actions on our findings and recommendations produce measurable financial benefits. 
We recorded 799 actions that the Congress or executive agencies had taken based on our recommendations to improve the government’s accountability, operations, or services. The actions reported for fiscal year 2001 include actions to combat terrorism, strengthen public safety and consumer protection, improve computer security controls, and establish more effective and efficient government operations. In 1990, we began an effort to identify for the Congress those federal programs, functions, and operations that are most at risk for waste, fraud, abuse, and mismanagement. Every 2 years since 1993, with the beginning of each new Congress, we have published a summary assessment of those high-risk programs, functions, and operations. In 1999, we added the Performance and Accountability Series to identify the major performance and management issues confronting the primary executive branch agencies. In our January 2001 Performance and Accountability Series and High-Risk Update, we identified 97 major management challenges and program risks at 21 federal agencies as well as 22 high-risk areas and the actions needed to address these serious problems. Figure 1 shows the list, as of May 2002, of high-risk issues including the Postal Service’s transformational efforts and long-term outlook, which we added to the high-risk list in April 2001. Congressional leaders, who have historically referred extensively to these series in framing oversight hearing agendas, have strongly urged the administration and individual agencies to develop specific performance goals to address these pervasive problems. In addition, the President’s recently issued management agenda for reforming the federal government mirrors many of the issues that GAO has identified and reported on in these series, including a governmentwide initiative to focus on strategic management of human capital. 
We will be issuing a new Performance and Accountability Series and High-Risk Update at the start of the new Congress this coming January. The Government Management Reform Act of 1994 requires (1) GAO to annually audit the federal government’s consolidated financial statements and (2) the inspectors general of the 24 major federal agencies to annually audit the agencywide financial statements prepared by those agencies. Consistent with our approach on a full range of management and program issues, our work on the consolidated audit is done in coordination and cooperation with the inspectors general. The Comptroller General reported on March 29, 2002, on the U.S. government’s consolidated financial statements for fiscal years 2001 and 2000. As in the previous 4 fiscal years, we were unable to express an opinion on the consolidated financial statements because of certain material weaknesses in internal control and accounting and reporting issues. These conditions prevented us from being able to provide the Congress and the American citizens an opinion as to whether the consolidated financial statements are fairly stated in conformity with U.S. generally accepted accounting principles. While significant and important progress is being made in addressing the impediments to an opinion on the U.S. government’s consolidated financial statements, fundamental problems continue to (1) hamper the government’s ability to accurately report a significant portion of its assets, liabilities, and costs, (2) affect the government’s ability to accurately measure the full costs and financial performance of certain programs and effectively manage related operations, and (3) significantly impair the government’s ability to adequately safeguard certain significant assets and properly record various transactions. 
In August 2001, the principals of the Joint Financial Management Improvement Program (JFMIP)—Secretary of the Treasury O’Neill, Office of Management and Budget Director Daniels, Office of Personnel Management Director James, and Comptroller General Walker, head of GAO and chair of the group—began a series of periodic meetings that have resulted in unprecedented substantive deliberations and agreements focused on key financial management reform issues such as better defining measures for financial management success. These measures include being able to routinely provide timely, accurate, and useful financial information and having no material internal control weaknesses or material noncompliance with applicable laws, regulations, and requirements. In addition, the JFMIP principals have agreed to (1) significantly accelerate financial statement reporting so that the government’s financial statements are more timely and (2) discourage costly efforts designed to obtain unqualified opinions on financial statements without addressing underlying systems challenges. For fiscal year 2004, audited agency financial statements are to be issued no later than November 15, with the U.S. government’s audited consolidated financial statements becoming due by December 15. GAO also issues a wide range of standards, guidance, and management tools intended to assist the Congress and agencies in putting in place the structures, processes, and procedures needed to help avoid problems before they occur or develop into full-blown crises. For example, the Federal Managers’ Financial Integrity Act of 1982 (FMFIA) requires GAO to issue standards for internal control in government. Internal control is an integral part of an organization’s management that provides reasonable assurance that the following objectives are being achieved: effectiveness and efficiency of operations, reliability of financial reporting, and compliance with applicable laws and regulations. 
As such, the internal control standards that GAO issues provide an overall framework for establishing and maintaining internal control, and identifying and addressing major performance and management challenges and areas at greatest risk to waste, fraud, abuse, and mismanagement. A positive control environment is the foundation for the standards. Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. One factor is the integrity and ethical values maintained and demonstrated by management and staff. Agency management plays a key role in providing leadership in this area, especially setting and maintaining the organization’s ethical tone, providing guidance for proper behavior, removing temptations for unethical behavior, and providing discipline when appropriate. In addition to setting standards for internal control, GAO participates in the setting of the federal government’s accounting standards and is responsible for setting the generally accepted government auditing standards for auditors of federal programs and assistance. GAO also assists congressional and executive branch decision makers by issuing guides and tools for effective public management. For example, in addition to setting standards for internal control, we have issued detailed guidance and management tools to assist agencies in maintaining or implementing effective internal control and, when needed, to help determine what, where, and how improvements can be made. We have also issued guidance for agencies to address the critical governmentwide high-risk challenge of computer security. This work draws on lessons from leading public and private organizations to show the Congress and federal agencies the steps that can be taken to protect the integrity, confidentiality, and availability of the government’s data and the systems it relies on. 
Similarly, we have published guidance for the Congress and managers on dealing with the other governmentwide high-risk issue—human capital. These guides on human capital are assisting managers in adopting a more strategic approach to the use of their organization’s most important asset—its people. Overall, GAO has undertaken a major effort to identify ways agencies can effectively implement the statutory framework that the Congress has put in place to create a more results-oriented and accountable federal government. GAO has an investigations unit that focuses on investigating and exposing potential criminal misconduct and serious wrongdoing in programs that receive federal funds. The primary mission of this unit is to conduct investigations of alleged violations of federal criminal law and serious wrongdoing and to review law enforcement programs and operations, as requested by the Congress and the Comptroller General. Through investigations, our special investigations team develops examples of misconduct and wrongdoing that illustrate program weaknesses, demonstrate potential for abuse, and provide supporting evidence for GAO recommendations and congressional action. Investigators often work directly with other GAO teams on collaborative efforts that enhance the agency’s overall ability to identify and report on wrongdoing. Key issues in the investigations area are: fraudulent activity and regulatory noncompliance in federal programs; unethical conduct by federal employees and government officials, as well as fraud and misconduct in grant, loan, and entitlement programs; adequacy of federal agencies’ security systems, controls, and property as tested through proactive special operations; and integrity of federal law enforcement and investigative programs. One example of these collaborations between our investigations team and audit and evaluations teams is the use of forensic audit techniques to identify instances of fraud, waste, and abuse at various agencies. 
This approach combines financial auditor and special investigator skills with data mining and file comparison techniques to identify unusual trends and inconsistencies in agency records that may indicate fraudulent or improper activity. For example, by comparing a list of individuals who received government grants and loans to a list of people whose social security numbers indicate they have died, we identified people improperly receiving benefits. Data mining techniques have also been used to identify unusual government purchase card transactions that, upon further investigation, were determined to be abusive and improper purchases. Overall, in 2001 GAO referred 61 matters to the Department of Justice and other law enforcement and regulatory agencies for investigation, and its special investigations accounted for $1.8 billion in financial benefits. GAO also maintains a system for receiving reports from the public on waste, fraud, and abuse in federally funded programs. Known as the GAO FraudNET, the system received more than 800 cases in 2001. Reports of alleged mismanagement and wrongdoing covered topics as varied as misappropriation of funds, security violations, and contractor fraud. Most of the matters reported to GAO were referred to inspectors general of the executive branch for further action or information. Other matters that indicate broader problems or systemic issues of congressional interest are referred to GAO’s investigations unit or other GAO teams. | The United States General Accounting Office (GAO) is an independent, professional, nonpartisan agency in the legislative branch that is commonly referred to as the investigative arm of Congress. Congress created GAO in the Budget and Accounting Act of 1921 to assist in the discharge of its core constitutional powers--the power to investigate and oversee the activities of the executive branch, the power to control the use of federal funds, and the power to make laws. 
All of GAO's efforts on behalf of Congress are guided by three core values: (1) Accountability--GAO helps Congress oversee federal programs and operations to ensure accountability to the American people; (2) Integrity--GAO sets high standards in the conduct of its work. GAO takes a professional, objective, fact-based, non-partisan, nonideological, fair, and balanced approach on all activities; and (3) Reliability--GAO produces high-quality reports, testimonies, briefings, legal opinions, and other products and services that are timely, accurate, useful, clear, and candid. |
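The file-comparison technique described in the passage above (matching benefit recipients against a deceased-persons list by social security number) can be sketched in a few lines. This is an illustrative sketch only; the function name, field names, and all data below are hypothetical assumptions, not taken from the GAO report.

```python
# Illustrative sketch of the forensic file-comparison technique: flag
# benefit recipients whose SSN appears in a deceased-persons file.
# All names, SSNs, and field names here are hypothetical examples.

def flag_payments_to_deceased(recipients, deceased_ssns):
    """Return recipient records whose SSN matches the deceased list."""
    deceased = set(deceased_ssns)  # set membership test is O(1) per record
    return [r for r in recipients if r["ssn"] in deceased]

recipients = [
    {"name": "A. Example", "ssn": "000-00-0001", "program": "grant"},
    {"name": "B. Example", "ssn": "000-00-0002", "program": "loan"},
]
deceased_ssns = ["000-00-0002"]

# Records flagged here would warrant further investigation, not an
# automatic conclusion of fraud.
flagged = flag_payments_to_deceased(recipients, deceased_ssns)
```

In practice the same join-and-filter pattern would run against millions of records in a database or analytics tool rather than in-memory lists, but the underlying comparison is the one shown.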
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Army Guard is the oldest component of any of the uniformed services. It traces its roots to the colonial militia, and claims a “birth” of 1636. Today, the Army Guard exists in 54 locations that include all 50 states, the District of Columbia, and three territories: Guam, the Virgin Islands, and Puerto Rico. There are about 2,300 Army Guard units within these locations and over 350,000 Army Guard members. During peacetime, Army Guard units report to the adjutant generals of their states or territories, or in the case of the District of Columbia, to the Commanding General. Each adjutant general reports to the governor of the state, or in the case of the District of Columbia, the mayor. At the state level, the governors have the ability, under the Constitution of the United States, to call up members of the Army Guard in times of domestic emergency or need. The Army Guard’s state mission is perhaps the most visible and well known. Army Guard units battle fires or help communities deal with floods, tornadoes, hurricanes, snowstorms, or other emergency situations. In times of civil unrest, the citizens of a state rely on the Army Guard to respond, if needed. During national emergencies, however, the President has the authority to mobilize the Army Guard, putting them in federal duty status. While federalized, the units answer to the Combatant Commander of the theater in which they are operating and, ultimately, to the President. Even when not federalized, the Army Guard has a federal mission to maintain properly trained and equipped units, available for prompt mobilization for war, national emergency, or as otherwise needed. Nonfederalized Army Guard members’ pay and allowances are paid with state funds while federalized Army Guard members’ pay and allowances are paid with federal funds. 
Typically, Army Guard members enlist for 8 years and are entitled to a number of benefits while serving in the Army Guard, including those for health care, life insurance, and other state-specific benefits. After their enlistment periods, former Army Guard members are entitled to veterans’ benefits, such as veterans’ health care and burial benefits. Army Guard members are required to attend one drill weekend each month and one annual training period (usually 2 weeks in the summer) each year. Initially, all nonprior service personnel are required to attend initial entry training, also known as Basic Training. After Basic Training, soldiers go to their Advanced Individual Training, which teaches them the special skills they will need for their jobs in the Army Guard. This training can usually be scheduled to accommodate civilian job or school constraints. The Army Guard has armories and training facilities in more than 2,800 communities. The Army Guard is a partner with the active Army and the Army Reserves in fulfilling the country's military needs. The National Guard Bureau (NGB) assists the Army Guard in this partnership. NGB is a joint bureau of the Departments of the Army and the Air Force and is charged with overseeing the federal functions of the Army Guard and the Air Guard. In this capacity, NGB helps the Army Guard and the Air Guard procure funding and administer policies. NGB also acts as a liaison between the Departments of the Army and Air Force and the states. All Army forces are integrated under DOD’s “total force” concept. DOD’s total force concept is based on the premise that it is not practically feasible to maintain active duty forces sufficient to meet all possible war contingencies. Under this concept, DOD’s active and reserve components are to be blended into a cohesive total force to meet a given mission. 
On September 14, 2001, the President declared a national emergency as a result of the terrorist attacks on the World Trade Center and the Pentagon and the continuing and immediate threat of further attacks on the United States. Concurrent with this declaration, the President authorized the Secretary of Defense to call troops to active duty pursuant to 10 U.S.C. Section 12302. The Secretary of Defense delegated to the Secretary of the Army the authority to order Army Guard soldiers to active duty as part of the overall mobilization effort. Approximately 93,000 Army Guard soldiers were activated as of March 2003. At that time, Army Guard soldiers accounted for 34 percent of the total reserve components mobilized in response to the terrorist attacks on September 11, 2001. The active duty federal missions established in response to the September 2001 national emergency were categorized into two operations: Operation Enduring Freedom and Operation Noble Eagle. In general, missions to fight terrorism outside the United States were categorized under Operation Enduring Freedom, while missions to provide domestic defense were categorized as Operation Noble Eagle. For example, Army Guard soldiers participated in direct combat in Afghanistan under Operation Enduring Freedom. U.S. homeland security missions, such as guarding the Pentagon, airports, nuclear power plants, domestic water supplies, bridges, tunnels, and other military assets were conducted under Operation Noble Eagle. The Army Guard also supported federal peacekeeping operations in Southwest Asia with Operation Desert Spring and in Kosovo with Operation Joint Guardian under various other military operations. While on active duty, all Army Guard soldiers earn various statutorily authorized pays and allowances. 
The types of pay and allowances Army Guard soldiers are eligible to receive vary depending upon rank and length of service, dependency status, skills and certifications acquired, duty location, and the difficulty of the assignment. While Army Guard soldiers mobilized to active duty may be entitled to receive additional pays and allowances, we focused on 14 basic types of pays and allowances applicable to the Army Guard units we selected for case studies. As shown in table 1, we categorized these 14 pay and allowance types into two groups: (1) pays, including basic pay, special duty assignment pay, parachute jumping and foreign language proficiency skill-based pays, and location-based hostile fire and hardship duty pays and (2) allowances, including allowances for housing, subsistence, family separation, and cost of living for the continental United States. In addition, Army Guard soldiers may be eligible for tax advantages associated with their mobilization to active duty. That is, mobilized Army Guard soldiers assigned to or working in a combat zone are entitled to exclude from taxable income certain military pay that would otherwise be taxable. As shown in figure 1, there are three key phases associated with starting and stopping relevant pays and allowances for mobilized Army Guard soldiers: (1) initial mobilization (primarily through the Soldier Readiness Processing), (2) deployment, which includes carrying out assigned mission operations while on active duty, and (3) demobilization. Army Guard units and state-level command support components, as well as active Army finance components and DFAS, have key roles in this process. 
In addition, there are five key computer systems involved in authorizing, entering, and processing active duty pays to mobilized Army Guard soldiers through the three key phases of their mobilization: the Army’s standard order writing system, the Automated Fund Control Order System (AFCOS); the Army Guard’s personnel system, the Standard Installation Division Personnel Reporting System (SIDPERS); the Army Guard’s pay input system, the JUMPS Standard Terminal Input System (JUSTIS); the active Army’s pay input system, the Defense Military Pay Office System (DMO); and DFAS’ Army Guard and Reserve pay system, DJMS-RC. During the initial mobilization, units receive an alert order and begin a mobilization preparation program, Soldier Readiness Processing (SRP). The financial portion of the SRP is conducted by one of the 54 United States Property and Fiscal Offices (USPFO) to verify the accuracy of pay records for each soldier and to make changes to pay records based on appropriate supporting documentation for the pays and allowances that the soldiers will be entitled to receive when initially mobilized. If documentation, such as birth certificates for dependents or parachute jumping certifications, is missing, soldiers have a few days to obtain the necessary documents. The unit commander is responsible for ensuring that all personnel data for each soldier under their command are current. When the unit receives a mobilization order, USPFO pay technicians are responsible for initiating basic pay and allowances by manually entering the start and stop dates into DJMS-RC for the active duty tour that appears on each soldier’s mobilization order. Army Guard pay technicians use JUSTIS to access and record data in DJMS-RC. By entering the soldier’s Social Security number and mobilization order number into JUSTIS, the pay technician can view the pay data in DJMS-RC, ensure that they are complete, and enter any missing data supported by documentation provided by the soldier. 
If done correctly, soldiers will start to receive basic pay, basic allowances for housing, basic allowances for subsistence, and jump pay automatically based on the start date entered into DJMS-RC. After soldiers complete their initial SRP and receive individual mobilization orders, they travel as a unit to a mobilization station. At the mobilization station, mobilized Army Guard personnel undergo a second SRP review. In this second SRP, mobilization station personnel are responsible for confirming or correcting the results of the first SRP, including making necessary reviews to ensure that each soldier’s records are current. Mobilization pay technicians are required to promptly initiate pays that were not initiated during the first SRP and enter appropriate pay changes into DJMS-RC. The mobilization station commander is required to certify that the unit is ready for mobilization, including ensuring that all authorized active duty pays are in place for the soldiers in the unit, at the end of this process. DJMS-RC will generate certain pays and allowances automatically for each 2-week pay period until the stop date entered in DJMS-RC. If entered correctly, the stop date in DJMS-RC will be the end of active duty tour date documented on the soldier’s mobilization orders. This automated feature is intended to prevent erroneous payments to soldiers beyond their authorized active duty status. However, human intervention is required when a pay or allowance error is detected or an event occurs that requires a change in the soldier’s pay and personnel file. For example, a change in dependent status, such as marriage or divorce, a promotion, jump pay disqualification, or being demobilized before an active duty tour ends would change or eliminate some of the pays and allowances a soldier would be entitled to receive. 
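The automated start-and-stop behavior described above can be sketched as a simple generator. This is a hypothetical illustration, not DJMS-RC's actual implementation; the dates are invented:

```python
from datetime import date, timedelta

PAY_PERIOD = timedelta(days=14)  # DJMS-RC generates certain pays each 2-week period


def automatic_pay_periods(tour_start, tour_stop):
    """Yield the start of each 2-week pay period between the tour start
    and stop dates entered in DJMS-RC from the mobilization order.

    Once past tour_stop, generation ceases with no further input -- the
    automated feature intended to prevent payments to soldiers beyond
    their authorized active duty status.
    """
    current = tour_start
    while current <= tour_stop:
        yield current
        current += PAY_PERIOD


# Hypothetical 1-year tour: 27 two-week period starts fall within it.
periods = list(automatic_pay_periods(date(2001, 12, 1), date(2002, 11, 30)))
```

The key point the sketch captures is that routine pays recur with no transaction input, while any pay-affecting event (promotion, divorce, early demobilization) still requires a manually entered transaction.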
All pays and allowances and subsequent changes are documented in the Master Military Pay Account (MMPA)—the central pay record repository in DJMS-RC for each soldier. While deployed on active duty, there are several Army Guard (USPFO), active Army, and DFAS components involved in paying mobilized Army Guard personnel. The active Army servicing finance office, which may be within the United States or in a foreign country, is responsible for initiating pays earned while the soldier is deployed, such as hostile fire pay and hardship duty pay. Pay technicians start hostile fire pay for each soldier listed on a battle roster or flight manifest. Thereafter, hostile fire pay is automatically generated each pay period. Other location-based pays, such as hardship duty, require pay transactions each month. The servicing finance office for the deployed phase is under the jurisdiction of the active Army. Active Army servicing finance offices use DMO to enter pay transactions into DJMS-RC. Under certain conditions, either active Army pay servicing offices or USPFOs can process applicable pay-altering transactions, such as those related to a soldier’s early separation from active duty or a soldier’s death. Upon completion of an active duty tour, soldiers normally return to the same Army locations from which they were mobilized for demobilization out-processing before returning to their home units. Demobilization personnel, employed by the active Army or Army Guard, are required to provide each soldier with a Release from Active Duty (REFRAD) order and a Form DD 214, Certificate of Release or Discharge from Active Duty. The demobilization station pay technicians are to use these documents as a basis for deactivating the soldier’s active duty pay and allowances as of the date of release from active duty. At this time, the supporting USPFO is responsible for discontinuing monthly input of all nonautomated pays and allowances. 
If the demobilization station did not take action to return a soldier to a demobilized status, the state USPFO has this responsibility. In 1995, the Army decided to process pays to mobilized Army Guard soldiers from the DJMS-RC system rather than the active Army payroll system used to pay mobilized Army Guard soldiers previously. According to the then Deputy Assistant Secretary of the Army (Financial Operations), this decision was made as an interim measure (pending the conversion to a single system to pay both active and reserve component soldiers) based on the belief that DJMS-RC provides the best service to the reserve component soldiers. DJMS-RC is a large, complex, and sensitive payroll computer application used to pay Army and Air National Guard and Army and Air Force Reserve personnel. DFAS has primary responsibility for developing guidance and managing operations of the system. DFAS Indianapolis is the central site for all Army military pay and is responsible for maintaining over 1 million MMPAs for the Army. Each MMPA contains a soldier’s pay-related personnel, entitlement, and performance data. All pay-related transactions that are entered into DJMS-RC, through JUSTIS and DMO, update the MMPA. Personnel data contained in the MMPA are generated from SIDPERS—a personnel database maintained and used by the Army Guard at the 54 state-level personnel offices to capture data on personnel-related actions (e.g. discharge, promotion, demotion actions that impact soldiers’ pay). DFAS Denver is responsible for designing, developing, and maintaining customer requirements for the Military and Civilian Pay Services business line, and its Technical Support Office designs and maintains the DJMS-RC core pay software. DFAS-Indianapolis serves as a “gatekeeper” in that it monitors the daily status of data uploaded to DJMS-RC to ensure that all transactions are received and processed in DJMS-RC. 
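The role of the MMPA as the single pay record that both input subsystems update can be sketched as follows. This is a simplified model with invented field names; the real MMPA carries far more personnel, entitlement, and performance data:

```python
from dataclasses import dataclass, field


@dataclass
class MMPA:
    """Simplified sketch of a Master Military Pay Account: the central
    DJMS-RC pay record for one soldier (field names invented here)."""
    soldier_id: str
    entitlements: dict = field(default_factory=dict)  # pay type -> monthly amount
    history: list = field(default_factory=list)       # every transaction updates the MMPA

    def apply(self, source, pay_type, amount):
        # Pay transactions reach DJMS-RC only through its two input
        # subsystems: JUSTIS (state Army Guard) and DMO (active Army).
        if source not in ("JUSTIS", "DMO"):
            raise ValueError("unknown input subsystem")
        self.entitlements[pay_type] = amount
        self.history.append((source, pay_type, amount))


acct = MMPA("hypothetical-0001")
acct.apply("JUSTIS", "basic_pay", 2500.00)     # started by a USPFO at the first SRP
acct.apply("DMO", "hostile_fire_pay", 150.00)  # started by an in-theater finance office
```

The sketch shows why coordination matters: offices in different organizations, using different subsystems, all write to the same account for the same soldier.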
Users can sign on to DJMS-RC directly through online interactive software used for file transfer transactions, online queries of MMPAs, and downloads of data files and various DJMS-RC reports. JUSTIS is the pay input subsystem used by the 54 state-level Army Guard commands, including the USPFOs, to update DJMS-RC. Database management of JUSTIS is decentralized in that each of the 54 sites owns and maintains its own JUSTIS database. This subsystem processes transactions for submission to DJMS-RC to create payments for Army National Guard soldiers. JUSTIS receives certain pay-affecting personnel data from SIDPERS. JUSTIS receives a limited amount of mobilization order data directly from AFCOS. These systems share the same operating system platform and certain database tables. However, additional data needed to create pay transactions associated with active duty pay and allowances must be entered manually into JUSTIS from hard copies of mobilization orders. DMO is the pay input subsystem used by active Army finance offices and the DOD military pay offices, including those in overseas locations such as Europe, Korea, and Iraq, to update DJMS-RC. Active Army finance offices use this subsystem both to create transactions for military pay and allowances that were not reported at the time of mobilization and to enter location-based pays, such as hostile fire and hardship duty pays, as well as combat zone tax exclusion transactions. We found significant pay problems at the six Army Guard units we audited. These problems related to processes, human capital, and systems. 
The six units we audited, including three special forces and three military police units, were as follows:

Special forces units:
- Colorado: B Company, 5th Battalion, 19th Special Forces
- Virginia: B Company, 3rd Battalion, 20th Special Forces
- West Virginia: C Company, 2nd Battalion, 19th Special Forces

Military police units:
- Mississippi: 114th Military Police Company
- California: 49th Military Police Headquarters and Headquarters
- Maryland: 200th Military Police Company

In addition, we conducted a limited review of the pay experiences of a seventh unit mobilized more recently and deployed to Iraq in April 2003— the Colorado Army Guard’s 220th Military Police Company—to determine the extent to which the pay problems we found in our six case study units persisted. As shown in figure 2, these units were deployed to various locations in the United States and overseas in support of Operations Noble Eagle and Enduring Freedom. These units were deployed to help perform a variety of critical mission operations, including search and destroy missions in Afghanistan against Taliban and al Qaeda forces, guard duty for al Qaeda prisoners in Cuba, providing security at the Pentagon shortly after the September 11, 2001, terrorist attacks, and military convoy security and highway patrols in Iraq. For the six units we audited, we found significant pay problems involving over one million dollars in errors. These problems consisted of underpayments, overpayments, and late payments that occurred during all three phases of Army Guard mobilization to active duty. Overall, for the 18-month period from October 1, 2001, through March 31, 2003, we identified overpayments, underpayments, and late payments at the six case study units estimated at $691,000, $67,000, and $245,000, respectively. In addition, for one unit, these pay problems resulted in largely erroneous debts totaling $1.6 million. 
Overall, we found that 450 of the 481 soldiers from our case study units had at least one pay problem associated with their mobilization to active duty. Table 2 shows the number of soldiers with at least one pay problem during each of the three phases of active duty mobilization. Due to the lack of supporting documents at the state, unit, and battalion levels, we may not have identified all of the pay problems related to the active duty mobilizations of these units. We have provided documentation for the pay problems we identified to appropriate DOD officials for further research to determine whether additional amounts are owed to the government or the soldiers. The payment problems we identified at the six case study units did not include instances of fraudulent payments, which were a major finding resulting from the further investigation of improper payments found in our 1993 audit of Army military payroll. Nonetheless, we found the inaccurate, late, and missing pays and associated erroneous debts found during our current audit had a profound financial impact on individual soldiers and their families. Some of the pay problems we identified included the following. DOD erroneously billed 34 soldiers in a Colorado National Guard Special Forces unit an average of $48,000 each. Though we first notified DOD of these issues in April and sent a follow-up letter in June 2003, the largely erroneous total debt for these soldiers of about $1.6 million remained unresolved at the end of our audit in September 2003. As a result of confusion over responsibility for entering transactions associated with a Colorado soldier’s promotion, the soldier’s spouse had to obtain a grant from the Colorado National Guard to pay bills while her husband was in Afghanistan. Some soldiers did not receive payments for up to 6 months after mobilization and others still had not received certain payments by the conclusion of our audit work. 
Ninety-one of 100 members of a Mississippi National Guard military police unit that was deployed to Guantanamo Bay, Cuba, did not receive the correct amount of Hardship Duty Pay. One soldier from the Mississippi unit was paid $9,400 in active duty pay during the 3 months following an early discharge for drug-related charges. Forty-eight of 51 soldiers in a California National Guard military police unit received late payments because the unit armory did not have a copy machine available to make copies of needed pay-related documents. Four Virginia Special Forces soldiers who were injured in Afghanistan and unable to resume their civilian jobs experienced problems in receiving entitled active duty pays and related health care. In some cases, the problems we identified may have distracted these professional soldiers from mission requirements, as they spent considerable time and effort while deployed attempting to address these issues. Further, these problems may adversely affect the Army’s ability to retain these valuable personnel. Appendixes I–VI provide details of the pay experiences of the soldiers at the case study units we audited. Procedural requirements, particularly in light of the potentially hundreds of organizations and thousands of personnel involved, were not well understood or consistently applied with respect to determining (1) the actions required to make timely, accurate active duty pays to mobilized Army Guard soldiers and (2) the component responsible, among Army Guard, active Army, and DFAS, for taking the required actions. Further, we found instances in which existing guidance was out of date—some of which still reflected practices in place in 1991 during Operation Desert Storm. These complex, cumbersome processes, which were developed in piecemeal fashion over a number of years, provide numerous opportunities for control breakdowns. 
We found that a substantial number of payment errors were caused, at least in part, by unclear procedural requirements for processing active duty pay and allowance entitlements to mobilized Army Guard soldiers. Overall, as shown in figures 3, 4 and 5, we found that an extensive, cumbersome, and labor-intensive process has evolved to pay mobilized Army Guard soldiers for their active duty service. While figures 3, 4 and 5 provide an overview of the process, particularly of the types of DOD organizations involved, they do not fully capture the numbers of different DOD components involved. Specifically, thousands of Army Guard (individual units and state-level organizations), active Army, and DFAS components may be involved in authorizing, processing, and paying mobilized Army Guard soldiers, including:
- an estimated 2,300 local Army Guard home units, unit commanders, and unit administrators that are involved in maintaining up-to-date soldier personnel and related pay records;
- 54 state-level Army Guard commands, including both USPFOs and state-level personnel offices involved in authorizing and starting active duty pay transactions;
- active Army finance offices or DOD Military Pay Offices at over 15 mobilization stations across the United States that are involved in processing Army Guard personnel to and from their active duty locations;
- 28 active Army area servicing finance offices at over 50 locations worldwide that are involved in servicing Army Guard soldiers’ location-based active duty pays;
- DFAS-Indianapolis—the central site for processing Army Guard soldiers’ active duty pays;
- DFAS-Denver—the central site for maintaining the pay system used to pay Army Guard soldiers;
- DFAS-Cleveland—the central site for handling soldier military pay; and
- the Army National Guard Financial Services Center—the Army Guard organization responsible for providing guidance, training, and oversight and coordination for active duty pays to Army Guard personnel. 
Several of these organizations with key roles in payroll payments to mobilized Army Guard soldiers, including DOD, DFAS, Army, and the Army Guard, have issued their own implementing regulations, policies, and procedures. In addition, we found unwritten practices in place at some of the case study locations we audited. Existing written policies and procedures are voluminous—the DOD Financial Management Regulations (FMR) guidance on pay and allowance entitlements alone covers 65 chapters. As a result of their size and continually evolving nature as legal, procedural, and system requirements change, we found that policies and procedures were not well understood or consistently applied across the potentially hundreds of organizations and thousands of personnel involved in paying mobilized Army Guard personnel. These processes have been developed in piecemeal fashion over a number of years to accommodate changing legislative requirements, DOD policies, and the unique operating practices of different DOD organizations and systems involved in these processes. As discussed in the following sections, these extensive and evolving policies and procedures were confusing both across various organizations and personnel involved in their implementation and, more importantly, to the Army Guard soldiers who are the intended beneficiaries. In addition, these cumbersome policies and procedures contributed to the pay errors we identified. We found instances in which unclear procedural requirements for processing active duty pays contributed to erroneous and late pays and allowances to mobilized Army Guard soldiers. For example, we found existing policies and procedural guidance were unclear with respect to the following issues. Amending active duty orders. A significant problem we found at the case study locations we audited concerned procedures that should be followed for amending active duty orders. 
We found instances at two of our case study locations in which military pay technicians at either a USPFO or an active Army finance office made errors in amending existing orders. These errors resulted in establishing virtually all prior pays made under the original orders as debts. A major contributor to the pay errors we found in this area was that existing procedures did not clearly state how USPFO and active Army finance personnel should modify existing order tour start and stop information in the pay system when necessary without also unintentionally adversely affecting previous pays and allowances. Also, these procedures did not warn USPFO and active Army personnel that using alternative methods will automatically result in an erroneous debt assessment and garnishment of up to two-thirds of the soldier’s pay. We identified over $1 million in largely erroneous debt transactions as a result of breakdowns in this area. At the Colorado Special Forces unit, we found that actions taken by the Colorado USPFO in an attempt to amend 34 soldiers’ orders resulted in reversing the active pay and allowances the soldiers received for 11 of the 12 months they were deployed on active duty in Afghanistan and instead establishing these payments as debts. These 34 soldiers received notice on their Leave and Earnings Statements that they owed the government an average of approximately $48,000 per soldier, for a total largely erroneous debt of $1.6 million. Although we informed DOD of this problem in April 2003, as of the end of our audit fieldwork in September 2003, the problems at the Colorado Special Forces unit had not been resolved. DOD officials did advise us that, as a result of our work, they implemented a software change on September 18, 2003, intended to help avoid such problems in the future. Specifically, we were told new warning messages have been added to JUSTIS that will appear when a transaction is entered to cancel or amend a tour of duty. 
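The failure mode behind these largely erroneous debts can be illustrated with a simplified sketch (hypothetical monthly amounts; not the actual DJMS-RC debt logic): canceling the whole tour reverses every prior payment as a debt, whereas amending only the stop date reverses just the payments that fall outside the corrected tour.

```python
from datetime import date

# Hypothetical monthly active duty payments for a 12-month tour.
payments = [(date(2002, month, 15), 3000.00) for month in range(1, 13)]


def debt_from_cancellation(payments):
    """Failure mode described in the report: amending a tour by
    canceling the original order reverses every payment already made
    under it, booking the full amount as a debt against the soldier."""
    return sum(amount for _, amount in payments)


def debt_from_amended_stop_date(payments, new_stop):
    """Amending only the stop date: just the payments made after the
    new stop date are legitimately reversed."""
    return sum(amount for paid_on, amount in payments if paid_on > new_stop)


erroneous = debt_from_cancellation(payments)                           # whole tour reversed
legitimate = debt_from_amended_stop_date(payments, date(2002, 11, 30)) # only December reversed
```

Under these invented figures, cancellation books twelve months of valid pay as debt while the in-place amendment reverses only one month, which is the difference between the $48,000 average debts the Colorado soldiers saw and the small corrections actually warranted.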
The new warnings will advise that the transaction will or could result in a collection action and will ask the pay technician to confirm that is their intent. While we did not verify the effectiveness of this change, it has the potential to reduce pay problems associated with errors made in amending orders. Required time frames for processing pay transactions. Written requirements did not exist with respect to the maximum amount of time that should elapse between the receipt by the responsible Army Guard or Army pay office of proper documentation and processing the related pay transaction through the pay system. While some of the locations we audited had established informal processing targets, for example, 3 days, we also found numerous instances in which available documentation indicated lengthy delays in processing pay transactions after pay offices received supporting documentation. These lengthy processing delays resulted in late payroll payments to deployed soldiers. Required monthly reconciliations of pay and personnel data. The case study units lacked specific written requirements for conducting and documenting monthly reconciliations of pay and personnel mismatch reports and unit commanders’ finance reports. Available documentation showed that these controls were either not done or were not done consistently or timely. Because, as discussed later in this report, the processing of Army Guard pay relies on systems that are not integrated or effectively interfaced, these after-the-fact detective controls are critical to detecting and correcting erroneous or fraudulent pays. To be effective, the 54 state-level Army Guard commands must individually reconcile common data elements in all 54 state-operated personnel databases for Army Guard personnel with corresponding DJMS-RC pay records at least monthly. 
Because of the lack of clarity in existing procedural requirements in this area, we found that several of the locations we visited had established standard but undocumented reconciliation practices. However, at the six case study locations we audited, we found that although all the USPFOs told us they received monthly SIDPERS and DJMS-RC mismatch reports, they did not always fully reconcile and make all necessary system corrections each month. Lacking specific written policies and procedural requirements for such reconciliations, several of the case study locations we audited established a standard, but undocumented, practice of reconciling roughly a third of the common data elements every month, so that all elements were to be reconciled and all necessary corrective actions taken over a 3-month period. However, documentation was not always retained to determine the extent to which these reconciliations were done and if they were done consistently. Our findings are similar to those in reports from Army Guard operational reviews. For example, the results of the most recent reviews at three of the six case study locations we audited showed that state Army Guard personnel were not performing effective reconciliations of pay and personnel record discrepancies each month. One such report concluded, “Failure to reconcile the Personnel/Pay Mismatch listing monthly provides a perfect opportunity to establish fraudulent personnel or pay accounts.” Several of the instances we identified in which soldiers received pay and allowances for many months after their release from active duty likely would have been identified sooner had USPFO military pay personnel investigated the personnel/pay mismatch report discrepancies more frequently. For example, at one case study unit, 34 soldiers received pay for several months past their official discharge dates. 
Although records were not available to confirm that these overpayments were reported as discrepancies on monthly mismatch reports, the USPFO military pay supervisor told us that at the time the mismatch reports were not being used to identify and correct pay-affecting errors. As discussed later, at another case study unit, a mobilized soldier was released from active duty and discharged from the Army in June 2002, earlier than his planned release date due to alleged involvement in drug-related activities. However, the soldier continued to receive active duty pay. The soldier’s SIDPERS personnel record was updated on July 2, 2002, to reflect the discharge. According to pay records, the soldier’s pay continued until the USPFO military pay supervisor identified the discrepancy on the September 25, 2002, personnel/pay mismatch report and initiated action that stopped the soldier’s pay effective September 30, 2002. However, because this discrepancy was not identified until late September, the soldier received $9,400 in extra pay following his discharge from the Army. In addition, while as discussed previously, we found a number of instances in which Army Guard soldiers’ active duty pays continued after their demobilization, available documentation showed only one instance in the six case study units we visited in which a reconciliation of the unit commander’s finance report resulted in action to stop improper active duty pay and allowances. Specifically, available documentation shows that an administrative clerk’s review of this report while the unit was mobilized in Guantanamo Bay, Cuba, resulted in action to stop active duty pay and allowances to a soldier who was previously demobilized. However, it is also important to note that while these reconciliations are an important after-the-fact detective control, they are limited because they can only detect situations in which payroll and personnel records do not agree. 
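The monthly reconciliation amounts to comparing common data elements between each soldier's personnel (SIDPERS) and pay (DJMS-RC) records; every disagreement becomes a line on the mismatch report for the USPFO to research and correct. A minimal sketch, with invented element names and record layouts:

```python
def mismatch_report(sidpers, djms_rc, elements=("grade", "duty_status")):
    """Compare common data elements between each soldier's personnel
    record and pay record; return one tuple per disagreement, the way
    each line on the monthly mismatch report flags a discrepancy."""
    mismatches = []
    for soldier_id, personnel in sidpers.items():
        pay = djms_rc.get(soldier_id, {})
        for element in elements:
            if personnel.get(element) != pay.get(element):
                mismatches.append((soldier_id, element,
                                   personnel.get(element), pay.get(element)))
    return mismatches


# A discharged soldier whose active duty pay was never stopped -- the
# kind of discrepancy that sat unresearched for months in the cases above.
sidpers = {"0001": {"grade": "E-5", "duty_status": "discharged"}}
djms_rc = {"0001": {"grade": "E-5", "duty_status": "active duty"}}
report = mismatch_report(sidpers, djms_rc)
```

As the report notes, this control is inherently after-the-fact and detects nothing when the two records agree but are both wrong, which is why errors that updated neither record escaped it entirely.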
A number of pay errors we identified resulted from the fact that neither personnel nor pay records were updated. Soldiers returning from deployments earlier than their units. For four of our case study units, we found instances in which Army Guard soldiers’ active duty pays were not stopped at the end of their active duty tours when they were released from active duty earlier than their units. We found procedural guidance did not clearly specify how to carry out assigned responsibilities for soldiers who return from active duty earlier than their units. DFAS-Indianapolis guidance provides only that “the supporting USPFO will be responsible for validating the status of any soldier who does not return to a demobilized status with a unit.” The guidance did not state how the USPFO should be informed that a soldier did not return with his or her unit, or how the USPFO was to take action to validate the status of such soldiers. At one of our case study locations, officials at the USPFO informed us that they became aware that a soldier had returned early from a deployment when the soldier appeared at a weekend drill while his unit was still deployed. Data input and eligibility requirements for housing and family separation allowances. Our audit work at two of our case study locations indicated that procedural guidance was not clear with respect to transaction entry and eligibility requirements for the basic allowance for housing and the family separation allowance, respectively. For example, during our audit work at one of our case study locations, we determined that because of inconsistent interpretations of existing guidance for “dependents” in entering transactions to start paying soldiers’ basic allowance for housing, a number of Maryland soldiers were not paid the correct amount. 
At another case study location, we found that existing guidance on eligibility determination was misinterpreted so that soldiers were erroneously refused the “single parent soldiers family separation allowance” to which they were entitled. We also found that existing policies and procedures were unclear with respect to organizational responsibilities. Confusion centered principally on pay processing responsibility for Army Guard soldiers as they move from state control to federal control and back again. To be effective, current processes rely on close coordination and communication between state organizations (the Army Guard unit and state-level commands) and federal organizations (active Army finance locations at mobilization/demobilization stations and at area servicing finance offices). However, we found a significant number of instances in which critical coordination requirements were not clearly defined.
Individual Case Illustration: Confusion over Responsibility for Entering Pay Transactions Results in Family Obtaining a Grant to Pay Bills
A sergeant incurred pay problems during his mobilization and deployment to Afghanistan in support of Operation Enduring Freedom that caused financial hardship for his family while he was deployed. In this case, the active Army and his state's USPFO were confused as to responsibility for processing pay input transactions associated with a promotion. Specifically, pay input transactions were required for his promotion from a sergeant first class (E-7) to master sergeant (E-8), his demotion back to an E-7, and a second promotion back to an E-8. The end result was that the soldier was overpaid during the period of his demotion. DFAS garnished his wages and collected approximately $1,100 of the soldier's salary. These garnishments reduced the soldier's net pay to less than 50 percent of the amount he had been receiving.
As a result, the soldier's wife had to obtain a grant of $500 from the Colorado National Guard's Family Support Group to pay bills. DFAS-Indianapolis mobilization procedures authorize the Army Guard’s USPFOs and the active Army’s mobilization station and in-theater finance offices to enter transactions for deployed soldiers. However, we found that existing guidance did not provide for clear responsibility and accountability between USPFOs and active Army mobilization stations and in-theater servicing finance offices with respect to responsibility for entering transactions while in-theater and terminating payments for soldiers who separate early, are absent without leave, or are confined. For example, at one of our case study locations, we found that this broad authority for entering changes to soldiers’ pay records enabled almost simultaneous attempts by two different pay offices to enter pay transactions into DJMS-RC for the same soldier. As shown in the following illustration, at another case study location we found that, in part because of confusion over responsibility for starting location-based pays, a soldier was required to carry out a dangerous multiday mission to correct these payments.
Individual Case Illustration: Difficulty in Starting In-Theatre Pays
A sergeant with the West Virginia National Guard Special Forces unit was stationed in Uzbekistan with the rest of his unit, which was experiencing numerous pay problems. The sergeant told us that the local finance office in Uzbekistan did not have the systems up and ready, nor available personnel who were familiar with DJMS-RC. According to the sergeant, the active Army finance personnel were only taking care of the active Army soldiers’ pay issues.
When pay technicians at the West Virginia USPFO attempted to help take care of some of the West Virginia National Guard soldiers’ pay problems, they were told by personnel at DFAS-Indianapolis not to get involved because the active Army finance offices had primary responsibility for correcting the unit’s pay issues. Eventually, the sergeant was ordered to travel to the finance office at Camp Doha, Kuwait, to get its assistance in fixing the pay problems. This trip, during which a soldier had to set aside his in-theatre duties to attempt to resolve Army Guard pay issues, proved to be not only a major inconvenience to the sergeant but also life-threatening. At Camp Doha (an established finance office), a reserve pay finance unit was sent from the United States to deal with the reserve component soldiers’ pay issues. The sergeant left Uzbekistan for the 4-day trip to Kuwait. He first flew from Uzbekistan to Oman in a C-130 ambulatory aircraft (carrying wounded soldiers). From Oman, he flew to Masirah Island. From Masirah Island he flew to Kuwait International Airport, and from the airport he had a 45-minute drive to Camp Doha. The total travel time was 16 hours. The sergeant delivered a box of supporting documents used to input data into the system. He worked with the finance office personnel at Camp Doha to enter the pertinent data on each member of his battalion into DJMS-RC. After 2 days working at Camp Doha, the sergeant returned to the Kuwait International Airport, flew to Camp Snoopy in Qatar, and from there to Oman. On his flight between Oman and Uzbekistan, the sergeant’s plane took enemy fire and was forced to return to Oman. No injuries were reported. The next day, he left Oman and returned safely to Uzbekistan.
While guidance that permits both Army Guard and active Army military pay personnel to enter transactions for mobilized Army Guard soldiers provides flexibility in serving the soldiers, we found indications that it also contributed to soldiers being passed between the active Army and Army Guard servicing locations. For example, at another of our case study locations, we were told that several mobilized soldiers sought help in resolving active duty pay problems from the active Army’s mobilization station finance office at Fort Knox and later the finance office at Fort Campbell. However, officials at those active Army locations directed the soldiers back to the USPFO because they were Army Guard soldiers. We also found that procedures were not clear on how to ensure timely processing of active duty medical extensions for injured Army Guard soldiers. Army Regulation 135-381 provides that Army Guard soldiers who are incapacitated as a result of injury, illness, or disease that occurred while on active duty for more than 30 consecutive days are eligible for continued health benefits. That is, with medical extension status, soldiers are entitled to continue to receive active duty pays, allowances, and medical benefits while under a physician’s care. At the Virginia 20th Special Forces, B Company, 3rd Battalion, we found that four soldiers were eligible for continued active duty pay and associated medical benefits due to injuries incurred as a result of their involvement in Operation Enduring Freedom. Although these injuries precluded them from resuming their civilian jobs, they experienced significant pay problems as well as problems in receiving needed medical care, in part as a result of the lack of clearly defined implementing procedures in this area. All four soldiers experienced pay disruptions because existing guidance was not clear on actions needed to ensure that these soldiers were retained on active duty medical extensions.
One of the soldiers told us, “People did not know who was responsible for what. No one knew who to contact or what paperwork was needed….” As a result, all four have experienced gaps in receiving active duty pay and associated medical benefits while they remained under a physician’s care for injuries received while on their original active duty tour.
Individual Case Illustration: Unclear Regulations for Active Duty Medical Extension
Four soldiers who were injured while mobilized in Afghanistan for Operation Enduring Freedom told us that customer service was poor and no one was really looking after their interests or even cared about them. These problems resulted in numerous personal and financial difficulties for these soldiers.
· “Not having this resolved means that my family has had to make greater sacrifices and it leaves them in an unstable environment. This has caused great stress on my family that may lead to divorce.”
· “My orders ran out while awaiting surgery and the care center tried to deny me care. My savings account was reduced to nearly 0 because I was also not getting paid while I waited. I called the Inspector General at Walter Reed and my congressman. My orders were finally cut. In the end, I was discharged 2 weeks before my care should have been completed because the second amendment to my orders never came and I couldn’t afford to wait for them before I went back to work. The whole mess was blamed on the ‘state’ and nothing was ever done to fix it.”
· One sergeant was required to stay at Womack, the medical facility at Fort Bragg, North Carolina, while on medical extension. His home was in New Jersey. He had not been home for about 20 months, since his call to active duty. While he was recovering from his injuries, his wife was experiencing a high-risk pregnancy and depended upon her husband’s medical coverage, which was available while he remained in active duty status.
Even though she lived in New Jersey, she scheduled her medical appointments near Fort Bragg to be with her husband. The sergeant submitted multiple requests to extend his active duty medical extension status because the paperwork kept getting lost. Lapses in obtaining approvals for continued active duty medical extension status caused the sergeant’s military medical benefits and his active duty pay to be stopped several times. He told us that because of gaps in his medical extension orders, he was denied medical coverage, resulting in three delays in scheduling a surgery. He also told us he received medical bills associated with his wife’s hospitalization for the delivery of their premature baby as a result of these gaps in coverage.
We found several instances in which existing DOD and Army regulations and guidance in the pay and allowance area are outdated and conflict with more current legislative and DOD guidance. Some existing guidance reflected pay policies and procedures dating back to Operations Desert Shield and Desert Storm in 1991. While we were able to associate pay problems with only one of these outdated requirements, there is a risk that they may also have caused as yet unidentified pay problems. Further, having out-of-date requirements in current regulations may contribute to confusion and customer service issues. For example, the National Defense Authorization Act for Fiscal Year 1998 replaced the basic allowance for quarters and the variable housing allowance with the basic allowance for housing. However, volume 7A, chapter 27 of the DOD FMR, dated February 2002, still refers to the basic allowance for quarters and the variable housing allowance. The act also replaced foreign duty pay with hardship duty pay. Yet, chapter 8 of Army Regulation 37-104-4 (Military Pay and Allowances Policy and Procedures – Active Component) still refers to foreign duty pay.
Further, current DFAS and Army mobilization procedural guidance directs active Army finance units to use incorrect transaction codes to start soldiers’ hardship duty pays. Effective December 2001, DOD amended the FMR, Volume 7A, chapter 17, to establish a new “designated area” hardship duty pay with rates of $50, $100, or $150 per month, depending on the area. However, DFAS guidance dated December 19, 2002, directed mobilization site finance offices to use transaction codes that resulted in soldiers receiving a prior type of hardship duty pay that was eliminated in the December 2001 revisions. At one of our case study locations, we found that because the active Army finance office followed the outdated DFAS guidance for starting hardship duty pays, 91 of 100 Mississippi military police unit soldiers deployed to Cuba to guard al Qaeda prisoners were paid incorrect amounts of hardship duty pay. In addition, Army Regulation 37-104-4, dated September 1994, which was still in effect at the end of our audit work, provides that mobilized Army Guard soldiers are to be paid through the active Army pay system—the Defense Joint Military Pay System-Active Component (DJMS-AC). This procedure, in effect during the mobilizations to support Operations Desert Shield and Desert Storm, was changed in 1995. Specifically, in 1995, it was agreed that Army Guard personnel would no longer be moved to the active duty pay system, DJMS-AC, when mobilized to active duty, but would remain on the DJMS-RC system. Maintaining such outdated references in current policies may have contributed to confusion by USPFO and active Army finance personnel regarding required actions, particularly in light of the extensive set of policies and procedures now in effect in this area. With respect to human capital, we found weaknesses, including (1) insufficient resources allocated to pay processing, (2) inadequate training related to existing policies and procedures, and (3) poor customer service.
The lack of sufficient numbers of well-trained, competent military pay professionals can undermine the effectiveness of even a world-class integrated pay and personnel system. A sufficient number of well-trained military pay staff is particularly crucial given the extensive, cumbersome, and labor-intensive process requirements that have evolved to support active duty pay to Army Guard soldiers. GAO’s Standards for Internal Control in the Federal Government states that effective human capital practices are critical to establishing and maintaining a strong internal control environment. Specifically, management should take steps to ensure that its organization has the appropriate number of employees, and that appropriate human capital practices, including hiring, training, and retention, are in place and effectively operating. Our audit identified concerns with the numbers of knowledgeable personnel dedicated to entering and processing active duty pays and allowances to mobilized Army Guard soldiers. As discussed previously, both active Army and Army Guard military pay personnel play key roles in this area. Army Guard operating procedures provide that the primary responsibility for administering Army Guard soldiers’ pay as they are mobilized to active duty rests with the 54 USPFOs. These USPFOs are responsible for processing pay for drilling reservists along with the additional surge of processing required for initiating active duty pays for mobilized soldiers. Our audit work identified concerns with the human capital resources allocated to this area, primarily with respect to the Army Guard military pay processing at the state-level USPFOs. 
Specifically, we identified concerns with (1) the number of staff on board in the military pay sections of the USPFOs, (2) the relatively lower grade structure for nonsupervisory personnel in the USPFOs’ military pay sections in comparison with the grades for similar positions in other sections of the USPFO which led to difficulty in recruiting and retaining military pay processing personnel, and (3) as discussed in the following section, few of the military pay technicians on board at the six locations we audited had received formal training on pay eligibility and pay processing requirements for mobilized Army Guard personnel. NGB provides annual authorization for the overall staffing levels for each state. Within these overall staffing authorizations, each state allocates positions to each of the sections within a USPFO, including the military pay section and other sections such as vendor and contract pay. We compared the actual number of personnel on board to the NGB-authorized staffing level for the military pay sections at the case study locations we audited. During our audit period, two of the six case study locations had fewer military pay technicians on board than they were authorized. Officials at several of the six case study units also stated that restrictions on rank/grade at which USPFOs are allowed to hire personnel for their military pay sections made it difficult to recruit and retain employees. For example, a USPFO official told us that retaining personnel in the military pay section of the USPFOs was particularly difficult because similar administrative positions in other sections of the USPFO were typically higher paying and provided better benefits than the positions in the military pay section. The highest pay grade of the nonsupervisory pay technicians at the six case study units was a GS-7, and the majority of personnel were in the GS-6 pay grade. 
Although the Army and DFAS have established an agreement that in part seeks to ensure that resources are available to provide appropriately skilled pay personnel at mobilization stations to support surge processing, no such contingency staffing plan exists for the USPFOs. Specifically, a November 2002 memorandum of understanding between the Army and DFAS states that the active Army has primary responsibility to provide trained military or civilian resources to execute active duty pay and allowance surge processing requirements. However, this memorandum does not address the resources needed for surge processing at USPFOs. As discussed previously, pay problems at the case study units were caused in part by USPFO military pay sections attempting to process large numbers of pay transactions without sufficient numbers of knowledgeable personnel. Lacking sufficient numbers of personnel undermines the ability of the USPFO pay functions to carry out established control procedures. For example, our audits at several of the six case study units showed that there were no independent reviews of proposed pay transactions before they were submitted to DJMS-RC for processing. Such independent supervisory reviews are required by DJMS-RC operating procedures. However, a USPFO official told us that because of the limited number of pay technicians available to process pay transactions—particularly when processing massive numbers of transactions to start active duty pays at the same time—this requirement was often not followed. The Chief of Payroll at one of our case study locations told us that because they were currently understaffed, staff members worked 12 to 14 hours a day and still had backlogs of pay start transactions to be entered into the pay system. We were also told that two of our other case study locations experienced backlogs and errors in entering pay start transactions when they were processing large numbers of Army Guard soldiers during initial mobilizations. 
Military pay personnel told us that they were able to avoid backlogs in processing pay start transactions during mobilization processing by conscripting personnel from other USPFO sections to help in assembling and organizing the extensive paperwork associated with activating appropriate basic pays, entitlements, and special incentive pays for their mobilized Army Guard soldiers. In addition to concerns about the numbers of personnel onboard at the USPFO military pay offices involved in processing pay transactions for our case study units, we identified instances in which the personnel at military pay offices at both the USPFOs and the active Army finance offices did not appear to know the different aspects of the extensive pay eligibility or payroll processing requirements used to provide accurate and timely pays to Army Guard soldiers. There are no DOD or Army requirements for military pay personnel to receive training on pay entitlements and processing requirements associated with mobilized Army Guard soldiers or for monitoring the extent to which personnel have taken either of the recently established training courses in the area. Such training is critical given that military pay personnel must be knowledgeable about the extensive and complex pay eligibility and processing requirements. We also found that such training is particularly important for active Army pay personnel who may have extensive experience and knowledge of pay processing requirements for regular Army soldiers, but may not be well versed in the unique procedures and pay transaction entry requirements for Army Guard soldiers. During our work at the case study units, we identified numerous instances in which military pay technicians at both the USPFOs and active Army finance office locations made data coding errors when entering transaction codes into the pay systems. 
We were told that these errors occurred because military pay personnel—particularly those at the active Army finance office locations—were unfamiliar with the system’s pay processing requirements for active duty pays to mobilized Army Guard personnel. Correcting these erroneous transactions required additional labor-intensive research and data entry by other more skilled pay technicians. As discussed previously, we also found that pay technicians did not understand how to properly code data on the soldiers’ dependents status, which is used to determine housing allowances, into the pay system. As a result, we identified cases in which soldiers were underpaid housing allowances to which they were entitled. Personnel at active Army finance offices told us that while they are readily familiar with the pay processing requirements for active Army personnel (using DJMS-AC), they had little experience with, or training in, the policies and procedures to be followed in entering pay transactions into DJMS-RC. An Army finance office official told us that handling two sets of pay transaction processing procedures is often confusing because they are often required to process a large number of both active Army personnel and Army Guard and other reserve personnel using different processes and systems at the same time. While the Army Guard offers training for their military pay technicians, we found that there was no overall monitoring of Army Guard pay personnel training. At several of the case study locations we audited, we found that Army Guard pay technicians relied primarily on on-the-job-training and phone calls to the Army Guard Financial Services Center in Indianapolis or to other military pay technicians at other locations to determine how to process active duty pays to activated Army Guard personnel. Beginning in fiscal year 2002, the Army Guard began offering training on mobilization pays and transaction processing to the USPFO military pay technicians. 
However, there is no requirement for USPFO pay technicians to attend these training courses. In addition, available documentation showed that two of the five scheduled courses for fiscal year 2003 were canceled—one because of low registration and one because of schedule conflicts. Only two of the six case study locations we audited tracked the extent to which pay technicians have taken training in this area. We were told that few of the military pay technicians at the state Army Guard USPFOs we audited had formal training on JUSTIS, DJMS-RC, or mobilization pay processing requirements and procedures. Throughout our case studies, we found numerous errors that involved some element of human capital. One payroll clerk told us that she had not received any formal training on how to operate JUSTIS when she was assigned to the job. Instead, she stated, she has learned how to operate the system through on-the-job training and many phone calls to system support personnel in Indianapolis. She estimated that she was not fully comfortable with all the required transaction processing procedures until she had been on the job for about 7 years. In addition, unit commanders have significant responsibilities for establishing and maintaining the accuracy of soldiers’ pay records. U.S. Army Forces Command Regulation 500-3-3, Reserve Component Unit Commander’s Handbook (July 15, 1999), requires unit commanders to (1) annually review and update pay records for all soldiers under their command as part of an annual soldier readiness review and (2) obtain and submit supporting documentation needed to start entitled active duty pay and allowances based on mobilization orders. However, we saw little evidence that the commanders of our case study units carried out these requirements.
Further, neither Army Guard unit commanders nor active Army commanders were required to receive training on the importance of the pay to on-board personnel reconciliations, discussed previously, as an after-the-fact detective control to proactively identify Army Guard soldiers who should no longer receive active duty pays. We were told that this was primarily because unit commanders have many such administrative duties, and without additional training on the importance of these actions, they may not receive sufficient priority attention. The lack of unit commander training on the importance of these requirements may have contributed to the pay problems we identified at our case study units. For example, at our Virginia case study location, we found that when the unit was first mobilized, USPFO pay personnel were required to spend considerable time and effort to correct hundreds of errors in the unit’s pay records dating back to 1996. Such errors could have been identified and corrected during the preceding years’ readiness reviews. Further, we observed many cases in which active duty pays were not started until more than 30 days after the entitled start dates because soldiers did not submit the paperwork necessary to start these pays.
Customer Service Concerns
Through data collected directly from selected soldiers and work at our six case study locations, we identified a recurring soldier concern with the level and quality of customer service they received associated with their pays and allowances when mobilized to active duty. None of the DOD, Army, or Army Guard policies and procedures we examined addressed the level or quality of customer service that mobilized Army Guard soldiers should be provided concerning questions or problems with their active duty pays. However, we identified several sources that soldiers may go to for customer service or information on any such issues.
These include the military pay section of the USPFO of their home state’s Army Guard, the designated active Army area servicing finance office, and a toll-free number, 1-888-729-2769 (Pay Army). While soldiers had multiple sources from which they could obtain service, we found indications that many Army Guard soldiers were displeased with the customer service they received. We found that not all Army Guard soldiers and their families were informed at the beginning of their mobilization of the pays and allowances they should receive while on active duty. This information is critical for enabling soldiers to identify whether they were not receiving such pays and therefore required customer service. In addition, as discussed later in this report, we found that the documentation provided to Army Guard soldiers—primarily in the form of leave and earnings statements—concerning the pays and allowances they received did not facilitate customer service. Our audit identified customer service concerns at all three phases of the active duty tours and involving DFAS, active Army, and Army Guard servicing components. Consistent with the confusion we found among Army Guard and active Army finance components concerning responsibility for processing pay transactions for mobilized Army Guard soldiers, we found indications that the soldiers themselves were similarly confused. Many of the complaints we identified concerned confusion over whether Army Guard personnel mobilized to active duty should be served by the USPFO because they were Army Guard soldiers or by the active Army because they were mobilized to federal service. One soldier told us that he submitted documentation on three separate occasions to support the housing allowance he should have received as of the beginning of his October 2001 mobilization. Each time he was told to resubmit the documentation because his previously submitted documents were lost.
Subsequently, while he was deployed, he made additional repeated inquiries as to when he would receive his housing allowance pay. He was told that it would be taken care of when he returned from his deployment. However, when he returned from his deployment, he was told that he should have taken care of this issue while he was deployed and that it was now too late to receive this allowance. Data collected from Army Guard units mobilized to active duty indicated that some members of the units had concerns with the pay support customer service they received associated with their mobilization—particularly with respect to pay issues associated with their demobilization. Specifically, of the 43 soldiers responding to our question on satisfaction with customer support during the mobilization phase, 10 indicated satisfaction, while 15 reported dissatisfaction. In addition, of the 45 soldiers responding to our question on customer support following demobilization, 5 indicated satisfaction while 29 indicated dissatisfaction. Of the soldiers who provided written comments about customer service, none offered positive comments, and several were critical, describing the service they received as “nonexistent,” “hostile,” or “poor.” For example, a company commander for one of our case study units told us that he was frustrated with the level of customer support his unit received during the initial mobilization process. Only two knowledgeable military pay officials were present to support active duty pay transaction processing for the 51 soldiers mobilized for his unit. He characterized the customer service his unit received at initial mobilization as time consuming and frustrating. Personnel we talked with at the Colorado special forces unit we audited were particularly critical of the customer service they received both while deployed in Afghanistan and when they were demobilized from active duty.
Specifically, unit officials expressed frustration with being routed from one office to another in their attempts to resolve problems with their active duty pays and allowances. For example, the unit administrator told us he contacted the servicing area active Army finance office for the 101st Airborne in West Virginia because his unit was attached to the 101st when they were deployed. The finance office instructed him to contact the USPFO in West Virginia because, although he was from a Colorado unit, his unit was assigned to a West Virginia Army Guard unit. However, when he contacted the West Virginia USPFO for service, officials from that office instructed him to contact the USPFO in his home state of Colorado to provide service for his pay problems.
Several systems issues were significant factors impeding accurate and timely payroll payments to mobilized Army Guard soldiers, including the lack of an integrated or effectively interfaced pay system with both the personnel and order-writing systems, limitations in DJMS-RC processing capabilities, and ineffective system edits of payments and debts. DOD has a significant system enhancement project under way to improve military pay. However, given that the effort has been under way for about 5 years and DOD has encountered challenges fielding the system, it is likely that the department will continue to operate with existing system constraints for at least several more years. Our findings related to weaknesses in the systems environment were consistent with issues raised by DOD in its June 2002 report to the Congress on its efforts to implement an integrated military pay and personnel system. Specifically, DOD’s report acknowledged that major deficiencies in the delivery of military personnel and pay services to ensure soldiers receive timely and accurate personnel and pay support must be addressed by the envisioned system.
Further, the report indicated these deficiencies were the direct result of the inability of a myriad of current systems with multiple, complex interfaces to fully support current business process requirements. Figure 6 provides an overview of the five systems currently involved in processing Army Guard pay and personnel information. The five key DOD systems involved in authorizing, entering, processing, and paying mobilized Army Guard soldiers were not integrated. Lacking either an integrated or effectively interfaced set of personnel and pay systems, DOD must rely on error-prone, manual entry of data from the same source documents into multiple systems. With an effectively integrated system, changes to personnel records automatically update related payroll records from a single source of data input. While not as efficient as an integrated system, an automatic personnel-to-payroll system interface can also reduce errors caused by independent, manual entry of data from the same source documents into both pay and personnel systems. Without an effective interface between the personnel and pay systems, we found instances in which pay-affecting information did not get entered into both the personnel and pay systems, thus causing various pay problems—particularly late payments. We found that an existing interface could be used to help alert military pay personnel to take action when mobilization transactions are entered into the personnel system. Specifically, Army Guard state personnel offices used an existing interface between SIDPERS and JUSTIS to transmit data on certain personnel transactions (i.e., transfers, promotions, demotions, and address changes) to the 54 USPFOs to update the soldier’s pay records. However, this personnel-to-pay interface (1) requires manual review and acceptance by USPFO pay personnel of the transactions created in SIDPERS and (2) does not create pay and allowance transactions to update a soldier’s pay records. 
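The gap described above, in which pay-affecting information reaches the personnel system but not the pay system, is the kind of condition a monthly cross-system reconciliation is meant to surface. The following is a minimal sketch of such a mismatch report; the record layouts and field names are hypothetical illustrations, not the actual SIDPERS or DJMS-RC formats.

```python
# Hypothetical sketch of a monthly personnel/pay mismatch report.
# Record layouts and field names are illustrative assumptions only.

def mismatch_report(personnel_records, pay_records):
    """Flag soldiers whose duty status differs between the personnel
    system (e.g., SIDPERS) and the pay system (e.g., DJMS-RC)."""
    pay_by_ssn = {r["ssn"]: r for r in pay_records}
    mismatches = []
    for p in personnel_records:
        pay = pay_by_ssn.get(p["ssn"])
        if pay is None or pay["duty_status"] != p["duty_status"]:
            mismatches.append({
                "ssn": p["ssn"],
                "personnel_status": p["duty_status"],
                "pay_status": pay["duty_status"] if pay else "NO PAY RECORD",
            })
    return mismatches

personnel = [
    {"ssn": "111-11-1111", "duty_status": "ACTIVE"},
    {"ssn": "222-22-2222", "duty_status": "DISCHARGED"},
]
pay = [
    {"ssn": "111-11-1111", "duty_status": "ACTIVE"},
    {"ssn": "222-22-2222", "duty_status": "ACTIVE"},  # pay never stopped
]
for row in mismatch_report(personnel, pay):
    print(row["ssn"], row["personnel_status"], "vs", row["pay_status"])
```

Run monthly, a report like this would flag the discharged soldier whose pay record still shows active duty status, the same pattern seen in the Mississippi case discussed below, where a discharge took almost 4 months to reach the pay system.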
For example, when Army Guard soldiers change from inactive drilling status to active duty status, state personnel offices create personnel-related transactions in SIDPERS, but associated pay-related transactions to update the soldier's pay records are not automatically created in JUSTIS. USPFO pay personnel are not aware that a pay-related transaction is needed until they receive documentation from the soldier, the soldier's unit commander, or the monthly personnel/pay mismatch report. Automated improvements, such as an administrative action transmitted through the personnel-to-payroll interface, could be used to proactively alert USPFOs of certain pay-impacting transactions that are created in SIDPERS as a means to help ensure timely and accurate pay. In our case studies, we found instances in which mobilization order data that were entered into SIDPERS were either not entered into DJMS-RC for several months after the personnel action or were entered inconsistently. At the case study locations we audited, we found several instances in which Army Guard soldiers received amended or revoked orders that were entered into SIDPERS but were not entered into DJMS-RC. We also found instances in which personnel pay-affecting changes, such as changes in family separation allowance, basic allowance for housing, and active duty pay increases from promotions, were not entered into the pay system promptly. Consequently, these soldiers either received active duty pays they were not entitled to receive—some for several months—or did not timely receive active duty pays to which they were entitled.

Individual Case Illustration: Overpayment due to Lack of Integrated Pay and Personnel Systems

A soldier with the Mississippi Army National Guard was mobilized in January 2002 with his unit and traveled to the mobilization station at Fort Campbell. The unit stayed at Fort Campbell to perform post security duties until June 2002.
On June 14, 2002, the E-4 specialist received a "general" discharge order from the personnel office at Fort Campbell for a drug-related offense. However, he continued to receive active duty pay, totaling approximately $9,400, until September 2002. Although the discharge information was promptly entered into the soldier's personnel records, it was not entered into the pay system for almost 4 months. This problem was caused by weaknesses in the processes designed to work around the lack of integrated pay and personnel systems. Further, the problem was not detected because reconciliations of pay and personnel data were not performed timely. Specifically, it was not until over 3 months after the soldier's discharge, through its September 2002 end-of-month reconciliation, that the Mississippi Army National Guard USPFO identified the overpayment and took action on October 2, 2002, to stop the individual's pay. However, collection efforts on the $9,400 overpayment did not begin until July 2003, when we pointed out this situation to USPFO officials. The lack of an integrated set of systems was also apparent in the relationship between JUSTIS and the order writing system—AFCOS. Currently, certain personnel and order information entered and stored in the AFCOS database is automatically filled in the JUSTIS input screens pertaining to active duty tours for state missions upon entry of the soldier’s Social Security Number and order number. This auto-fill functionality eliminates the need for some error-prone, manual reentry of data into JUSTIS. However, currently, manual entry of data from a hard copy of the soldier’s orders and other documentation is required to initiate the soldier’s pay and allowances—a procedure that defeats the purpose of an effective interface. For example, at one of the case study units we audited, USPFO pay personnel had to manually enter the soldier’s active duty tour start and stop dates into JUSTIS from a hard copy of the actual mobilization order. 
When we brought this to the attention of NGB officials, they stated that providing the auto-fill functionality to the mobilization input screens would require minimal programming changes. NGB officials stated that they planned to release a programming software change to all 54 USPFOs that would allow the start and stop dates to be automatically filled into the mobilization screens to reduce the need for reentry of some mobilization information. Because this software change was scheduled to occur after the conclusion of our fieldwork, we did not verify its effectiveness. In any case, while this proposed programming change may be beneficial, it does not eliminate the need for manual entry and review of certain other mobilization data needed to initiate a soldier's basic pay and allowances. DOD has acknowledged that DJMS-RC is an aging, COBOL/mainframe-based system. Consequently, it is not surprising that we found DFAS established a number of "workarounds"—procedures to compensate for existing DJMS-RC processing limitations with respect to processing active duty pays and allowances to mobilized Army Guard soldiers. Such manual workarounds are inefficient and create additional labor-intensive, error-prone transaction processing. We observed a number of such system workaround procedures at the case study units we audited. For example, for the special forces units we audited, our analysis disclosed a workaround used to exclude soldiers' pay from federal taxes while in combat. Specifically, DJMS-RC was not designed to make active duty pays and exclude federal taxes applicable to those pays in a single pay transaction. To compensate for this system constraint, DFAS established a workaround that requires two payment transactions over a 2-month payroll cycle to properly exempt soldiers' pay for the combat zone tax exclusion.
That is, for those soldiers entitled to this exclusion, DJMS-RC withholds federal taxes the first month, identifies the taxes to be refunded during end-of-month pay processing, and then makes a separate payment during the first pay update the following month to refund the taxes that should not have been withheld. Soldiers' taxes could not be refunded the same month because the DJMS-RC refund process occurs only one time a month. In addition, because of limited DJMS-RC processing capabilities, the Army Guard USPFO and in-theatre active Army area servicing finance office pay technicians are required to manually enter transactions for nonautomated pay and allowances every month. DJMS-RC was originally designed to process payroll payments to Army Reserve and Army Guard personnel on weekend drills or on short periods of annual active duty (periods of less than 30 days in duration) or for training. With Army Guard personnel now being paid from DJMS-RC for extended periods of active duty (as long as 2 years at a time), DFAS officials told us that the system is now stretched because it is being used to make payments and allowances that it was not structured or designed to make, such as hostile fire pay and the combat zone tax exclusion. Many of these active duty pays and allowances require manual, monthly verification and reentry into DJMS-RC because, while some pays, such as basic active duty pay and jump pay, can be generated automatically, DJMS-RC is not programmed to generate automatic payment of certain other types of pay and allowances. For example, each month, for deployed soldiers entitled to these types of pays and for whom a performance certification is received from the respective unit commanders, USPFO pay personnel are responsible for entering special duty assignment pay, foreign language proficiency pay, and high altitude low opening (HALO) pay into JUSTIS, and Army area servicing finance offices are responsible for entering hardship duty pay into DMO.
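The two-cycle combat zone tax exclusion workaround described above can be illustrated with a simplified sketch: taxes are withheld in month 1, flagged for refund at end-of-month processing, and paid back in a separate transaction in month 2. The 20 percent withholding rate and the function names are illustrative assumptions, not actual DJMS-RC logic.

```python
# Simplified sketch of the two-cycle combat zone tax exclusion workaround.
# The 20% withholding rate and record layout are illustrative assumptions.

WITHHOLDING_RATE = 0.20

def month_one_payment(gross_pay, combat_zone):
    """Month 1: federal tax is withheld even for combat-zone pay, but the
    withheld amount is flagged for refund at end-of-month processing."""
    withheld = round(gross_pay * WITHHOLDING_RATE, 2)
    net = gross_pay - withheld
    refund_due = withheld if combat_zone else 0.0
    return net, refund_due

def month_two_refund(refund_due):
    """First pay update of month 2: a separate transaction refunds the
    tax that should not have been withheld."""
    return refund_due

net, refund = month_one_payment(3000.00, combat_zone=True)
print(net)                       # 2400.0 (month-1 net after withholding)
print(month_two_refund(refund))  # 600.0 (refunded the following month)
```

The sketch makes the core inefficiency visible: a soldier entitled to the exclusion is short the withheld amount for up to a month, and every exempt payment requires two transactions instead of one.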
However, because pay transactions must be manually entered for every month in which soldiers are entitled to receive these pays, it is often difficult to ensure that mobilized soldiers receive their entitled nonautomated pays and allowances. For example, we found a number of instances in which soldiers were underpaid their entitled jump, foreign language proficiency, special duty assignment, or hardship duty pays because pay technicians inadvertently omitted the monthly manual input required to initiate these types of pays. At one of the case study units, we found USPFO pay personnel had a procedure in place to help prevent inadvertently omitting month-to-month entry of nonautomated pays for entitled soldiers. Specifically, pay personnel at the USPFO in Maryland used a warning screen within JUSTIS as a mechanism to alert them that soldiers were eligible to receive that particular pay component that month. Although this does not alleviate the problem of month-to-month manual entry, the warning screen could be used to help preclude some of the pay problems we found resulting from failures to enter transactions for nonautomated, month-to-month pay and allowance entitlements. Further, these month-to-month pays and allowances were not separately itemized on the soldiers' leave and earnings statements in a user-friendly format. In contrast, at four of our six case study units, we found that a significant number of soldiers were overpaid their entitled automated pays when they were demobilized from active duty before the stop date specified in their original mobilization orders. This occurred because pay technicians did not update the stop date in DJMS-RC, which is necessary to terminate the automated active duty pays when soldiers leave active duty early.
For example, the military finance office in Kuwait, which was responsible for paying Virginia 20th Special Forces soldiers in the fall of 2002, did not stop hostile fire and hardship duty pays as required when these soldiers left Afghanistan in October 2002. We found that 55 of 64 soldiers eligible for hostile fire pay were overpaid for at least 1 month beyond their departure from Afghanistan.

Individual Case Illustration: Problems in Deciphering a Leave and Earnings Statement

An Army National Guard Special Forces sergeant believed that he was not receiving certain active duty pays and allowances during his mobilization to active duty in support of Operation Enduring Freedom. On March 23, 2002, the sergeant wrote a letter from Afghanistan to a fellow battalion soldier back in his home state, discussing his pay problems. The sergeant stated that he was not receiving his special duty assignment pay from November 2001 to March 2002. The sergeant's letter also stated he was not receiving his hostile fire pay and combat zone tax exclusion. His letter concluded, "Are they really fixing pay issues or are they putting them off till we return? If they are waiting, then what happens to those who (god forbid) don't make it back?" The sergeant was killed in action in Afghanistan on April 15, 2002, before he knew if his pay problems were resolved. Our review determined that some of the sergeant's pays were started up to 2 months late, but others had actually been paid properly. The sergeant apparently was not aware of receiving these payments because of the way they were combined. Soldiers' pays may appear as lump sum payments under "other credits" on their leave and earnings statements. In many cases, these "other credits" pays and allowances appeared on their leave and earnings statements without adequate explanation.
As a result, we found indications that Army Guard soldiers had difficulty using the leave and earnings statements to determine if they received all entitled active duty pays and allowances. In addition, several Army Guard soldiers told us that they had difficulty discerning from their leave and earnings statements whether lump sum catch-up payments fully compensated them for previously underpaid active duty pay and allowance entitlements. Without such basic customer service, the soldiers cannot readily verify that they received all the active duty pays and allowances to which they were entitled. As shown in the example leave and earnings statement extract included in figure 7, an Army Guard soldier who received a series of corrections to special duty assignment pay along with a current special duty assignment payment of $110 is likely to have difficulty discerning whether he or she received all and only entitled active duty pays and allowances. While DJMS-RC has several effective edits to prevent certain overpayments, it lacks effective edits to reject large proposed net pays over $4,000 at midmonth and over $7,000 at end-of-month before their final processing. DOD established these thresholds to monitor and detect abnormally large payments. As a result of the weaknesses we identified, we found several instances in our case studies in which soldiers received large lump sum payments, probably related to previous underpayments or other pay errors, with no explanation. Further, the lack of preventive controls over large payments poses an increased risk of fraudulent payments. DJMS-RC does have edits that prevent soldiers from (1) being paid for pay and allowances beyond the stop date for the active duty tour, (2) being paid for more than one tour with overlapping dates, or (3) being paid twice during a pay period. 
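A preventive version of the threshold checks described above, of the kind DJMS-RC lacked, would hold any proposed net pay above DOD's monitoring thresholds ($4,000 at midmonth, $7,000 at end-of-month) for review and approval before final processing rather than flagging it afterward. The following sketch is hypothetical; the function name and return codes are assumptions for illustration.

```python
# Sketch of a preventive edit check that DJMS-RC lacked: hold proposed net
# pays above DOD's monitoring thresholds for review before final processing.
# Function name and return codes are illustrative assumptions.

THRESHOLDS = {"midmonth": 4000.00, "end_of_month": 7000.00}

def screen_payment(net_pay, cycle):
    """Return 'RELEASE' if the payment may process normally, or
    'HOLD_FOR_APPROVAL' if it exceeds the cycle's monitoring threshold."""
    if net_pay > THRESHOLDS[cycle]:
        return "HOLD_FOR_APPROVAL"
    return "RELEASE"

print(screen_payment(2500.00, "midmonth"))       # RELEASE
print(screen_payment(20110.00, "end_of_month"))  # HOLD_FOR_APPROVAL
```

Under a check like this, the $20,110 erroneous payment discussed below would have been queued for approval instead of being deposited and only reported afterward on the Electronic Fund Transfer Excess Dollar Listing.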
Each month, DFAS Indianapolis pay personnel receive an Electronic Fund Transfer Excess Dollar Listing after the electronic fund transfer payment has been processed in DJMS-RC and deposited to the soldier's bank account. DJMS-RC does not contain edit checks to reject payments over the threshold amounts or to require review and approval of payments over these amounts prior to their final processing. For example, at one of the case study units we audited, DJMS-RC did not have edit checks to prevent one soldier from receiving an erroneous electronic payment totaling $20,110 without prior approval (see the individual case illustration below for details). In addition, our analysis showed 76 other payroll-related payments during the period October 1, 2001, through March 31, 2003, of over $7,000 (net) each that were paid by DJMS-RC. Because the Electronic Fund Transfer Excess Dollar Listing is printed after the payment is made, timely detection of errors is critical to help ensure that erroneous payments are recovered and that fraud does not occur. Similarly, DJMS-RC does not have system edits to prevent large debts from being assessed without review and approval prior to being processed and does not provide adequate explanations for pay-related debt assessments. Our case studies identified individuals who received debt notices in excess of $30,000 with no explanation. At five of the six units audited, we identified 86 individuals who had total pay and allowance debts of approximately $300,000 as of March 31, 2003.

Individual Case Illustration: System Edits Do Not Prevent Large Payments and Debts

A sergeant with the Colorado Army National Guard, Special Forces, encountered numerous severe pay problems associated with his mobilization to active duty, including his deployment to Afghanistan in support of Operation Enduring Freedom. The sergeant's active duty pay and other pay and allowances should have been stopped on December 4, 2002, when he was released from active duty.
However, because the sergeant’s mobilization orders called him to active duty for 730 days and not the 365 days that he was actually mobilized, and the Army area servicing finance office at the demobilization station, Fort Campbell, did not enter the release from active duty date into DJMS-RC, the sergeant continued to improperly receive payments, as if he were still on active duty, for 2 and a half months after he was released from active duty totaling over $8,000. The sergeant was one of 34 soldiers in the company whose pay continued after their release from active duty. In an attempt to stop the erroneous payments, in February 2003, pay personnel at the Colorado USPFO created a transaction to cancel the tour instead of processing an adjustment to amend the stop date consistent with the date on the Release from Active Duty Order. When this occurred, DJMS-RC automatically processed a reversal of 11 months of the sergeant’s pay and allowances that he earned while mobilized from March 1, 2002, through February 4, 2003, which created a debt in the amount of $39,699 on the soldier’s pay record; however, the reversal should have only been from December 5, 2002, through February 4, 2003. In April 2003, at our request, DFAS-Indianapolis personnel intervened in an attempt to correct the large debt and to determine the actual amount the sergeant owed. In May 2003, DFAS-Indianapolis erroneously processed a payment transaction instead of a debt correction transaction in DJMS-RC. This created a payment of $20,111, which was electronically deposited to the sergeant’s bank account without explanation, while a debt of $30,454 still appeared on his Leave and Earnings Statement. About 9 months after his demobilization, the sergeant’s unpaid debt balance was reportedly $26,559, but the actual amount of his debt had not yet been determined as of September 2003. 
In addition, we found that current procedures used to notify soldiers of large payroll-related debts did not facilitate customer service. Under current procedures, if a soldier is determined to owe the government money while on active duty, he is assessed a debt and informed of this assessment with a notation of an "Unpaid Debt Balance" in the remarks section of his Leave and Earnings Statement. A soldier at one of our case study units told us that, before receiving his Leave and Earnings Statement, he had no notice that he had a debt assessment and that two-thirds of his pay would be garnished. As a result, he was not able to plan his financial affairs to avoid late payments on his car and other loans. This debt assessment notification procedure is even more egregious when debts, particularly large debts, are assessed in error and up to two-thirds of the soldier's pay may be garnished to begin repaying the erroneous debt. For example, at our case study units, we found that the only notice several soldiers received when they were erroneously assessed payroll debts was an "Unpaid Debt Balance" buried in the remarks section of their Leave and Earnings Statements. One such assessment showing a $39,489.28 debt is shown in figure 8. DOD has a major system enhancement effort under way in this area, described as the largest personnel and pay system in the world in both scope and number of people served—the Defense Integrated Military Human Resources System (DIMHRS). One of the major benefits expected with DIMHRS is "service members receiving accurate and timely pay and benefits." Begun in 1998, DIMHRS is ultimately intended to replace more than 80 legacy systems (including DJMS-RC) and integrate all pay, personnel, training, and manpower functions across the department by 2007. By the end of fiscal year 2003, DOD reporting shows that it will have invested over 5 years and about $360 million in conceptualizing and planning the system.
In 2002, DOD estimated that integrated personnel and pay functions of DIMHRS would be fully deployed by fiscal year 2007. It also reported a development cost of about $427 million. However, our review of the fiscal year 2004 DOD Information Technology budget request shows that DOD is requesting $122 million and $95 million, respectively, for fiscal years 2004 and 2005. In addition, the department reported that the original DIMHRS project completion milestone date has slipped about 15 months. Part of the requested funding for fiscal year 2004 was to acquire a payroll module, Forward Compatible Payroll. According to program officials, this module, in conjunction with a translation module and a Web services component, is to replace DJMS-RC and DJMS-AC systems by March 2006, with the first deployment to the Army Reserve and Army Guard in March 2005. In assessing the risks associated with DIMHRS implementation as part of its fiscal year 2004 budget package, DOD highlighted 20 such risks. For example, DOD reported a 60 percent risk associated with “Service issues with business process reengineering and data migration.” The department’s ability to effectively mitigate such risks is of particular concern given its poor track record in successfully designing and implementing major systems in the past. Consequently, given the schedule slippages that have already occurred combined with the many risks associated with DIMHRS implementation, Army Guard soldiers will likely be required to rely on existing pay systems for at least several more years. Our limited review of the pay experiences of the soldiers in the Colorado Army Guard’s 220th Military Police Company, which was mobilized to active duty in January 2003, sent to Kuwait in February 2003, and deployed to Iraq on military convoy security and highway patrol duties in April 2003, indicated that some of the same types of pay problems that we found in our six case study units continued to occur. 
Of the 152 soldiers mobilized in this unit, we identified 54 soldiers who our review of available records indicated were either overpaid, underpaid, or received entitled active duty pays and allowances over 30 days late, or for whom erroneous pay-related debts were created. We found that these pay problems could be attributed to control breakdowns similar to those we found at our case study units, including pay system input errors associated with amended orders, delays and errors in coding pay and allowance transactions, and slow customer service response. For example, available documentation and interviews indicate that while several soldiers submitted required supporting documentation to start certain pays and allowances at the time of their initial mobilization in January 2003, over 20 soldiers were still not receiving these pays in August 2003. Colorado USPFO military pay-processing personnel told us they are reviewing pay records for all deployed soldiers from this unit to ensure that they are receiving all entitled active duty pays and allowances. The extensive problems we identified at the case study units vividly demonstrate that the controls currently relied on to pay mobilized Army Guard personnel are not working and cannot provide reasonable assurance that such pays are accurate or timely. The personal toll that these pay problems have had on mobilized soldiers and their families cannot be readily measured, but clearly may have a profound effect on reenlistment and retention. It is not surprising that cumbersome and complex processes and ineffective human capital strategies, combined with the use of an outdated system that was not designed to handle the intricacies of active duty pay and allowances, would result in significant pay problems. 
While it is likely that DOD will be required to rely on existing systems for a number of years, a complete and lasting solution to the pay problems we identified will only be achieved through a complete reengineering, not only of the automated systems, but also of the supporting processes and human capital practices in this area. However, immediate actions can be taken in these areas to improve the timeliness and accuracy of pay and allowance payments to activated Army Guard soldiers. The need for such actions is increasingly imperative in light of the current extended deployment of Army Guard soldiers in their crucial role in Operation Iraqi Freedom and anticipated additional mobilizations in support of this operation and the global war on terrorism. Immediate steps to at least mitigate the most serious of the problems we identified are needed to help ensure that the Army Guard can continue to successfully fulfill its vital role in our national defense.

We recommend that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to take the following actions to address the issues we found with respect to the existing processes, human capital, and automated systems relied on to pay activated Army Guard personnel:

- Establish a unified set of policies and procedures for all Army Guard, Army, and DFAS personnel to follow for ensuring active duty pays for Army Guard personnel mobilized to active duty.
- Establish performance measures for obtaining supporting documentation and processing pay transactions (for example, no more than 5 days would seem reasonable).
- Establish who is accountable for stopping active duty pays for soldiers who return home earlier than their units.
- Clarify the policies and procedures for how to properly amend active duty orders, including medical extensions.
- Require Army Guard commands and unit commanders to carry out complete monthly pay and personnel records reconciliations and take necessary actions to correct any pay and personnel record mismatches found each month.
- Update policies and procedures to reflect current legal and DOD administrative requirements with respect to active duty pays and allowances and transaction processing requirements for mobilized Army Guard soldiers.
- Consider expanding the scope of the existing memorandum of understanding between DFAS and the Army concerning the provision of resources to support surge processing at mobilization and demobilization sites to include providing additional resources to support surge processing for pay start and stop transaction requirements at Army Guard home stations during initial soldier readiness programs.
- Determine whether issues concerning resource allocations for the military pay operations identified at our case study units exist at all 54 USPFOs, and if so, take appropriate actions to address these issues.
- Determine whether issues concerning relatively low-graded military pay technicians identified at our case study units exist at all 54 USPFOs, and if so, take appropriate actions to address these issues.
- Modify existing training policies and procedures to require all USPFO and active Army pay and finance personnel responsible for entering pay transactions for mobilized Army Guard soldiers to receive appropriate training upon assuming such duties.
- Require unit commanders to receive training on the importance of adhering to requirements to conduct annual pay support documentation reviews and carry out monthly reconciliations.
- Establish an ongoing mechanism to monitor the quality and completion of training for both pay and finance personnel and unit commanders.
- Identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers by providing improved procedures for informing soldiers of their pay and allowance entitlements throughout their active duty mobilizations.
- Identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers to ensure a single, well-advertised source for soldiers and their families to access for customer service for any pay problems.
- Review the pay problems we identified at our six case study units to identify and resolve any outstanding pay issues for the affected soldiers.
- Evaluate the feasibility of using the personnel-to-pay interface as a means to proactively alert pay personnel of actions needed to start entitled active duty pays and allowances.
- Evaluate the feasibility of automating some or all of the current manual monthly pays, including special duty assignment pay, foreign language proficiency pay, hardship duty pay, and HALO pay.
- Evaluate the feasibility of eliminating the use of the "other credits" for processing hardship duty (designated areas), HALO pay, and special duty assignment pay, and instead establish a separate component of pay for each type of pay.
- Evaluate the feasibility of using the JUSTIS warning screen to help eliminate inadvertent omissions of required monthly manual pay inputs.
- Evaluate the feasibility of redesigning Leave and Earnings Statements to provide soldiers with a clear explanation of all pay and allowances received so that they can readily determine if they received all and only entitled pays.
- Evaluate the feasibility of establishing an edit check and requiring approval before processing any debt assessments above a specified dollar amount.
- Evaluate the feasibility of establishing an edit check and requiring approval before processing any payments above a specified dollar amount.
- As part of the effort currently under way to reform DOD's pay and personnel systems—referred to as DIMHRS—incorporate a complete understanding of the Army Guard pay problems as documented in this report into the requirements development for this system.
- In developing DIMHRS, consider a complete reengineering of the processes and controls and ensure that this reengineering effort deals not only with the systems aspect of the problems we identified, but also with the human capital and process aspects.

In its written comments, DOD concurred with our recommendations and identified actions to address the identified deficiencies. Specifically, DOD's response outlined some actions already taken, others under way, and further planned actions with respect to our recommendations. If effectively implemented, these actions should substantially resolve the deficiencies pointed out in our report. DOD's comments are reprinted in appendix VIII. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of the report to interested congressional committees. We will also send copies of this report to the Secretary of Defense, the Under Secretary of Defense (Comptroller), the Secretary of the Army, the Director of the Defense Finance and Accounting Service, the Director of the Army National Guard, and the Chief of the National Guard Bureau. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9505 or [email protected] or Geoffrey Frank, Assistant Director, at (202) 512-9518 or [email protected].
On December 5, 2001, the Colorado Army National Guard’s B Company, 5th Battalion, 19th Special Forces, was mobilized to active duty on orders for a 2-year period—through December 4, 2003. The unit was mobilized at Fort Knox and subsequently deployed in Afghanistan, Uzbekistan, and surrounding areas to search for Taliban and al Qaeda terrorists as part of Operation Enduring Freedom. The unit returned to Fort Campbell for demobilization and was released from active duty on December 4, 2002—1 year before the end of the unit’s original mobilization orders. A timeline of the unit’s actions associated with its mobilization under Operation Enduring Freedom is shown in figure 9. As summarized in table 3, the majority of soldiers from Colorado’s B Company experienced some sort of pay problem during one or more of the three phases of their active duty mobilization. Overall, all 62 soldiers with the company had at least one pay problem associated with their mobilization. These pay problems included not receiving entitled pays and allowances at all; not receiving some entitled pays and allowances within 30 days; and for some, overpayments of pays and allowances. Specifically, we found (1) 56 soldiers did not receive certain pay and allowance entitlements at all, or within 30 days of their initial mobilization, (2) 61 soldiers either did not receive, or did not receive within 30 days, the hostile fire pay or other “high-risk location” pays they were entitled to receive based on their deployment in Uzbekistan and Afghanistan, and (3) 53 soldiers either improperly continued to receive hostile fire pay after leaving high-risk locations overseas or continued to receive paychecks, as if they were still on active duty status, for over 2 months beyond their release from active duty. In total, we identified estimated overpayments of $494,000, underpayments of $28,000, and late payments of $64,000, associated with the pay problems we found.
Of the estimated $494,000 in overpayments, we identified about $88,000 that was subsequently collected from the soldiers of Colorado’s B Company. In addition, in trying to correct overpayments associated with Colorado B Company’s departure from high-risk locations and release from active duty, the Defense Finance and Accounting Service (DFAS) billed 34 of the unit’s soldiers an average of $48,000 each, for a largely erroneous total debt of over $1.6 million. Many soldiers with the company characterized the service they received from the state United States Property and Fiscal Office (USPFO) and the active Army finance offices while deployed in Afghanistan and surrounding areas as “poor” or “openly hostile.” Some of the soldiers in the unit expressed significant dissatisfaction with the time and effort they, or their spouses, were required to spend attempting to identify and correct their pay problems. These pay problems had a variety of adverse effects. The labor-intensive efforts by the special forces soldiers to address pay problems, in some cases, distracted them from important mission operations. In addition, several soldiers told us that the numerous pay problems they encountered would play a major role in their decisions whether to reenlist. According to several soldiers from Colorado’s B Company, the combined effect of (1) recurring pay problems, (2) having two-thirds of their monthly training paychecks garnished to pay off often erroneous payroll-related debts, and (3) receiving poor payroll customer service during their active duty tours adversely affects morale and may diminish a soldier’s willingness to continue his or her service with the Army Guard. For example, a unit official advised us that as of September 30, 2003, three soldiers had left B Company primarily due to frustration over pay problems. The unit official indicated that he expected additional soldiers would depart as a result of the current debt problems.
As summarized in table 4, we identified a number of pay problems associated with eight different types of active duty pays and allowances related to the unit’s mobilization to active duty. These problems resulted from failure to enter data, data entry errors, or late entry of data needed by Army Guard USPFO military pay personnel and by active Army military pay personnel at the unit’s mobilization station to start active duty pays. We also found that these pay problems were exacerbated by breakdowns in customer service. In total, 56 out of 62 soldiers did not receive certain pays and allowances at all, or in a timely manner, after being activated on December 5, 2001. As illustrated in table 4, 11 soldiers did not receive entitled Jump pay within 30 days of entitlement, 10 did not receive HALO pay within 30 days of entitlement, and 41 soldiers did not receive at least 1 month of their special duty assignment pay. According to DFAS procedures, the unit’s Army Guard USPFO should have initiated these pays. In addition, these problems could have been minimized if they were identified and corrected by the Army mobilization station finance office at Fort Knox during the soldier readiness processing at that location. According to Army regulations, the active Army mobilization station is required to conduct a soldier readiness program to review every mobilizing soldier’s pay account for accuracy. In essence, under Department of Defense (DOD) guidance, the active Army mobilization stations are to act as a “safety net” to catch and correct any errors in soldiers’ active duty pays and allowances before they are deployed on their active duty missions. The underpayments and late payments resulted in adverse financial repercussions for a number of the unit’s members and their families. We were told that many of the unit members’ spouses tried to contact the soldiers while they were deployed to find out why they were not receiving the anticipated funds. 
We were told that neither the spouses nor the soldiers received clear guidance on whom to contact to address their pay concerns. For example, some soldiers sought help from the active Army’s finance offices at Fort Knox and Fort Campbell. However, upon contacting officials at those locations, soldiers were told that the active Army could not help them because they were Army Guard soldiers and should therefore contact their home state Army Guard USPFO. According to DFAS officials, the active Army finance offices have the capability to service Army Guard soldiers; however, Fort Knox and Fort Campbell finance personnel were either unaware of this capability or unwilling to take the actions needed to address the unit’s active duty pay concerns. Colorado’s B Company soldiers then turned back to the USPFO for assistance. Although the USPFO did process a number of transactions to start entitled active duty pays and allowances for the unit’s soldiers, many of these pays were started more than 30 days after the soldiers became entitled to receive them. In one case, a soldier’s spouse had to obtain a $500 grant from the Colorado National Guard in order to pay bills while her husband was on active duty. Colorado’s B Company was deployed to Uzbekistan and Afghanistan in February 2002. As summarized in table 5, we identified pay problems associated with the hostile fire pay, combat zone tax exclusion, and hardship duty pay that unit soldiers were entitled to receive based on their deployment to Afghanistan and surrounding areas. Specifically, after arriving in Afghanistan, some soldiers in Colorado’s B Company received these pays sporadically, were not paid at all, were paid inexplicable dollar amounts, or were overpaid their entitled active duty pays and allowances while deployed.
For example, 16 of the 62 soldiers in B Company received the wrong type of hardship duty pay, formerly called Foreign Duty Pay, in addition to the correct hardship duty location pay while they were deployed in Afghanistan. We found that these pay problems could be attributed, in part, to the active Army servicing finance office’s lack of knowledge about how to process transactions through the Defense Joint Military Pay System-Reserve Component system (DJMS-RC) to start location-based pays and allowances for the unit’s soldiers. For example, we were told that because active Army in-theater finance personnel were unfamiliar with the required procedures to follow in starting hardship duty pays, they entered transactions that resulted in soldiers receiving two different location-based types of hardship duty pay for the same duty. Further, Army Guard soldiers told us the active Army finance office could not effectively answer questions concerning their pay entitlements or transaction processing documentation requirements. After not receiving any pay support from the active Army servicing finance location, the unit’s soldiers told us they contacted their Army Guard USPFO in Colorado for assistance. However, Colorado USPFO officials informed them that they did not have the capability to start location-based pays and allowances for Army Guard soldiers. A frequent complaint we received from Colorado’s B Company soldiers concerned the circular nature of any attempts to get assistance on pay issues while deployed overseas. B Company’s soldiers told us they spent significant amounts of time and effort trying to correct the pay problems while deployed on critical mission operations in Afghanistan and surrounding areas—time and focus away from the mission at hand. 
For example, as discussed in greater detail in our West Virginia case study summary, a soldier from that unit took several days away from his unit to get location-based pay started for both the West Virginia and Colorado special forces units. We were also told that some members of the unit used their satellite radios to attempt to resolve their pay problems while deployed in Afghanistan. In addition, several of the unit’s soldiers told us their ability to identify and correct pay problems while deployed was impaired by limited access to telephones, faxes, e-mail, and their current Leave and Earnings Statements. In the late summer to early fall of 2002, soldiers from Colorado’s B Company began returning from Afghanistan and surrounding areas to Fort Campbell to begin their demobilization from active duty. However, the active Army’s finance office at Fort Campbell failed to properly stop soldiers’ pay as of their demobilization dates, which for most of the unit’s soldiers was December 4, 2002. As summarized in table 6, 39 of the unit’s 62 soldiers continued to receive active duty pay and allowances, some until February 14, 2003—2 and a half months after the date of their release from active duty. We found that neither the active Army finance office that serviced the unit while it was in Afghanistan nor the finance office at Fort Campbell upon the unit’s return to the United States took action to stop active duty pays and allowances. According to DFAS procedures, the finance office at the servicing demobilization station is to conduct a finance out-processing, which would include identifying and stopping any active duty pays that soldiers were no longer entitled to receive. According to DFAS-Indianapolis Reserve Component mobilization procedures, the local servicing active Army finance office also has primary responsibility for entering transactions to stop hardship duty pay, hostile fire pay, and the combat zone tax exclusion when soldiers leave an authorized hostile fire/combat zone.
However, in this case, that office did not take action to stop these types of pay and allowances for many of the unit’s soldiers. For example, military pay personnel at Fort Campbell failed to deactivate hostile fire pay for 41 out of 62 B Company soldiers. With regard to customer service, some soldiers in the unit told us that upon their return from overseas deployments, they were informed that they should have corrected these problems while in-theater, despite the fact that these problems were not detected until the demobilization phase. Colorado’s B Company’s demobilization was complicated by the fact that the unit did not demobilize through the same active Army location used to mobilize the unit. DFAS procedures provide that Army Guard soldiers are to demobilize and have their active duty pays stopped by the installation from which they originally mobilized. However, the unit received orders to demobilize at Fort Campbell rather than Fort Knox, where they originally mobilized. According to Fort Campbell personnel, Colorado’s B Company out-processed through the required sections, including finance, during their demobilization. Nonetheless, the finance office at that active Army location failed to stop all active duty pays and allowances when the unit was demobilized from active duty. Fort Campbell finance office personnel we interviewed were not present during B Company’s demobilization and had no knowledge of why pay was not stopped during the demobilization process. Failure to stop location-based and other active duty pays and allowances for the unit’s soldiers resulted in overpayments. As a result of errors the Colorado USPFO made in attempting to amend the unit’s orders to reflect an earlier release date than the date reflected in the unit’s original mobilization orders, large debts were created for many soldiers in the unit.
Specifically, largely erroneous soldier debts were created when personnel at the Colorado USPFO inadvertently revoked the soldiers’ original mobilization orders when attempting to amend the orders to reflect the unit’s actual release date of December 4, 2002—1 year before the end of the unit’s original orders. As a result, 34 soldiers received notice on their Leave and Earnings Statements that rather than a debt for the 2 and a half months of active duty pay and allowances they received after their entitlement had ended, they owed debts for the 11 months of their active duty tour—an average of $48,000 per soldier, for a total debt of $1.6 million. Several of the soldiers in the company noticed the erroneous debt and called their unit commander. Some of the soldiers wanted to settle the debt by writing a check to DFAS. However, they were told not to because the exact amount of each soldier’s debt could not be readily determined and tracking such a payment against an as-yet undetermined amount of debt could confuse matters. Meanwhile, some soldiers, now returned from active duty, resumed participation in monthly training and began having two-thirds of their drill pay withheld and applied to offset their largely erroneous debt balances. We were told that it would take approximately 4 to 5 years for the soldiers to pay off these debts using this approach. On April 17, 2003, and in a subsequent June 20, 2003, letter, we brought this matter to the attention of DFAS and the DOD Comptroller, respectively. Table 7 provides an overview of the actions leading to the creation of largely erroneous payroll-related debts for many of the unit’s soldiers and DOD’s actions to address these largely erroneous debts. Despite considerable time and effort by DFAS and others across the Army Guard and Army, as of the end of our fieldwork in September 2003, Colorado’s B Company debt problems had not been resolved.
In fact, for one sergeant, his pay problems were further complicated by these efforts. For example, in attempting to reduce the soldier’s recorded $30,454 debt by $20,111, DFAS instead sent the soldier a payment of $20,111. As of September 2003, about 9 months after his demobilization, the sergeant’s reported unpaid debt balance was $26,806, but the actual amount of his debt remained unresolved. On January 2, 2002, the Virginia Army National Guard’s B Company, 3rd Battalion, 20th Special Forces, was called to active duty in support of Operation Enduring Freedom for a 1-year tour. The unit in-processed at Fort Pickett, Virginia, and departed for Fort Bragg, North Carolina. The unit mobilized at Fort Bragg and for the next several months performed various duties on base until May 2002. In early May 2002, Virginia’s B Company deployed to Afghanistan to perform search and destroy missions against al Qaeda and Taliban terrorists. Although several of B Company’s soldiers returned from Afghanistan during August and September 2002, most of the unit’s members returned to Fort Bragg for demobilization during October 2002 and were released from active duty on January 2, 2003. A timeline of the unit’s actions associated with its mobilization under Operation Enduring Freedom is shown in figure 10. As summarized in table 8, the majority of soldiers from Virginia’s B Company experienced some sort of pay problem during one or more of the three phases of their active duty mobilization. Overall, 64 of the 65 soldiers with the company experienced at least one pay problem associated with their mobilization. These pay problems included not receiving entitled pays and allowances at all; not receiving some entitled pays and allowances within 30 days; and for some, overpayments of pays and allowances. 
Specifically, we found (1) 31 soldiers did not receive certain pay and allowance entitlements at all, or within 30 days of their initial mobilization entitlement, or were overpaid, (2) 63 soldiers either did not receive, or did not receive within 30 days, the hardship duty pay or other high-risk location pays they were entitled to receive based on their deployment to Afghanistan, and (3) 60 soldiers improperly continued to receive hardship duty pay or hostile fire pay after leaving high-risk locations overseas. In total, we identified estimated overpayments of $25,000, underpayments of $12,000, and late payments of $28,000 associated with the pay problems we found. Of the estimated $25,000 in overpayments, we identified about $2,000 that was subsequently collected from the soldiers. Our audit showed that the pay problems experienced by Virginia’s B Company were the result of a number of factors, including late submission of required pay support documents, incorrect pay inputs by Army personnel, and an active Army in-theater finance office’s lack of knowledge about the unit’s presence in Afghanistan. These pay problems had a number of adverse effects. Several B Company soldiers we interviewed expressed dissatisfaction with the time and effort they, or their spouses, were required to spend attempting to identify and correct problems with their pay. Another complaint concerned the circular nature of any attempts to get assistance. For example, we were told the USPFO referred soldiers to the active Army finance office and that office referred them back to the USPFO. Virginia USPFO officials informed us that the circular nature of giving assistance to soldiers was sometimes unavoidable. For example, they said that once soldiers left their home unit and the Fort Bragg and in-theater finance offices assumed pay responsibilities, the USPFO informed soldiers and their spouses to contact these active Army finance offices to discuss active duty payment problems. 
USPFO officials acknowledged that in instances in which the active Army finance office did not resolve soldiers’ pay problems, USPFO staff would try to fix the problems. According to several soldiers, the combined effect of recurring pay problems and receiving poor payroll customer service during their active duty tours adversely affects morale and may have a negative effect on the soldiers’ willingness to continue serving with the Army National Guard. Several soldiers told us that the numerous pay problems they encountered would play a major role in their decisions whether to reenlist. As summarized in table 9, we identified a number of pay problems associated with the unit’s mobilization to active duty. These problems resulted from failures by unit soldiers to provide necessary documentation to initiate certain pays, and data entry errors or late entry of data needed to start active duty pays by Army Guard USPFO military pay personnel and/or by active Army military pay personnel at the unit’s mobilization station. We identified 31 out of 65 soldiers from Virginia’s B Company who did not receive certain types of pay at all, were not paid in a timely manner, or were overpaid after being activated on January 2, 2002. The types of pay for which most problems occurred during mobilization were parachute jump pay, foreign language proficiency pay, HALO pay, and basic pay. As shown in table 9, we identified 8 soldiers who were underpaid for jump pay, 10 soldiers who were underpaid for foreign language pay, and 10 soldiers who were overpaid for HALO pay. Prior to being mobilized, the soldiers in Virginia’s B Company attended a soldier readiness program at the USPFO at Fort Pickett, Virginia. Part of this program was intended to ensure that soldiers had proper administrative paperwork and financial documents necessary to start all entitled active duty pays at mobilization. 
Virginia USPFO personnel who conducted the finance portion of B Company’s soldier readiness program verified soldiers’ supporting financial documentation and updated, if necessary, each soldier’s Master Military Pay Account (MMPA). This verification process disclosed that many soldiers had unresolved pay errors that had occurred as far back as 1996. According to U.S. Army Forces Command Regulation 500-3-3, these problems should have been corrected during required annual soldier readiness reviews conducted at the unit’s home station. As part of our analysis of the unit’s pay, we determined that some of these long-standing pay problems had been resolved. For example, over $22,500 was processed for 52 B Company soldiers and included in their pay distributions from October 2001 to March 2003. USPFO officials told us that they have been working with a sergeant from Virginia’s B Company who performed a detailed analysis of soldiers’ long-standing pay problems in addition to pay problems that occurred after January 2002, during the majority of the unit’s mobilization. This sergeant performed these pay-related tasks in addition to his mission-related duties as a professional engineer. After leaving the unit’s home station, B Company traveled to Fort Bragg, its active Army mobilization station. Fort Bragg personnel conducted a second soldier readiness program that was intended to identify and fix any pay issues not resolved at the home station. According to USPFO officials and active Army finance office officials at Fort Bragg, problems with jump pay and foreign language pay occurred at mobilization because the necessary documentation to support jump pay eligibility or language proficiency for a number of soldiers was not always provided to the USPFO or the mobilization station. For example, of the 8 soldiers in the unit who were underpaid for jump pay, 4 did not receive jump pay until mid-February 2002 and 1 did not begin to receive jump pay until mid-March.
In another instance, we identified 10 soldiers who were eligible to receive foreign language proficiency pay in January 2002, but did not receive payments for 1 or more months after they became eligible. Further, nine soldiers in the unit were eligible for HALO pay in January 2002. However, again, in part because of the lack of proper documentation from the unit’s soldiers, but also because of pay input errors at the active Army finance unit at Fort Bragg, pay problems occurred for seven of the nine soldiers during January 2002, the initial month of their mobilization. The seven soldiers eligible for HALO pay received both jump pay as well as HALO pay during January 2002, which resulted in overpayments to these soldiers. These overpayments occurred because Fort Bragg, unaware that the USPFO had previously processed HALO pay for these soldiers, processed HALO pay a second time, based on supporting documentation received from the unit. Also, we found that two soldiers, who were not eligible to receive HALO pay, received HALO pay for 3 months and another soldier received HALO pay starting in January but did not become eligible for this pay until mid-April 2002. Documentation was not available to explain these errors. In May 2002, Virginia’s B Company left Fort Bragg and traveled to Afghanistan to assist in missions against al Qaeda and Taliban forces. While in Afghanistan, the soldiers encountered additional pay problems related to hardship duty pay, special duty assignment pay, and, to a lesser extent, hostile fire pay and basic pay. Also, the soldiers experienced problems in receiving the full amounts of their entitled HALO pay. Table 10 summarizes the pay problems we identified for the unit while it was deployed. Once the soldiers arrive in-theater, an active Army finance office assigned to the unit is responsible for initiating assignment and location-based pays for the unit’s soldiers in DJMS-RC. 
However, we found that the active Army in-theater finance offices did not always know which units they were responsible for servicing or where those units were located. The in-theater finance office for Virginia’s B Company, located in Kuwait, did not start these pays as required. We were told that this occurred because finance personnel in Kuwait did not know that B Company had arrived in Afghanistan. Virginia’s B Company soldiers, who were not regularly receiving their Leave and Earnings Statements while in Afghanistan, told us that, based on conversations with their spouses, they became concerned that they were not receiving pays they were entitled to while deployed. After attempts to initiate location-based pays at the battalion finance unit in Afghanistan were unsuccessful because finance personnel at that location were not familiar with DJMS-RC’s transaction processing requirements for starting these types of pay, two soldiers were ordered to travel to Camp Snoopy, Qatar, where another Army finance office was located. Attempts to start assignment and location-based pays for the unit’s soldiers at Camp Snoopy were also unsuccessful. One of the soldiers told us that they flew to Kuwait because they were advised that the finance unit at that active Army finance office was more knowledgeable about how to enter the necessary transactions into DJMS-RC to pay the unit’s soldiers. The soldier told us he took with him an annotated battle roster listing the names of all Virginia’s B Company soldiers deployed in and around Afghanistan at that time and the dates they arrived in country as support for starting the unit’s in-theater pays. Finally, in Kuwait, the appropriate in-theater pays were activated and the two soldiers returned to Afghanistan. As shown in figure 11, the entire trip required interim stops at eight locations because of limited air transportation and took about a week.
Despite this costly, time-consuming, and risky procedure to start location-based pays for the unit, 63 of Virginia’s B Company soldiers who became eligible for hardship duty pay in May 2002 did not receive their location-based pay entitlements until July 2002. Problems with special duty assignment pay also occurred during the unit’s deployment. We found that both underpayments and overpayments of this type of pay were made as a result of confusion about who was responsible for making the manual monthly transactions necessary for entitled soldiers in the unit to receive these pays. For example, 10 soldiers in B Company did not receive at least 1 month of entitled special duty assignment pay. Conversely, overpayments of this type of pay were made when B Company left Afghanistan and returned to Fort Bragg to demobilize in October 2002, and both the active Army finance office at Fort Bragg and the Virginia USPFO entered special duty assignment pay transactions for the unit’s eligible soldiers. Fort Bragg processed October and November 2002 special duty assignment payments for 24 of the unit’s soldiers in December 2002. Virginia’s USPFO, unaware that Fort Bragg had made these payments in December 2002, also paid all 24 eligible soldiers special duty assignment pay for October and November 2002 several months later. USPFO officials explained that their military pay office processed the payments because B Company submitted the necessary documentation certifying that the unit’s soldiers were entitled to receive back pay for missed special duty assignment pays. The officials told us that special duty assignment pay was processed because, having received this certification from the unit, they assumed that payments had not yet been made. Virginia’s B Company soldiers also experienced problems with HALO pay during deployment. We identified 11 B Company soldiers eligible for HALO pay who did not receive 1 or more months of this pay as of March 31, 2003.
We determined that these problems occurred because such pays require manual monthly input, and the pay technicians inadvertently did not make the required entries each month. In addition, 2 of the unit’s soldiers did not receive all hostile fire payments to which they were entitled. One soldier did not receive the first month of entitled hostile fire pay for May 2002, and the other soldier received hostile fire pay for May 2002 but not for the remaining months of his deployment. Although some soldiers in B Company left Afghanistan during August and September 2002, most of the unit returned to Fort Bragg in October 2002 to begin the demobilization process. As summarized in table 11, 57 soldiers continued to receive pays to which they were no longer entitled after leaving Afghanistan, including hostile fire pay, hardship duty pay, or both. According to DOD mobilization procedures, the finance office at the servicing demobilization station is to conduct a finance out-processing. The finance office is responsible for inputting transactions to stop certain location-based pays, such as hardship duty pay and hostile fire pay. In addition, according to DOD’s Financial Management Regulation (FMR), Volume 7A, chapters 10 and 17, location-based pays must be terminated when the soldier leaves the hostile fire/combat zone. Overpayments to B Company soldiers occurred during demobilization because the in-theater finance office continued to make hostile fire and hardship duty pays after soldiers left Afghanistan in October 2002, and the Fort Bragg active Army finance office did not enter transactions into DJMS-RC to stop these payments as required. We found that 55 of 64 soldiers eligible for hostile fire pay were overpaid for at least 1 month beyond their departure from Afghanistan. Also, we found that 57 of 64 soldiers eligible for hardship duty pay were overpaid at least part of 1 month.
A Fort Bragg official explained that the Army finance office personnel at Fort Bragg were not aware that these payments were still being made after the soldiers had returned to the United States, but subsequently determined that hostile fire and hardship duty overpayments were occurring and took action to terminate the payments. Also, four members of Virginia’s B Company, who were injured while deployed in Afghanistan, returned to Fort Bragg and requested medical extensions to their active duty tours so they could continue to receive active duty pay and medical benefits until they recovered. One of the soldiers told us, “People did not know who was responsible for what. No one knew who to contact or what paperwork was needed ….” To support themselves and their families, these four soldiers needed the active duty military pay they were entitled to receive while obtaining medical treatment and recovering from their injuries. However, after risking their lives for their country, all four have had gaps in receiving active duty pay while they remained under a physician’s care after their demobilization date and have experienced financial difficulties. In addition, when active duty pay was stopped, the soldiers’ medical benefits were discontinued. As discussed earlier in this report, these pay-related problems for wounded soldiers caused significant hardship for them and their families. On December 5, 2001, West Virginia’s 19th Special Forces Group, 2nd Battalion, C Company, was called to active duty in support of Operation Enduring Freedom for a 1-year tour. The unit was mobilized at Fort Knox and subsequently deployed in Afghanistan, Uzbekistan, and surrounding areas to search for possible Taliban and al Qaeda terrorists. The unit returned to Fort Campbell for demobilization and was released from active duty on December 4, 2002. A timeline of the unit’s actions associated with its mobilization under Operation Enduring Freedom is summarized in figure 12.
As summarized in table 12, the majority of soldiers from C Company experienced some sort of pay problem during one or more of the three phases of their active duty mobilization. Overall, 86 of the 94 soldiers with the company experienced at least one pay problem associated with its mobilization. Specifically, we identified (1) 36 soldiers who were either overpaid, did not receive certain pay and allowance entitlements at all, or did not receive pay within 30 days of their initial mobilization entitlement, (2) 84 soldiers who were either overpaid, did not receive, or did not receive within 30 days, the hostile fire pay or other high-risk location pays they were entitled to receive based on their deployment in Uzbekistan and Afghanistan, and (3) 66 soldiers who did not receive, or did not receive within 30 days, their special duty assignment pay during their demobilization. In total, we identified estimated overpayments of $31,000, underpayments of $9,000, and late payments of $61,000 associated with the identified pay problems. We did not identify any collections related to overpayments for this unit. As summarized in table 13, several soldiers from C Company did not receive the correct pay or allowance when called to active duty. We found that some soldiers received payments over 5 months late and other soldiers had been overpaid. Seven soldiers did not receive their $225 per month HALO pay until over a month after mobilization, and 18 other soldiers received combat diver pay and HALO pay to which they were not entitled. Prior to being mobilized, the soldiers in C Company attended a soldier readiness program at their unit armory. This program was intended to ensure that all soldiers had proper administrative paperwork and financial documents and were physically fit for the ensuing mobilization. 
West Virginia USPFO personnel who conducted the finance portion of C Company's soldier readiness program were required to verify soldiers' supporting financial documentation and update, if necessary, soldiers' pay records in DJMS-RC. Some payments were late because soldiers did not submit the correct paperwork at the time of the soldier readiness program. For example, according to the USPFO, one soldier did not submit the proper paperwork for his family separation allowance. The delay in submission caused his first payment to be over 3 months late. Another problem with the unit's mobilization related to 17 soldiers who had significant problems with their HALO pay. According to USPFO personnel, the unit commander for C Company did not provide the USPFO a list of the unit members who were eligible to receive HALO pay. Therefore, the USPFO paid regular parachute pay to all the unit members who were parachute qualified. Once the USPFO received a list of the unit's 17 HALO-qualified soldiers, pay personnel attempted to recoup the regular jump pay and pay the HALO team the increased HALO pay amount. USPFO personnel told us they did not know how to initiate a payment for the difference between regular jump and HALO pay. Consequently, they entered transactions to recoup the entire amount of jump pay and then initiated a separate transaction to pay the correct amount of HALO pay. According to the DOD FMR, volume 7A, chapter 24, soldiers who are eligible to receive regular parachute pay and HALO pay are paid the higher of the two amounts, but not both. In this case, the 17 members of C Company's HALO team should have received a $225 per month payment from the beginning of their mobilization. Pay records indicate that this correction initiated by the USPFO occurred about 2 months after the unit mobilized.
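The FMR's higher-of-the-two rule, and the difference-only correction the USPFO personnel did not know how to make, can be expressed in a minimal sketch (ours, not DJMS-RC logic; the $150 and $225 monthly rates are those cited in this report):

```python
# Illustrative sketch (not DJMS-RC code) of the DOD FMR rule that a
# soldier qualified for both regular parachute pay and HALO pay receives
# the higher of the two amounts, but not both.
REGULAR_JUMP_PAY = 150  # dollars per month (rate cited in this report)
HALO_PAY = 225          # dollars per month (rate cited in this report)

def monthly_parachute_entitlement(halo_qualified: bool) -> int:
    """Return the single parachute-duty payment owed for one month."""
    return HALO_PAY if halo_qualified else REGULAR_JUMP_PAY

def difference_owed(months_paid_at_regular_rate: int) -> int:
    """A HALO-qualified soldier mistakenly paid the regular rate is owed
    only the monthly difference; no recoupment of prior pay is needed."""
    return months_paid_at_regular_rate * (HALO_PAY - REGULAR_JUMP_PAY)

print(monthly_parachute_entitlement(halo_qualified=True))  # 225
print(difference_owed(months_paid_at_regular_rate=2))      # 150
```

Paying the $75 monthly difference, rather than recouping all jump pay and reissuing HALO pay, would have avoided the erroneous collections described below.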
When the USPFO personnel attempted to collect the soldiers' regular parachute pay, they inadvertently collected a large amount of the soldiers' basic active duty pay for the first month of their mobilization. Personnel at the USPFO stated that the error caused debts on soldiers' accounts but was corrected immediately after a pay supervisor at the USPFO detected the error in February. Even after the soldiers' pay was corrected, USPFO personnel did not stop the regular parachute pay for the HALO team members, but instead let it continue, then collected the $150 per month parachute pay manually, and then paid the correct $225 per month HALO pay. This error-prone, labor-intensive manual collection and subsequent payment method used by the USPFO personnel to pay C Company's HALO team the higher HALO rate of pay was not consistently applied each month and resulted in 7 soldiers being overpaid when their regular parachute pay was not collected. In addition to the 7 soldiers who were actually on the HALO team, 10 other soldiers were on the initial list given to the USPFO but were actually not on the HALO team. The unit commander for C Company provided a more accurate list to the USPFO some time after the first list, and only members on the more accurate list continued to receive HALO pay. However, USPFO pay personnel did not attempt to collect the HALO pay from unit members on the first list who had incorrectly received HALO pay. As a result of this complex collection and payment process, the unit's soldiers were confused about whether they were receiving all their entitled active duty pays while mobilized. After leaving the unit's home station, C Company traveled to Fort Knox, its active Army mobilization station. As required by Army guidance, Fort Knox personnel conducted a second soldier readiness program to identify and fix unresolved pay issues associated with the unit's mobilization.
Based on our findings that the pay problems continued after this review, it does not appear that the active Army finance office at Fort Knox carried out its responsibility to review and validate all C Company soldiers' active duty pay and allowance records. Problems with HALO and family separation pay were not resolved for several months after the mobilization. As a result, the soldiers' pay problems persisted into their deployment overseas. As summarized in table 14, we identified a number of pay problems associated with three different types of active duty pays related to the unit's deployment. After going through initial in-processing at Fort Knox, C Company soldiers traveled to Fort Campbell where they prepared to deploy overseas. Starting in December 2001, members of C Company traveled to Uzbekistan and Afghanistan to perform special forces missions. During their deployment overseas, C Company soldiers consistently experienced problems related to specific location-based payments such as hostile fire pay and hardship duty pay. In 78 cases, the payments were not started within 30 days from when the soldiers were entitled to the payments. In 22 other cases, we determined that soldiers had not received all location-based pays as of March 31, 2003. In 60 cases, the soldiers were overpaid or payments were not stopped when they left the combat zones. Due to the lack of supporting documents at the state, unit, and battalion levels, dates for when each soldier entered and left combat zones were not always available. Consequently, there may have been other deployment-related pay problems for C Company that we were not able to identify. According to DFAS policy, when soldiers from C Company arrived in Uzbekistan the in-theater finance office in Uzbekistan was responsible for initiating location-based payments for the unit. Unit personnel stated that the staff in the finance office in Uzbekistan were not adequately trained in how to input pays into DJMS-RC.
Initially, we were told the Uzbekistan finance office incorrectly believed it was the West Virginia USPFO’s responsibility to start location-based pays for the deployed soldiers from C Company. The active Army finance office in Uzbekistan instructed the unit to contact the West Virginia USPFO to start location-based pays. However, DFAS policy clearly states that it is the active Army in-theater finance office’s responsibility to start and maintain monthly location-based payments. After attempts by the unit administrator and the Uzbekistan finance office failed to initiate the payments, a sergeant in C Company was ordered to travel to Camp Doha, Kuwait, to have the unit’s location-based pays started. The soldier stated that he traveled to Camp Doha because he was told that the finance unit at that active Army finance location was more knowledgeable in how to enter transactions into DJMS-RC to initiate location-based pays for the unit’s soldiers. The soldier took with him all the necessary paperwork to have the pays started for all the companies under the battalion, including C Company. On the return flight from the sergeant’s mission in Kuwait, his plane encountered enemy fire and was forced to return to a safe airport until the next day. The failure by active Army personnel at the finance office in Uzbekistan to enter the transactions necessary to start location-based pays for the unit delayed payments to some soldiers for up to 9 months and put one soldier in harm’s way. Per DOD FMR, volume 7A, chapter 10, soldiers who perform duty in hostile fire zones are entitled to hostile fire pay as soon as they enter the zone. However, we found that 45 soldiers in C Company did not have their hostile fire pay started until over 30 days after they were entitled to receive it. Some of C Company’s soldiers received retroactive payments over 2 months after they should have received their pay. 
In addition, as of March 31, 2003, we determined that 18 soldiers from the unit were not yet paid for 1 or more months that they were in the hostile fire zone. We also identified 40 soldiers who received hostile fire pay after they had left the country and were no longer entitled to receive such pays. These overpayments occurred primarily because hostile fire pay is an automatic recurring payment based on the start and stop date for the soldier’s mobilization entered into DJMS-RC. However, in this case, the active Army finance office in Uzbekistan did not amend the stop dates for automated active duty pays in DJMS-RC to reflect that C Company left the designated area before the stop date entered into DJMS-RC. The active Army finance office’s failure to follow prescribed procedures resulted in overpayment of this pay to 40 soldiers. Per DOD FMR, volume 7A, chapter 17, soldiers who perform duties in designated areas for over 30 days are entitled to the hardship duty pay incentive. The FMR provides for two mutually exclusive types of hardship duty pay for identified locations—one according to specified “designated areas” and the other for specified “certain places.” Effective December 31, 2001, the regulation no longer permitted soldiers newly assigned to locations specified as “certain places” to begin receiving hardship duty pay. However, the regulation specified Afghanistan and Uzbekistan as designated areas and provided for paying $100 a month to each soldier serving there. While deployed to Afghanistan and Uzbekistan, 29 soldiers in C Company were mistakenly provided both types of hardship duty pay. The local finance office in Uzbekistan correctly entered transactions to start C Company’s hardship duty pay for designated areas into the DJMS-RC pay system. Due to limitations in DJMS-RC, the local finance office was required to manually enter the designated area payments for each soldier every month the unit was in a designated area. 
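The mutually exclusive hardship duty pay rule described above, and the overpayment that results when both types are paid, can be illustrated with a minimal sketch (ours, not finance-office code; only the $100 designated-area rate comes from this report, and the example amounts are illustrative):

```python
# Illustrative sketch of the mutually exclusive hardship duty pay rule:
# a soldier serving in a "designated area" such as Afghanistan or
# Uzbekistan is owed $100 per month and may not also draw the
# "certain places" type of hardship duty pay.
DESIGNATED_AREA_RATE = 100  # dollars per month (rate cited in this report)

def hardship_duty_owed(months_in_designated_area: int) -> int:
    """Correct entitlement: one designated-area payment per month served."""
    return months_in_designated_area * DESIGNATED_AREA_RATE

def overpayment(months_in_area: int, designated_paid: int,
                certain_places_paid: int) -> int:
    """Dollars paid beyond the correct entitlement, whether from an
    erroneous recurring 'certain places' payment or duplicate months."""
    return max(0, designated_paid + certain_places_paid
               - hardship_duty_owed(months_in_area))

# A soldier deployed for 6 months, correctly paid $600 of designated-area
# pay, but also erroneously started on recurring certain-places payments:
print(overpayment(6, designated_paid=600, certain_places_paid=300))  # 300
```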
However, DFAS documentation shows that finance personnel at Fort Bragg incorrectly initiated a recurring certain places hardship duty payment for soldiers in C Company. For some soldiers, payments continued until May 31, 2002, and for others the payments continued until the end of their tour of active duty on December 4, 2002. These erroneous certain places hardship duty pays resulted in overpayments. In addition, because DJMS-RC processing capability limitations required the designated areas payment to be manually entered every month the unit was in the designated area, the in-theater finance office in Uzbekistan failed to consistently enter the monthly designated area payments for all entitled soldiers. Throughout the time C Company was in Uzbekistan and Afghanistan, we identified a total of 5 soldiers who missed one or more monthly payments of entitled hardship duty designated area pay. Other soldiers received entitled payments over 9 months late. Still others were paid more than once for the same month or paid after leaving the designated area, resulting in overpayments to 12 soldiers. The mix of erroneous certain places hardship duty payments along with sporadic payments of the correct type of designated area hardship duty pay caused confusion for the soldiers of C Company and their families regarding what types of pay they were entitled to receive and whether they received all active duty entitlements. C Company returned to Fort Campbell during the fall of 2002 to begin the demobilization process. By October 2002, all of the unit had returned from overseas and was demobilized on December 4, 2002. As shown in table 15, 66 of C Company's 94 soldiers experienced pay problems associated with their demobilization from active duty. In October 2002, eligible soldiers in the unit were entitled to a special duty assignment pay increase from $110 per month to $220 per month.
To initiate this higher pay rate, the West Virginia Army National Guard military personnel office was required to cut new special duty assignment pay orders for all eligible C Company soldiers. USPFO officials stated that they could not pay the increased amount until they received a copy of the new orders. The USPFO personnel did not continue to pay the $110 a month to the soldiers because they did not want to have to recoup the old amount and then pay the correct amount when orders were received. However, the orders for the soldiers were not received by the USPFO for several months, which created a delay in the payment of the soldiers’ special duty assignment pay. Supporting documents showed that a delay in the production of the orders by the West Virginia Army National Guard military personnel office caused the late payments. For C Company, 63 soldiers received their last 3 months of special duty assignment pay over 30 days late. Another 3 soldiers did not receive their last 3 months of special duty assignment pay because the USPFO inadvertently overlooked the manual transaction entries required to process special duty assignment pay for those soldiers. On December 27, 2001, the Mississippi Army National Guard’s 114th Military Police Company was called to active duty in support of Operation Noble Eagle for a 1-year tour—through January 5, 2003. The unit mobilized in Clinton, Mississippi, and departed for Fort Campbell, Kentucky, on January 6, 2002. The unit in-processed at Fort Campbell and for the next 5 months performed military police duties at Fort Campbell until early June. On June 10, 2002, the 114th Military Police Company deployed to Guantanamo Bay, Cuba, to perform base security and guard duties for Taliban and al Qaeda prisoners. After guarding detainees in Cuba for approximately 6 months, the unit returned to Fort Campbell in late November 2002. 
At Fort Campbell the unit out-processed and returned to Clinton, Mississippi, and was released from active duty on January 5, 2003. A time line of actions associated with the unit’s active duty mobilization is shown in figure 13. As summarized in table 16, at every stage of the unit’s 1-year tour of active duty, soldiers experienced various pay problems. Of the 119 soldiers of the Mississippi Army National Guard’s 114th Military Police Company, 105 experienced at least one pay problem associated with mobilization in support of Operation Noble Eagle. Specifically, we found that (1) 21 soldiers experienced underpayments, overpayments, or late payments, or a combination of these, during their initial mobilization, including some soldiers who did not receive payments for up to 7 months after their mobilization dates, and others who still have not received certain payments, (2) 93 soldiers experienced underpayments, overpayments, late payments, or some combination, during their tour of active duty at Fort Campbell and in Cuba, including in-theater incentives such as hardship duty pay, and (3) 90 soldiers experienced underpayments, overpayments, late payments, or a combination of these, during their demobilization at Fort Campbell, including problems related to the continuation of in-theater incentives and overpayment of active duty pay after demobilization. In total, we identified estimated overpayments of $50,000, underpayments of $6,000, and late payments of $15,000 associated with the pay problems we found. Of the estimated $50,000 in overpayments, we identified about $13,000 that was subsequently collected from the unit’s soldiers. As summarized in table 17, we found that 21 soldiers from the 114th Military Police Company experienced underpayments, overpayments, late payments, or some combination related to pay and allowance entitlements when called to active duty. 
For example, several soldiers did not receive their entitled $100 per month family separation allowance until 7 months after mobilization, and several other soldiers did not receive the correct type of basic allowance for housing as specified in the DOD FMR, Volume 7A, chapter 26. Prior to being mobilized, the soldiers in the 114th Military Police Company attended a soldier readiness program at their unit armory. The purpose of this review was to ensure that all soldiers had proper administrative paperwork and financial documents and were physically fit for the ensuing mobilization. Mississippi USPFO personnel, who conducted the finance portion of the 114th Military Police unit's soldier readiness program, were required to verify soldiers' supporting financial documentation, and update, if necessary, soldiers' MMPAs. Not submitting the complete and current paperwork at the time of the soldier readiness program contributed to some of the late payments we identified. For example, some soldiers did not receive their family separation allowance because they did not provide documentation supporting custody arrangements. However, we also found that confusion at the USPFO over the eligibility of single parents contributed to these late pays. It was later in the unit's active duty tour that finance officers initiated action for 11 of the 114th Military Police unit's soldiers to receive retroactive payments, some for as much as 7 months of back pay. In another case, a former Special Forces soldier improperly received jump pay even though his assignment to this military police unit did not require that special skill. Five soldiers improperly received active duty pay and allowances even though they did not mobilize with the unit. Because these five soldiers were not deployable for a variety of reasons, they were transferred to another unit that was not subject to the current mobilization.
However, the delay in entering the transfer and stopping pay caused each of these soldiers to receive active duty pay for 10 days. Several other soldiers received promotions at the time of their mobilization, but state military pay personnel at the USPFO did not enter transactions for the promotions until several months later, resulting in late promotion pay to the affected soldiers. Delays by the unit in submitting the promotion paperwork or by the state personnel office in entering the promotion paperwork into the personnel system caused these problems. However, supporting documents were not available to enable us to determine the specific cause of the delays. After leaving the unit’s home station, the 114th Military Police Company traveled to Fort Campbell, its active Army mobilization station. As required by Army guidance, Fort Campbell personnel conducted a second soldier readiness program intended, in part, to verify the accuracy of soldiers’ pay records. However, instead of conducting a thorough review of each soldier’s pay record, Fort Campbell finance personnel performed only a perfunctory review by asking the soldiers if they were experiencing pay problems. At this point, because the soldiers had only recently mobilized and had not received their first paychecks, they were unaware of pay problems. Failure to follow requirements for finance verification at Fort Campbell of each soldier’s pay account caused pay problems to persist past the mobilization stage. In addition, we were unable to determine specific causes for certain pay problems associated with the unit’s mobilization because the unit remobilized in February 2003, and unit administrative personnel did not retain payroll source documents relating to the prior mobilization. As summarized in table 18, we identified a number of pay problems associated with four types of active duty pays and allowances associated with the unit’s deployment while on active duty. 
While at Fort Campbell, eight soldiers experienced problems resulting from delays in entering changes in the family separation allowance, basic allowance for housing, and active duty pay increases from promotions. For example, one soldier was promoted to the rank of Private First Class at the end of May, but the pay system did not reflect the promotion until October. Although the soldier eventually received retroactive promotion pay, the delay caused the soldier to be paid at her old rank for 5 months. According to DFAS guidance, when a change occurs in a soldier's pay, the on-site Army finance office should input the change. In cases where personnel changes occurred that affected pay, either the soldiers failed to submit documents or personnel at Fort Campbell failed to input the changes. Due to the lack of documentation, we could not determine the origin of the delays. During the unit's deployment to Guantanamo Bay, Cuba, the soldiers encountered additional pay problems related to hardship duty pay, a location-based payment for soldiers located at designated hardship duty locations. Some soldiers received extra hardship duty payments, while others were only paid sporadically. In total, only 9 of the 100 soldiers who deployed to Guantanamo Bay with the 114th Military Police Company received the correctly computed hardship duty pay. Per DOD FMR, Volume 7A, chapter 17, soldiers who perform duties in designated areas for over 30 days are entitled to the hardship duty pay incentive. The FMR provides for two mutually exclusive types of hardship duty pay for identified locations: one according to specified “designated areas” and the other for specified “certain places.” Effective December 2001, the regulation no longer permitted soldiers newly assigned to locations specified as certain places to begin receiving hardship duty pay.
However, the regulation specified Guantanamo Bay, Cuba, as a designated area and provided for paying $50 a month to each soldier serving there. Most of the 114th Military Police unit’s soldiers were mistakenly provided both types of hardship duty pay while deployed to Cuba. Upon arrival in Cuba, the local Guantanamo Bay finance office correctly entered transactions to start hardship duty pay for designated areas for the 114th Military Police unit’s soldiers into DJMS-RC. However, unknown to Guantanamo finance personnel, Fort Campbell finance personnel, upon the unit’s departure to Cuba, incorrectly initiated recurring certain places hardship duty payments for the soldiers of the 114th Military Police unit. These payments of both types of hardship duty pay resulted in overpayments to 88 enlisted soldiers of the 114th Military Police Company during the time the soldiers were stationed in Cuba. In addition, as a result of personnel turnover and heavy workload in the active Army’s Guantanamo Bay finance office and limitations in DJMS-RC, the Guantanamo Bay finance office did not make all the required monthly manual transaction entries required to pay hardship duty pays to the 114th Military Police Company’s soldiers. As a result, several soldiers in the unit did not receive one or more monthly hardship duty payments. Limitations in DJMS-RC required the local finance office to manually enter the designated area payments for each soldier on a monthly basis. For 11 soldiers, the finance office inadvertently overlooked entering one or more monthly hardship duty payments. The combination of erroneous certain places payments, along with sporadic payments of hardship duty designated area pays caused confusion for the soldiers who were performing a stressful mission in Cuba regarding whether they were receiving all their active duty pay entitlements. The 114th Military Police Company returned to Fort Campbell on November 23, 2002, to begin the demobilization process. 
During demobilization, soldiers continued to experience pay problems. As summarized in table 19, overpayment problems consisted of improper continuation of hardship duty pay following the unit's return from Cuba and failure to stop active duty pay and allowances to soldiers who were discharged or returned from active duty early. According to the DOD FMR, Volume 7A, chapter 17, soldiers are entitled to receive hardship duty pay only while they are stationed in a hardship duty location. While the active Army's Guantanamo Bay finance office stopped monthly designated area payments upon the unit's departure from Cuba, the Fort Campbell finance office did not discontinue the incorrect certain places payments that its finance office had initiated months earlier. Consequently, 85 of the 114th Military Police unit's 88 soldiers continued receiving the incorrect certain places payments through their last day of active duty. In addition, five soldiers continued to receive active duty pay and allowances after being discharged or returned from active duty. Instead of demobilizing on schedule with their unit, these five soldiers demobilized individually earlier for various reasons. According to DFAS guidance, Fort Campbell, the designated demobilization station for the 114th Military Police Company, was responsible for stopping active duty pay for the unit's demobilizing soldiers. However, when these individual soldiers were released from active duty, Fort Campbell processed discharge orders but Fort Campbell's finance office failed to stop their pay. Further, in at least one case in which documentation was available, state USPFO military pay personnel did not immediately detect the overpayments in monthly pay system mismatch reports. For these five soldiers, overpayments continued for up to 3 months. One of these soldiers was discharged early because of drug-related charges. However, his pay continued for 3 months past his discharge date.
By the time the USPFO stopped the active duty pay, the former soldier had received overpayments of about $9,400. Although the state USPFO military pay personnel stopped the active duty pay in September 2002, no attempt to collect the overpayment was made until we identified the problem. In July 2003, state military pay personnel initiated collection for the overpayment. Another soldier was discharged on July 8, 2002, for family hardship reasons, but his active duty pay was not stopped until August 15, resulting in an overpayment. Another 114th Military Police soldier was returned from active duty on September 11, 2002, for family hardship reasons, but his active duty pay was not stopped until November 30, resulting in an overpayment of about $8,600. Another soldier, facing disciplinary proceedings related to a domestic violence incident, agreed to an early discharge on May 22, 2002. However, the soldier’s active duty pay was not stopped until the unit administrative officer, while deployed in Cuba, reviewed the unit commander’s finance report and discovered the soldier still on company pay records and reported the error. Following his discharge, this soldier continued to receive active duty pay until August 31, resulting in an overpayment. The 200th Military Police Company was called to active duty in support of Operation Noble Eagle on October 1, 2001, for a period not to exceed 365 days. The unit, including 90 soldiers who received orders to mobilize with the 200th Military Police Company, reported to its home station, Salisbury, Maryland, on October 1, 2001, and then proceeded to Camp Fretterd located in Reisterstown, Maryland, for the soldier readiness program (SRP) in- processing. On October 13, 2001, they arrived at their designated mobilization station at Fort Stewart, Georgia, where they remained for the next 2 weeks undergoing additional in-processing. 
The unit performed general military police guard duties at Fort Stewart until December 15, 2001, when 87 of the soldiers in the unit were deployed to guard the Pentagon. The company arrived at Fort Eustis, Virginia, in late August 2002 and was released from active duty on September 30, 2002. In addition, 3 of the 90 soldiers who received orders from the 200th Military Police Company were deployed in January 2002 to Guantanamo Bay, Cuba, to perform base security and guard duties with Maryland's 115th Military Police Company. These soldiers demobilized at Fort Stewart, Georgia, where they were released from active duty on July 10, 2002. A timeline of key actions associated with the unit's mobilization under Operation Noble Eagle is shown in figure 14. As summarized in table 20, the majority of soldiers from the company experienced some sort of pay problem during one or more of the three phases of their active duty mobilization. Overall, 83 of the company's 90 soldiers experienced at least one pay problem associated with their mobilization in support of Operation Noble Eagle. Pay problems included overpayments, underpayments, and late payments of entitlements, such as basic pay, basic allowance for housing, basic allowance for subsistence, family separation allowance, and hardship duty pay associated with their initial mobilization, deployment to Fort Stewart, the Pentagon, and Cuba; and demobilization from active duty status. In total, we identified estimated overpayments of $74,000, underpayments of $11,000, and late payments of $10,000 associated with the pay problems we identified. Of the estimated $74,000 in identified overpayments, we identified about $32,000 that was subsequently collected from the unit's soldiers.
Specifically, we determined that 75 soldiers were overpaid, underpaid, and/or paid late during the period of mobilization, including a soldier who did not receive correct payments for up to 7 months after the mobilization date; 64 soldiers experienced pay problems during their tour of active duty related to the proper payment of basic pay, basic allowance for subsistence, basic allowance for housing, family separation allowance, and location-based pays such as hardship duty pay; and 3 soldiers experienced pay problems during their demobilization from Fort Stewart related to continuation of active duty pay entitlements after they were released early from active duty. We identified a number of causes associated with these pay problems, including delays in submitting documents, incorrect data entry, and limited personnel to process the mass mobilizations. Maryland's USPFO officials told us they had not experienced a large-scale mobilization to active duty in more than 10 years. As summarized in table 21, we identified a number of pay problems associated with eight different types of active duty pays and allowances associated with the unit's mobilization to active duty. Seventy-five of 90 soldiers from the 200th Military Police Company did not receive the correct or timely entitlements related to basic pay, basic allowance for housing, basic allowance for subsistence, or family separation allowance when called to active duty. Thirteen soldiers received overpayments because they continued to receive pay after they were released early from active duty. These soldiers mobilized on October 1, 2001, and then received amended orders to be released from active duty around October 13, 2001. However, many continued to receive basic pay, basic allowance for subsistence, basic allowance for housing, and family separation allowance payments through the end of November 2001.
The unit administrator stated that many of these soldiers received amended orders after their initial mobilization when it was determined that they were not deployable for a variety of reasons, such as health or family problems. The overpayments occurred because the Maryland Army Guard command was not informed by either unit personnel or the active component that individuals (1) did not deploy or (2) were released from active duty early. The Maryland Army Guard command initiated amendment orders to stop the active duty pays when it became aware of the problem; however, the orders were not generated in time for the USPFO to stop active duty pays in the system. Specifically, in order for pay to be stopped by October 13, 2001, the USPFO must have received and processed the amended orders by October 8, 2001. However, the Maryland Army Guard command did not generate many of the amended orders until November 14, 2001, at which time they would have been sent to the unit and then forwarded to the USPFO too late to meet the pay cutoff. An additional soldier was issued an amended order to release him from active duty on October 13, 2001. Upon our review of his pay account, we determined that he continued to receive active duty pay and allowances for an entire year. We spoke with the unit administrator about this soldier and determined that he mobilized with the unit and was deployed for the entire year that he was paid. The unit administrator and Maryland Army Guard command, along with the USPFO pay officials, were not sure why the amendment order was never processed. They believe that the amendment fell through the cracks due to the general confusion and the limited personnel processing the mass mobilizations after September 11, 2001. Based on our inquiries, the Maryland Army Guard command generated an amendment on August 21, 2003, to reinstate the original order to avoid future questions regarding the soldier’s tour of duty. 
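The pay-stop cutoff described above can be expressed as a simple date calculation. This is an illustrative sketch, not DFAS procedure: the function name is ours, and the 5-day lead time is inferred solely from the report's example (orders processed by October 8, 2001, to stop pay by October 13, 2001).

```python
from datetime import date, timedelta

def latest_processing_date(pay_stop_date, lead_days=5):
    """Latest date by which the USPFO must receive and process an amended
    order for active duty pay to stop on pay_stop_date. Illustrative only;
    the 5-day lead is inferred from the report's October 2001 example."""
    return pay_stop_date - timedelta(days=lead_days)

# To stop pay by October 13, 2001, orders had to be processed by October 8.
cutoff = latest_processing_date(date(2001, 10, 13))
orders_generated = date(2001, 11, 14)
print(cutoff)                     # 2001-10-08
print(orders_generated > cutoff)  # True: the amended orders missed the cutoff
```

As the example shows, amended orders generated on November 14 could not have met the cutoff, which is why active duty pays continued through the end of November.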
Further, 42 soldiers from the unit were underpaid their entitled family separation allowance when they mobilized. Soldiers are entitled to receive a family separation allowance after they have been deployed away from home for more than 30 days. We found that these underpayments occurred as a result of Maryland USPFO military pay officials’ errors in calculating the start and stop dates for this allowance. Several soldiers did not receive the correct type of basic allowance for housing after being mobilized as specified in the DOD FMR, Volume 7A, chapter 26. We were unable to determine specific causes and amounts of all the unit’s problems associated with the basic allowance for housing because the unit had remobilized in July 2003 and some of the historical records relating to housing entitlements applicable to the prior mobilization could not be located. Furthermore, the original unit administrator had retired, leaving limited records of the prior mobilization for the current unit administrator. Based on our inquiries, we determined that some soldiers were underpaid their housing allowance because the Maryland USPFO military pay officials entered the incorrect date for the tour and therefore shortened the unit’s soldiers’ allowance by 1 day. Other soldiers did not receive the correct amount for this allowance as a result of different interpretations of how to enter “dependent” information provided on housing allowance application forms (Form 5960). According to personnel officials, married soldiers are required to write in their spouses’ names as dependents on Form 5960 in order to receive the higher housing allowance amount. However, guidance did not clearly specify that simply checking the box indicating that they are married is not sufficient support to receive the higher housing allowance (with dependents) rate. 
As a result, several soldiers’ dependent information was not loaded into the personnel system correctly, and they were paid a single rate housing allowance instead of the higher married rate allowance. Other soldiers did not receive the correct housing allowance because they did not turn in complete forms and documentation to initiate the correct allowance rate or were late in turning in documents. For example, one soldier, who appeared to have submitted his lease agreement 6 days after being called to active duty, did not receive the correct housing allowance amount for the first 2 months of active duty. During his entire deployment, the soldier attempted to get various unit and military pay officials to take action to initiate back pay for these housing allowance underpayments, including forwarding copies of the lease agreement as proof for payment on three different occasions. As of March 30, 2003, the soldier had not received the correct housing allowance for October and November 2001. Another soldier did not receive the correct amount of housing allowance after his mobilization and complained to the unit administrator. Seven months after his initial mobilization to active duty, finance officials at the active duty station in Fort Belvoir, Virginia, who were attempting to correct the soldier’s housing allowance instead inadvertently entered a transaction to collect the entire amount of the housing allowance previously paid to the soldier. Finance officials at Fort Belvoir subsequently entered a transaction to reverse the error and pay the soldier a “catch-up” housing allowance payment. As summarized in table 22, we identified a number of pay problems associated with five different types of active duty pays and allowances associated with the unit’s deployment. 
Sixty-two soldiers from the unit were overpaid their entitled subsistence allowance by active Army finance personnel while stationed at the Pentagon during the period of December 15, 2001, through December 31, 2001. Prior to this period, the soldiers were stationed at Fort Stewart and were not provided lodging or mess and properly received the full subsistence allowance. When the unit was redeployed to the Pentagon, mess facilities became available. However, active Army finance personnel did not reduce the unit’s subsistence allowance rate to reflect the available mess facilities. According to DOD FMR, Volume 7A, chapter 25, enlisted soldiers are not entitled to the full subsistence allowance when mess facilities are provided. In January 2002, three soldiers who received mobilization orders from the 200th MP Company left Fort Stewart and traveled with the 115th Military Police Company to Guantanamo Bay, Cuba, to assist with base security and guard duties. While in Cuba, the soldiers were either underpaid or late in receiving their entitled hardship duty pays. In accordance with DOD FMR, Volume 7A, chapter 17, soldiers who perform duties in “designated areas” for over 30 days are entitled to hardship duty pay. The FMR specifies Guantanamo Bay, Cuba, as a designated area and provides payment of $50 a month to soldiers serving there. While deployed to Cuba, the three soldiers were mistakenly paid the old type of hardship duty pay. Since hardship duty pay is not an automated pay, the active Army finance office at Guantanamo Bay was required to manually enter the “designated areas” payment each month for each soldier. While they were in Cuba, the three soldiers did not receive all their entitled hardship duty pays. Furthermore, the hardship duty pays they did receive were more than 30 days late. The 200th Military Police Company returned to Fort Eustis around the end of August 2002 to begin the demobilization process.
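The hardship duty pay rule summarized above (DOD FMR, Volume 7A, chapter 17, as described in this report) can be sketched as follows. This is our own simplified illustration: the function name is hypothetical, and proration rules for partial months are assumed away.

```python
def hardship_duty_pay(full_months_served, monthly_rate=50):
    """Illustrative sketch: soldiers serving in a designated area such as
    Guantanamo Bay earn $50 per month, but only after more than 30 days
    of duty there. Partial-month proration is ignored in this sketch."""
    if full_months_served < 1:  # 30 days or fewer: no entitlement yet
        return 0
    return full_months_served * monthly_rate

print(hardship_duty_pay(0))  # 0 -- still under the 30-day threshold
print(hardship_duty_pay(6))  # 300 -- six full months at the $50 rate
```

Because this entitlement was not automated in DJMS-RC, a calculation like this had to be performed and entered manually each month for each soldier, which is why missed or late entries translated directly into missed or late payments.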
We did not identify any pay issues associated with the unit’s soldiers who were released from active duty on September 30, 2002 (the original date for the unit’s demobilization, designated on the mobilization orders). However, as shown in table 23, we did identify three soldiers who continued to receive active duty pay after their early release from active duty. Specifically, the three soldiers from the unit returned from Cuba, demobilized at Fort Stewart, and were released from active duty on July 10, 2002, while their original orders showed a September 30, 2002, release date. They continued to receive active duty pay and allowances through July 15, 2002. Fort Stewart did not provide the amended orders with the earlier release date to the Maryland USPFO office in time to stop the pay. On October 2, 2001, California’s 49th Military Police Headquarters and Headquarters Detachment (HHD) was mobilized to active duty for a period not to exceed 24 months. The 49th MP HHD mobilized at its home station, Pittsburg, California, and then proceeded to its designated mobilization station, Fort Lewis, Washington, on October 12, 2001. The unit performed its active duty mission at Fort Lewis, where it provided base security as part of Operation Noble Eagle. The unit was demobilized from active duty at Fort Lewis on July 28, 2002. A time line of the unit’s actions with respect to its mobilization under Operation Noble Eagle is shown in figure 15. Almost all soldiers from the 49th Military Police Company experienced some sort of pay problem during one or more of the three phases of the active duty mobilization. Overall, 50 of the 51 soldiers with the unit had at least one pay problem associated with their mobilization to active duty in support of Operation Noble Eagle. These pay problems included not receiving pays and allowances at all (underpayments), receiving some pays and allowances over 30 days after entitlement (late payments), and the overpayment of allowances.
Specifically, as summarized in table 24, we found that (1) 48 soldiers did not receive certain pay and allowances within 30 days of their initial mobilization entitlement and (2) 41 soldiers did not receive, or did not receive within 30 days, the pay and allowances they were entitled to receive during their deployment. In total, we identified estimated overpayments of $17,000, underpayments of $1,300, and late payments of $67,000 associated with the pay problems we found. In addition, of the $17,000 in overpayments, we found that less than $100 was subsequently collected from the soldiers. We determined a number of causes for these pay problems. First, we found a lack of sufficient numbers of knowledgeable staff. In addition, after-the-fact detective controls were not in place, including a reconciliation of pay and personnel records and the reconciliation of pay records with the unit commander’s records of personnel actually onboard. Currently, as a matter of practice, pay and personnel representatives from the USPFO conduct a manual reconciliation between the pay and personnel system records approximately every 2 months. The purpose of the reconciliation is to ensure that, for common data elements, the pay and personnel systems contain the same data. A USPFO official told us that while it is the USPFO’s goal to carry out such reconciliations each month, it currently does not have the resources required to do so. As summarized in table 25, we identified a number of pay problems associated with the unit’s mobilization to active duty. Failures to enter transactions or late entry of transactions needed to start active duty pays by Army Guard USPFO military pay personnel and by active Army military pay personnel at the unit’s mobilization station were the initial cause of the pay problems. We also found that the underlying cause of the pay problems was a lack of sufficient numbers of knowledgeable personnel at the California USPFO and the Fort Lewis Finance Office.
In addition, according to Army Guard and active Army officials, neither organization was prepared for the sheer volume of pay transactions associated with mobilizing soldiers to active duty. In total, 48 out of 51 soldiers of the 49th Military Police Company did not receive certain pay and allowances and incentive pays at all, or did not receive them within 30 days after being mobilized on October 2, 2001. The types of pay entitlements either not paid at all or paid late associated with the unit’s initial mobilization included basic pay, basic allowance for subsistence, basic allowance for housing, family separation allowance, and the continental United States cost of living allowance. The late payments during the mobilization phase primarily resulted from California USPFO military pay personnel’s lack of understanding of their responsibility for initiating active duty pays. According to DFAS reserve component mobilization procedures, the California USPFO was responsible for initiating these pays. However, a USPFO military pay official mistakenly instructed the unit to take its pay data to the mobilization station to enter transactions to start active duty pays. The USPFO official stated that the USPFO did not start the active duty pay and allowances at that time because a copy machine was not available to make copies of relevant active duty pay support documentation (such as a lease agreement needed to support a housing allowance entitlement). As a result, the responsibility for initiating this allowance was improperly passed to the active Army finance office at the Fort Lewis mobilization station. The Fort Lewis finance office lacked sufficient numbers of knowledgeable military pay staff to expeditiously enter the large volume of transactions necessary to start active duty pay entitlements for the 49th Military Police Company’s soldiers.
DFAS guidance requires finance personnel at the mobilization station to review each soldier’s pay account to identify any errors and input the necessary correcting transactions into DJMS-RC. Initially, the mobilization station finance office assigned an insufficient number of personnel to the task of starting active duty pays for the unit’s 51 mobilizing soldiers. Moreover, one of the assigned pay technicians was not familiar with DJMS-RC and consequently entered data incorrectly for some of the unit’s soldiers. Also, the assigned pay technician initially failed to enter transactions to start pay and allowances for a significant number of the unit’s soldiers because the supporting documentation was misplaced. These documents were later found under a desk in the finance office. Recognizing this shortage of staff knowledgeable about DJMS-RC processing procedures, the Fort Lewis finance office asked the California USPFO to supply additional personnel and also temporarily reassigned soldiers from other units stationed at Fort Lewis to assist in the pay processing. Working together over a 2-month period after the unit was mobilized to active duty, these personnel were able to enter the omitted transactions needed to start active duty pays and correct the previous erroneous entries. In addition, the USPFO did not enter the required data to DJMS-RC to begin cost of living allowance pays for 36 of the unit’s soldiers. DFAS reserve component mobilization procedures state that the USPFO has the initial responsibility for initiating these pays. However, as discussed previously, the USPFO mistakenly sent the 49th Military Police Company to Fort Lewis with their pay documentation, and as a result, it was not until more than 2 months after the unit’s mobilization date that the Fort Lewis finance office pay technicians began to enter these transactions into DJMS-RC. 
The company commander for the unit told us that he was frustrated with the level of customer support his unit received as it moved through the initial mobilization process. Only two knowledgeable military pay officials were present to support active duty pay transaction processing for the 51 soldiers mobilized for his unit. He characterized the customer service his unit received at initial mobilization as very time-consuming and frustrating. As summarized in table 26, we identified a number of pay problems associated with six different types of active duty pays and allowances associated with the unit’s deployment while on active duty. These problems primarily resulted from a data entry error and inadequate document retention practices. For example, the USPFO paid one soldier her basic pay, basic allowance for subsistence, and basic allowance for housing nearly 4 months late. A USPFO official told us these late payments were caused when a USPFO pay technician entered an incorrect stop date for the soldier’s active duty tour into DJMS-RC. The pay technician, after being notified of the error by the soldier, corrected the data in DJMS-RC, which resulted in the soldier receiving her pay nearly 4 months late. Additionally, USPFO officials were unable to provide support explaining why five other soldiers continued to receive basic pay, the basic allowance for subsistence, and the basic allowance for housing after the date available records show their active duty tours had ended. Consequently, we identified the payments made to these five soldiers as overpayments. Overpayments of family separation allowances to soldiers in the unit resulted from a data entry error and inadequate USPFO document retention practices. A USPFO pay technician incorrectly coded a soldier’s account to receive a family separation allowance when the soldier had only been on active duty for 2 weeks. 
According to the DOD FMR, Volume 7A, chapter 27, soldiers are only eligible for this allowance after they have been separated more than 30 days from their families on a continuous active duty assignment. This overpayment problem had not been resolved as of March 31, 2003. Additionally, USPFO officials were unable to provide supporting documentation explaining why five soldiers continued to receive a family separation allowance after available documentation showed that these soldiers’ active duty tours had officially ended. We identified these family separation allowance payments for the five soldiers as overpayments. Late payments, underpayments, and overpayments of foreign language proficiency pays to the unit’s soldiers primarily resulted from delayed or inadequate data entry. For example, our audit showed that USPFO pay technicians failed to enter transactions into DJMS-RC in a timely manner for four soldiers, resulting in late foreign language proficiency payments. In addition, USPFO pay technicians failed to enter any foreign language proficiency payment transactions for 1 month for one soldier and for 3 months for another, resulting in those soldiers being underpaid. This underpayment issue had not been resolved as of March 31, 2003. In another instance, a soldier received an overpayment of his entitled foreign language proficiency payment when a USPFO pay technician entered the wrong code. Approximately 3 months later, the USPFO pay technician identified the error and recovered the overpayment. Late payment, underpayment, and overpayment of cost of living allowances resulted from the inability of DJMS-RC to pay certain active duty pays and allowances automatically, inaccurate data entry, and inadequate documentation retention practices. For example, our audit discovered that USPFO pay technicians failed to manually enter cost of living allowance transactions into DJMS-RC in a timely manner for 37 soldiers, resulting in late payments to the soldiers.
In addition, USPFO officials were unable to provide sufficient documentation to explain why 3 soldiers appeared not to have received cost of living allowance payments due them for a 2-month period. We considered these pay omissions to be underpayments. An Army pay technician at the Fort Lewis finance office entered the incorrect code, thereby paying a soldier the wrong type of allowance, which resulted in an underpayment. California’s 49th Military Police Company demobilized at Fort Lewis on July 28, 2002, and returned to its home station in Pittsburg, California. We did not identify any pay problems for this unit in the demobilization phase. To obtain an understanding and assess the processes, personnel (human capital), and systems used to provide assurance that mobilized Army Guard soldiers were paid accurately and timely, we reviewed applicable policies, procedures, and program guidance; observed pay processing operations; and interviewed cognizant agency officials. With respect to applicable policies and procedures, we obtained and reviewed 10 U.S.C. Section 12302, DOD Directive Number 1235.10, “Activation, Mobilization & Demobilization of the Ready Reserve;” DOD FMR, Volume 7A, “Military Pay Policy and Procedures Active Duty and Reserve Pay”; and the Army Forces Command Regulations 500-3-3, Reserve Component Unit Commander Handbook, 500-3-4, Installation Commander Handbook, and 500-3-5, Demobilization Plan. We also reviewed various Under Secretary of Defense memorandums, a memorandum of agreement between Army and DFAS, DFAS, Army, Army Forces Command, and Army National Guard guidance applicable to pay for mobilized reserve component soldiers. We also used the internal controls standards provided in the Standards for Internal Control in Federal Government. 
We applied the policies and procedures prescribed in these documents to the observed and documented procedures and practices followed by the various DOD components involved in providing active duty pays to Army Guard soldiers. We also interviewed officials from the National Guard Bureau, state USPFOs, Army and DOD military pay offices, as well as unit commanders to obtain an understanding of their experiences in applying these policies and procedures. In addition, as part of our audit, we performed a review of certain edit and validation checks in DJMS-RC. Specifically, we obtained documentation and performed walk-throughs associated with DJMS-RC edits performed on pay status/active duty change transactions, such as those to ensure that tour start and stop dates match MMPA dates and that the soldier cannot be paid basic pay and allowances beyond the stop date that was entered into DJMS-RC. We also obtained documentation on and performed walk-throughs of the personnel-to-pay system interface process, the order writing-to-pay system interface process, and the process for entering mobilization information into the pay system. We held interviews with officials from the Army National Guard Readiness Center, the National Guard Bureau, and DFAS Indianapolis and Denver to augment our documentation and walk-throughs. Because our preliminary assessment determined that current operations used to pay mobilized Army Guard soldiers relied extensively on error-prone manual transaction entry into multiple, nonintegrated systems, we did not statistically test current processes and controls. Instead, we used a case study approach to provide a more detailed perspective of the nature of pay deficiencies in the three key areas of processes, people (human capital), and systems.
Specifically, we gathered available data and analyzed the pay experiences of Army Guard special forces and military police units mobilized to active duty in support of Operations Noble Eagle and Enduring Freedom during the period from October 2001 through March 2003. We audited six Army Guard units as case studies of the effectiveness of the controls over active duty pays in place for soldiers assigned to those units: Colorado B Company, 5th Battalion, 19th Special Forces; Virginia B Company, 3rd Battalion, 20th Special Forces; West Virginia C Company, 2nd Battalion, 19th Special Forces; Mississippi 114th Military Police Company; California 49th Military Police Headquarters and Headquarters Detachment; and Maryland 200th Military Police Company. In selecting these six units for our case studies, we sought to obtain the pay experiences of units assigned to either Operation Enduring Freedom or Operation Noble Eagle. We further limited our case study selection to those units both mobilized to active duty and demobilized from active duty during the period from October 1, 2001, through March 31, 2003. From the population of all Army Guard units mobilized and demobilized during this period, we selected three special forces units and three military police units. These case studies are presented to provide a more detailed view of the types and causes of pay problems and the pay experiences of these units, as well as the financial impact of pay problems on individual soldiers and their families. We used mobilization data supplied by the Army Operations Center to assist us in selecting the six units we used as our case studies. We did not independently verify the reliability of the Army Operations Center database. We used the Army Operations Center data to select six states that had a large number of special forces or military police units that had been mobilized, deployed, and returned from at least one tour of active duty in support of Operations Noble Eagle and Enduring Freedom.
We chose California, Colorado, Maryland, Mississippi, Virginia, and West Virginia. From these six states, we selected three special forces and three military police units that had a variety of deployment locations and missions. We also identified and performed a limited review of the pay experiences of a unit still deployed during the period of our review: Colorado’s 220th Military Police Company. The purpose of our limited review was to determine if there were any pay problems experienced by a more recently mobilized unit. We also obtained in-depth information from soldiers at four of the six case study units. Using a data collection instrument, we asked for soldier views on pay problems and customer service experiences before, during, and after mobilization. Unit commanders distributed the instrument to soldiers in their units. There were 325 soldiers in these units; in total, we received 87 responses. The information we received from these data collection instruments is not representative of the views of Army Guard members in these units or of Army Guard members overall. The information provides further insight into some of the pay experiences of selected Army Guard soldiers who were mobilized under Operations Noble Eagle and Enduring Freedom. We used DJMS-RC pay transaction extracts to identify pay problems associated with our case study units. However, we did not perform an exact calculation of the net pay soldiers should have received in comparison with what DJMS-RC records show they received.
Rather, we used available documentation and follow-up inquiries with cognizant USPFO personnel to identify if (1) soldiers’ entitled active duty pays and allowances were received within 30 days of initial mobilization date, (2) soldiers were paid within 30 days of the date they became eligible for active duty pays and allowances associated with their deployment locations, and (3) soldiers stopped receiving active duty pays and allowances as of the date of their demobilization from active duty. As such, our audit results only reflect problems we identified. Soldiers in our case study units may have experienced additional pay problems that we did not identify. In addition, our work was not designed to identify, and we did not identify, any fraudulent pay and allowances to any Army Guard soldiers. As a result of the lack of supporting documents, we likely did not identify all of the pay problems related to the active duty mobilizations of our case study units. However, for the pay problems we identified, we counted soldiers’ pay problems as a problem only in the phase in which they first occurred even if the problems persisted into other phases. For purposes of characterizing pay problems for this report, we defined over- and underpayments as those pays or allowances for mobilized Army Guard soldiers during the period from October 1, 2001, through March 31, 2003, that were in excess of (overpayment) or less than (underpayment) the entitled payment. We considered as late payments any active duty pays or allowances paid to the soldier over 30 days after the date on which the soldier was entitled to receive such pays or allowances. As such, these payments were those that, although late, addressed a previously unpaid entitlement. We did not include any erroneous debts associated with these payments as pay problems. In addition, we used available data to estimate collections against identified overpayments through March 31, 2003. 
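The payment categories defined above can be restated as a small classification rule. This sketch is ours, not GAO's methodology: the function name and fields are hypothetical, and note that a single payment can be both over or under the entitled amount and more than 30 days late.

```python
from datetime import date

def classify_payment(entitled, paid, entitlement_date, payment_date):
    """Apply the report's definitions: overpayment (paid more than entitled),
    underpayment (paid less than entitled), and late payment (paid more than
    30 days after the entitlement date). Returns a list of problem labels."""
    problems = []
    if paid > entitled:
        problems.append("overpayment")
    elif paid < entitled:
        problems.append("underpayment")
    if (payment_date - entitlement_date).days > 30:
        problems.append("late payment")
    return problems

# A full payment made 61 days after entitlement is late but not over/under.
print(classify_payment(250, 250, date(2001, 10, 1), date(2001, 12, 1)))
```

Under these definitions, a late payment is one that, although delayed, eventually covered a previously unpaid entitlement, which is why it is counted separately from over- and underpayments.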
We did not attempt to estimate payments received against identified underpayments. We provided the support for the pay problems we identified to appropriate officials at each of our case study locations so that they could fully develop and resolve any additional amounts owed to the government or to the Army Guard soldiers. We briefed DOD and Army officials, National Guard Bureau officials, DFAS officials, and USPFO officials in the selected states on the details of our audit, including our findings and their implications. On October 10, 2003, we requested comments on a draft of this report. We received comments on November 5, 2003, and have summarized those comments in the “Agency Comments and Our Evaluation” section of this report. DOD’s comments are reprinted in appendix VIII. We conducted our audit work from November 2002 through September 2003 in accordance with U.S. generally accepted government auditing standards. GAO DRAFT REPORT DATED OCTOBER 10, 2003 GAO-04-89 (GAO CODE 192080) “MILITARY PAY: ARMY NATIONAL GUARD PERSONNEL MOBILIZED TO ACTIVE DUTY EXPERIENCED SIGNIFICANT PAY PROBLEMS” RECOMMENDATION 1: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service (DFAS), in conjunction with the Under Secretary of Defense (Comptroller), to establish a unified set of policies and procedures for all Army Guard, Army, and DFAS personnel to follow for servicing active duty pays for Army Guard personnel mobilized to active duty. (p. 74/GAO Draft Report) DoD RESPONSE: Concur.
DFAS and the Army are jointly building on the existing guidance procedures as published in FORSCOM Regulation 500-3-3 (FORSCOM Mobilization and Deployment Planning System (FORMDEPS), Volume 3, Reserve Component Commanders’ Handbook, dated July 15, 1999); the National Guard Standard Operating Procedure Contingency Operations; and DFAS AIG Message dated December 19, 2002, Subject: Reserve Component Mobilization Procedures, to clearly define the roles and responsibilities between mobilization/demobilization stations, United States Property and Fiscal Offices (USPFOs), and deployed Army finance elements. A joint task force has been established to review existing procedural guidance, lessons learned to date, and available metrics. As a first step, expanded central guidance will be published within the next 30 days, which will further articulate the specific responsibilities of the servicing finance activities. This breakout of responsibilities will also be provided in a simple matrix form to visually reinforce this guidance. Within approximately 60 days, the Army and DFAS will begin compliance reviews of the mobilization/demobilization stations to ensure adherence to published guidance and to provide any further assistance these offices may require. Within the next 3 to 6 months, the task force will build upon the existing guidance to provide comprehensive procedures and related standards, down to the individual technician level, for all offices and units responsible for pay input support of mobilized soldiers. RECOMMENDATION 2: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to establish performance measures for obtaining supporting documentation and processing pay transactions. (p. 75/Draft Report) DoD RESPONSE: Concur. Standards for the timeliness of processing pay transactions are currently in place for units, finance offices, and central site.
However, these standards are focused on the full range of transactions and associated unit level data is generated based on the normal permanent/home station relationship with a Reserve Component Pay Support Office. Within the next 6 months, DFAS and the Army will jointly review how these existing mechanisms can be used to more succinctly capture data specifically related to mobilized soldiers and units. RECOMMENDATION 3: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to establish who is accountable for stopping active duty pays for soldiers who return home separate from their units. (p. 75/Draft Report) DoD RESPONSE: Concur. Within the next 30 days, DFAS, in cooperation with the Army, will reinforce existing procedures on responsibilities for stopping active duty pays for soldiers who return home separate from their units. This will be part of the revised guidance identified in response to recommendation one. In addition, mechanisms have been established to perform automated comparisons of personnel demobilization records and the Defense Joint Military Pay System - Reserve Component (DJMS-RC) to identify any demobilizing soldiers whose tours in the pay system were not adjusted to coincide with the demobilization date. RECOMMENDATION 4: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to clarify the policies and procedures for how to properly amend active duty orders, including medical extensions. (p. 75/Draft Report) DoD RESPONSE: Concur. For medical extensions, the Army published revised guidance on June 10, 2003, reinforcing procedures on this process. Included were the requirements for publishing orders prior to the end date of the current active duty tour. 
Concerning the specific case in Colorado cited by the GAO, DFAS and the Army have implemented changes to the input systems to warn the operator processing a tour cancellation when the correct input should be a tour curtailment. Action is complete. RECOMMENDATION 5: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to require Army Guard commands and unit commanders to carry out complete monthly pay and personnel records reconciliations and take necessary actions to correct any pay and personnel record mismatches found each month. (p. 75/Draft Report) DoD RESPONSE: Concur. Within 60 days, the Army will reinforce to all reserve commands the importance of this requirement. As noted by the GAO, this requirement is already included in US Army Forces Command Regulation 500-3-3, Unit Commander’s Handbook. RECOMMENDATION 6: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to update policies and procedures to reflect current legal and DoD administrative requirements with respect to active duty pays and allowances and transaction processing requirements for mobilized Army Guard soldiers. (p. 75/Draft Report) DoD RESPONSE: Concur. In Fiscal Year 2004, DFAS, the Army, and National Guard will respectively update the cited regulations under their cognizance to the most current and accurate requirements.
RECOMMENDATION 7: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to consider expanding the scope of the existing memorandum of understanding between DFAS and the Army concerning the provision of resources to support surge processing at mobilization and demobilization sites to include providing additional resources to support surge processing for pay start and stop transaction requirements at Army Guard home stations during initial soldier readiness programs. (p. 75/Draft Report) DoD RESPONSE: Concur. The Army will work with the National Guard on resourcing the USPFOs for mobilization/demobilization surges. However, the memorandum of understanding between DFAS and the Army pertains only to the management and resourcing of Defense Military Pay Offices, to include their role in support of mobilization/demobilization stations. As such, it is not the appropriate vehicle to address staffing of USPFOs under the National Guard. RECOMMENDATION 8: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to determine whether issues concerning resource allocations for the military pay operations identified at our case study units exist at all 54 USPFOs, and, if so, take appropriate actions to address these issues. (p. 76/Draft Report) DoD RESPONSE: Concur. To support surge requirements, the National Guard could use additional National Guard soldiers being brought on active duty in a Temporary Tour of Active Duty status to augment the USPFO staff based on mobilization workload requirements. The additional requirement and funding will need to be addressed by the supplemental provided to the Army. Normal manning at the USPFO Military Pay Section is based on Full Time Support authorized state strength levels.
RECOMMENDATION 9: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to determine whether concerns over relatively low graded military pay technicians identified at our case study units exist at all 54 USPFOs, and, if so, take appropriate actions to address these issues. (p. 76/Draft Report) DoD RESPONSE: Concur. The grade levels in the USPFOs’ Comptroller sections and the current grade levels for military pay technicians were validated as correct under OPM standards. Action is complete. RECOMMENDATION 10: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to modify existing training policies and procedures to require all USPFO and active Army pay and/or finance personnel responsible for entering pay transactions for mobilized Army Guard soldiers to receive appropriate training upon assuming such duties. (p. 76/Draft Report) DoD RESPONSE: Concur. The National Guard has instituted mobilization-specific training for pay technicians. The National Guard Financial Services Center quality assurance program is currently used to monitor completion of JUMPS Standard Terminal Input System (JUSTIS) training for USPFO military pay technicians. The US Army Reserve Command (USARC) has expanded training programs on DJMS-RC to help support the immediate training needs of deploying units and mobilization/demobilization stations. Over 35 training events have occurred since February 2002 in support of deploying units and mobilization/demobilization sites. The Army finance school is working with USARC to develop an exportable training package on DJMS-RC, which should be available within the next 6 months. Additionally, DFAS and the Army are sending a joint training team to Kuwait and Iraq in November 2003 to specifically address reserve component support.
For the midterm (6 months to 2 years), the training on reserve component pay input for soldiers in finance battalions and garrison support units will be evaluated to determine how best to expand the training within the Army total training infrastructure, particularly in light of the planned integration of reserve and active component pay processing into a single system. The Army finance school is already evaluating the expansion of the current instruction on mobilized reserve component pay in the training curriculum for the finance advanced individual training course. RECOMMENDATION 11: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to require unit commanders to receive training on the importance of adhering to requirements to conduct annual pay support documentation reviews and carry out monthly reconciliations. (p. 76/Draft Report) DoD RESPONSE: Concur. The importance of conducting annual pay support documentation reviews and monthly reconciliations will be incorporated into precommand courses at the company level for the National Guard by the end of Fiscal Year 2004. RECOMMENDATION 12: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to establish an ongoing mechanism to monitor the quality and completion of training for both pay and finance personnel and unit commanders. (p. 76/Draft Report) DoD RESPONSE: Concur. The National Guard currently reviews the training status of military pay technicians at the USPFOs as part of the ongoing quality assurance review program. The appropriate mechanism for monitoring the training of unit commanders and finance battalion personnel is dependent on the location of that training in the overall Army training infrastructure (i.e.
unit training is assessed as part of the annual External Evaluation-ExEval) and, as such, will be considered as part of the overall evaluation of the reserve pay training addressed in response to recommendation 10. RECOMMENDATION 13: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers by providing improved procedures for informing soldiers of their pay and allowance entitlements throughout their active duty mobilization. (p. 76/Draft Report) DoD RESPONSE: Concur. Within the next 30 days, the Army will prepare a standard information flyer to be given to all mobilizing reservists. The flyer will address entitlements as well as sources of pay support. The flyer will be published via Army Knowledge Online and incorporated into the overall revision to procedural guidance addressed in response to recommendation one. RECOMMENDATION 14: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to identify and evaluate options for improving customer service provided to mobilized Army Guard soldiers with respect to providing a single, well-advertised source for soldiers and their families to access for customer service for any pay problems. (p. 77/Draft Report) DoD RESPONSE: Concur. The existing centralized information sources on individual soldiers’ pay will be expanded. Specifically, DFAS will continue to add functionality to myPay for input of discretionary actions. Additionally, DFAS is developing a separate view-only Personal Identification Number capability which soldiers will be able to give their dependents so they can see the Leave and Earnings Statement without being able to change anything on the pay record. This enhancement is scheduled for August 2004.
The DFAS also operates a central customer service center for pay inquiries for all Services. The toll-free number for this center as well as the myPay internet address will be incorporated in the flyer discussed in response to recommendation 13 as well as continue being advertised in locations such as Army Knowledge Online. Until the implementation of DIMHRS, with full integration of pay and personnel, the processing of pay transactions will still require the movement of some entitlement information/authorization from units and personnel to finance via paper. As such, a network of finance support activities is required to geographically align with deployed combat and supporting personnel units. As always, pay remains essentially a command responsibility. For the individual soldier, the single source of pay support is his or her unit, which in turn interfaces with the appropriate finance and personnel activities. For dependents of deployed soldiers, the single source for finance, or any administrative issues, is either the rear detachment of the soldiers’ deployed unit or, for the National Guard, the applicable State Family Assistance Coordinator. RECOMMENDATION 15: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to review the pay problems we identified at our six case study units to identify and resolve any outstanding pay issues for the affected soldiers. (p. 77/Draft Report) DoD RESPONSE: Concur. The National Guard Financial Services Center is working with each of the identified units and supporting USPFOs to ensure all pay issues are resolved. The Army and DFAS will continue to work the correction of any specific cases identified as still open for these units. As noted by the GAO, many of the cases identified have already been resolved or involved a delay in payment over 30 days from entitlement rather than an actual unresolved discrepancy.
RECOMMENDATION 16: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of using the personnel-to-pay interface as a means to proactively alert pay personnel of actions needed to start entitled active duty pays and allowances. (p. 77/Draft Report) DoD RESPONSE: Concur. Within the next 6 months, we will evaluate the feasibility of using the personnel-to-pay interface as a means to proactively alert pay personnel of actions needed to start entitled active duty pays and allowances. RECOMMENDATION 17: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of automating some or all of the current manual monthly pays, including special duty assignment pay, foreign language proficiency pay, hardship duty pay, and high altitude, low opening jump pay. (p. 77/Draft Report) DoD RESPONSE: Concur. Programming changes to DJMS-RC have been implemented to enhance the processes for special duty assignment pay and foreign language proficiency pay. However, monthly input is still required. Hardship duty pay is scheduled for implementation in April 2004. High altitude, low opening jump pay requires manual computation and input of a transaction for payment. The small volume of members entitled to this pay has neither justified nor provided an adequate return on investment for this automation. DFAS has recognized the urgency of improving the military pay system capabilities supporting our Service members. A study was conducted of improvement alternatives in the fall of 2002, which concluded that a new commercial off the shelf based payroll capability (“Forward Compatible Payroll” (FCP)) was the best option to expeditiously improve our system payroll services.
FCP is currently prototyping military entitlements and deductions and has already demonstrated that DJMS-RC’s current monthly manual pays can be automated rapidly in the new commercial off the shelf based environment. RECOMMENDATION 18: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of eliminating the use of the “other credits” for processing Hardship Duty (Designated Areas); high altitude, low opening jump pay; and special duty assignment pay, and instead establishing a separate component of pay for each type of pay. (p. 77/Draft Report) DoD RESPONSE: Concur. Hardship duty pay is scheduled for automation in April 2004. We will also recommend inclusion of automation of high altitude, low opening jump pay in FCP. We acknowledge that the information available to the member is inadequate in today’s system. This has already been addressed in the FCP requirements. Each pay is designed to provide fully automated computation capability for active, Reserve/Guard and detailed leave and earnings statement reporting to the Service member through myPay. FCP will use legacy military pers/pay data feeds to create a single military pay record for each Service member supporting all Service component affiliations and duty statuses. FCP will resolve pay systems capability related problems described in this report. Until FCP has been implemented, we will ensure that these pays paid under “other credits” are included in the flyer addressed in response to recommendation 13. In addition, DFAS will update the DFAS Reserve Component Mobilization Procedures to mandate a remark be entered on the service member’s leave and earnings statement for pays paid under “other credits” to inform the service member exactly what entitlement(s) they have been paid.
RECOMMENDATION 19: The GAO recommended that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of using the JUSTIS warning screen to help eliminate inadvertent omissions of required monthly manual pay inputs. (p. 78/Draft Report) DoD RESPONSE: Concur. The National Guard will develop a JUSTIS table identifying all applicable soldiers in order to notify the USPFO technician of accounts requiring monthly entitlement input. This will be more efficient and effective than a pop-up warning screen, which would appear only if the individual soldier’s social security number were input. RECOMMENDATION 20: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of redesigning the leave and earnings statement to provide soldiers with a clear explanation of all pay and allowances received so that they can readily determine if they received all and only entitled pays. (p. 78/Draft Report) DoD RESPONSE: Concur. The DFAS will assist soldiers in understanding their leave and earnings statement by reviewing and updating (as necessary) the information provided on our website(s); by providing independent leave and earnings statement remarks for present and future changes; continuing to provide the USPFOs and Reserve Component Pay Support Offices with monthly newsletters; and, effective immediately, providing the finance battalions/Defense Military Pay Offices with the National Guard newsletter. For the future, FCP is being designed with an easily understandable leave and earnings statement as one of the main requirements. Each pay is designed to provide fully automated computation capability for active, Reserve/Guard and detailed leave and earnings statement reporting through myPay.
FCP will use legacy military pers/pay data feeds to create a single military pay record for each Service member supporting all Service component affiliations and duty statuses. FCP will also resolve pay systems capability related problems described in this report. RECOMMENDATION 21: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), to evaluate the feasibility of establishing an edit check and requiring approval before processing any debt assessments above a specified dollar amount. (p. 78/Draft Report) DoD RESPONSE: Concur. The DFAS has already updated its current input system (Defense MilPay Office) to provide a warning to field finance personnel concerning the debt impact of tour cancellation (vice modification) for Reserve/Guard members. DJMS-RC would require a small to medium system change to edit debts that exceeded an established threshold or required approval. Secondary manual processing would be required to start the collection process or delete the debts. RECOMMENDATION 22: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), as part of the current effort underway to reform DoD’s pay and personnel systems, referred to as DIMHRS, to incorporate a complete understanding of the Army Guard pay problems as documented in this report into the requirements development for this system. (p. 78/Draft Report) DoD RESPONSE: Concur. The DFAS has provided detailed military pay requirements input to the DIMHRS Program that support fully automated computation of all military pay entitlements and deductions. The DIMHRS system military pay requirements submitted by DFAS would resolve system related pay problems as described in this report.
DIMHRS is envisioned to create a single military personnel/pay record for each Service member supporting all Service component affiliations and duty statuses. RECOMMENDATION 23: The GAO recommended that the Secretary of Defense direct the Director of the Defense Finance and Accounting Service, in conjunction with the Under Secretary of Defense (Comptroller), to deal not only with the system problems identified, but also with the human capital and process aspects when developing DIMHRS. (p. 78/Draft Report) DoD RESPONSE: Concur. The DFAS and Army have been actively involved in recommending an improved operational military pers/pay concept in the DIMHRS environment. Procedural changes are clearly required to capitalize on the opportunities afforded by a modern fully integrated personnel and pay system including improvements in process cycle time, customer service, and accountability. The DFAS is working with the Army DIMHRS Office to document existing workflow and roles and responsibilities. The DIMHRS Program is still in the very early stages of determining when and how integrated processes and workflows will be incorporated into the DIMHRS based operational concept. The DIMHRS “Joint Service Functional Concept of Operations,” dated July 15, 2003, page 14, indicates that the current plan is to “…initially mirror the existing ‘As-Is’ structure until the new capability has been fielded and risk factors/requirements have been clearly identified. A determination of what additional skills and expertise are required for operators of a knowledge-based personnel community must be made after the capabilities of the commercial off the shelf product are fully known.” Staff making key contributions to this report include: Paul S. Begnaud, Ronald A. Bergman, James D. Berry, Jr., Amy C. Chang, Mary E. Chervenic, Francine M. DelVecchio, C. Robert DeRoy, Dennis B. Fauber, Jennifer L. Hall, Charles R. Hodge, Jason M. Kelly, Julia C. Matta, Jonathan T. Meyer, John J. Ryan, Rebecca Shea, Crawford L. Thompson, Jordan M. Tiger, Patrick S. Tobo, Raymond M. Wessmiller, and Jenniffer F. Wilson.
The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
OPM’s mission is to ensure that the federal government has an effective civilian workforce. In this regard, one of the agency’s major human resources tasks is to manage and administer the retirement program for federal employees. According to the agency, the program serves federal employees by providing (1) retirement compensation and (2) tools and options for retirement planning. OPM’s Center for Retirement and Insurance Services administers the two defined benefit retirement plans that provide retirement, disability, and survivor benefits to federal employees. The first plan, the Civil Service Retirement System (CSRS), provides retirement benefits for most federal employees hired before 1984. The second plan, the Federal Employees Retirement System (FERS), covers most employees hired in or after 1984 and provides benefits that include Social Security and a defined contribution system. According to OPM, there are approximately 2.9 million active federal employees and nearly 2.5 million retired federal employees. The agency’s March 2008 analysis of federal employment retirement data estimates that nearly 1 million active federal employees will be eligible to retire and almost 600,000 will most likely retire by 2016. Figure 1 summarizes the estimated number of employees eligible and likely to retire. OPM and employing agencies’ human resources and payroll offices are responsible for processing federal employees’ retirement applications. The process begins when an employee submits a paper retirement application to his or her employer’s human resources office and is completed when the individual begins receiving regular monthly benefit payments (as illustrated in fig. 2). Once an employee submits an application, the employing agency’s human resources office provides retirement counseling services to the employee and augments the retirement application with additional paperwork, such as a separation form that finalizes the date the employee will retire. 
Then the agency provides the retirement package to the employee’s payroll office. After the employee separates for retirement, the payroll office is responsible for reviewing the documents for correct signatures and information, making sure that all required forms have been submitted, and adding any additional paperwork that will be necessary for processing the retirement package. Once the payroll office has finalized the paperwork, the retirement package is mailed to OPM to continue the retirement process. Payroll offices are expected to submit the package to OPM within 30 days of the retiree’s separation date. Upon receipt of the retirement package, OPM calculates an interim payment based on information provided by the employing agency. The interim payments are partial payments that typically provide retirees with 80 percent of the total monthly benefit they will eventually receive. OPM then starts the process of analyzing the retirement application and associated paperwork to determine the total monthly benefit amount to which the retiree is entitled. This process includes collecting additional information from the employing agency’s human resources and payroll offices or from the retiree to ensure that all necessary data are available before calculating benefits. After OPM completes its review and authorizes payment, the retiree begins receiving 100 percent of the monthly retirement benefit payments. OPM then stores the paper retirement folder at the Retirement Operations Center in Boyers, Pennsylvania. According to the agency’s 2008 performance report, the average processing time from the date OPM receives the initial application to the time the retiree receives a full payment is 42 days. According to the Deputy Associate Director for the Center of Retirement and Insurance Services, about 200 employees are directly involved in processing the approximately 100,000 retirement applications OPM receives annually. 
This processing includes functions such as determining retirement eligibility, inputting data into benefit calculators, and providing customer service. The agency uses over 500 different procedures, laws, and regulations, which are documented on the agency’s internal Web site, to process retirement applications. For example, the site contains memorandums that outline new procedures for handling special retirement applications, such as those for disability or court orders. Further, OPM’s retirement processing involves the use of over 80 information systems that have approximately 400 interfaces with other internal and external systems. For instance, 26 internal systems interface with the Department of the Treasury to provide, among other things, information regarding the total amount of benefit payments to which an employee is entitled. OPM has stated that the federal employee retirement process currently does not provide prompt and complete benefit payments upon retirement, and that customer service expectations for more timely payments are increasing. The agency also reports that a greater workload is expected due to an anticipated increase in the number of retirement applications over the next decade, yet current retirement processing operations are at full capacity. Further, the agency has identified several factors that limit its ability to process retirement benefits in an efficient and timely manner. 
Specifically, it noted that current processes are paper-based and manually intensive, resulting in a higher number of errors and delays in providing benefit payments; the high costs, limited capabilities, and other problems with the existing information systems and processes pose increasing risks to the accuracy of benefit payments; current manual capabilities restrict customer service; federal employees have limited access to their retirement records, making planning for retirement difficult; and attracting qualified personnel to operate and maintain the antiquated retirement systems, which have about 3 million lines of custom programming, is challenging. In the late 1980s, OPM recognized the need to automate and modernize its retirement processing and began retirement modernization initiatives that have continuously called for automating its antiquated paper-based processes. The agency’s previously established program management plans included the objectives of having timely and accurate retirement benefit payments and more efficient and flexible processes. For example, the agency’s plans call for processing retirement applications and providing retirees 100 percent of their monthly benefit payments the day it is due versus providing interim monthly payments. Its initial modernization vision called for providing prompt and complete benefit payments by developing an integrated system and automated processes. However, the agency has faced significant and long-standing challenges in doing so. In early 1987, OPM began a program called the FERS Automated Processing System (FAPS). However, after 8 years of planning, the agency decided it needed to reevaluate the program, and the Office of Management and Budget (OMB) requested that an independent board conduct a review to identify critical issues impeding progress and recommend ways to address the issues. 
The review identified various management weaknesses, including the lack of an established strategic plan, cost estimation methodologies, and baseline; improperly defined and ineffectively managed requirements; and no clear accountability for decision making and oversight. Accordingly, the board suggested areas for improvement and recommended terminating the program if immediate action was not taken. In mid-1996, OPM terminated the program. In 1997, OPM began planning a second modernization initiative, called the Retirement Systems Modernization (RSM) program. The agency originally intended to structure the program as an acquisition of commercially available hardware and software that would be modified in-house to meet its needs. From 1997 to 2001, OPM developed plans and analyses and began developing business and security requirements for the program. However, in June 2001, it decided to change the direction of the retirement modernization initiative. In late 2001, retaining the name RSM, the agency embarked upon its third initiative to modernize the retirement process and examined the possibility of privately sourced technologies and tools. To this end, OPM issued a request for information to obtain private sourcing options and determined that contracting was a viable alternative that would be cost efficient, less risky, and more likely to be completed on time and on budget. In 2006, the agency awarded three contracts for: (1) a commercially available, defined benefits technology solution (DBTS) to automate retirement processing; (2) services to convert paper records to electronic files; and (3) consulting services to support the redesign of its retirement operations. The contract for DBTS was awarded to Hewitt Associates, and the additional contracts to support the technology were awarded to Accenture Ltd. and Northrop Grumman Corporation, as reflected in table 1. 
OPM produced a December 2007 program management plan that, among other things, described capabilities the agency expected to implement as outcomes of retirement modernization. Among these capabilities, the agency expected to implement retirement benefit modeling and planning tools for active federal employees, a standardized retirement benefit calculation system, and a consolidated system to support all aspects of retirement processing. In February 2008, OPM renamed the program RetireEZ and deployed a limited initial version of DBTS. As the foundation of the modernization initiative, DBTS was to be a comprehensive technology solution that would provide capabilities to substantially automate retirement processing. This technology was to be provided by the contractor for a period of 10 years and was intended to provide, among other things, an integrated database with calculation functionality for retirement processing. In addition to calculating retirement benefit amounts, DBTS was intended to provide active and retired federal employees with self-service, Internet-based tools for accessing accounts, updating retirement records, submitting transactions, monitoring the status of claims, and forecasting retirement income. The technology was also expected to enhance customer service by providing OPM and agency personnel with the capability to access retirement information online. Further, the technology was expected to be integrated with OPM and federal agency electronic retirement records and processes. When fully implemented, the modernized program was expected to serve OPM retirement processing personnel, federal agency human resources and payroll offices, active federal employees, retirees, and the beneficiaries of retirees. According to the agency, in late February 2008, the DBTS was deployed with limited functionality to 26,000 federal employees serviced by the General Services Administration's (GSA) payroll offices.
In April 2008, OPM reported that 13 of the 37 retirement applications received from GSA's payroll office had been processed through DBTS with manual intervention, providing the retirees 100 percent of their monthly benefits within 30 days from their retirement date. However, a month later, the agency determined that DBTS had not worked as expected and suspended system operation. In October 2008, after 5 months of attempting to address system quality issues, the agency terminated the contract. In November 2008, OPM began restructuring the program and reported that its efforts to modernize retirement processing would continue. Figure 3 illustrates the timeline of retirement modernization initiatives from 1987 to the present. Various entities within OPM are responsible for managing RetireEZ. Specifically, the management is composed of committees, a program office, and operational support, as reflected in table 2. Since 2005, we have conducted several studies of OPM's retirement modernization, noting weaknesses in its management of the initiative. In February of that year, we reported that the agency lacked processes for retirement modernization acquisition activities, such as determining requirements, developing acquisition strategies, and implementing a risk program. Further, the agency had not established effective security management, change management, and program executive oversight. We recommended that the Director of OPM ensure that the retirement modernization program office expeditiously establish processes for effective oversight of the retirement modernization in the areas of system acquisition management, information security, organizational change management, and information technology (IT) investment management. In response, between 2005 and 2007, the agency initiated steps toward establishing management processes for retirement modernization and demonstrated the completion of activities with respect to each of our nine recommendations.
However, in January 2008, we reported that the agency still needed to improve its management of the program to ensure a successful outcome for its modernization efforts. Specifically, we reported that initial test results had not provided assurance that DBTS would perform as intended, the testing schedule increased the risk that the agency would not have sufficient resources or time to ensure that all system components were tested before deployment, and trends in identifying and resolving system defects had indicated a growing backlog of problems to be resolved prior to deployment. Further, we reported that although the agency had established a risk management process, it had not reliably estimated the program costs, and its progress reporting was questionable because it did not reflect the actual state of the program. We recommended that the Director of OPM address these deficiencies by conducting effective system tests and resolving urgent and high priority system defects prior to system deployment, in addition to improving program cost estimation and progress reporting. In response to our report, OPM stated that it concurred with our recommendations and was taking steps to address them. However, in March 2008, we determined that the agency was moving forward with system deployment and had not yet implemented its planned actions. OPM subsequently affirmed its agreement with our recommendations in April 2008 and reported that it had implemented or was in the process of implementing each recommendation. As of March 2009, however, these recommendations still had not been fully addressed. OPM remains far from fully implementing the retirement modernization capabilities described when it documented its plans for RetireEZ in 2007. The agency only partially implemented two of eight capabilities that it identified to modernize retirement processing. 
The remaining six capabilities, which were to be delivered through the DBTS contract, have not been implemented, and OPM’s plans to continue implementing them are uncertain. While the agency has taken steps to restructure the RetireEZ program without the DBTS contract, it has not developed a plan to guide its future modernization efforts. OPM’s retirement modernization plans from 2007 described eight capabilities that were to be implemented to achieve modernized processes and systems. As of late March 2009, the agency had partially implemented two of these capabilities while the remaining six had not been implemented (see table 3). Specifically, it had achieved partial implementation of an integrated database of retirement information that was intended to be accessible to OPM and agency retirement processing personnel. In this regard, the agency implemented a new database, populated with images of retirement information, which is accessible to OPM retirement processing personnel online. This database contains over 8 million files which, according to agency officials, represent approximately 80 to 90 percent of the available retirement information for all active federal employees. However, the capability for the information in the database to be integrated with OPM’s legacy retirement processing systems and to be accessible to other agency retirement processing personnel has not yet been implemented. OPM has also partially implemented enhanced customer service capabilities. Specifically, the agency acquired a new telephone infrastructure (i.e., additional lines) and hired additional customer service representatives to reduce wait times and abandonment rates. However, the agency has not yet developed the capabilities for OPM retirement processing personnel to provide enhanced customer support to active and retired federal employees through online account access and management. 
Moreover, six other capabilities have not been implemented—and plans to implement them are uncertain—because they were to be delivered through the now-terminated DBTS contract, which had been expected to provide a single system that would automate the processing of retirement applications, calculations, and benefit payments. Among the capabilities not implemented was one for other agencies’ automated submissions of retirement information to OPM that could be used to process retirement applications. While OPM began developing this capability by establishing interfaces with other agencies as part of its effort to implement DBTS, it discontinued the use of the interfaces for processing retirement applications when the DBTS contract was terminated. Thus, federal agencies that submit retirement information to OPM continue to provide paper packages and information when employees are ready to retire. Further, OPM has not implemented a planned capability for active and retired federal employees to access online retirement information through self-service tools. While the agency provided demonstrations of DBTS in April 2008 that showed the ability for employees to access information online, including applying for retirement and modeling future retirement benefits, this capability was to be provided by DBTS, and thus, no longer exists. The contractor had also been expected to deliver a consolidated system to support all aspects of retirement processing and an electronic case management system to support retirement processing. In the absence of these capabilities, the agency continues to manage cases through paper tracking and stand-alone systems. Additionally, OPM and federal agencies continue to rely on nonstandardized systems to determine and calculate retirement benefits, and federal retirees currently have only limited online, self-service tools. 
Program management principles and best practices emphasize the importance of using a program management plan that, among other things, establishes a complete description that ties together all program activities. An effective plan includes a description of the program's scope, implementation strategy, lines of responsibility and authority, management processes, and a schedule. Such a plan incorporates all the critical areas of system development and is to be used as a means of determining what needs to be done, by whom, and when. Furthermore, establishing results-oriented (i.e., objective, quantifiable, and measurable) goals and measures, which can be included in a plan, provides stakeholders with the information they need to effectively oversee and manage programs. A plan for the future of the RetireEZ program has not been completed. In November 2008, OPM began restructuring the program and reported that it was continuing toward retirement modernization without the DBTS contract. The restructuring efforts have resulted in a wide variety of documentation, including multiple descriptions of the program in formal agency reports, budget documentation, agency briefing slides, and related documents. For example, OPM's Fiscal Year 2008 Agency Financial Report, issued in November, described what the RetireEZ program is expected to achieve (e.g., provide retirement modeling tools for federal employees) once implemented. The agency's Annual Performance Report, dated January 2009, outlined that the new vision for the restructured program is "to support benefit planning and management throughout a participant's lifecycle through an enhanced federal retirement program." The agency also presented information to OMB that identified eight fiscal year 2009 program initiatives, as listed in table 4. The agency has developed a variety of informal program documents and briefing slides that describe retirement modernization activities.
For instance, one document prepared by the program office describes a five-phased approach that is intended to replace its previous DBTS-reliant strategy. The approach includes the following activities: (1) collecting electronic retirement information, (2) automating the retirement application process, (3) integrating retirement information, (4) developing retirement calculation technologies and tools, and (5) improving post-retirement processes through a technology solution. In addition, briefing slides also prepared by the program office outline a schedule for efforts to identify new technologies to support retirement modernization by drafting a request for information, which OPM expects to issue in late April 2009. Nevertheless, OPM's various reports and documents describing its planned retirement modernization activities do not provide a complete plan for its restructured program. Specifically, although agency documents describe program implementation activities, they do not include a definition of the program, its scope, lines of responsibility and authority, management processes, or schedule. Also, the modernization program documentation does not describe results-oriented (i.e., objective, quantifiable, and measurable) performance goals and measures. According to the RetireEZ program manager, the agency is developing plans, but they will not be ready for release until the new OPM director has approved them, which is expected to occur in April 2009. Until the agency completes and uses a plan that includes all of the above elements to guide its efforts, it will not be properly positioned to obtain agreement with relevant stakeholders (e.g., Congress, OMB, federal agencies, and OPM senior executives) for its restructured retirement modernization initiative. Further, the agency will also not have a key mechanism that it needs to help ensure successful implementation of future modernization efforts.
OPM has significant management weaknesses in five areas that are important to the success of its retirement modernization program: cost estimating, EVM, requirements management, testing, and program oversight. For example, the agency has not performed key steps, including the development of a cost estimating plan or completion of a work breakdown structure, both of which are necessary to develop a reliable program cost estimate. Also, OPM has not established and validated a performance measurement baseline, which is essential for reliable EVM. Further, although OPM is revising its previously developed system requirements, it has not established processes and plans to guide this work. Nor has the agency addressed test activities, even though developing processes and planning test activities early in the life cycle are recognized best practices for effective testing. Furthermore, although OPM's Executive Steering Committee and Investment Review Board have recently become more active regarding RetireEZ, these bodies did not exercise effective oversight in the past, which has allowed the aforementioned management weaknesses to persist. Notably, OPM has not established guidance regarding how these entities are to engage with the program when corrective actions are needed. Until OPM addresses these weaknesses, many of which we and others have made recommendations to correct, the agency's retirement modernization initiative remains at risk of failure. The establishment of a reliable cost estimate is a necessary element for informed investment decision making, realistic budget formulation, and meaningful progress measurement. A cost estimate is the summation of individual program cost elements that have been developed by using established methods and validated data to estimate future costs. According to federal policy, programs must maintain current and well-documented estimates of program costs, and these estimates must span the full expected life of the program.
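The idea that a cost estimate is the summation of individual cost elements, organized by a work breakdown structure, can be sketched as a simple roll-up. The element names and dollar figures below are hypothetical illustrations, not drawn from OPM's actual program.

```python
# Illustrative sketch of rolling a work breakdown structure (WBS) up into a
# point estimate. Element names and costs (in $ millions) are hypothetical.

wbs = {
    "1 Program management": 4.0,
    "2 System development": {
        "2.1 Requirements": 2.5,
        "2.2 Software": 9.0,
        "2.3 Testing": 3.5,
    },
    "3 Data conversion": 6.0,
}

def rollup(element):
    """Sum leaf cost elements through nested WBS levels."""
    if isinstance(element, dict):
        return sum(rollup(child) for child in element.values())
    return element

print(rollup(wbs))  # 25.0
```

In practice, a WBS dictionary defines the content of each element so that no cost is double-counted or omitted, which is why the absence of such a dictionary is noted as a weakness in this report.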
Our Cost Estimating and Assessment Guide includes best practices that agencies can use for developing and managing program cost estimates that are comprehensive, well-documented, accurate, and credible, and provide management with a sound basis for establishing a baseline to measure program performance and formulate budgets. This guide identifies a cost estimating process that includes initial steps such as defining the estimate’s purpose (i.e., its intended use, scope, and level of detail); developing the estimating plan (i.e., the estimating approach, team, and timeline); defining the program (e.g., technical baseline description); and determining the estimating structure (e.g., work breakdown structure). According to best practices, these initial steps in the cost estimating process are of the utmost importance, and should be fully completed in order for the estimate to be considered valid and reliable. OPM officials stated that they intend to complete a modernization program cost estimate by July 2009. However, the agency has not yet fully completed initial steps for developing the new estimate. Specifically, the agency has not yet fully defined the estimate’s purpose, developed the estimating plan, defined program characteristics in a technical baseline description, or determined the estimating structure. With respect to the estimate’s purpose, agency officials stated that the estimate will inform the budget justification of RetireEZ for fiscal year 2011 and beyond. However, the agency has not clearly defined the scope or level of detail of the estimate. Regarding the estimating plan, agency officials stated that they have created a timeline to complete the estimate by July 2009. However, the agency has not documented an estimating plan that includes the approach and resources required to complete the estimate in the time period identified. 
With respect to the technical baseline description, agency officials stated that they are in the advanced stages of developing a request for information and a concept of operations that will serve as the basis for a technical baseline description. These documents are expected to be reviewed for approval in April 2009. Regarding the estimating structure, the agency has developed a work breakdown structure that identifies elements of the program to be estimated. However, the agency has not yet developed a work breakdown structure dictionary that clearly defines each element. Weaknesses in the reliability of OPM’s retirement modernization cost estimate have been long-standing. We first reported on the agency’s lack of a reliable cost estimate in January 2008 when we noted that critical activities, including documentation of a technical baseline description, had not been performed, and we recommended that the agency revise the estimate. Although OPM agreed to produce a reliable program cost estimate, the agency has not yet done so. Until OPM fully completes each of the steps, the agency increases the risk that it will produce an unreliable estimate and will not have a sound basis for measuring program performance and formulating retirement modernization program budgets. OMB and OPM policies require major IT programs to use EVM to measure and report program progress. EVM is a tool for measuring program progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Such a comparison permits actual performance to be evaluated, based on variances from the planned cost and schedule, and future performance to be forecasted. Identification of significant variances and analysis of their causes helps program managers determine the need for corrective actions. Before EVM analysis can be reliably performed, developing a credible cost estimate is necessary. 
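The earned value comparisons described above follow standard EVM arithmetic: variances and performance indices derived from planned value, earned value, and actual cost. The sketch below uses the conventional formulas with hypothetical figures, not OPM program data.

```python
# Minimal sketch of standard earned value management (EVM) metrics.
# Dollar figures in the example are hypothetical.

def evm_metrics(planned_value, earned_value, actual_cost):
    """Return standard EVM variances and indices.

    planned_value (PV): budgeted cost of work scheduled to date
    earned_value  (EV): budgeted cost of work actually accomplished
    actual_cost   (AC): actual cost of the work accomplished
    """
    return {
        "cost_variance": earned_value - actual_cost,        # CV = EV - AC
        "schedule_variance": earned_value - planned_value,  # SV = EV - PV
        "cost_performance_index": earned_value / actual_cost,        # CPI = EV / AC
        "schedule_performance_index": earned_value / planned_value,  # SPI = EV / PV
    }

# A program that planned $10M of work, accomplished $8M worth, and spent
# $12M is behind schedule (SV < 0) and over cost (CV < 0).
m = evm_metrics(planned_value=10.0, earned_value=8.0, actual_cost=12.0)
print(m["cost_variance"])      # -4.0
print(m["schedule_variance"])  # -2.0
```

Negative variances (or indices below 1.0) are the early-warning signals that prompt the corrective actions discussed above; the comparison is only meaningful against a validated performance measurement baseline.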
In addition to developing a cost estimate, an integrated baseline review must be conducted to validate a performance measurement baseline and attain agreement of program stakeholders (e.g., agency and contractor officials) before reliable EVM reporting can begin. The establishment of a baseline depends on the completion of a work breakdown structure, an integrated master schedule, and budgets for planned work. Although the agency plans to begin reporting on the restructured program's progress using EVM in April 2009, the agency is not yet prepared to do so because initial steps have not been completed and are dependent on decisions about the program that have not been made. Specifically,
- the agency has not yet developed a reliable cost estimate for the program; such an estimate, which is critical for establishing reliable EVM, is not expected to be complete until July 2009;
- the agency does not plan to conduct an integrated baseline review to establish a reliable performance measurement baseline before beginning EVM reporting; and
- the work breakdown structure and integrated master schedule that agency officials report they have developed may not accurately reflect the full scope and schedule because key program documentation, such as the concept of operations, has not been completed.

This situation resembles the state of affairs that existed in January 2008, when we reported that OPM's EVM was unreliable because an integrated baseline review had not been conducted to validate the program baseline. At that time we recommended, among other things, that the agency establish a basis for effective use of EVM by validating a program performance measurement baseline through a program-level integrated baseline review. Although the agency stated that it agreed, it did not address this recommendation.
Until the agency has developed a reliable cost estimate, performed an integrated baseline review, and validated a performance measurement baseline that reflects its program restructuring, the agency is not prepared to perform reliable EVM. Engaging in EVM reporting without first performing these fundamental steps could again render the agency's assessment unreliable. Well-defined and managed requirements are a cornerstone of effective system development and acquisition. According to recognized guidance, disciplined processes for developing and managing requirements can help reduce the risks of developing a system that does not meet user and operational needs. Such processes include (1) developing detailed requirements that have been derived from the organization's concept of operations and are complete and sufficiently detailed to guide system development and (2) establishing policies and plans, including defining roles and responsibilities, for managing changes to requirements and maintaining bidirectional requirements traceability. OPM's retirement modernization requirements processes include some, but not all, of the elements needed to effectively develop and manage requirements. The agency began an effort to better develop its retirement modernization requirements in November 2008. This effort was in response to the agency's recognition that its over 1,400 requirements lacked sufficient detail, were incomplete, and required further development. The agency intends to complete this requirements development effort in April 2009. However, the requirements will not be derived from OPM's concept of operations because the agency is revising the concept of operations, which is expected to be completed by April 2009, to reflect the program restructuring. Further, OPM documentation indicates that the agency has not yet determined the level of detail to which requirements should be developed.
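Bidirectional traceability of the kind described above can be thought of as a pair of consistent mappings: each requirement traces forward to the design and test artifacts that implement it, and each artifact traces back to its source requirements. The sketch below uses hypothetical requirement and artifact identifiers for illustration only.

```python
# Hypothetical sketch of bidirectional requirements traceability. Each
# requirement maps forward to implementing artifacts; the backward map is
# derived from the forward map so the two directions stay consistent.
# All identifiers are illustrative, not from OPM's actual requirements.

from collections import defaultdict

forward = {
    "REQ-001 Calculate annuity": ["DES-10", "TEST-44"],
    "REQ-002 Accept online application": ["DES-12"],
}

backward = defaultdict(list)
for req, artifacts in forward.items():
    for artifact in artifacts:
        backward[artifact].append(req)

# A requirement with no forward links signals a traceability gap of the
# kind OPM's independent verification and validation contractor flagged.
untraced = [req for req, artifacts in forward.items() if not artifacts]
print(dict(backward))
```

The value of maintaining both directions is that a proposed change to any artifact can be checked against the requirements it satisfies, and vice versa, before the change is approved.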
Additionally, agency officials stated that OPM is developing a requirements development process for retirement modernization. With respect to requirements management, OPM developed an organizational charter that outlined roles and responsibilities for supporting efforts to manage requirements. However, the agency does not yet have a requirements management plan. OPM’s prior experience with DBTS illustrates the importance of effective requirements development and management. According to RetireEZ program officials, insufficiently detailed requirements, poorly controlled requirements changes, and inadequate requirements traceability were factors that contributed to DBTS not performing as expected. Moreover, these requirements development and management weaknesses were identified, and recommendations for improvement were made by OPM’s independent verification and validation contractor before DBTS deployment. However, the agency has not yet corrected these weaknesses. Until OPM fully establishes requirements development and management processes, the agency increases the risk that it will (1) identify requirements that are neither complete nor sufficiently detailed and (2) not effectively manage requirements changes or maintain bidirectional traceability, thus further increasing agency risk that it will produce a system that does not meet user and operational needs. Effective testing is an essential component of any program that includes developing systems. Generally, the purpose of testing is to identify defects or problems in meeting defined system requirements and satisfying user needs. To be effectively managed, testing should be planned and conducted in a structured and disciplined fashion that adheres to recognized guidance and is coordinated with the requirements development process. Beginning the test planning process in the early stages of a program life cycle can reduce rework later in the program. 
Early test planning in coordination with requirements development can provide major benefits. For example, planning for test activities during the development of requirements may reduce the number of defects identified later and the costs related to requirements rework or change requests. Further, planning test activities early in a program’s life cycle can inform requests for proposals and help communicate testing expectations to potential vendors. OPM has not begun to plan test activities in coordination with developing its requirements for the RetireEZ program. According to OPM officials, the agency intends to begin its test planning by revising the previously developed DBTS test plans after requirements have been developed. However, the agency has not yet added test planning to its project schedule. Early test planning is especially important to avoid repeating the agency’s experience during DBTS testing when it identified more defects than it could resolve before system deployment. In January 2008, we reported that an unexpectedly high number of defects were identified during testing; yet, the deployment schedule had increased the risk of not resolving all defects that needed to be corrected before deploying DBTS. According to the RetireEZ program officials, the failure to fully address these defects contributed to the limited number of federal employees who were successfully processed by the system when it was deployed in February 2008. If it does not plan test activities early in the life cycle of RetireEZ, OPM increases the risk that it will again deploy a system that does not satisfy user expectations and meet requirements (i.e., accurately calculate retirement benefits) because of its potential inability to address a higher number of defects than expected. 
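The defect-backlog dynamic described above is simple arithmetic: when defects are found faster than they are resolved, the open backlog grows toward the deployment date. The weekly counts below are hypothetical, chosen only to illustrate the trend this report describes.

```python
# Illustrative sketch of a growing defect backlog during testing.
# Weekly counts are hypothetical, not OPM's actual DBTS test data.

found    = [30, 45, 50, 60, 55]   # defects identified per week of testing
resolved = [20, 25, 30, 35, 40]   # defects resolved per week

backlog, open_defects = [], 0
for f, r in zip(found, resolved):
    open_defects += f - r
    backlog.append(open_defects)

print(backlog)  # [10, 30, 50, 75, 90] -- a steadily growing backlog
```

Tracking this trend against a fixed deployment date is one way an oversight body can see, weeks in advance, that a system will ship with unresolved defects.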
Moreover, criteria used to develop requests for proposals and communicate testing expectations to potential vendors could be better informed if the agency plans RetireEZ test activities early in the life cycle. GAO and OMB guidance calls for agencies to ensure effective oversight of IT projects throughout all life-cycle phases. Critical to effective oversight are investment management boards made up of key executives who regularly track the progress of IT projects such as system acquisitions or modernizations. These boards should maintain adequate oversight and track project performance and progress toward predefined cost and schedule goals, as well as monitor project benefits and exposure to risk. Another element of effective IT oversight is employing early warning systems that enable management boards to take corrective actions at the first sign of cost, schedule, and performance slippages. OPM’s Investment Review Board was established to ensure that major investments are on track by reviewing their progress and determining appropriate actions when investments encounter challenges. Despite meeting regularly and being provided with information that indicated problems with the retirement modernization, the board did not ensure that the investment was on track, nor did it determine appropriate actions for course correction when needed. For example, from January 2007 to August 2008 the board met and was presented with reports that described problems the retirement modernization program was facing, such as the lack of an integrated master schedule and earned value data that did not reflect the “reality or current status” of the program. However, meeting minutes indicate that no discussion or action was taken to address these problems. According to a member of the board, OPM guidance regarding how the board is to communicate recommendations and corrective actions when needed for the investments it is responsible for overseeing has not been established. 
In addition, OPM established an Executive Steering Committee to oversee retirement modernization. According to its charter, the committee is to provide strategic direction, oversight, and issue resolution to ensure that the program maintains alignment with the mission, goals, and objectives of the agency and is supported with required resources and expertise. However, the committee was inactive for most of 2008 and, consequently, did not exercise oversight of the program during a crucial period in its development. For example, from January 2008 until October 2008, the committee discontinued its formal meetings, and as a result, it was not involved in key program decisions, including the deployment of DBTS. Further, a member of the committee noted that OPM guidance for making recommendations and taking corrective actions also has not been provided. The ineffectiveness of the board and the inactivity of the committee allowed program management weaknesses in the areas of cost estimation, EVM, requirements management, and testing to persist and raise concerns about OPM’s ability to provide meaningful oversight as the agency proceeds with its retirement modernization. Without fully functioning oversight bodies, OPM cannot monitor modernization activities and make the course corrections that effective boards and committees are intended to provide. OPM’s retirement modernization initiative is in transition from a program that was highly dependent on the success of a major contract that no longer exists, to a restructured program that has yet to be fully defined. Although the agency has been able to partially implement a database of retirement information and improvements to customer service, it remains far from implementing six other key capabilities. 
Recognizing that much work remains, OPM has undertaken steps to restructure the retirement modernization program, but it has not yet produced a complete description of its planned program, including fundamental information about the program's scope, implementation strategy, lines of responsibility and authority, management processes, and schedule. Further, OPM's retirement modernization program restructuring does not yet include definitions of results-oriented goals and measures against which program performance can be objectively and quantitatively assessed. In addition, OPM has not overcome managerial shortcomings in key areas of program management, including areas on which we have previously reported. Specifically, the agency is not yet positioned to develop a reliable program cost estimate or perform reliable EVM, both of which are critical to effective program planning and oversight. Nor has OPM overcome weaknesses in its management of system testing and defects, two activities that proved problematic as the agency was preparing to deploy the RetireEZ system that subsequently was terminated. Adding to these long-standing concerns are weaknesses in OPM's process to effectively develop and manage requirements for whatever system or service it intends to acquire or develop. Finally, these weaknesses have been allowed to persist by entities within the agency that were ineffective in overseeing the retirement modernization program. As a consequence, the agency is faced with significant challenges on two fronts: defining and transitioning to its restructured program, and addressing new and previously identified managerial weaknesses. Until OPM addresses these weaknesses, many of which were previously identified by GAO and others, the agency's retirement modernization initiative remains at risk of failure.
Institutionalizing effective planning and management is critical not only for the success of this initiative, but also for that of other modernization efforts within the agency. To improve OPM's effort toward planning and implementing its retirement modernization program by addressing management weaknesses, we recommend that the Director of the Office of Personnel Management provide immediate attention to ensure the following six actions are taken:

- Develop a complete plan for the restructured program that defines the scope, implementation strategy, lines of responsibility and authority, management processes, and schedule. Further, the plan should establish results-oriented (i.e., objective, quantifiable, and measurable) goals and associated performance measures for the program.
- Develop a reliable cost estimate by following the best practice steps outlined in our Cost Estimating and Assessment Guide, including definition of the estimate's purpose, development of an estimating plan, definition of the program's characteristics, and determination of the estimating structure.
- Establish a basis for reliable EVM, when appropriate, by developing a reliable program cost estimate, performing an integrated baseline review, and validating a performance measurement baseline that reflects the program restructuring.
- Develop a requirements management plan and execute the processes described in the plan to develop retirement modernization requirements in accordance with recognized guidance.
- Begin RetireEZ test planning activities early in the life cycle.
- Develop policies and procedures that would establish meaningful program oversight and require appropriate action to address management deficiencies.

The Director of the Office of Personnel Management provided written comments on a draft of this report. (The comments are reproduced in app. II.) In the comments, OPM agreed with our recommendations and stated that it had begun to address them.
To this end, the Director stated that the agency had, among other actions, begun revising its retirement modernization plans, developing a new program cost estimate, planning for accurate EVM reporting, incorporating recognized guidance in requirements management planning, and planning test activities during requirements development. If the recommendations are properly implemented, they should better position OPM to effectively manage its retirement modernization initiative. The agency also provided comments on the draft report regarding our description of the federal retirement application process, as well as our characterizations of OPM’s EVM and requirements management capabilities vis-à-vis the retirement modernization program. In each of these instances, we made revisions as appropriate. We are sending copies of this report to the Director of the Office of Personnel Management, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. As requested, the objectives of our study were to (1) assess the status of the Office of Personnel Management’s (OPM) efforts toward planning and implementing the RetireEZ program and (2) evaluate the effectiveness of the agency’s management of the modernization initiative. 
To assess the status of OPM's efforts toward planning and implementing the RetireEZ program, we

- reviewed and analyzed program documentation, including program management plans, briefing slides, and project status documentation, to identify planned retirement modernization capabilities and determine to what extent these capabilities have been implemented;
- evaluated the agency's documentation about restructuring the program and analyzed the extent to which the documentation describes current and planned RetireEZ program activities;
- identified and evaluated the agency's program goals and measures and compared them to relevant guidance to determine the extent to which the goals and measures are described in results-oriented terms;
- supplemented agency program documentation and our analyses by interviewing agency and contractor officials, including the OPM Director, Chief Information Officer, Chief Financial Officer, Director of Modernization, Associate Director for Human Resources Products and Services Division, and executives from Hewitt Associates and Northrop Grumman Corporation; and
- observed retirement operations and ongoing modernization activities at OPM and contractor facilities in Washington, D.C.; Boyers, Pennsylvania; and Herndon, Virginia.

To determine the effectiveness of OPM's management of the retirement modernization initiative, we evaluated the agency's management of program cost estimating, earned value management (EVM), requirements, test planning, and oversight and compared the agency's work in each area with recognized best practices and guidance.
Specifically:

- To evaluate whether OPM effectively developed a reliable program cost estimate, we analyzed the agency's program documentation and determined to what extent the agency had completed key activities described in our Cost Estimating and Assessment Guide.
- To assess OPM's implementation of EVM, we reviewed program progress reporting documentation and compared the agency's plans for restarting its EVM-based progress reporting against relevant guidance, including our Cost Estimating and Assessment Guide.
- Regarding requirements management, we evaluated OPM's processes for developing and managing retirement systems modernization requirements and compared the effectiveness of those processes against recognized guidance.
- To determine the effectiveness of the agency's test planning for the retirement modernization, we reviewed program activities and test plans against best practices and evaluated the extent to which the agency has begun planning for these activities.
- We reviewed and analyzed documentation from program oversight entities and evaluated the extent to which these entities took actions toward ensuring the RetireEZ program was being effectively overseen.

We also evaluated OPM's progress toward implementing our open recommendations and interviewed OPM and contractor officials as noted. We conducted this performance audit at OPM headquarters in Washington, D.C., the Retirement Operations Center for OPM in Boyers, Pennsylvania, and contractor facilities in Herndon, Virginia, from May 2008 through April 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, key contributions to this report were made by Mark T. Bird, Assistant Director; Barbara S. Collier; Neil J. Doherty; David A. Hong; Thomas J. Johnson; Rebecca E. LaPaze; Lee A. McCracken; Teresa M. Neven; Melissa K. Schermerhorn; Donald A. Sebers; and John P. Smith.

For the past two decades, the Office of Personnel Management (OPM) has been working to modernize the paper-intensive processes and antiquated systems used to support the retirement of federal employees. By moving to an automated system, OPM intends to improve the program's efficiency and effectiveness. In January 2008, GAO recommended that the agency address risks to successful system deployment. Nevertheless, OPM deployed a limited initial version of the modernized system in February 2008. After unsuccessful efforts to address system quality issues, OPM suspended system operation, terminated a major contract, and began restructuring the modernization effort, also referred to as RetireEZ. For this study, GAO was asked to (1) assess the status of OPM's efforts to plan and implement the RetireEZ program and (2) evaluate the effectiveness of the agency's management of the modernization initiative. To do this, GAO reviewed OPM program documentation and interviewed agency and contractor officials. OPM remains far from achieving the modernized capabilities it had planned. Specifically, the agency has partially implemented two of eight planned capabilities: (1) an integrated database of retirement information accessible to OPM and agency retirement processing personnel and (2) enhanced customer service capabilities that support customer needs and provide self-service tools. However, the remaining six capabilities have yet to be implemented because they depended on deliverables that were to be provided by a contract that is now terminated.
Examples of these missing capabilities include: (1) automated submission of retirement information through interfaces with federal agencies and (2) Web-accessible self-service retirement information for active and retired federal employees. Further, OPM has not yet developed a complete plan that describes how the program is to proceed without the system that was to be provided under the terminated contract. Although agency documents describe program implementation activities, they do not include a definition of the program, its scope, lines of responsibility and authority, management processes, and a schedule. Also, modernization program documentation does not describe results-oriented performance goals and measures. Until the agency completes and uses a plan that includes all of the above elements to guide its efforts, it will not be properly positioned to move forward with its restructured retirement modernization initiative. Further, OPM has significant weaknesses in five key management areas that are vital for effective development and implementation of its modernization program: cost estimating, earned value management (a recognized means for measuring program progress), requirements management, testing, and oversight. For example, the agency has not developed a cost estimating plan or established a performance measurement baseline--prerequisites for effective cost estimating and earned value management. Further, although OPM is revising its previously developed system requirements, it has not established processes and plans to guide this work or addressed test activities even though developing processes and plans, as well as planning test activities early in the life cycle, are recognized best practices for effective requirements development and testing. 
Finally, although OPM's Executive Steering Committee and Investment Review Board have recently become more active regarding RetireEZ, these bodies did not exercise effective oversight in the past, which allowed the aforementioned management weaknesses to persist; moreover, OPM has not established guidance regarding how these entities are to intervene when corrective actions are needed. Until OPM addresses these weaknesses, many of which GAO and others have made recommendations to correct, the agency's retirement modernization initiative remains at risk of failure. Institutionalizing effective management is critical not only for the success of this initiative, but also for that of other modernization efforts within the agency.
The secretary of the Department of the Interior created OAS in 1973 to resolve several aviation program problems: numerous accidents, improper budgeting and financial management, and poor utilization of aircraft. A 1973 task force, comprising representatives from across the Interior bureaus, attributed these problems to the decentralized aviation program—with each bureau responsible for all aviation functions. The secretary of the Department of the Interior charged OAS with responsibility for (1) coordinating and directing all fleet and contract aircraft; (2) establishing and maintaining standards for safety, procurement, and utilization; (3) budgeting for and financially controlling fleet and contract aircraft; and (4) providing technical aviation services to the bureaus. As the program evolved, OAS assumed responsibility for policy oversight and aviation services, while the bureaus became responsible for implementing safety requirements, deciding on whether to use fleet or contract aircraft, and the scheduling and use of their aircraft. OAS works with the Aviation Management Board of Directors to involve the bureaus in formulating policy and managing aviation activities. In addition, since 1996, the bureaus’ aviation managers have also participated with OAS in setting fleet rates and planning for aircraft replacement and projected aviation program requirements. Eight Interior bureaus use OAS’s services in varying degrees to carry out their respective missions as shown in figure 1. The Bureau of Land Management—which accounted for over one-third of the OAS program in flight hours for fiscal year 2000— uses aircraft to carry out its fire-fighting and resource management missions. The Fish and Wildlife Service and the National Park Service depend heavily on OAS to manage fleet aircraft to achieve their respective missions. OAS is headquartered in Boise, Idaho, with significant operations located in Anchorage, Alaska. 
It has additional offices in Boise; Atlanta, Georgia; and Phoenix, Arizona. OAS operated with approximately 94 FTE in fiscal year 2000, 63 located in the lower 48 states and 31 located in the Anchorage office. In fiscal year 2000, OAS managed 95 government-owned aircraft, 42 based in the lower 48 states and 53 based in Alaska. OAS contracts for aircraft maintenance of fleet aircraft in the lower 48 states. In Alaska, OAS contracts for maintenance of fleet aircraft with private vendors, but maintains an in-house core maintenance staff. To fulfill its responsibilities, OAS set up functional divisions, including financial and information management, acquisition, and technical services. However, OAS accounts for and reports costs across four lines of business: fleet, contract, rental, and other. Of the $117 million spent on aviation services in fiscal year 2000, OAS received an appropriation of only $800,000 (or approximately seven FTE) to provide oversight of OAS department-wide aviation policies and procedures. Most of OAS’s costs are financed through a working capital fund, established in the Office of the Secretary to finance a continuing cycle of operations, and must be repaid to the fund by the bureaus and others using the services based on rates determined by OAS. Since 1975 Interior’s aviation accident rate has been cut in half, from 18.8 accidents per 100,000 flight hours in fiscal year 1975 to 8.7 accidents per 100,000 flight hours in fiscal year 2001. A number of OAS efforts have contributed to this reduction. Prior to the establishment of OAS’s aviation safety efforts, safety standards varied from bureau to bureau and between regions within bureaus; in some cases, standards did not exist at all. According to the 1973 task force, virtually no control over aviation operations existed within the department, which resulted in a high accident rate and higher operational costs. 
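The accident-rate figures cited above are a simple normalization, accidents per 100,000 flight hours. A minimal sketch of the computation follows; the flight-hour and accident totals in the example are hypothetical, chosen only to illustrate the metric, since the report gives only the resulting rates:

```python
def accident_rate(accidents: int, flight_hours: float) -> float:
    """Accidents per 100,000 flight hours, the metric used in the report."""
    return accidents / flight_hours * 100_000

# Hypothetical illustration: 12 accidents over 138,000 flight hours works out
# to roughly 8.7 accidents per 100,000 flight hours.
rate = accident_rate(12, 138_000)
```

Normalizing by flight hours rather than counting raw accidents is what makes year-to-year comparisons meaningful when total flying activity changes.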
OAS officials attribute the department's reduced accident rate, in part, to the implementation of a standard aviation operating policy. OAS sets pilot qualifications and proficiency standards as well as standards for aircraft maintenance and equipment inspections. These standards exceed the Federal Aviation Administration's (FAA) requirements. In addition, OAS periodically evaluates the bureaus' implementation of the aviation program, with a special emphasis on safe operations. The OAS Aviation Safety Management Office, reporting to the OAS director, is responsible for policy development, implementation, and review of the department's (1) aviation safety management and aircraft accident/incident prevention programs; (2) accident and incident investigation; (3) management of the department's reporting system for aircraft accidents, incidents, and hazards; and (4) management of the OAS aviation and occupational safety and health programs. Since April 1995, OAS has been required to report accidents involving fatalities, serious injuries, or substantial damage to the National Transportation Safety Board and to assist the board with accident investigations when appropriate. The OAS Division of Technical Services oversees many day-to-day safety concerns, such as pilot training, aircraft engineering and maintenance, and technical policy development. The bureau directors are ultimately responsible for adherence to standards and the implementation of an effective accident prevention program. Since safety oversight was centralized under OAS, Interior has seen a dramatic decline in the rate of accidents, as shown in figure 2. OAS accepts applicable FAA regulations as baseline criteria for its aviation operations and then applies additional standards in order to reduce accidents that occur during hazardous flying conditions and specialized operations required by the bureaus' unique missions.
These standards are published in the department's manual and in OAS's operational procedures memoranda. Additional policy directives issued by the bureaus may be more restrictive but may never be less restrictive than OAS's standards. These manuals specify more stringent pilot qualifications than those required by federal aviation regulations. For example, FAA requires pilots who fly passengers on commuter aircraft to have a commercial pilot certificate, which requires a minimum of 250 flight hours. However, OAS requires its contract pilots to have 1,500 flight hours to be eligible to fly missions for Interior. OAS also requires most of its fleet pilots to have a minimum of 500 hours of time commanding an aircraft to operate government-controlled aircraft, although there is no similar requirement in the federal aviation regulations. OAS has also developed additional aircraft maintenance standards for all Interior-owned aircraft and all contract aircraft that operate for Interior. For example, OAS requires a flight test following an aircraft overhaul, a major repair, or a replacement of engine or propeller. In addition to requirements for flight tests and 100-hour inspections, OAS developed standards for the inspection and maintenance of special-use and mission-related equipment that is not covered by FAA regulations. Although OAS strives to meet or exceed all FAA regulatory standards on manufacturer requirements, OAS has granted exceptions to manufacturers' weight requirements for certain aircraft: eight Cessna 206 Amphibians and one De Havilland DHC-2T Beaver. OAS granted these exceptions to the Fish and Wildlife Service to allow the aircraft to exceed the manufacturers' weight limitations when the service conducts surveys of migratory birds. The exceptions were required to compensate for special equipment needed to conduct these surveys and to carry extra fuel during long flights over remote areas.
OAS granted the exceptions with several stipulations designed to enhance the safety of these operations. Furthermore, to verify that the aircraft are operating under safe conditions, OAS had an engineering analysis conducted on the eight Cessna aircraft and has an engineering analysis in progress on the De Havilland Beaver. OAS also awarded a development contract on June 5, 2001, to provide a replacement aircraft that will meet all migratory bird mission requirements, thereby eliminating the need for all overweight exceptions to policy. From fiscal year 1997 through fiscal year 2000, OAS did not recover about $4 million from Interior’s bureaus. We found two primary reasons why OAS set rates too low to fully recover its costs: (1) actual flight hours were lower than the projected hours based on historical usage and (2) all costs were not included in the estimates. As a result, OAS had to subsidize the costs of the aircraft used by Interior bureaus in part with funds from its reserve accounts, collected in prior years, such as the reserve fund for replacing aircraft. OAS’s failure to recover all its costs from the bureaus was not attributable to any faults in OAS’s accounting system but to deficiencies in the fleet rate model and rate process. We found the accounting system capable of producing financial information that is reasonably complete, reliable, and useful to OAS management for the purposes of setting rates. OAS recovers its costs from users by charging for its services. Costs for fleet aircraft are recovered based on fleet rates, and costs for contract aircraft are recovered based on agreements for the cost of the contract plus OAS’s costs for servicing these agreements. OAS provides four lines of services—fleet, contract, rental, and miscellaneous (other)—to Interior’s bureaus and other agencies, such as those within the Departments of Defense and of Agriculture. 
For fiscal years 1997 through 2000, OAS failed to recover about $4 million from the Interior sector of its business while realizing a slight overcharge of approximately $400,000 from agencies outside Interior. Table 1 shows, by business line, where these unrecovered costs occurred. As table 1 shows, the majority of unrecovered costs were in the fleet business line. The fleet business recovered less of its costs because OAS and the bureaus' aviation managers had not correctly determined and set the appropriate rates. To determine the rates it needs to charge its users to recover the costs of its services, OAS captures the historical costs associated with each aircraft. OAS then projects the future costs based on its analysis of the historical costs, adjusted for inflation, and determines a means by which to allocate projected costs to the appropriate user. Based on this allocation, OAS calculates the hourly and monthly fleet rates using a fleet rate model. OAS then meets with the bureaus' aviation managers to get their input on the rates and makes subsequent adjustments to its projections of future costs if necessary. Finally, the aviation managers and OAS agree to the fleet rates, and OAS and each bureau sign an interagency agreement that sets the rate. In order to allow the bureaus lead time to budget for future costs, rates are set 2 years in advance and adjusted, if necessary. OAS and the aviation managers do not have a process to monitor rates periodically to determine if the rates fully recover costs.
If OAS had solicited the bureaus for projected flight hours, which may change from year to year because of changes in mission requirements, it would have had a more accurate projection of usage and therefore could have set the rates more precisely. The use of 5-year historical averages has resulted in an overestimation of the number of flight hours when compared to declining actual usage in recent years. According to an OAS official, the bureaus accept this higher projection of flight hours based on 5-year historical usage, because it results in lower rates. For example, if an aircraft has (1) an estimated cost of $100,000 based on historical costs and (2) an estimated usage of 200 flight hours based on the historical averages, the resulting rate would be $500 per flight hour. However, if the actual usage were reduced to 100 flight hours, the actual cost recovery for that aircraft would only be $50,000 or one-half of the projected recovery. As a result, the rate set would not fully recover the costs. While it is to be expected that flight hours vary to some degree from the projected usage, the use of more accurate projections and resulting rates would result in more accurate recovery of the costs. Additionally, OAS did not include in its calculations all the costs that needed to be considered in setting rates. From 1991 through 2000, in the Alaskan operations, OAS omitted from its rate calculation approximately $1.9 million in costs for aircraft maintenance. Fleet rates were therefore significantly lower than needed to recover the costs. OAS did not have a process in place to recognize the error and the resulting underrecovery of costs in a timely fashion. OAS has since taken actions to recoup the costs of the Alaska fleet maintenance operations and now includes these costs in its rate calculations. 
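The under-recovery mechanism in the $500-per-flight-hour example above can be sketched directly. The figures are the report's own illustrative numbers; the function names are ours:

```python
# Sketch of the fleet rate-setting arithmetic described in the report.
# Figures match the report's illustrative example; names are ours.

def fleet_rate(projected_cost: float, projected_hours: float) -> float:
    """Hourly rate needed to recover projected_cost over projected_hours."""
    return projected_cost / projected_hours

def recovery_shortfall(projected_cost: float, projected_hours: float,
                       actual_hours: float) -> float:
    """Projected cost left unrecovered when actual usage falls short of the projection."""
    rate = fleet_rate(projected_cost, projected_hours)
    return projected_cost - rate * actual_hours

# Report's example: $100,000 of cost spread over a projected 200 flight hours
# yields a $500-per-hour rate; if only 100 hours are actually flown, half the
# cost ($50,000) goes unrecovered.
rate = fleet_rate(100_000, 200)
shortfall = recovery_shortfall(100_000, 200, actual_hours=100)
```

The sketch makes the report's point concrete: when projected hours are overstated (for example, by relying on 5-year historical averages against declining actual usage), the shortfall is proportional to the gap between projected and actual hours.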
OAS has also not included in its projection all the costs of employees’ postretirement health benefits and of the Civil Service Retirement System employee pension plan for current OAS employees engaged in work directly related to aviation services and therefore is not recovering these costs from its users. OAS has taken steps to control increases in program costs, but could potentially save several million dollars more annually if it implemented a more cost-effective approach to using aircraft. In an effort to control costs, OAS has reduced staff and implemented strategies to operate more efficiently. As a further effort, OAS conducted cost comparisons and determined that it was more cost effective to maintain aircraft under government ownership than to contract for aircraft. Despite these efforts, OAS has not managed the use and scheduling of aircraft, a major factor of the aviation program’s cost. We analyzed the savings attributable to improvements in fleet and contract utilization and found that a moderate increase in average annual flight hours per aircraft could translate into savings of several million dollars annually. However, until OAS sets results-oriented performance goals and measures as part of a strategic aviation planning process and monitors its performance on an ongoing basis, it cannot track its progress in achieving additional program savings. OAS has taken several actions to control the cost of operations to maintain fleet rate cost increases consistent with the producer price index for transportation since 1995. In particular: OAS decreased staffing levels from 124 staff in fiscal year 1992 to 94 staff in fiscal year 2000, a 24-percent decrease. Because most OAS costs are personnel-related, this reduction significantly decreased OAS’s costs. 
The OAS Acquisition Management Division implemented new contracting procedures to streamline the contracting process and established interdepartmental agreements with the Department of Agriculture's Forest Service to facilitate aircraft sharing arrangements. OAS is developing Web-based training for bureau aviation personnel, reducing training cost by more than $100,000 during the first 6 months of program implementation. To examine the cost effectiveness of government ownership, OAS compared the costs of fleet aircraft with the costs of contracted aircraft. OAS found that, given the existing fleet aircraft, equipment, locations, and missions, retaining the fleet under government ownership was, on average, $243 per flight hour less expensive than contracting for aircraft. In making these comparisons, OAS contracted for two comprehensive studies (one in 1996 and one in 2001) that were to follow the standard requirements laid out in Office of Management and Budget Circular A-76, "Performance of Commercial Activities," for ensuring that the cost comparisons between government and contracted operations were conducted appropriately. The 1996 study concluded that all but 2 of the 84 aircraft examined were, on average, significantly more cost effective under government ownership. The 2001 study found all but 1 of the 89 aircraft reviewed to be cost effective. OAS also contracted for a cost comparison of aviation maintenance costs and solicited bids from private vendors to maintain the fleet in Alaska during 1995. As part of the A-76 process, OAS also prepared a bid proposing a streamlined government operation that would lower its maintenance costs by reducing the number of maintenance personnel. While several vendors expressed interest, none ultimately bid on the contract to assume maintenance operations for the Alaskan service. Some bidders took exception to the minimum wage provisions issued by the Department of Labor that were included in the solicitation.
OAS requested a clarification regarding wage determination rates, but did not receive a reply; therefore, the wage provisions remained in the solicitation as issued. OAS won the bid to continue in-house maintenance and implemented the streamlined organization, reducing the number of maintenance personnel from 13 to 9. Although OAS was organized in 1973 to help improve the utilization of government-controlled aircraft, the use of fleet aircraft declined from about 350 hours per aircraft in fiscal year 1973 to 246 hours per aircraft in fiscal year 2000. The task force and several recent reports recommended more centralization of scheduling; however, OAS has not been able to fully implement these recommendations because the bureaus determine the aviation resources needed to accomplish their missions. In 1995, the inspector general of the Department of the Interior estimated that Interior spent $2.3 million throughout 1992 and 1993 in unnecessary costs because the bureaus did not schedule flights when fleet aircraft were available and did not coordinate these aircraft either within each bureau or among the bureaus. The report suggested that OAS could be a focal point for scheduling and use of the government-owned fleet, or designate a bureau as the schedule coordinator within specified regional areas. In 1996, the General Services Administration also reviewed the Interior aviation program and identified the potential for significant savings related to utilization. At the time, the Interior average of 252 hours was significantly less than the federal average of 350 hours per year, according to the report. The report estimated that increasing the average hours per aircraft to the federal average of 350 hours per year would result in an annual savings of $715,000 in fixed costs and more than $4 million from the disposal of multiple fleet aircraft. The General Services Administration did not estimate any savings for variable costs. 
We also analyzed the potential for program savings resulting from improved aircraft utilization. Our analysis is meant to illustrate the potential for savings—not to identify what utilization improvements should be made by OAS and the bureaus. We considered two strategies to increase the fleet’s average number of flight hours per year—either reduce the size of the fleet or increase the total hours flown. Reducing the number of fleet aircraft could reduce fixed program costs, while increasing the total number of hours flown by fleet aircraft could reduce the variable program costs. If fewer fleet aircraft could fly the required missions, then the utilization of the fleet could be increased and the fixed cost associated with some fleet aircraft could be eliminated. As shown in table 2, a 30-percent reduction in the size of the fleet increases average flight hours per aircraft per year from 221 to 316 hours per year based on actual fiscal year 2000 fleet flight hours. We also looked at the potential to realize variable cost savings. These savings could be achieved by using fleet aircraft instead of contract aircraft when fleet costs are less than contract costs. For example, according to the OAS 2001 cost comparison study, certain contract aircraft are 100 to 235 percent more expensive to operate. For these aircraft, the OAS’s estimated average net variable cost savings between the fleet and contract aircraft was $778 per flight hour. As shown in table 3, if it were possible to convert 4,425 flight hours to fleet operations, then the average utilization per fleet aircraft would increase by 20 percent, and the potential variable cost saving would be about $3.4 million annually. However, in order to determine the actual savings potential, OAS and the bureaus would need to conduct a detailed review of opportunities on an aircraft- by-aircraft basis. OAS and the bureaus have not been able to improve aircraft utilization. 
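The arithmetic behind the savings estimates in tables 2 and 3 can be reproduced directly. The 221-hour average, 30-percent fleet reduction, 4,425 convertible flight hours, and $778-per-hour net variable saving are taken from the report; the helper names are ours:

```python
# Sketch of the utilization arithmetic behind tables 2 and 3. Inputs are the
# report's figures; the function names are ours.

def avg_hours_after_fleet_cut(avg_hours: float, cut_fraction: float) -> float:
    """New average annual hours per aircraft if the fleet shrinks by
    cut_fraction while total mission hours stay constant."""
    return avg_hours / (1 - cut_fraction)

def variable_cost_savings(converted_hours: float,
                          net_saving_per_hour: float) -> float:
    """Annual variable-cost saving from flying contract hours on cheaper
    fleet aircraft instead."""
    return converted_hours * net_saving_per_hour

# Table 2: a 30-percent fleet reduction lifts average utilization from
# 221 to about 316 hours per aircraft per year.
new_avg = avg_hours_after_fleet_cut(221, 0.30)

# Table 3: converting 4,425 contract flight hours at a $778-per-hour net
# saving yields about $3.4 million annually.
savings = variable_cost_savings(4_425, 778)
```

As the report cautions, these are illustrative totals; the actual savings would depend on an aircraft-by-aircraft review by OAS and the bureaus.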
Citing its history and relationship with the bureaus, OAS did not implement all the utilization recommendations made in the prior studies because it believes it lacks the authority and responsibility to mandate bureau program and mission requirements—and hence, utilization—under departmental regulations. While bureau aviation managers point to some examples in which improved utilization has resulted in savings, they have not attempted to make a systemwide improvement in utilization. Bureau aviation managers noted that improvements in utilization are difficult to implement because of other factors: weather, high-priority or time-critical missions, workload peaks, mission-required equipment, and the aircraft’s physical location. OAS does not set results-oriented performance goals and measures as part of a strategic aviation planning process and does not monitor its performance on an ongoing basis. As a result, it cannot effectively track its performance or measure its results on a consistent basis. OAS has tracked its performance on a sporadic basis in response to requests for information, legislative requirements, or, most recently, as part of the rate-setting process, but it has not linked performance measurement to results-oriented goals. For example, OAS tracked the cost and performance of the Alaskan operations as part of the reorganization, but discontinued monitoring the operations’ performance after 2 years. Rate setting is a critical component of OAS’s program operations because OAS must recover its costs and maintain adequate funding for operations, future aircraft replacement, and accident reserves. Shortfalls in program costs, such as those resulting from inaccurately setting rates, would have been less likely to occur year after year if the bureaus had evaluated whether their reliance on historical averages correctly predicted future costs and usage. 
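The last point, that rates built on historical averages of declining flight hours overstate future usage and so under-recover costs, can be shown with a minimal sketch. All dollar and hour figures below are hypothetical, chosen only to make the mechanism visible.

```python
def hourly_rate(budgeted_cost: float, projected_hours: float) -> float:
    # OAS must recover its costs, so the rate spreads the budgeted
    # cost over the hours the bureaus are projected to fly.
    return budgeted_cost / projected_hours

history = [10_000, 9_000, 8_000]          # hypothetical declining annual flight hours
projected = sum(history) / len(history)   # 9,000-hour historical average
actual = 7_000                            # the decline continues

rate = hourly_rate(900_000, projected)    # $100 per flight hour
recovered = rate * actual                 # only $700,000 of the $900,000 budget
shortfall = 900_000 - recovered
print(shortfall)                          # the unrecovered cost
```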
Consideration of both historical and projected data would help OAS bring the best available information to bear in estimating usage and setting rates. Periodic comparisons of the rates set with the actual costs incurred would have helped ensure that all costs were recovered. OAS acting alone cannot improve the utilization of aircraft. Traditionally, the bureaus have not coordinated their efforts to use their aviation resources in a more cost-effective manner. As a result, fleet aircraft are not being fully utilized; better utilization could lead to significant savings. Absent a strategic aviation plan for the department, it is difficult to analyze future requirements by mission and flight hours. OAS and the bureaus could begin the process for fuller utilization if they established a strategic aviation plan that, among other things, sets results-oriented performance goals and measures for the department and then, following that plan, analyzed future requirements for the department. Such an analysis could help them identify new opportunities to reduce cost, maintain the quality of services, and maximize the value of the aviation program for the department. To ensure that all program costs are fully recovered and to improve the rate-setting process, we recommend that the secretary of the Department of the Interior direct OAS to obtain forecasts of future usage from the bureaus and use these forecasts, as well as other relevant information, to set rates; and direct OAS and the bureaus, upon completion of the rate-setting process and calculation of associated payments, to determine whether the rates recovered all costs and, if not, whether adjustments in the process used to calculate the rates are necessary. We also recommend that the secretary of the Department of the Interior instruct the directors of the Office of Aircraft Services and of each bureau to improve scheduling and use of aircraft and establish performance measures to monitor and assess progress. 
We provided the Department of the Interior with a draft of this report for review and comment. Interior agreed with the information presented in the draft, and stated that our findings and recommendations are reasonable. It stated that the department’s aviation program is complex and multi-faceted due to the widely diverse missions of the bureaus. Further, it stated that our report recognizes that successful aviation management within the department depends on a partnership between OAS and the bureaus to seek more efficient and cost-effective ways to manage the program. The comments of the Department of the Interior and our responses to those comments are included in appendix I. We performed our review at OAS’s headquarters in Boise, Idaho, and at OAS, Fish and Wildlife Service, and the National Park Service offices located in Anchorage, Alaska. We discussed the OAS aviation program with aviation managers and others from Interior’s Bureau of Land Management, Fish and Wildlife Service, and National Park Service. For additional perspective, we interviewed private-sector maintenance vendors in Alaska and representatives of the state of Alaska aviation program. We reviewed OAS’s and bureaus’ aviation program documents and prior audit reports, including laws, regulations, program plans, financial data, fleet rate meeting minutes, and other documents. Although we did not conduct audit procedures designed to completely evaluate or give an opinion on the OAS accounting system and corresponding internal controls, we did review work conducted by the Office of the Inspector General and also performed limited testing of data reliability. We examined OAS’s cost comparisons as part of the A-76 process; we did not, however, evaluate the bureaus’ future mission needs or flight hour forecasts on which the study was based. 
To illustrate the potential improvements in aircraft utilization, we relied on OAS’s most recent comparison of contract and fleet costs and applied the estimated costs to actual OAS fiscal year 2000 aircraft and flight hours. We conducted our work from July 2001 through April 2002 in accordance with generally accepted government auditing standards. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to other interested parties and make copies available to others who request them. If you or your staff has any questions about this report, please call me or Peg Reese at (202) 512-3841. Key contributors to this report are listed in appendix II. The following are GAO comments on the Department of the Interior’s letter dated March 27, 2002. 1. Interior agreed with our recommendation that historical and projected data should be used to set rates but stated that our report implies that fleet aircraft flight hour projections might be intentionally overestimated in an effort to reduce planned hourly rates. We disagree. Our report describes the process for making projections and attributes comments about projections to OAS, but draws no conclusions about the intent on the part of OAS or the bureaus. During our review, we noted that when total flight hours decline year after year, projections based on historical averages will inherently result in over-estimating future flight hour requirements. 2. Interior agrees with our findings and recommendation that periodic monitoring of fleet cost and subsequent adjustment of rates would result in more complete recovery of costs. Interior points out that, once rates are established for budgeting purposes, increasing rates after budget allocation would reduce flying hours, which in turn could adversely impact cost recovery. We agree. 
Our report, however, recommends that actual costs be compared with estimated costs and that adjustments be made as needed. We acknowledge Interior’s concurrence to work with the bureaus and periodically compare the rates set with actual cost incurred, examine usage, and establish a methodology that will assist in more fully recovering fleet costs. 3. We support Interior’s proposed actions to recover personnel costs, and its actions to improve use and scheduling of aircraft. 4. Interior agrees that there may be opportunities to improve the efficiency of its use of fleet aircraft. Interior stated that it will be reviewing its scheduling policies to identify such opportunities. We support this initiative. 5. Interior emphasizes that the department’s aviation program is complex and multi-faceted due to the diverse missions of the bureaus and the high priority of safety and mission accomplishment. We agree with this assessment. Aviation program responsibility is shared by OAS and the bureaus. We support OAS and bureau partnerships to seek more efficient and cost-effective ways to manage the aviation program. In addition to those named above, Mark Connelly, Robert E. Kigerl, Lisa Knight, Dawn Shorey, and Carol Herrnstadt Shulman made key contributions to this report.
Refundable tax credits (RTC) differ from other credits because a taxpayer is able to receive a refund check from IRS for the amount their credit exceeds their tax liability. For example, a person who owed $2,000 in taxes, but qualified for $3,000 in EITC would receive a $1,000 refund from IRS. A nonrefundable credit can be used to offset tax liability, but any excess of the credit over the tax liability is not refunded to the taxpayer. If, instead of claiming the EITC, that same person claimed $3,000 in a nonrefundable credit, the person would use $2,000 to reduce the tax liability to zero, but would not receive the remaining credit amount as a refund. According to the Congressional Budget Office (CBO), the number and costs associated with refundable tax credits have varied over the past 40 years. The first refundable credit, the EITC, was enacted in 1975. In 1998, additional RTCs became effective and by 2010 there were 11 different refundable tax credits. The cost of refundable tax credits peaked in 2008 at $238 billion, but declined over the next 4 years because of the expiration of several credits designed to provide temporary economic stimulus. Starting in 2014, the refundable Premium Tax Credit (PTC) was made available to some low-income households for the purchase of health insurance through newly created exchanges, as part of the Patient Protection and Affordable Care Act (PPACA). According to estimates from the Joint Committee on Taxation (JCT) and CBO, the cost of the PTC in its first year was $35 billion and will be about $110 billion by 2021. In 2015, there were five refundable credits in effect. Four of those were available to individuals—the EITC, ACTC, AOTC, and PTC. We issued a report last year assessing IRS’s implementation of PPACA requirements, including efforts to verify taxpayers’ PTC claims. This report focuses on the design and administration of the other three refundable tax credits available to individuals. 
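The refundable/nonrefundable distinction in the example above can be expressed as a small function; the dollar amounts are the report's own illustrative figures.

```python
def apply_credit(tax_liability: float, credit: float, refundable: bool):
    """Return (remaining liability, refund check) after applying a credit."""
    offset = min(tax_liability, credit)
    remaining = tax_liability - offset
    # Only a refundable credit pays out the excess over the liability.
    refund = credit - offset if refundable else 0.0
    return remaining, refund

# The report's example: $2,000 owed, $3,000 credit.
print(apply_credit(2_000, 3_000, refundable=True))   # liability zeroed, $1,000 refund
print(apply_credit(2_000, 3_000, refundable=False))  # liability zeroed, excess forfeited
```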
Congress enacted the EITC in 1975 to offset the impact of Social Security taxes on low-income families and encourage low-income families to seek employment rather than public assistance. The credit was also meant to encourage economic growth in the face of a recession and rising food and energy prices. Since the credit’s enactment, it has been modified to provide larger refunds and differentiate between family size and structure. In fiscal year 2013, taxpayers received $68.1 billion in EITC; an average amount of $2,362 was distributed to about 29 million taxpayers. Beginning in 1979, the credit was also available as an advance credit. This meant that filers had the option to receive their predicted credit in smaller payments throughout the preceding year and reconcile the amount received with the amount they were actually eligible for upon filing their taxes. However, as we reported, the advanced payment option had a low take-up rate of 3 percent and high levels of noncompliance (as many as 80 percent of recipients did not comply with at least one of the program requirements), which led to its repeal in 2010. The EITC divides the eligible population into eight different groups based on the number of eligible children claimed by the filer and filing status. The basic structure of the credit remains the same for each group: the credit phases in as a percentage of earned income; upon reaching the maximum benefit, the credit plateaus; and when income reaches a designated point, the benefit begins to phase out as a percentage of income. The phase-in and phase-out rates, maximum benefit, and phase-out point all differ depending on filing status (such as single or married filing jointly) and the number of eligible children claimed. In order to claim the EITC, the tax filer must work and have earnings that do not exceed the phase-out income of the credit. Additional eligibility rules apply to any children that a tax filer claims for the purpose of calculating the credit.
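The phase-in, plateau, and phase-out structure described above can be sketched as follows. The rates and dollar amounts are illustrative placeholders, not the actual EITC parameters for any group or year, and the sketch applies the phase-out to earned income only, a simplification of the statutory rules.

```python
def eitc(earned_income: float, phase_in_rate: float, max_credit: float,
         phase_out_start: float, phase_out_rate: float) -> float:
    credit = min(phase_in_rate * earned_income, max_credit)  # phase in, then plateau
    if earned_income > phase_out_start:                      # phase out above the threshold
        credit -= phase_out_rate * (earned_income - phase_out_start)
    return max(credit, 0.0)                                  # fully phased out at higher incomes

# Hypothetical parameters for one filing-status/children group.
params = dict(phase_in_rate=0.40, max_credit=3_000,
              phase_out_start=18_000, phase_out_rate=0.21)
print(eitc(5_000, **params))    # phase-in range: 0.40 * 5,000
print(eitc(12_000, **params))   # plateau: capped at the 3,000 maximum
print(eitc(25_000, **params))   # phase-out: 3,000 - 0.21 * 7,000
print(eitc(40_000, **params))   # past the phase-out point: 0
```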
A qualifying child must meet certain age, relationship, and residency requirements. For example, the child must be younger than 19 (or 24 if a full-time student) and be a biological, adopted, or foster child, grandchild, niece/nephew, or sibling of the filer and live with the filer in the United States for at least 6 months of the year. Additionally, the child must have a valid Social Security number (SSN). The Improper Payments Information Act (IPIA) of 2002, as amended, requires federal agencies to review programs and activities that may be susceptible to significant improper payments and report on actions taken to reduce improper payments. In addition, the Office of Management and Budget (OMB) identifies high-priority (or high-risk) programs, one of which is EITC, for greater levels of oversight and review. For fiscal year 2015, IRS estimated that $15.6 billion—or 23.8 percent—of EITC program payments were improper. The estimated improper payment rate for EITC has remained relatively unchanged since fiscal year 2003 (the first year IRS had to report estimates of these payments to Congress), but the amount of improper EITC payments increased from an estimated $10.5 billion in fiscal year 2003 to nearly $16 billion in fiscal year 2015 because of growth in the EITC program overall. The Additional Child Tax Credit (ACTC) is the refundable portion of the Child Tax Credit (CTC) and provides tax relief to low-income families with children. It also adds to the positive reward the EITC provides to those who work. The credit was initially created in 1997 by the Taxpayer Relief Act of 1997 as a nonrefundable child tax credit for most families, but in 2001 was expanded to include the current refundable ACTC for which more low-income families were eligible.
Like the EITC, taxpayers can use the child tax credits to both offset tax liabilities (CTC) and receive a refund (ACTC); however, unlike the EITC, the nonrefundable CTC and the refundable ACTC amounts are entered separately on the Form 1040. In fiscal year 2013, taxpayers claimed $27.9 billion in ACTC and $27.2 billion in the nonrefundable CTC. Thus, the total revenue cost of the CTC and ACTC was $55.1 billion. This report will sometimes combine these credits (referring to them as CTC/ACTC) when their combined effect is at issue or to facilitate comparison with other RTCs that do not break out refundable and nonrefundable components. In general, the ACTC is claimed by those with lower tax liabilities and lower income than those that claim only the CTC. As reported by the SOI Division of the Internal Revenue Service, in 2012, 88 percent of the ACTC went to taxpayers with adjusted gross income below $40,000, while 17 percent of the CTC went to taxpayers below that income. Under current law, taxpayers can use the CTC to offset their tax liabilities by up to $1,000 per qualifying child. If the available CTC exceeds the filer’s tax liability, they may be able to receive a portion of the unused amount through the refundable ACTC. The ACTC phases in at 15 percent of every dollar in earnings above $3,000 up to the unused portion of the CTC amount. To claim the CTC or ACTC, taxpayers must have at least one qualifying child. The criteria for qualifying children are slightly different from that used to determine eligibility with the EITC. For the CTC and ACTC, the child must be under the age of 17 and a U.S. citizen, national, or resident, but taxpayers file using either a SSN or individual taxpayer identification number (ITIN). However, the relationship and residency requirements are similar for the ACTC and EITC. See figure 1 for a description of the credits and their requirements. 
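The CTC/ACTC interaction just described can be sketched directly from those rules: up to $1,000 of CTC per qualifying child offsets liability, and the refundable ACTC phases in at 15 percent of earnings above $3,000, capped at the unused CTC amount. This simplified model ignores the income-based phase-out of the CTC at higher incomes.

```python
def ctc_actc(tax_liability: float, earned_income: float, qualifying_children: int):
    """Return (nonrefundable CTC used, refundable ACTC) under the simplified rules."""
    potential = 1_000 * qualifying_children
    ctc_used = min(tax_liability, potential)          # nonrefundable CTC offsets liability
    unused = potential - ctc_used
    phase_in = 0.15 * max(0.0, earned_income - 3_000) # ACTC phase-in above $3,000 of earnings
    actc = min(unused, phase_in)                      # capped at the unused CTC amount
    return ctc_used, actc

# Two children, $500 of liability, $20,000 of earnings:
# CTC offsets the full $500; ACTC = min($1,500 unused, 0.15 * $17,000) = $1,500.
print(ctc_actc(500, 20_000, 2))
```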
The American Opportunity Tax Credit (AOTC) offsets certain higher education related expenses in an effort to lessen the financial burden of a college or professional degree for taxpayers and their dependents. The credit was created by the American Recovery and Reinvestment Act of 2009 as a modification of the nonrefundable Hope Credit and was made permanent in 2015 with the Protecting Americans from Tax Hikes (PATH) Act. In 2013, taxpayers claimed $17.8 billion in AOTC. The AOTC is designed as a partially refundable credit. The entire credit is worth up to $2,500 and a taxpayer can receive a refundable credit equal to 40 percent of their credit (for a maximum of $1,000). The size of the entire credit is determined by taking 100 percent of the first $2,000 in qualified education expenses and 25 percent of the next $2,000 in qualified expenses, which include tuition, required enrollment fees, and course materials. The value of the limit on expenses qualifying for the credit is not indexed for inflation. In order to claim the AOTC a tax filer or their dependent must meet certain requirements including adjusted gross income requirements. Furthermore, they must be in their first 4 years of enrollment and be at least a half-time student at an eligible post-secondary school. Taxpayers may only claim the AOTC for 4 years. More taxpayers claim the EITC than the other two refundable credits we examine in this report. The EITC is also the most expensive in terms of tax revenue forgone and refunds paid. In 2013, taxpayers claimed a total of $68.1 billion in EITC with $59 billion (87 percent) of this amount refunded; the total was $55.1 billion for the CTC and ACTC with $26.7 billion (48 percent) refunded as ACTC and a total of $17.8 billion in AOTC with $5 billion refunded (28 percent).
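The AOTC formula described above (100 percent of the first $2,000 of qualified expenses plus 25 percent of the next $2,000, with 40 percent of the credit refundable) can be written out as a short sketch; the income limits and enrollment requirements are omitted for brevity.

```python
def aotc(qualified_expenses: float):
    """Return (total credit, refundable portion) under the simplified formula."""
    first = min(qualified_expenses, 2_000)                 # 100 percent of the first $2,000
    second = min(max(qualified_expenses - 2_000, 0), 2_000)  # 25 percent of the next $2,000
    credit = first + 0.25 * second                         # capped at $2,500 total
    refundable = 0.40 * credit                             # up to the $1,000 maximum refund
    return credit, refundable

# $4,000 or more in expenses yields the full $2,500 credit, $1,000 of it refundable.
print(aotc(4_000))
```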
There are several reasons why the ratio between the amount received as tax refunds and the amount used to offset tax liabilities varies from credit to credit including whether the credits are partially or fully refundable as well as income levels of the recipients. The number of taxpayers claiming the earned income credit increased 50 percent from 1999 to 2013, and the total amount claimed after adjusting for inflation increased 60 percent, due in part to legislative changes which increased the number of people eligible for the credit and the amount they could claim. Over that same period, the ACTC also increased, with 20 times more taxpayers receiving the credit in 2013 than in 1999. The AOTC did not see similar constant growth. See figures 2 and 3 for the number of taxpayers claiming credits and the amounts of credits received over time. As figure 4 shows, a greater share of EITC benefits goes to lower-income taxpayers. More than half (62 percent) of EITC benefits go to taxpayers making less than $20,000, with the largest share (48 percent) going to those making from $10,000 to less than $20,000. For the other credits, the benefits are spread more evenly among income groups. The CTC and AOTC do not have the same income restrictions as the EITC, so higher income taxpayers also benefit from those credits. For example, taxpayers making $100,000 or more receive 22 percent of the AOTC. Figure 4 also shows the percent of each credit claimed per adjusted gross income (AGI). Examined separately from the nonrefundable CTC, the ACTC also benefits lower income groups, but is less concentrated on the lowest income groups than the EITC, with 42 percent going to taxpayers making less than $20,000. (See figure 11 in appendix III for a comparison of CTC and ACTC benefits by AGI.)
In addition to being lower income, EITC and ACTC claimants are more likely to be sole proprietors—persons who own unincorporated businesses by themselves—and to be heads of households than the general taxpayer population. As table 1 shows, 16 percent of taxpayers are sole proprietors, but they represent 25 percent of EITC and ACTC claimants. (Additionally, but not shown in the table, 29 percent of all EITC dollars go to sole proprietors.) EITC and ACTC are claimed mostly by heads of households. While people filing as head of household make up only 15 percent of the taxpayer population, they represent 56 percent of ACTC claimants and 47 percent of EITC claimants. AOTC claimants, on the other hand, are most likely to be married filing jointly (43 percent) or single (34 percent). Workers without qualifying children, or childless workers, make up 25 percent of EITC claimants, but receive 3 percent of benefits. Table 1 shows additional detail on how these characteristics differ across the three credits. IRS relies on pre-refund controls and filters to detect, prevent, and correct errors, a selection of which is shown in figure 5. Before accepting a return, IRS checks it for completeness and attempts to verify the taxpayer’s identity and credit eligibility. A series of systems use IRS and other government data to check whether returns meet certain eligibility requirements (like whether earned income falls within EITC income limits) and include the required forms (such as a Schedule EIC). IRS can use its math error authority (MEA) to correct or request information on electronic returns with these errors. During return processing, IRS runs returns through additional systems to screen for fraud and errors. One system, IRS’s Electronic Fraud Detection System (EFDS), screens returns for fraud including possible identity theft. If flagged, IRS stops processing the return and sends a letter asking the taxpayer to confirm his or her identity. 
Another system—the Dependent Database (DDb)—incorporates IRS and other government data, such as the National Prisoner File or child custody information from the Department of Health and Human Services, along with rules and scoring models to identify questionable tax returns and further detect identity theft. Once the suspicious tax returns are identified, the DDb assigns a score to each tax return. Based in large part on these scores, as well as available resources, IRS selects a portion of suspicious returns for correspondence audits, which are audits conducted through the mail. IRS conducts most of its EITC audits (about 80 percent) and ACTC audits (about 64 percent) prior to issuing refunds. In these pre-refund audits, IRS freezes the refund and sends a letter to the taxpayer requesting documentation such as birth certificates or school or medical records to verify eligibility. During the audit process, IRS will also freeze and examine other refundable credits claimed on the return. See table 2 for a description of how many audits IRS selects specifically for each credit and the total amount audited including returns selected for other reasons. IRS’s compliance activities continue after it issues refunds. In addition to post-refund audits, IRS also conducts the automated underreporter program (AUR) which matches income data reported on a tax return with third-party information about income and expenses provided to IRS by employers or financial institutions. In 2014, this document matching review process included just over 1 million EITC returns and IRS recommended $1.5 billion in additional tax. Lack of third party data complicates IRS’s ability to administer these credits, but such data are not easy to identify. According to IRS, the data it uses should be complete and accurate enough to allow IRS to select returns with the highest potential for change without placing an undue burden on taxpayers.
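The following is a rough, hypothetical illustration of the kind of rule-based scoring and resource-constrained selection described for the DDb above. The rule names, weights, and audit capacity are invented for illustration and are not IRS's actual criteria or model.

```python
# Hypothetical rule weights; a real scoring model would be far richer.
RULES = {
    "duplicate_child_claim": 50,   # another return already claimed the child
    "prisoner_file_match": 40,     # data conflicts with the National Prisoner File
    "custody_mismatch": 30,        # child custody data conflicts with the claim
}

def score(return_flags):
    """Sum the weights of the rules a return triggers."""
    return sum(RULES[f] for f in return_flags)

def select_for_audit(flagged_returns, capacity):
    """Pick the highest-scoring returns up to available audit capacity."""
    ranked = sorted(flagged_returns, key=lambda r: score(r["flags"]), reverse=True)
    return [r["id"] for r in ranked[:capacity]]

returns = [
    {"id": "A", "flags": ["duplicate_child_claim", "custody_mismatch"]},  # score 80
    {"id": "B", "flags": ["prisoner_file_match"]},                        # score 40
    {"id": "C", "flags": ["custody_mismatch"]},                           # score 30
]
print(select_for_audit(returns, capacity=2))   # the two highest-scoring returns
```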
IRS reported that it evaluated several different databases to determine if they were reliable enough to be used under MEA to make changes to tax returns without going through the audit process. For example, IRS tested the Federal Case Registry (FCR), a national database that aids the administration and enforcement of child support laws. IRS determined that it could not identify errors related to qualifying children from this database with enough accuracy under its standards. In addition, IRS participated in a project led by Treasury and conducted by the Urban Institute that assessed the overall usefulness of state-level benefit data to help validate EITC eligibility. The study concluded, based on a number of issues, including different data collection practices across states, that these data would not improve the administration of the EITC. Without data reliable enough to be used under MEA, IRS generally conducts a correspondence audit to verify that a taxpayer meets the requirements for income and that their children meet both residency and relationship requirements. Audits are more costly than issuing MEA notices and they can be lengthy. For example, in 2014 it cost IRS on average $0.21 to process an electronic return (including issuing math error notices), while an EITC audit cost $410.74. However, as mentioned above, cost savings should be weighed against other goals such as fairness and burden on taxpayers. More EITC claimants make income errors than qualifying children errors, but the dollar value of the errors due to noncompliance with qualifying children requirements is larger than the dollar value of the income errors. Verifying eligibility with residency and relationship requirements can be complicated and subject to interpretation.
IRS offers training to tax examiners on various types of documentation that could be used to verify EITC requirements and tax examiners are allowed to use their judgment to evaluate whether residency or relationships requirements are satisfied. This lack of available, accurate, and complete third party data complicates IRS’s efforts to verify qualifying children eligibility requirements, increasing IRS’s administrative costs and taxpayer burden. Filing and refund timelines also complicate IRS’s ability to administer these credits. IRS states on its website that more than 90 percent of refunds are issued within 21 days. It is important that IRS issues refunds on time because when it is late, taxpayers’ refunds are delayed, and IRS is required to pay interest on delayed refunds. However, it is also important to allow enough time to ensure refunds are accurate and issued to the correct individuals. The IRS strategy with respect to improper payments is to intervene early to ensure compliance through outreach and education efforts as well as various compliance programs. Even so, in order to meet timeliness goals, IRS issues most refunds months before receiving and matching information returns, such as the W-2 to tax returns, rather than holding refunds until all compliance checks can be completed. As a result, IRS ends up trying to recover fraudulent refunds and unpaid taxes after matching information and pursuing discrepancies. We previously reported that, in 2010, it took IRS over a year on average to notify taxpayers of matching discrepancies, increasing taxpayer burden. In August 2014, we recommended that IRS estimate the costs and benefits of accelerating W-2 deadlines and identify options to implement pre-refund matching using W-2 data as a method to combat the billions of dollars lost to identity theft refund fraud, allowing the agency more opportunity to match employers’ and taxpayers’ information. 
In response to our recommendation, IRS conducted such a study and presented the results to Congress in 2015. In December 2015, Congress moved the W-2 filing deadlines to January 31 and required IRS to take additional time to review refund claims based on the EITC and the ACTC. As such, most individual taxpayers who claim either credit would not receive a refund prior to February 15. JCT estimated that the entire provision will result in $779 million in revenue from fiscal years 2016 to 2025. According to IRS officials, they are evaluating how to implement these changes and the impact on the administration of the credits. The complexity of eligibility requirements, besides being a major driver of noncompliance and complicating IRS’s ability to administer these credits, is also a major source of taxpayer burden. For example, for the EITC and ACTC, each child must meet certain age, residency, and relationship tests. However, given complicated family relationships, determining whether children meet these eligibility requirements is not always clear-cut, nor easily understood by taxpayers. This is especially true when filers share responsibility for the child with parents, former spouses, and other relatives or caretakers, as the following examples illustrate.

Examples of Complications that Can Arise when Applying the EITC Eligibility Rules

Scenario 1: A woman separated from and stopped living with her husband in January of last year, but they are still married. She has custody of their children. She is likely eligible for the Earned Income Tax Credit (EITC) because she can file using the head of household status. However, if the couple separated in November, she is likely not eligible for the EITC because she was not living apart from her husband for the last 6 months of the year and therefore cannot claim the head of household filing status.

Scenario 2: An 18-year-old woman and her daughter moved home to her parents’ house in November of last year. She is likely eligible for the EITC because she was supporting herself and her child. However, if she had always lived at her parents’ house, she is likely NOT eligible for the EITC because she was a dependent of her parents for the full tax year and therefore cannot claim the EITC on her own behalf.

Scenario 3: A young man lives with and supports his girlfriend and her two kids. He and the mom used to be married, got divorced, and are now back together. He is likely eligible for the EITC because the children are his stepchildren and therefore meet the relationship requirement. However, if he and the mom were never married, he is likely NOT eligible for the EITC because the children are not related to him.

Differences in eligibility requirements among the RTCs also contribute to complexity. In 2013, according to our analysis of IRS data, 11.4 million taxpayers claimed both the EITC and ACTC while another 5.3 million claimed the EITC, ACTC, and CTC, navigating multiple sets of requirements for income levels and child qualifications. We have also previously reported that the complexity of education credits like the AOTC means that some taxpayers do not make optimal choices about which education credits to claim.
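The head-of-household and relationship tests at work in the scenarios above can be encoded in simplified form. This is a sketch of only those two rules; the real eligibility determination involves many more tests (income limits, age, residency duration, and dependency status), all omitted here.

```python
def can_file_head_of_household(married: bool, months_apart_at_year_end: int) -> bool:
    # A married filer must have lived apart from the spouse for at least
    # the last 6 months of the year (scenario 1).
    return (not married) or months_apart_at_year_end >= 6

def meets_relationship_test(relationship: str) -> bool:
    # Stepchildren qualify; an unrelated partner's children do not (scenario 3).
    qualifying = {"child", "stepchild", "foster child", "grandchild",
                  "sibling", "niece/nephew"}
    return relationship in qualifying

# Scenario 1: separated in January, so apart for the last 11 months of the year.
print(can_file_head_of_household(married=True, months_apart_at_year_end=11))  # True
# ...but a November separation means only 2 months apart.
print(can_file_head_of_household(married=True, months_apart_at_year_end=2))   # False
# Scenario 3: stepchildren qualify; a girlfriend's unrelated children do not.
print(meets_relationship_test("stepchild"), meets_relationship_test("unrelated"))
```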
Faced with these complexities, many potential credit recipients seek help filing their tax returns, typically from paid preparers. Fifty-four percent of taxpayers claiming the EITC use paid preparers to help them navigate these requirements and complete the tax forms. These preparers provide a service that relieves taxpayers of costs in terms of their own time, resources, and anxiety about the accuracy of their returns. However, preparer costs may be an additional burden if the fees are excessive or the advice inaccurate. As we previously reported, the fees charged for tax preparation services vary widely and may not always be explicitly stated upfront. As noted later in this report, unenrolled paid preparers—those generally not subject to IRS regulation—have higher error rates for the RTCs than taxpayers who choose to prepare their own returns. Taxpayers who choose to prepare their own returns file a tax return (some version of Form 1040) along with additional forms, such as the Earned Income Credit schedule, Schedule 8812 for the CTC, or Form 8863 to claim education credits. To determine both eligibility and the amount of the credit, taxpayers can consult separate worksheets included with the forms. These can be long and detailed; Publication 596, which includes instructions and worksheets for claiming the EITC, is 37 pages long. IRS reported that most taxpayers who self-prepare use tax software when they file their returns and that, on average, the burden for RTC returns was about 11 hours per return in 2013. In addition to the costs of filing a claim for a credit, complying with IRS enforcement activities also contributes to taxpayer burden. In tax year 2013, IRS rejected over 2 million electronically filed EITC claims. IRS rejects these claims for a variety of reasons, such as missing forms, incorrect SSNs, or another taxpayer having claimed the same child.
Taxpayers can handle some of these issues, such as a mistyped SSN, by correcting their electronic returns. IRS reported that a majority (74.4 percent) of rejected returns are corrected and resubmitted electronically. IRS also reported that this process takes taxpayers on average half an hour—shorter than if they had to make the correction after filing. Other issues impose a larger burden. To claim a child that someone else has already claimed for the EITC, taxpayers can fill out and resubmit their return on paper and then face a possible audit with its associated costs. When processing the tax return, if IRS identifies potential noncompliance with eligibility requirements, it can initiate a correspondence audit and send a letter to the taxpayer requesting documentation showing that the taxpayer meets those eligibility requirements. For taxpayers overall, IRS estimated that participating in a correspondence exam takes taxpayers 30 hours, which, combined with any out-of-pocket costs, is valued on average at $500. In 2015, IRS conducted just under 446,000 EITC exams, which means that approximately 1.6 percent of people filing an EITC claim were audited, compared to about 0.9 percent for individual taxpayers overall in 2014. However, this compliance burden may be larger for some populations. For example, according to attorneys who represent low-income tax filers, these filers may have difficulty proving they meet residency and relationship requirements due in part to language barriers, limited computer literacy, and complicated family structures. To prove the residency requirement—that a child lived with the taxpayer in the United States for more than half the year—taxpayers may submit a document with their address, name, and the child's name, which could include school or medical records or statements on letterhead from a child-care provider, employer, or doctor.
Again, according to low-income tax clinic representatives, these can be hard to cobble together for families with limited English proficiency or who move multiple times throughout the year. To prove the relationship requirement, unless they are claiming their son or daughter, taxpayers must submit birth certificates proving the relationship. For example, to claim a great-grandchild, the taxpayer must submit the child's, grandchild's, and great-grandchild's birth certificates. The names must be on the birth certificates, or the taxpayer will also need to submit another type of document, such as a court decree or paternity test. For multigenerational families or situations in which another relative is taking care of the child, locating and assembling the necessary chain of birth certificates can be a challenge. If IRS determines that a taxpayer improperly claimed the EITC due to reckless or intentional disregard of rules or regulations, it may ban the taxpayer from claiming the credit for 2 years—even if the taxpayer otherwise qualifies for it. However, the National Taxpayer Advocate reported that IRS's procedures automatically imposed the ban on taxpayers who did not respond to IRS's notices and put the burden of proof onto taxpayers to show they should not have received the ban. According to IRS officials, in response to these concerns, IRS implemented new training programs, strengthened managerial oversight, and added protections for taxpayers to ensure that bans are systematically issued only to taxpayers with a history of noncompliance. In 2015, IRS issued fewer 2-year bans than in previous years. Despite the compliance burden and costs associated with these RTCs, the burden may be lower than that associated with spending programs.
For example, tax credit recipients can self-certify; they do not need to meet with caseworkers or submit up-front documentation, as is required with some direct-service antipoverty programs such as Supplemental Security Income (SSI) or Temporary Assistance for Needy Families (TANF). The simplified up-front process may contribute to higher participation rates. The EITC participation rate—over 85 percent, as reported by Treasury—is at the high end of the range for antipoverty programs. GAO previously reported that the SSI participation rate in 2011 was about 67 percent of adults who were estimated to be eligible, while the TANF participation rate was about 34 percent. IRS does not estimate participation rates for the AOTC or ACTC. Sustained annual budget reductions at IRS have heightened the importance of determining how best to allocate declining resources to ensure that the agency can still meet its agency-wide strategic goals of increasing taxpayer compliance, using resources more efficiently, and minimizing taxpayer burden. In an effort to improve efficiency, IRS consolidated administration of the EITC, ACTC, and AOTC across several different offices within the Wage & Investment Division. Return Integrity and Compliance Services (RICS) oversees the division's audit functions. Within RICS, Refundable Credits Policy and Program Management (RCPPM) is responsible for refundable credit policy, enforcement, and establishing filters for computerized selection of returns for audit. Refundable Credits Examination Operations is responsible for conducting the audits, overseeing and training personnel, maintaining the phone and mail operations, and addressing personnel and union issues. Although these offices work collaboratively to formulate and implement policies and process workload, they lack a comprehensive strategy for RTC compliance efforts.
IRS is working on an operational strategy to document all current EITC compliance efforts and to identify and evaluate potential new solutions to address improper payments. However, this review focuses only on efforts to improve EITC compliance and does not include the other refundable credits. The lack of a comprehensive strategy that takes into account all ongoing compliance efforts for the three RTCs (the EITC, ACTC, and AOTC) presents several potential challenges, as discussed below. IRS measures compliance by estimating an aggregate error rate for the EITC and error rates for certain subcategories of EITC claimants (e.g., claimants grouped by type of tax preparer). IRS uses National Research Program (NRP) data for these estimates because the program employs a representative sample that can be used to estimate error rates for the universe of taxpayers. In addition to measuring compliance with the tax code, the error rates help IRS understand taxpayer behavior, information IRS could use to develop compliance strategies and allocate resources. According to IRS, it estimates net overclaim percentages (the net misreported amount divided by the amount reported) for the RTCs. IRS reported that it uses these overclaim percentages to identify areas for potential future research. However, IRS does not report the frequency of these errors or the amounts claimed in error across credits, which makes it difficult to compare noncompliance across the credits. Analyses that incorporate the relative frequencies and magnitudes of these errors could be used by IRS to inform resource allocation decisions. In order to show how IRS can use these error rates to inform its compliance strategy and resource allocations, we estimated aggregate error rates for the EITC, the AOTC, and the CTC/ACTC, which combines the refundable ACTC with its nonrefundable counterpart, the CTC.
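The net overclaim percentage defined above is a simple ratio. A minimal sketch, using hypothetical dollar figures rather than IRS estimates, is:

```python
def net_overclaim_pct(net_misreported: float, amount_reported: float) -> float:
    """Net overclaim percentage as described above:
    net misreported amount divided by the amount reported, in percent."""
    return 100.0 * net_misreported / amount_reported

# Hypothetical figures for illustration only (not IRS estimates):
# $2.9 billion misreported out of $10 billion reported -> 29 percent.
print(round(net_overclaim_pct(2.9e9, 10e9)))  # 29
```

The point of the ratio is that it normalizes by the amount claimed, so credits of very different sizes can be compared on the same scale.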
Estimating the CTC/ACTC makes it possible to compare error rates for this credit with those for the EITC and AOTC because these credits include the refunded amounts as well as the amounts used to offset tax liabilities. The CTC/ACTC error rate estimate excludes any adjustments due to dollars shifted between the refundable ACTC and the nonrefundable CTC. For example, a taxpayer who understates her income may claim a higher ACTC, but if IRS adjusts the income, the effect could be that the refundable ACTC decreases and the nonrefundable CTC increases. This adjustment does not necessarily result in saved dollars or revenue protected, but rather a shifting of dollars from a refund to a lower tax liability, depending on where the taxpayer is in relation to the income phase-out rate. Without making these adjustments for the CTC/ACTC estimates, the error rates for the credits would not be comparable. The relative frequency of error rates by different types of credit could be useful information for determining the allocation of enforcement resources. As figure 6 shows, the estimated average error rates for overclaims and underclaims from 2009 to 2011 can vary considerably by credit type. The EITC and AOTC have similar average error rates for overclaims of 29 percent and 25 percent, respectively, but the CTC/ACTC error rate for overclaims is 12 percent—less than half that of the other two credits. Although they are much smaller, the underclaim rates vary in a similar way, with the 4 percent AOTC error rate being twice as large as the CTC/ACTC rate. The relative frequency of errors by type of credit may help IRS better focus its limited resources. In addition to the error rates, information about the amount estimated to be claimed in error would also be useful for resource allocation. From 2009 to 2011, the average amount overclaimed for the RTCs also varied considerably by credit type.
The average yearly amount overclaimed for the EITC was $18.1 billion, for the CTC/ACTC was $6.4 billion, and for the AOTC was $5.0 billion. (See appendix II for more details about credit amounts erroneously claimed.) Combining these dollar amounts with the error rate information can further inform resource allocation. For example, although the AOTC had an overclaim rate of 25 percent—nearly as large as the EITC’s 29 percent rate—the amount overclaimed was only about one-third of the EITC’s amount. Both the rate and the amount—among other considerations like effects on equity and compliance burden—would factor into a plan for allocating enforcement resources. The lack of a comprehensive compliance strategy that includes information on error rates by type of credit and categories of taxpayers could limit IRS’s ability to recognize gaps in its enforcement coverage and compliance efforts. For example, IRS previously reported in its EITC compliance studies that unenrolled paid preparers have higher error rates than other preparer types. Our analysis of NRP data, discussed later in this report, showed that this pattern of noncompliance by type of preparer is also true for the ACTC and AOTC. With this information, a compliance strategy can be devised that takes into account these other credits. Additional information could also help IRS better plan resource allocations among the RTCs. IRS devotes a large percentage of its RTC enforcement resources to the EITC, but has not made clear the basis for this allocation. As previously noted, in 2014, IRS selected 87 percent (or 435,000) of its RTC audits based on issues related to the EITC and 6 percent (or 31,000) of its audits based on issues related to the ACTC. The returns that IRS selects for EITC audit may also be audited for other RTC issues. 
For example, in addition to the 31,000 returns selected for ACTC audits in 2014, another 382,000 returns were audited for the ACTC even though they were selected for another RTC issue—almost always an EITC issue. This approach allows IRS to identify many potentially erroneous ACTC claims, which IRS can then also freeze as part of the EITC audit. However, this approach raises several concerns about whether IRS is achieving an optimal resource allocation: (1) the very low audit coverage of the approximately 5 million claimants who claim the ACTC but not the EITC could risk a reduction in voluntary compliance, (2) using EITC tax returns as a selection mechanism for ACTC audits may not be the best way to identify ACTC noncompliance, and (3) questions about equity in audit selection for the ACTC arise because EITC claimants are generally lower-income than claimants for other credits. Weighing these concerns and other factors, such as administrative costs, could help IRS create a comprehensive strategy for the RTCs that could provide a framework for making decisions about how to allocate resources and for communicating what criteria it uses to make these allocations. Although IRS lacks a comprehensive RTC strategy, it has been able to identify some compliance trends for credits other than the EITC. IRS officials observed an increase in the ACTC overclaim percentage from 2009 to 2011. According to IRS, confirming and understanding the nature of that potential increase will require more research. To that end, IRS plans to begin work in 2016 on an ACTC compliance study similar in nature to the recent EITC 2006-2008 compliance study. Officials could not provide a start date or timeline for completion and said the rate at which this work progresses will depend on competing priorities given limited budget and staff. However, they stated that the CTC/ACTC compliance study remains a high-priority project.
Previously, we reported that IRS could identify ways to reduce taxpayer noncompliance through better use of NRP data and that ACTC was one area where further research could provide information on how to address noncompliance. Another challenge related to the lack of a comprehensive plan is that certain IRS performance indicators may be difficult to interpret. IRS relies on the no-change rate and default rates to make resource allocation decisions. IRS closes audits as defaults when the taxpayer (1) does not respond to any IRS notice or (2) responds to some notices but not the last one asking for agreement with a recommended additional tax assessment. IRS officials stated that they believe that taxpayers who default are generally noncompliant because taxpayers selected for audit receive multiple notices and the refunds can equal several thousand dollars, giving them the information and incentive to engage with IRS. Therefore, when there is a high default and a low no-change rate, IRS officials said that they interpret that as an indicator that the taxpayers selected for audit were not entitled to the credit claimed. Even so, it can be difficult to interpret a low no-change rate when it includes defaults. As we previously reported, in fiscal years 2009 through 2013, the no-change rate ranged from 11 percent to 21 percent for all closed correspondence audits but rose to 28 percent to 45 percent when IRS had contact with the taxpayers throughout the audit and did not close the audit through a default. Without knowing the reasons why taxpayers default, it is difficult to know how to interpret the no-change rate. To the extent that some of the taxpayers who default are compliant, the reported no-change rate underestimates what would be the actual no-change rate. The Taxpayer Advocate has raised concerns that taxpayers may not understand the notices, which could be contributing to the low response rate. 
The difficulty of interpreting the no-change and default rates can make the results of IRS's assessments of its programs less certain. According to IRS, two of the most effective and reliable enforcement programs for addressing RTC compliance and reducing improper payments are post-refund document matching and audits. IRS stated that it protects over $3 billion in revenue based on these enforcement activities, but the default rate is over 50 percent. The no-change rate indicates that the overwhelming majority of the cases IRS selects have mistakes that require an adjustment. However, because defaults are included among the no-change audits and the default rate is high, the extent to which the cases being selected are actually noncompliant is called into question. Table 3 shows the number of returns IRS identifies through these various enforcement activities, the no-change rate, and the default rate. The no-change rates for these enforcement activities are very low, but the associated default rates are high. This disproportion can make the no-change rate misleading as an indicator of noncompliance. For example, if 10 percent of the defaulting taxpayers in the case of document matching were actually compliant, the no-change rate would double to about 14 percent, and if 50 percent were compliant, the no-change rate would increase to about 40 percent. These figures could call into question whether IRS is getting useful information out of no-change rates when the default rate is so high and little is known about the compliance characteristics of defaulting taxpayers. Another challenge IRS faces is that the set of indicators it uses to make resource allocation decisions does not include indicators for equity and compliance burden. When evaluating enforcement strategies, such as developing new screening filters for exam selection, IRS officials look for filters that produce a low response rate and a low no-change rate.
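The sensitivity arithmetic for the no-change rate can be checked directly. In this sketch, the 7 percent no-change rate and 70 percent default rate are assumed round numbers, chosen only to be consistent with the document-matching illustration in the text, not actual IRS figures:

```python
# Assumed round numbers (not IRS figures): a 7 percent reported no-change rate
# and a 70 percent default rate, consistent with the illustration in the text.

def adjusted_no_change_rate(no_change_pct, default_pct, share_of_defaults_compliant):
    """Treat some share of defaulted audits as audits that would have resulted
    in no change, and recompute the no-change rate (all values in percent)."""
    return no_change_pct + share_of_defaults_compliant * default_pct

for share in (0.0, 0.10, 0.50):
    print(share, round(adjusted_no_change_rate(7.0, 70.0, share)))
# A 0.10 compliant share roughly doubles the rate to 14 percent;
# a 0.50 share pushes it to about 40 percent.
```

The calculation shows why the reported no-change rate is a lower bound: every compliant taxpayer hidden in the defaults pushes the true rate upward.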
For example, at the 2015 annual strategy meeting, IRS managers recommended increasing the number of Disabled Qualifying Child (DQC) cases that they plan to work each year based on a high default rate (70 percent, compared to a 54 percent default rate for other programs) and a low no-change rate of between 3 and 6 percent. Based on these high default and low no-change rates, program managers recommended increasing the number of cases that they plan to work or replacing cases waiting to be worked with DQC cases as a way to reduce their backlog of unclosed cases. The managers did not evaluate the recommendation on the basis of equity or compliance burden. In addition, IRS did not provide any reliable indicator of compliance burden associated with any of the refundable tax credits that we reviewed. According to IRS officials, reviewing taxpayers' responses is resource intensive, and by reducing that process, IRS could perform more audits elsewhere. However, as discussed above, the no-change rate on which they based their decision may be an unreliable estimate of actual taxpayer noncompliance when, as the officials said, they do not know why taxpayers did not respond to notices. A more comprehensive strategy that documents RTC compliance efforts could help IRS officials determine whether their current performance indicators are giving them reliable information and whether their current allocation of resources is optimal, and if not, what adjustments are needed. IRS officials could also use this review as an opportunity to ensure program managers have a balanced suite of performance measures that adequately addresses all priority goals. Otherwise, the desire to reduce inventory or to concentrate resources on efforts with the lowest no-change rate could take precedence over avoiding undue taxpayer burden. IRS faces administrative and compliance challenges that also complicate the administration of RTCs.
Due in part to long-standing concerns about the EITC improper payment rate, EITC examinations account for nearly 39 percent of all individual income tax return audits each year. However, the EITC accounts for only about 5 percent of the tax gap in tax year 2006 (the most recent estimate available). In a 2013 report, we demonstrated that a hypothetical shift of about $124 million in enforcement resources among different types of audits could have increased direct revenue by $1 billion over the $5.5 billion per year IRS actually collected in 2013. An agency-wide approach that incorporates return-on-investment (ROI) calculations could help IRS allocate enforcement resources more efficiently not just among the credits, but also across EITC and non-EITC returns. We previously recommended that IRS develop a long-term strategy and use actual ROI calculations as part of resource allocation decisions to help it operate more effectively and efficiently in an environment of budget uncertainty. In response to our recommendation, IRS has begun a project to develop ROI measures that could be used for resource allocation decisions. We have previously reported that while IRS publishes information regarding the coverage rates and additional taxes assessed through various programs, relatively little information is available on how much revenue is actually collected as a result of these enforcement activities. Additional analysis of available RTC collections data could also inform resource allocation decisions. Currently, IRS reviews the amount of revenue collected annually based on EITC post-refund enforcement activities, but it could not verify the reliability of those data during the time frame of our audit. Such data could be used to calculate a collections rate—the percentage of tax amounts assessed that is actually collected. A reliable collections rate could be used as an additional data point for informing and assessing allocation decisions.
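The collections rate described above is the ratio of amounts actually collected to amounts assessed. A minimal sketch, using hypothetical dollar figures rather than IRS data, is:

```python
def collections_rate(collected: float, assessed: float) -> float:
    """Collections rate as defined above: the percentage of tax
    amounts assessed that is actually collected."""
    return 100.0 * collected / assessed

# Hypothetical amounts for illustration only (not IRS data):
# $1.1 billion collected out of $5.5 billion assessed -> 20 percent.
print(round(collections_rate(1.1e9, 5.5e9)))  # 20
```

A rate like this makes clear that assessments alone overstate enforcement results; only the collected share represents revenue actually recovered.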
According to federal internal control standards, managers need accurate and complete information to help ensure the efficient and effective use of resources in making decisions. Recognizing that not all recommended taxes would be collected, or collected soon after the audit, IRS could still use available data to compute a collections rate for post-refund enforcement activities and conduct further analyses of assessments from post-refund audits and document-matching reviews. IRS officials said they have conducted such studies in the past and that they were resource-intensive. Nonetheless, given that collections data are needed both for the detailed analyses described above and for an agency-wide analysis of the relative costs and results of various enforcement activities to inform resource allocation decisions, there may be opportunities to coordinate the data collection efforts to reduce overall costs. In addition to collections, an agency-wide approach could help IRS develop a strategy for addressing Schedule C income misreporting—a long-time challenge for IRS and a key driver of EITC noncompliance. According to IRS, income misreporting is the most common error on returns claiming the EITC, occurring on about 67 percent of returns with overclaims. Self-employment income misreporting represents the largest share of overclaims (15 to 23 percent), while wage income misreporting represents the smallest (3 to 6 percent). In the claimant population as a whole, 76 percent of taxpayers earn only wage income, while the remaining 24 percent earn at least some self-employment income. As shown in figure 7, error rates in terms of overclaimed amounts of credit were largest for Schedule C filers for the EITC and AOTC. The error rate for Schedule C filers claiming the CTC/ACTC was not statistically different from the error rate for filers without a Schedule C.
Although Schedule C income misreporting is larger for EITC claimants, IRS's enforcement strategies are more likely to be effective with wage income misreporting than with Schedule C income misreporting. According to IRS, it addresses income misreporting through (1) DDb filters designed to identify taxpayers making up a fake business; (2) the questionable refund program, designed to identify and follow up with taxpayers lying about where and how long they worked; and (3) the post-refund document matching program, which matches returns with other information such as W-2s. While these methods may catch some income misreporting by the self-employed, they rely to a great extent on the types of third-party income and employment documentation that are likely to be available for wage earners but are largely absent for the self-employed. According to IRS officials, starting in tax year 2011, IRS began matching other information, such as Form 1099-K merchant card payments, to tax returns to verify self-employment income. IRS also addresses EITC noncompliance through correspondence audits, but Schedule C income issues are more conducive to field audits than correspondence audits. However, EITC Schedule C returns are less likely to be selected for field audits because the dollar amounts do not meet IRS thresholds. Addressing Schedule C income misreporting has been a long-standing challenge for IRS. In 2009, we reported that, according to IRS, sole proprietor income was responsible for about 20 percent of the tax gap. A key reason for this misreporting is well known. Unlike wage and some investment income, sole proprietors' income is not subject to withholding, and only a portion is subject to information reporting to IRS by third parties. We have made several recommendations over the years to address this issue. In 2007, we recommended that Treasury's tax gap strategy cover sole proprietor compliance in detail while coordinating it with broader tax gap reduction efforts.
As of March 2015, no executive action has been taken to address this recommendation, nor has Treasury provided us with plans to do so. We maintain that without taking these steps, Treasury has less assurance that IRS is using resources efficiently to promote sole proprietor compliance. In 2009, we recommended IRS develop a better understanding of sole proprietor noncompliance, including sole proprietors improperly claiming business losses. As of November 2015, IRS partially addressed this recommendation by researching sole proprietor noncompliance and focusing on those who improperly claim business losses. The results of this research will take several years to compile but IRS plans to provide at least rough estimates of disallowed losses in 2016. This research, when completed, could help IRS to identify noncompliant sole proprietor issues and address one of the drivers of EITC noncompliance. IRS does not track the number of returns erroneously claiming the ACTC and AOTC identified through screening activities. (IRS currently tracks this information for the EITC). As we noted earlier, according to federal internal control standards, managers need accurate and complete information to help ensure efficient and effective use of resources in making decisions. IRS conducts various activities to identify and prevent the payment of an erroneous refund, such as screening returns for obvious mistakes and omissions. IRS officials said this information would help them deepen their understanding of common errors made by taxpayers claiming these credits and the insights could then be used to develop strategies to educate taxpayers. IRS officials reported that they are working to figure out how to extract these data for the ACTC and AOTC so they can begin to track the data and use them to refine their overall compliance strategy. 
Although IRS said that it understands the potential usefulness of these data, it has not yet developed a plan that includes such desirable features as timing goals and resource requirements and a way to develop indicators from the data that would be most effective for understanding and increasing compliance. IRS may also be missing an opportunity to use information from the Department of Education (Education) to detect and correct AOTC errors. Education collects in its Postsecondary Education Participants System (PEPS) a list of institutions and their employer identification numbers (EIN), which would indicate whether the institution the student attends is eligible under the AOTC. The PATH Act of 2015 requires taxpayers claiming the AOTC to report the EIN for the education institutions to which they made payments. There is some evidence that PEPS may be a useful tool for detecting noncompliance. In a review of the AOTC, the Treasury Inspector General for Tax Administration (TIGTA) used PEPS data and identified 1.6 million taxpayers claiming the AOTC for an ineligible institution in 2012. TIGTA recommended that IRS coordinate with Education to determine whether IRS could use Education data to verify the eligibility of educational institutions claimed on tax returns. While IRS agreed that these PEPS data could identify potentially erroneous claims, it did not agree to further explore using the data. IRS has not determined whether PEPS can be used for enhancing AOTC compliance for two reasons. First, IRS does not have math error authority (MEA) to correct errors in cases where taxpayer-provided information does not match corresponding information in government databases. IRS would still need to conduct an exam to reject a claim with an ineligible institution. 
For example, if the EIN on a submitted return is not contained in the PEPS database of eligible institutions, IRS does not have the authority to automatically correct the return and notify the taxpayer of the change. Instead, IRS would have to contact the taxpayer for additional documentation or open an examination to resolve discrepancies between PEPS data and the tax return information. Second, IRS believes its current selection process is sufficient because IRS already identifies more potentially fraudulent returns with its filters than it can examine given its current resources. In 2012, IRS identified 1.8 million returns with potentially erroneous education claims and selected 9,574 for exam, for an exam rate of 0.5 percent. To identify these returns for exam, IRS used its pre-refund filters of students claiming the credit for more than 4 years, returns without the 1098-T form, or students in an unexpected age range. The administration submitted legislative proposals for fiscal years 2015 and 2016 that, among other things, would establish a category of correctable errors. Under the proposals, Treasury would be granted MEA to permit IRS to correct errors in cases where information provided by a taxpayer does not match corresponding information provided in government databases. We have previously reported that expanding MEA with appropriate safeguards could help IRS meet its goals for the timely processing of tax returns, reduce the burden on taxpayers of responding to IRS correspondence, and reduce the need for IRS to resolve discrepancies in post-refund compliance, which, as we previously concluded, is less effective and more costly than at-filing compliance. However, Congress has not granted this broad authority. Although correctable error authority may reduce compliance and administrative burden, it raises a number of concerns. Experts have raised concerns that such broad authority could put undue burden on taxpayers.
For example, the National Taxpayer Advocate has raised concerns that IRS’s current math error notices are confusing and place a burden on taxpayers as they try to get answers from IRS. The JCT also raised concerns about whether all government databases are considered sufficiently reliable under this proposal. However, an assessment of the completeness and accuracy of PEPS data may be useful for IRS enforcement efforts even in the absence of correctable error authority. First, while IRS believes its current selection process is sufficient, without assessing the PEPS data, it cannot know whether its case selection could be improved by this additional information about ineligible institutions. Second, if an IRS assessment of PEPS data determined that pre-refund corrections based on those data would be effective, the case for correctable error authority would be easier to make to Congress. As our work on strategies for building a results-oriented and collaborative culture in the federal government has shown, stakeholders, including Congress, need timely, action-oriented information in a format that helps them make decisions that improve program performance. Taxpayers can only claim the AOTC for 4 years, but IRS does not have MEA to freeze a refund on a claim that exceeds the lifetime-limit rule. In 2015, TIGTA found that more than 400,000 taxpayers in 2012 received over $650 million for students claiming the AOTC for more than 4 years. According to IRS officials, they have processes to identify students who exceed the 4-year lifetime limit based on information from prior returns. Those returns are candidates for audits. However, as noted earlier, IRS identifies far more candidates for audits than it can perform given current staffing levels. 
In 2011, we recommended that Congress consider providing IRS with MEA to use tax return information from previous years to ensure that taxpayers do not improperly claim credits or deductions in excess of lifetime limits where applicable. Granting this authority would help IRS disallow clearly erroneous claims, reduce the need for an audit, and promote fairness by limiting claims to taxpayers who are entitled to them. It would also assist taxpayers in self-correcting unintentional mistakes, such as choosing an incorrect educational tax benefit after having exceeded the lifetime limit. As we recommended in 2011, we continue to believe that Congress should consider providing MEA to be used with credits and deductions with lifetime limits. Any RTCs that contain these limits, such as the AOTC, should also fall under this authority if Congress grants it. IRS has several efforts intended to educate taxpayers about eligibility requirements and improve compliance, including social media messaging, webinars, and tax forum presentations. According to IRS, these efforts are intended to promote participation among taxpayers eligible for these credits, ensure that taxpayers are aware of the eligibility requirements before filing a tax return, and prevent unintentional errors before they occur. Additionally, IRS designated an EITC Awareness Day to increase awareness among potentially eligible taxpayers at a time when most are filing their federal income tax returns. The 10th Annual EITC Awareness Day was January 29, 2016. According to IRS, it currently has limited ability to measure the effectiveness of its outreach efforts. As recently as 2011, IRS officials said they were able to measure the effectiveness of the efforts through a semi-annual survey where they tested, for example, the effect of concentrating messaging in certain areas on taxpayer awareness of the EITC.
Although IRS reported it no longer has the funds for that survey, officials said IRS still commissions an annual survey intended to improve services to volunteers and external stakeholders. IRS officials also said that they collect user feedback to assess the use and effectiveness of their EITC website and make changes accordingly. For example, after users cited problems with easily locating information on maximum income limits for the EITC, IRS reported that it revised its website to make income information more prominent. To address underutilization of the AOTC, IRS has been working to improve the quality and usefulness of information about the credit. We reported in 2012 that about 14 percent of filers in 2009 (1.5 million of almost 11 million eligible returns) failed to claim an education credit or deduction for which they appeared to be eligible, possibly because filers were unaware of their eligibility or were confused. In response to the recommendation in our 2012 report, IRS conducted a limited review in 2013 that determined that over 15 million eligible students and families may not have claimed an education benefit. Identifying these potentially eligible taxpayers will help IRS develop a comprehensive strategy to improve use of these tax provisions. We also recommended in 2012 that IRS and Education work together to develop a strategy to improve information provided to tax filers who appear eligible to claim a tax provision but do not. IRS has been implementing this recommendation by coordinating with Education to (1) create an education credit web page on the department’s Federal Student Aid website and (2) improve IRS’s AOTC and Lifetime Learning Credit Communication Plan.
To improve understanding of requirements for education credits, IRS has enhanced information and resources on IRS.gov and revised the tax form for claiming education credits (Form 8863, Education Credits (American Opportunity and Lifetime Learning Credits)) to include a series of questions for the taxpayer to ascertain credit eligibility. IRS has also made efforts to address compliance issues associated with certain tax preparers. As shown in figure 8, unenrolled preparers have the highest error rates for RTCs among preparers. For the EITC, unenrolled preparers have the highest overclaim rate at 34 percent of total credit claimed, and, as IRS reported, they are the type of preparer most often used by EITC claimants, preparing 26 percent of all EITC returns. In contrast, although comprising only 3 percent of all returns with the EITC, returns prepared by volunteers in the IRS-sponsored Volunteer Income Tax Assistance and Tax Counseling for the Elderly programs have the lowest error rate at 16 percent. IRS’s chief compliance effort for paid preparers is the EITC Return Preparer Strategy, designed to identify preparers submitting the highest number of EITC overclaims and tailor education and enforcement treatments to change their behavior. The strategy uses a variety of methods to address preparer noncompliance including (1) educational “knock-and-talk” visits with preparers before filing season; (2) due diligence visits, where IRS officials determine whether preparers complied with due diligence regulations, such as documenting efforts to evaluate the accuracy of information received from clients; and (3) warning and compliance letters to preparers explaining that IRS has found errors in their prior returns.
The EITC preparers that appear to be associated with the most noncompliance receive the most severe treatments, which include visits from revenue agents and, if necessary, an assessment of penalties: $500 per noncompliant return or, if the preparer used an invalid preparer tax identification number, penalties of $50 per return, up to a maximum of $25,000. (The PATH Act of 2015 expanded preparer due diligence requirements and penalties to the CTC and AOTC.) These preparers can also be referred to the Department of Justice for civil injunction proceedings. If fraud is identified, these preparers can be referred for criminal investigation. The strategy recently found that less severe, lower-cost treatments, such as warning letters, affect preparer behavior, but more severe, higher-cost due diligence visits improve preparer behavior the most. IRS expanded the number of preparers it selected to contact from 2,000 in fiscal year 2012 to around 31,000 in fiscal year 2015. According to IRS data, the EITC Return Preparer Strategy has protected around $1.7 billion in revenue from EITC and CTC/ACTC claims since fiscal year 2012. In fiscal year 2015, the strategy protected over $465 million in revenue ($386 million in EITC savings and $79 million in CTC/ACTC). Also, the proposed preparer penalties for the 2015 effort totaled $30 million, with an overall due diligence visit penalty rate of around 85 percent. Any attempts to improve preparer compliance through increased regulation by Treasury and IRS are likely to require congressional action. IRS issued regulations in 2010 and 2011 to require registration, competency testing, and continuing education for paid tax return preparers and to subject these new registrants to standards of conduct in their practice. However, the courts ruled that IRS did not have the statutory authority to regulate these preparers. In 2014, we suggested Congress consider granting IRS the authority to regulate paid tax preparers.
Establishing requirements for paid tax return preparers could improve the accuracy of the tax returns they prepare, not just returns claiming the EITC. A variety of proposals have been made to change the design of the EITC, ACTC, and AOTC. The proposals generally modify one or more elements of the credits, such as how much of the credit is refundable, the maximum amount of the credit, the level of the phase-in and phase-out income ranges, and the credit rates. Changing these elements has effects on the credits’ equity, efficiency, and simplicity that are common across the credits. For example, increasing or decreasing refundability affects the distribution of the credits’ benefits by income level, which has implications for whether the change is viewed as increasing or decreasing equity. The following review of proposals is organized according to the basic design elements of the credits; the effects of proposals to change these elements are evaluated against the standard criteria of a good tax system. Evaluating tax credits requires identifying their purpose (or purposes) and determining their effectiveness. The tax credits reviewed in this report are intended to encourage taxpayers to engage in particular activities, to offset the effect of other taxes, and to provide assistance for certain categories of taxpayers. The EITC, for example, has the purposes of offsetting the payroll tax, encouraging employment among low-income taxpayers, and reducing poverty rates. Determining effectiveness can be challenging due to the need to separate the effect of a tax credit from other factors that can influence behavior. Even if credit claimants increase their subsidized activities, the credits are ineffective if they merely provide windfall benefits to taxpayers who would have engaged in the activities in the absence of the credit.
Even when the credits are determined to be effective, broader questions can still be asked about whether they are good tax policy. As explained in our 2012 report, these questions are addressed by applying criteria such as economic efficiency, equity, and simplicity, which have long been used to evaluate proposed changes to the tax system. The criteria may sometimes conflict with one another, and some are subjective. As a result, there are often trade-offs between the criteria when evaluating a particular tax credit. Economic efficiency deals with how resources are allocated in the economy to produce outcomes that are consistent with the greatest well-being (or standard of living) of society. Tax credits may affect the allocation of resources by favoring certain activities. A credit’s effect on efficiency depends on its effectiveness—whether people change their behavior in response to the credit to do more or less of the activity as intended—and its effect on resource allocation—whether the effect of the credit increases the overall well-being of society. A tax credit can increase efficiency when, for example, it is directed at addressing an externality like spillovers from research, where the researchers do not gain the full benefit of their activities and might, without the credit, invest too little in research from the point of view of society as a whole. Finally, a tax credit may be justified as promoting a social good like improving access to higher education for disadvantaged groups. Equity deals with how fair the tax system is perceived to be by participants in the system. There are a wide range of opinions regarding what constitutes an equitable, or fair, tax system. However, there are some principles—for example, a taxpayer’s ability to pay taxes—that have gained acceptance as useful for thinking about the equity of the tax system.
The ability-to-pay principle requires that those who are more capable of bearing the burden of taxes should pay more taxes than those who are less capable. Equity judgments based on the ability-to-pay principle can be separated into two types. The first is horizontal equity, where taxpayers who have similar ability to pay taxes receive similar tax treatment. Tax credits affect horizontal equity when, for example, they favor certain types of economic behavior over others by taxpayers in similar financial conditions. Views of a credit’s effect on horizontal equity usually depend on whether eligibility requirements that exclude some filers and include others are viewed as appropriate. The second type is vertical equity, where taxpayers with different abilities to pay are required to pay different amounts of tax. Tax credits affect vertical equity through how their benefits are distributed among people at different income levels (or other indicators of ability to pay, such as their level of consumption spending). Distribution tables, where the tax benefits of the credits are grouped by the income level of the recipients, are often used by policy analysts to help them make informed judgments about the equity of tax policies like the RTCs. People may have different notions about what is a fair distribution, but they cannot make a judgment about the fairness of a particular policy without consulting the actual distribution of tax benefits. Simplicity is a criterion used to evaluate tax systems because simple tax systems tend to impose less compliance burden on the taxpayer and less cost on tax administrators than more complex tax systems. Taxpayer compliance burden is the value of the taxpayer’s own time and resources, along with any out-of-pocket costs paid to tax preparers and other tax advisors, invested to ensure their compliance with tax laws.
Compliance costs include the value of time and resources devoted to activities like record keeping (for the purpose of tax compliance, not records that would be kept in any case), learning about requirements and planning, preparing and filing tax returns, and responding to IRS notices and audits. The administrative costs include the resources used to process tax returns, inform taxpayers about their obligations, detect noncompliance, and enforce compliance with the provisions of the tax code. However, while simplicity is linked to administrability, they are not always the same. For example, a national sales tax may be relatively simple for taxpayer compliance but difficult to administer, as it requires distinguishing between tax-exempt and taxable commodities and between taxable retail sales and nontaxable sales among companies. Changes to the RTCs can be analyzed using the above criteria, where the changes are grouped according to the key design elements of the credits that are most affected by the changes. The key design elements are (1) the degree to which the credit is refundable; (2) the eligibility rules for filers and qualifying children or dependent students; (3) the structure of the credit, consisting of parameters that determine credit rates and phase-in and phase-out ranges; and (4) the credit’s interaction with other code provisions. As mentioned above, changing these elements will have effects that are common for all the credits. In the following review of proposals, a description of the effect on revenue will be provided where possible, but a dollar estimate of revenue costs cannot be provided because it depends too much on variable details of the proposals. For example, increasing refundability would increase revenue costs, but the amount would depend, as explained below, on factors like the refundability rate and the income or spending threshold of refundability.
Refundability can affect judgments about vertical equity by providing a larger share of the tax benefits to lower income filers than a nonrefundable credit does. These filers are more likely to have little or no tax liability and thus are not able to fully benefit from a nonrefundable credit. Refundability, as such, may have little effect on judgments about horizontal equity because those judgments depend chiefly on the eligibility rules, which need not be different from those under a nonrefundable credit. The effect of refundability on compliance and administrative costs depends on how the change in refundability is implemented. If the eligibility rules, a major source of complexity as described above, are not changed when refundability is introduced, the change may have less impact on compliance burden and administrative costs. However, other structural changes may be needed when refundability is introduced that can add complexity and compliance burden for the taxpayer. For example, additional calculations became necessary for the CTC when the ACTC was introduced as its partially refundable counterpart with a phase-in range and rate. In addition, administrative burden could increase if the population of claimants changes when refundability is introduced. IRS costs could increase if IRS reviews more returns as the number of claimants grows in response to refundability, and taxpayer compliance burden may increase if the claimants include more taxpayers for whom understanding or documenting compliance is more difficult. Changes have been proposed to expand refundability for the currently partially refundable CTC/ACTC and AOTC. For the CTC/ACTC, the refundable ACTC is limited to 15 percent of earned income in excess of the $3,000 refundability threshold, up to a maximum of $1,000 for each child; for the AOTC, the refund is limited to 40 percent of the credit, up to a maximum of $1,000.
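The partial-refundability limits just described amount to two simple formulas. The following sketch is illustrative only: it simplifies both credits by ignoring tax liability, eligibility rules, and other interactions, and the function and parameter names are ours, not IRS terminology.

```python
def actc_refundable(earned_income, num_children, threshold=3_000,
                    rate=0.15, per_child_cap=1_000):
    """Simplified refundable ACTC: 15 percent of earned income above the
    $3,000 threshold, capped at $1,000 per qualifying child."""
    return min(rate * max(0, earned_income - threshold),
               per_child_cap * num_children)

def aotc_refundable(credit_amount, rate=0.40, cap=1_000):
    """Simplified refundable AOTC: 40 percent of the credit, up to $1,000."""
    return min(rate * credit_amount, cap)

# A filer with $23,000 of earned income and two qualifying children:
# 15% of ($23,000 - $3,000) = $3,000, capped at 2 x $1,000 = $2,000.
print(actc_refundable(23_000, 2))   # 2000
# The maximum $2,500 AOTC yields the full $1,000 refundable portion.
print(aotc_refundable(2_500))       # 1000.0
```

The proposals discussed in this section correspond to changing these parameters, for example raising `rate` or lowering `threshold` for the ACTC, or setting the AOTC `rate` to 1.0 with no cap to make it fully refundable.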
Modifications of these credits that have been proposed include raising the refundability rate and reducing the refundability threshold for the CTC/ACTC or, in the case of the AOTC, making the credit fully refundable. The principal effect of these modifications is to increase the share of benefits going to low-income filers by increasing their access to the credits. For the AOTC, the expansion could also increase effectiveness, as described in appendix III, by increasing access to the credit for low-income filers, who are more responsive to changes in the price of education. The effect on revenue of these changes would vary considerably, depending chiefly on the extent to which refundability is increased. Modifications to the RTCs’ eligibility rules affect the criteria of a good tax system by changing taxpayers’ access to the credits. The change in access in turn can affect judgments about equity and effectiveness. For example, expanding the availability of the AOTC to part-time in addition to half-time and full-time students could affect judgments about vertical equity by increasing access for lower income filers if they are more represented among part-time students. This proposal may also increase the effectiveness of the AOTC by targeting more of the population that is more responsive to education price changes, but, as described in appendix III, these effects have not been tested. Another change to eligibility rules that has been proposed for RTC filers would require that SSNs be provided by all claimants of the AOTC and the ACTC and that, in some cases, claimants’ qualifying children or student dependents have SSNs. SSNs are currently required for all EITC claimants and qualifying children, but claimants of the other RTCs can use individual taxpayer identification numbers (ITIN).
IRS issues ITINs to individuals who are required to have a taxpayer identification number for tax purposes, but who are not eligible to obtain an SSN because they are not authorized to work in the United States. In 2013, 4.38 million tax returns were filed with ITINs (about 3 percent of all returns), claiming $1.31 billion in CTC, $4.72 billion in ACTC, and $204 million in AOTC, or 5 percent, 17 percent, and 1.1 percent of the total credits claimed, respectively. The effect of restrictions on access to the credits by ITIN users depends on whether the restrictions require SSNs for all filers claiming refundable tax credits and their qualifying children, or instead permit “mixed-use” households to obtain a partial credit. Most households using ITINs are mixed-use households in the sense that they use both ITINs and SSNs on their returns. In 2013, 2.68 million returns (or 61 percent of all ITIN returns) were mixed-use returns having (1) a parent with an ITIN and at least one child with an SSN or (2) a parent with an SSN and at least one child with an ITIN. If the change requires that the parent have an SSN, about 82 percent of current ITIN users would be excluded. A change that permits RTCs for a child or parent with an SSN would exclude 39 percent of current ITIN filers. Restrictions on access to RTCs by ITIN users may affect judgments about the vertical equity of the credits. ITIN claimants of the CTC, ACTC, and AOTC tend to have similar or lower levels of income than claimants who do not use ITINs. As figure 9 shows, 31 percent of CTC claimants with ITINs have incomes less than $40,000, while 17 percent of all CTC claimants have incomes that low; 56 percent of AOTC claimants with ITINs have incomes less than $40,000, while 41 percent of all AOTC claimants do. On the other hand, the income levels of ACTC claimants with ITINs generally track those of all ACTC claimants: 87 percent of all ACTC claimants and 88 percent of ACTC claimants with ITINs have incomes less than $40,000.
Restrictions on ITIN use may also have implications for compliance. From 2009 through 2011, credit claimants using ITINs had higher overclaim error rates than other claimants. The overclaim error rate for CTC claimants using ITINs was 14 percent as opposed to 6 percent for all CTC claimants. Similarly, the CTC/ACTC error rate was 32 percent for ITIN users and 10 percent for all claimants. As we discussed above, complying with the eligibility rules can be challenging for everyone and the ITIN users may have greater difficulty from factors like language barriers which could contribute to these higher error rates. The scope of the SSN requirement—whether it includes the taxpayer, the spouse if married filing jointly, or the qualifying dependents—would add to the complexity of administering and complying with the credits. For example, the value of the credit could be apportioned among taxpayers who meet the criteria (e.g., if three of the four individuals claimed on a tax return have SSNs, the taxpayers would be eligible for 75 percent of the total value of the credit). Determining and enforcing compliance with these apportionment rules could be difficult. On the other hand, as noted above, a majority of ITIN households are mixed use and in the absence of an apportionment procedure, taxpayers with valid SSNs could be denied access to the credits entirely. Lastly, the AOTC is likely to be less effective to the extent that ITIN users are excluded because, as they have lower incomes than other claimants, they are more likely to respond to an effectively lower cost of education due to the credit by increasing attendance. A change in the structure of the RTCs can affect all the criteria for evaluating the credits as part of a good tax system. The credit structure includes features that determine the rate at which the credit is calculated. 
These features include the phase-in range, the range of income levels over which the credit amount increases; the plateau range, where the credit amount is unchanged at its maximum; and the phase-out range, where the credit amount declines. The cut-off amount of income determines the end of the phase-out range and the maximum income that can qualify for the credit. All the RTCs have phase-in and phase-out ranges subject to different phase-in and phase-out rates, and the EITC also has different values for these ranges that vary according to the number of qualifying children being claimed. The phase-in range generally provides incentives for increasing the activity promoted by the credit: as they work more, EITC recipients receive a larger credit amount, and, as they spend more on education, AOTC recipients also get a larger credit. The phase-out ranges generally introduce disincentives by reducing the credit benefit for any increase in the activity that the credit is intended to promote. One of the key trade-offs in this structure is between the size of the maximum credit amount and the steepness of the phase-out range. If the maximum credit amount is increased with no change in the qualifying income cut-off amount, the phase-out range becomes steeper (the phase-out rate increases) and therefore disincentives increase over the phase-out range. In this case, the increase in the maximum credit reduces efficiency in the phase-out range. On the other hand, if disincentives are to be reduced without reducing the maximum credit, the qualifying income cut-off amount must be increased in order to flatten the phase-out range and thereby lower the phase-out rate. However, by increasing the cut-off income amount, the credit becomes available to people with higher incomes, affecting judgments about the equity of the credit and increasing its revenue cost. Structural modifications proposed for the EITC include expanding the credit for childless workers.
As described in appendix III, the EITC for childless workers is much lower than the credit for workers with children and has not been shown to have an effect on workforce participation or on raising these workers out of poverty. Expanding the credit for childless workers generally means increasing the maximum credit, with the follow-on effects described above on other parameters like the phase-out rate. The effects on efficiency, equity, and simplicity will depend upon which parameters are changed and will involve similar trade-offs. Although the relative effects of expanding the credit for childless workers will depend on the details of the parameter changes, the overall effect is likely to increase the effectiveness of the credit. Increasing the credit for childless workers would increase work incentives for individuals for whom, as described in appendix III, the current EITC is ineffective because it provides little or no work incentive. The expansion of the credit for childless workers could also affect judgments about the equity of the EITC by decreasing the percentage of taxpayers living in poverty and by changing how benefits are distributed by income level. The expansion would also affect judgments about horizontal equity concerns arising from the current large disparity in the credit available to filers with and without children. In addition, expanding the EITC for childless workers is unlikely to add complexity to the filing process for taxpayers, although it would increase the number of taxpayers claiming the credit. A major source of complexity for the EITC that increases both compliance and administration burden is determining whether a dependent meets the requirements for a qualifying child. These determinations would not be necessary for childless workers. However, again depending on specifics of proposals, like the size of the maximum credit, the revenue cost could be high.
Proposed structural changes for the AOTC can affect its effectiveness by increasing or decreasing access to the credit. Modifications that expand access include increasing the maximum credit, raising the upper limit on income for credit claimants, and lowering the phase-out rate. Changes like these may also reduce effectiveness because the credit becomes more available to taxpayers for whom it is likely to be a windfall, while less of the increase is available to lower income people, who are more responsive to education price changes. These changes may also affect judgments about equity because the increase in the phase-out range would increase the share of the credit going to higher income taxpayers. However, the increase in the maximum credit benefits lower income filers as well as those with higher incomes. Modifications that reduce access include reducing the maximum credit and the phase-out income and increasing the phase-out rate. Modifications like these may concentrate the AOTC’s benefit on lower income individuals and could increase effectiveness by reducing the windfall going to higher income taxpayers. Changes to the CTC/ACTC illustrate how structural changes interact to affect the criteria for evaluating the credit. For example, a modification that increases the credit per child and increases the income limit may have offsetting effects on judgments about equity by reducing the share of benefits going to low-income taxpayers while at the same time increasing the credit amount per child. However, raising the amount of the credit may not benefit lower income taxpayers to the extent that the refundability threshold and rate prevent them from accessing the full credit. Further adjustments, such as eliminating the current refundability threshold of $3,000 and making the credit refundable up to $1,000 at a refundability rate of 25 percent, may provide more benefits to lower income taxpayers.
However, the more adjustments are made, the harder it is to determine the net effect on equity. The RTCs share purposes and target populations with a variety of government spending programs and other provisions of the tax code. We previously estimated that, in 2012, 106 million people, or one-third of the U.S. population, received benefits from one or more of eight selected federal low-income programs: the ACTC, the EITC, SNAP, SSI, and four others. Almost two-thirds of the eight programs’ recipients were in households with children, including many married families. Without these programs’ benefits, we estimated that 25 million of these recipients would have been below the Census Bureau’s Supplemental Poverty Measure (SPM) poverty threshold. Of the eight programs, the EITC and SNAP moved the most people out of poverty. In addition, the AOTC interacts with other spending provisions like Pell grants and with tax provisions like the Lifetime Learning Credit and the deduction for tuition and fees to provide subsidies for college attendance. This shared focus of certain tax benefits has led to consideration of their combined effect on incentives and complexity. As figure 10 shows, the combined effects of the EITC, CTC/ACTC, and the dependent exemption produce a steeper phase-in of total benefit amounts than that attributable to any of the tax benefits alone. As incomes increase, total benefits peak and then decline sharply when the phase-out range of the EITC is reached. How taxpayers respond to the RTCs will depend on their ability to sort out and assess the combined effects of all these tax benefits. Each RTC was the product of unique social forces and was designed to address a specific social need. As a result, it is unlikely that attempts were made to coordinate the credits’ combined tax rates, combined subsidy rates, combined incentive effects, or their effects on compliance and administration.
The lack of coordination that leads to increased administrative and compliance burden is exemplified in the differing age limits for what constitutes an eligible child under different tax benefits. Interactions like these have raised concerns that the RTCs and other provisions may not be coordinated to be most effective. To increase coordination and transparency, a number of different ways have been proposed to consolidate the tax benefits. Proposals include combining tax benefits for low-income taxpayers (such as the CTC/ACTC, the dependent exemption, and the child-related EITC) into a single credit, or combining child-related benefits into a single credit while creating a separate work credit based on earnings and unrelated to the number of children in the family. In a similar vein, proposals have been made to combine education tax benefits by using the AOTC to replace all other education tax credits, the student loan interest deduction, and the deduction for tuition and fees. These proposals may also expand certain features of the credit, like increasing refundability or making the credit available for more years of postsecondary education. Consolidation can make incentives more transparent to taxpayers, increase simplicity, and decrease compliance and administrative burden to the extent that it harmonizes and simplifies the eligibility requirements. Each year the EITC, ACTC, and AOTC help millions of taxpayers—many of whom are low-income—who are working, raising children, and paying tuition. Nonetheless, challenges related to the RTCs’ design and administration contribute to errors, improper payments, and taxpayer burden. Annual budget cuts have forced IRS officials to make difficult decisions about how best to target declining resources to ensure they can still meet agency-wide strategic goals of increasing taxpayer compliance, using resources more efficiently, and minimizing taxpayer burden.
In light of these budget cuts, it is essential that IRS take a strategic approach to identifying and addressing RTC noncompliance in an uncertain budget environment. IRS is working on a strategy to document current EITC compliance efforts and identify and evaluate potential new solutions to address improper payments, but this review does not include the other refundable credits. A more comprehensive approach could help IRS determine whether its current allocation of resources is optimal, and if not, what adjustments are needed. IRS is also missing opportunities to use available data to identify potential sources of noncompliance and develop strategies for addressing them. For example, IRS does not track the number of returns erroneously claiming the ACTC and AOTC identified through screening activities. This information would help IRS deepen its understanding of common errors made by taxpayers claiming these credits; IRS could then use these insights to develop strategies to educate taxpayers. IRS has also not yet evaluated the Department of Education’s PEPS database of eligible educational institutions; these data could help IRS identify potentially erroneous AOTC returns. Finally, although IRS reviews the amount of revenue collected from EITC post-refund enforcement activities, it could not verify the reliability of that data during the timeframe of the GAO audit. By not taking necessary steps to ensure the reliability of that data and linking them to tax assessments to calculate a collections rate, IRS lacks information required to assess its allocation decisions. Periodic reviews of collections data and analyses could help IRS officials more efficiently allocate limited enforcement resources by providing a more complete picture about compliance results and costs. Over the years we have recommended various actions IRS and Congress could take to reduce the tax gap; several of these would also help bolster IRS’s efforts to address noncompliance with these credits. 
For example, developing a better understanding of sole proprietor noncompliance and linking sole proprietor compliance efforts with broader tax gap reduction efforts could help IRS identify and address one of the drivers of EITC noncompliance. Providing IRS with the authority to regulate paid preparers would also help. In addition, as we recommended in 2011, we continue to believe that Congress should consider providing IRS with math error authority to use tax return information from previous years to enforce lifetime limit rules. Any refundable tax credit that contains such limits, including the AOTC, should also fall under this authority if Congress grants it. Structural changes to the credits, such as changes to eligibility rules, will involve trade-offs with respect to standard tax reform criteria, such as effectiveness, efficiency, equity, simplicity, and revenue adequacy. To strengthen efforts to identify and address noncompliance with the EITC, ACTC, and AOTC, we recommend that the Commissioner of Internal Revenue direct Refundable Credits Policy and Program Management (RCPPM) to take the following steps: 1. Building on current efforts, develop a comprehensive operational strategy that includes all the RTCs for which RCPPM is responsible. The strategy could include use of error rates and amounts, evaluation and guidance on the proper use of indicators like no-change and default rates, and guidance on how to weigh trade-offs between equity and return on investment in resource allocations. 2. 
As RCPPM begins efforts to track the number of erroneous returns claiming the ACTC or AOTC identified through pre-refund enforcement activities, such as screening filters and use of math error authority, it should develop and implement a plan to collect and analyze these data that includes such characteristics as identifying timing goals, resource requirements, and the appropriate methodologies for analyzing and applying the data to compliance issues. 3. Assess whether the data received from the Department of Education’s PEPS database (a) are sufficiently complete and accurate to reliably correct tax returns at filing and (b) provide additional information that could be used to identify returns for examination; if warranted by this research, IRS should use this information to seek legislative authority to correct tax returns at filing based on PEPS data. 4. Take necessary steps to ensure the reliability of collections data and periodically review that data to (a) compute a collections rate for post- refund enforcement activities and (b) determine what additional analyses would provide useful information about compliance results and costs of post-refund audits and document-matching reviews. We provided a draft of this report to Treasury and IRS. Treasury provided technical comments which we incorporated where appropriate. In written comments, reproduced in appendix IV, IRS agreed with three of our four recommendations and described certain actions that it plans or is undertaking to implement them. After sending us written comments, IRS informed us it could not verify the reliability of the collections data it provided during the timeframe of our audit. We removed this data from the report and modified our fourth recommendation to address data reliability. 
The revised recommendation states that IRS should take necessary steps to ensure the reliability of collections data and then periodically review that data to compute a collections rate for post-refund enforcement activities and determine what additional analyses would provide useful information. In response to this recommendation, IRS stated it is taking steps to verify the reliability of the collections data, but further analysis would not be beneficial because the majority of RTC audits are pre-refund. However, we found that a significant amount of enforcement activity is occurring in the post-refund environment. According to IRS data, IRS conducted 87,000 EITC post-refund audits and over 1 million document-matching reviews in 2014. We recognize that gathering collections data has costs and the data have limitations, notably that not all recommended taxes are collected. However, use of these data, once IRS is able to verify their reliability, could better inform resource allocation decisions and improve the overall efficiency of enforcement efforts. In fact, the Internal Revenue Manual states that examiners are expected to consider collectability as a factor in determining the scope and depth of an examination. IRS also stated that previous studies have indicated that post-refund audits of RTCs have a high collectability rate. However, the studies that IRS provided did not include collection rates for the EITC, ACTC, or AOTC. IRS further cautioned that collections can be influenced by factors like the state of the economy; however, an appropriate statistical methodology would take such factors into account. Finally, opportunities may exist to reduce the costs of data collection efforts, for example, if coordinated as part of an agency-wide analysis of the costs and results of various enforcement efforts. 
IRS disagreed with our conclusion that its compliance strategy and selection criteria for its pre-refund compliance program do not consider equity and compliance burden. In its comments, IRS described its audit selection process but did not explain how it measures equity or compliance burden. Without such measures, it is not possible to assess whether IRS is achieving its strategic goals of increasing taxpayer compliance, using resources more efficiently, and minimizing taxpayer burden. Finally, IRS stated that nonresponse to its taxpayer inquiries is a strong indicator of noncompliance but did not provide data to support this assumption. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Treasury, Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V. 
This report (1) describes the claimant population, including the number of taxpayers and the amount they claim, along with other selected characteristics for the Earned Income Tax Credit (EITC), Additional Child Tax Credit (ACTC), and American Opportunity Tax Credit (AOTC); (2) describes how the Internal Revenue Service (IRS) administers these credits and what is known about the administrative costs and compliance burden associated with each credit; (3) assesses the extent to which IRS identifies and addresses noncompliance with these credits and collects improperly refunded credits; and (4) assesses the impact of selected proposed changes to elements of the EITC, ACTC, and AOTC with respect to three criteria for a good tax system: efficiency, equity, and simplicity. To describe the taxpayer population claiming the EITC, ACTC, and AOTC, we used the IRS Statistics of Income (SOI) Individual Study for tax years 1999 to 2013. The SOI Individual Study is intended to represent all tax returns filed through annual samples of unaudited individual tax returns (about 330,000 returns in 2013), which are selected using a stratified, random sample. IRS performs a number of quality control steps to verify the internal consistency of SOI sample data. For example, it performs computerized tests to verify the relationships between values on the returns selected as part of the SOI sample and edits data items to correct for problems, such as missing items. The SOI data are widely used for research purposes and include information on returns prior to changes due to IRS audits. We used SOI data to describe the number of returns claiming credits, the credit amounts, and characteristics about credit claimants, such as filing status or adjusted gross income (AGI) for each credit. When necessary, we combined the nonrefundable Child Tax Credit (CTC) with the ACTC, referring to the combined credit as the CTC/ACTC. 
We did this when their combined effect is at issue or to facilitate comparison with other RTCs that do not break out refundable and nonrefundable components. Similarly, we combined the refundable and nonrefundable portions for AOTC estimates. However, unlike the other credit amounts, SOI data do not report the nonrefundable AOTC amounts. Estimating the level of nonrefundable AOTC requires decomposing the nonrefundable education credits into AOTC and other nonrefundable education credit amounts using education expense amounts and other line items reported on the tax return that determine the taxpayer’s eligibility for claiming the credit. These computations are done return by return prior to producing the aggregate total AOTC estimates. We reviewed documentation on SOI data, interviewed IRS officials about the data, and conducted several reliability tests to ensure that the data excerpts we used for this report were sufficiently complete and accurate for our purposes. For example, we electronically tested the data for obvious errors and used published data as a comparison to ensure that the data set was complete. The SOI estimates of totals and averages in the report, excluding ITIN estimates, have a margin of error of less than 3.5 percent of the estimates unless otherwise noted. The SOI percentages, excluding ITIN percentages, have a margin of error of less than 1 percentage point unless otherwise noted. Totals based on ITIN returns have a margin of error of less than 18 percent of the estimates unless otherwise noted. Percentages and ratios based on ITIN filers have a margin of error of less than 8 percentage points unless otherwise noted. We concluded that the data were sufficiently reliable for the purposes of this report. To describe how IRS administers these credits, we reviewed documentation on program procedures from the Internal Revenue Manual (IRM), internal documents describing audit procedures, and memorandums from IRS officials. 
We also interviewed IRS officials who oversee or who work on administering the refundable tax credits. To describe what is known about the administrative costs, we reviewed information IRS provided us on processing returns and conducting audits. To supplement these cost data, we spoke with IRS and Treasury officials about challenges IRS faces in administering the credits. To describe the compliance burden associated with each credit, we collected and reviewed IRS forms, worksheets, and instructions for each credit. We also reviewed the National Taxpayer Advocate’s annual reports to Congress, including the most serious issues affecting taxpayers. Finally, we interviewed experts involved with tax preparation to determine challenges taxpayers face when claiming the credits. To assess the extent to which IRS identifies and addresses noncompliance with these credits and collects improperly refunded credits, we reviewed reports by GAO, IRS, the Treasury Inspector General for Tax Administration (TIGTA), the National Taxpayer Advocate (NTA), the Congressional Research Service (CRS), and the Congressional Budget Office (CBO) on challenges IRS faces to reduce EITC, ACTC, and AOTC noncompliance and steps IRS is taking to address those challenges. We also reviewed relevant strategic and performance documents such as annual financial and performance reports; education and outreach plans; annual planning meeting minutes; and project summary reports. We met on a regular basis throughout the engagement with IRS officials responsible for developing and implementing RTC policy to determine the scope and primary drivers of RTC noncompliance as well as the steps IRS is taking to address those challenges. 
We integrated information from our document review and interviews to describe and assess IRS compliance efforts—including steps IRS is taking to implement specific programs and projects, how IRS’s internal controls ensure that specific efforts are being pursued as intended, how IRS monitors and assesses the progress of specific efforts toward reducing noncompliance, and how IRS incorporates new data to adjust its strategy as needed. We compared IRS efforts to develop, implement, and monitor compliance efforts to criteria in Standards for Internal Control in the Federal Government and federal guidance on performance management. We also applied the criteria concerning the administration, compliance burden, and transparency that characterize a good tax system, as developed in our guide for evaluating tax reform proposals. To evaluate compliance within the refundable credits, we used audit data from the National Research Program (NRP) for tax years 2009 to 2011, the most recent years for which data were available. NRP audits are like other IRS audits, but they can be used for population estimates of taxpayer reporting compliance. The goal of the NRP is to provide data to measure payment, filing, and reporting compliance of taxpayers, which are used to inform estimates of the tax gap and provide information to support development of IRS strategic plans and improvements in workload identification. The NRP audits provide a reflection of the domestic taxpayer populations through an annual sample of returns (about 14,000 returns in 2011), which are selected for NRP audits using a stratified, random sample. One potential source of nonsampling error comes from NRP audits where the taxpayer does not respond to the NRP audit, so audit results may not reflect the taxpayer’s true eligibility for the RTCs. 
For the calculations in this report, audit observations within the data that correspond to nonrespondent filers are given observation weights of zero (i.e., the observations do not influence the calculations). In contrast, IRS’s compliance study of the EITC produced high and low estimates for overclaim rates, where the former assumes the nonrespondents to be generally noncompliant and the latter assumes the nonrespondents to be as compliant as the respondent observations. Data for analysis include amounts reported by taxpayers on their tax returns and corrected amounts that were determined by examiners. Using NRP data, we estimated the errors and mistakes individual taxpayers made claiming the EITC, ACTC, and AOTC on their Forms 1040, U.S. Individual Income Tax Return. We present the results as a percent of the credit amounts claimed. We reviewed documentation on the NRP, interviewed IRS officials about the data, and conducted several reliability tests to ensure that the data excerpts we used for this report were sufficiently complete and accurate for our purposes. For example, we electronically tested the data for obvious errors and used totals from our analysis of SOI data as a comparison to ensure that the data set was complete. We concluded that the data were sufficiently reliable for the purposes of this report. See appendix II for further discussion of our NRP estimation techniques and for information about the sampling errors of our estimates. To assess the impact of selected proposed changes to elements of the EITC, ACTC, and AOTC, we first identified proposals to improve the three refundable tax credits through a literature review on RTCs. Our literature search started with a review of studies and reports issued by government agencies including GAO, IRS, CRS, CBO, JCT, and TIGTA. We supplemented this search with academic literature and studies produced by think tanks and professional organizations. 
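The zero-weighting of nonrespondent audit observations described above amounts to a weighted calculation in which those observations simply drop out. The following is a minimal sketch of that idea; the adjustment amounts, sampling weights, and response flags are hypothetical illustrations, not report data:

```python
# Weighted total of audit adjustments in which nonrespondent NRP
# observations receive an observation weight of zero, as described above.
# All values below are hypothetical, for illustration only.
def weighted_total(values, weights, responded):
    """Sum value * weight, forcing the weight to zero for nonrespondents."""
    return sum(v * (w if r else 0.0)
               for v, w, r in zip(values, weights, responded))

adjustments = [200, 500, 300]    # per-return credit adjustments (dollars)
weights = [1000, 1500, 2000]     # sampling weights
responded = [True, False, True]  # the nonrespondent drops out

total = weighted_total(adjustments, weights, responded)
```

With the middle observation zeroed out, the total reflects only the two respondent returns, which mirrors how nonrespondents do not influence the calculations in the report.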
Additionally, we asked agency officials and subject-matter experts to identify relevant studies. We then interviewed external subject-matter experts from government, academia, think tanks, and professional organizations knowledgeable about refundable tax credits in general and specifically the EITC, ACTC, and AOTC. We spoke to those with expertise on how IRS administers RTCs, how low-income taxpayers claim the credits, and how tax preparers interact with the credits. We conducted interviews to obtain views of experts on criteria commonly used to evaluate refundable tax credits and possible modifications to the credit. The experts were from across the ideological spectrum. The views from these interviews are not generalizable. Based on these interviews and our review of studies, we drew conclusions about the likely impact of modifying elements of the RTC with respect to three criteria we identified for a good tax system: efficiency, equity, and simplicity. We conducted this performance audit from July 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Error rates by credit are computed using National Research Program (NRP) data. The Child Tax Credit (CTC) is combined with the Additional Child Tax Credit (ACTC) and shown as an aggregated credit amount for the CTC/ACTC. The American Opportunity Tax Credit (AOTC) includes refundable and nonrefundable portions, where the refundable portion of the credit benefits the taxpayer regardless of the tax liability. The AOTC estimates combine refundable and nonrefundable portions. 
The nonrefundable portion of the AOTC is estimated as the proportion of total nonrefundable education credits that is from claiming the AOTC. Eligibility for claiming the different education credits can vary by adjusted gross income (AGI), filing status, and the year the return was filed. Statistics of Income (SOI) data were used to estimate these proportions of AOTC to total nonrefundable education credits. These proportions were multiplied by NRP total nonrefundable credits values for each tax return, which estimates the nonrefundable portion of AOTC for that tax return. Measurement errors for AOTC estimates shown in tables 4 through 8 reflect sampling errors from NRP data only and do not reflect sampling errors from SOI data, which was used to estimate the proportion of nonrefundable AOTC claimed from nonrefundable education credits within NRP data. The credit adjustment or error is the difference between the credit amount originally claimed by the taxpayer and the correct credit amount, as determined by the NRP audit. The net credit adjustments can be separated into audited returns that received negative and positive adjustments. Negative adjustments, or credit overclaims, occur when the taxpayer claimed the credit, but either did not qualify for the credit or the credit amount originally claimed was adjusted downward. Credit overclaim amounts represent a potential for revenue loss to the government, where taxpayers incorrectly claim a tax benefit. Similarly, positive adjustments, or credit underclaims, occur when the taxpayer either failed to claim the credit or the credit amount originally claimed was adjusted upward. Credit underclaim amounts represent a potential expense for the government, where taxpayers forego available tax benefits. Using NRP data (2009 to 2011), the annual average credit and credit adjustment amounts are shown in table 4. 
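The decomposition described above, applying an SOI-estimated AOTC share to each return's nonrefundable education credits, can be sketched as follows. This is a minimal illustration; the function name and the input values are hypothetical, not actual report data:

```python
# Sketch of the nonrefundable AOTC decomposition described above.
# The SOI-derived share and the credit amount are hypothetical values.
def nonrefundable_aotc(soi_aotc_share, nonref_education_credits):
    """Estimate the nonrefundable AOTC for one return by applying the
    SOI-estimated share of AOTC within nonrefundable education credits."""
    return soi_aotc_share * nonref_education_credits

# Example: a return with $1,800 of nonrefundable education credits and a
# hypothetical SOI-estimated AOTC share of 0.75 for its AGI/filing-status
# group yields an estimated $1,350 of nonrefundable AOTC.
estimate = nonrefundable_aotc(0.75, 1800)
```

These per-return estimates would then be summed (with sampling weights) to produce the aggregate AOTC totals the appendix describes.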
The error rates are computed as the credit adjustment amount divided by the net credit amount claimed by the taxpayers prior to the NRP audit, where the credit adjustment may represent all returns claiming, overclaiming, or underclaiming the credit. These error rates for all credit claimants are computed for 2011 and 2009 to 2011, as shown in table 5. The precision of these estimates generally increases when using 3 years instead of a single year of data. The numbers of overclaim and underclaim returns as a percent of all returns claiming the credits are shown in table 6. The overclaim error rates are computed for Schedule C and non-Schedule C returns and for returns based on the preparer of the return, as shown in tables 7 and 8. The following is a summary of the policy literature’s findings on the effect of the current design of the Earned Income Tax Credit (EITC), the Additional Child Tax Credit (ACTC), and the American Opportunity Tax Credit (AOTC) on the effectiveness, efficiency, equity, and simplicity of these credits. This description can be viewed as a baseline against which to compare specific proposals that are advanced to improve the credits. For example, a proposal to change the EITC would be evaluated, at least in part, on its effect on poverty rates judged against the poverty reduction under the current EITC structure. The EITC provides financial assistance to a relatively large proportion of its target population of low-income taxpayers. As mentioned earlier in this report, the EITC was claimed by about 29 million people in 2013 for an average amount of about $2,300. These claimants represent over 85 percent of the eligible population, a high participation rate for a government anti-poverty program. For example, the participation rate in 2011 was estimated at about 34 percent for TANF recipients and about 67 percent for SSI recipients, and the rate for SNAP was 83 percent in 2012. 
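The error-rate computation described earlier in this appendix, the credit adjustment amount divided by the net credit amount claimed before the NRP audit, can be sketched as follows (the dollar amounts are hypothetical, for illustration only):

```python
# Sketch of the NRP error-rate computation described above.
def error_rate(credit_adjustment, credit_claimed):
    """Error rate = credit adjustment amount / net credit amount
    claimed by taxpayers prior to the NRP audit."""
    return credit_adjustment / credit_claimed

# Hypothetical illustration: $150 of net overclaim adjustments against
# $1,000 of credits claimed gives a 15 percent overclaim error rate.
rate = error_rate(150, 1000)
```

In the report's tables, the adjustment amount in the numerator may cover all returns claiming, overclaiming, or underclaiming the credit, which yields the different rates shown in tables 5 through 8.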
One purpose of the EITC is to increase employment among low-income taxpayers by providing incentives for claimants to become employed or to increase the hours they work if they are already employed. The empirical evidence shows that the EITC has had a strong effect on labor force participation for certain claimants but much less, if any, effect on hours worked. The EITC has led more single mothers to enter the workforce. However, the effect on labor force participation for secondary workers (for example, a spouse of someone already in the labor force) is inconclusive with studies showing no effect or a small reduction in labor force participation. In addition, studies have shown that the EITC has little or no effect on hours worked by credit claimants already in the labor force. The EITC affects efficiency directly because it changes the behavior of workers that claim it and indirectly because it is funded through the tax system where tax rate differences can also change taxpayer behavior. However, the size of these effects, if any, has not been measured. As described in our 2012 report, a full evaluation of the EITC or any tax expenditure would require information on the total benefits of the credit as well as its costs, including efficiency costs. When examining the impact the EITC has on fairness or equity, research has tended to focus on how the credit affects poverty rates and tax burdens among different groups of recipients. The EITC has also been shown to be effective in reducing the percentage of low-income working people living in poverty. Nearly all studies that we reviewed show that the EITC has had a substantial effect on reducing poverty on average among all recipients and particularly those with children. For example, the U.S. Census Bureau found that in 2012 the refundable tax credits reduced the poverty rate by 3 percentage points for all claimants and by 6.7 percentage points for claimants with children. 
However, studies show a much smaller effect on poverty for childless workers. A Congressional Research Service analysis found that in 2012 the EITC reduced unmarried and married childless workers’ poverty rates by 0.14 percentage points and 1.39 percentage points, respectively. These differences in the effect on poverty rates are not unexpected given the much smaller credit amounts available for childless workers. The effect of the EITC on vertical equity can be judged based, at least in part, on the distribution of the credit’s benefits by income level. As figure 4 earlier in this report shows, EITC claimants have lower incomes than the population of claimants for the other refundable tax credits. As figure 4 also shows, a greater share of EITC benefits goes to lower-income taxpayers. More than half (62 percent) of the EITC benefits go to taxpayers making less than $20,000. The EITC’s effect on horizontal equity depends on whether its eligibility rules and the credit rates that apply to different types of taxpayers are viewed as appropriate. For example, the current credit has very different rates for taxpayers with and without children (for 2016, a maximum of $503 for childless workers vs. a maximum of $6,242 for families with three or more children). The result is that the EITC benefits mostly families with children and provides very little benefit to childless workers. This difference in credit amounts may reflect, in part, judgments about horizontal equity because larger families may be viewed as having greater costs than smaller families to achieve the same standard of living. However, some studies have shown that differences in EITC benefits may overstate the difference in costs between childless and other families. 
For example, one study estimated the credit’s benefits in terms of the reduction in effective tax rates and found that benefits were considerably larger for households with children compared to those without, even after family incomes were adjusted to account for family size. When the study compared families with incomes equivalent to $10,000, it found that effective tax rates range from -1.47 percent for a married couple with no children to -39.21 percent for a head-of-household return with two children, a difference of more than a third of income. Concerns have been raised that the credit may create unintended incentives that discourage people from marrying to avoid a reduction in their EITC (the “marriage penalty”). The marriage penalty occurs when married EITC recipients receive a smaller EITC as married couples than their combined EITCs as single tax filers. The EITC can create marriage penalties for low-income working couples who qualify for the EITC if, when they marry, the combined household income rises into the EITC phase-out range or beyond, reducing or completely eliminating the credit. However, while limited, the research on this issue indicates that the EITC’s effects on marriage patterns are small and ambiguous. In addition, a marriage bonus is also possible when two very low-income people marry and their earnings increase but not enough to put them into the phase-out range of the credit. The EITC is a complicated tax provision that is difficult for taxpayers to comply with and IRS to administer. As explained earlier in this report, the difficulties arise from the EITC’s complex rules and formulas. In particular, as described above, the rules that determine whether a child qualifies the taxpayer to claim the credit are the major source of taxpayer compliance burden. 
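The marriage-penalty mechanics described above can be illustrated with a stylized, phase-in/plateau/phase-out credit schedule. All parameters below are invented for illustration and do not reflect actual EITC law:

```python
# Stylized EITC-like schedule: phase in, plateau, then phase out.
# All parameters are hypothetical, chosen only to show the mechanics.
def stylized_eitc(earnings, phase_in_rate=0.4, max_credit=5000,
                  phaseout_start=20000, phaseout_rate=0.21):
    """Return a stylized credit: phases in with earnings, plateaus at
    max_credit, then phases out once earnings pass phaseout_start."""
    credit = min(earnings * phase_in_rate, max_credit)
    if earnings > phaseout_start:
        credit = max(0.0, credit - (earnings - phaseout_start) * phaseout_rate)
    return credit

# Two single filers each earning $15,000 each receive the full plateau
# credit, but as a married couple their $30,000 combined income falls in
# the phase-out range, so the joint credit is smaller: a marriage penalty.
single_total = stylized_eitc(15000) + stylized_eitc(15000)
married = stylized_eitc(30000)
penalty = single_total - married
```

Under the same schedule, two very low earners in the phase-in range could see their joint credit rise when combined, which is the marriage bonus case the paragraph also notes.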
However, the EITC’s participation rate among eligible taxpayers is relatively high compared to other antipoverty programs, and its administrative and compliance costs are likely lower than theirs. The CTC was created in 1997 as a nonrefundable tax credit for most families to help ease the financial burden that families incur when they have children. Since then, the amount of the credit per child has increased, and the current ACTC was introduced to make the CTC partially refundable for more families. The current structure of the CTC/ACTC also subsidizes the costs of rearing children through the $1,000 per child credit and subsidizes employment through the ACTC’s phase-in income range, which increases the amount of the credit as the taxpayer’s earned income increases. The CTC/ACTC provides financial assistance to a relatively large number of people in its target population of families with children. According to our analysis of IRS data, the CTC/ACTC was claimed on about 36 million returns in 2013 for an average amount claimed of $1,537. The credit supplies up to $1,000 per child in assistance, which may be a significant amount for lower-income taxpayers but becomes a decreasing percentage of income as income increases toward the phase-out threshold of $110,000 for taxpayers who are married and filing jointly. There is currently little research evaluating the impact of the CTC/ACTC on how taxpayers respond to the wage incentives. The ACTC encourages work by providing a wage subsidy of 15 cents for every dollar of earnings above $3,000 until the credit maximum of $1,000 per child is reached. Because both the ACTC and EITC subsidize earnings over the same income range, researchers find it difficult to isolate the ACTC’s effects on employment from the similarly structured but larger subsidy provided by the EITC. In the absence of evidence concerning the effectiveness of the credit, no conclusions can be drawn about its effect on efficiency. 
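The ACTC phase-in described above, 15 cents for every dollar of earnings above $3,000 up to $1,000 per qualifying child, can be expressed as a simple formula. This is a stylized sketch of that rule, not IRS computation logic:

```python
# Stylized ACTC earned-income phase-in as described above: 15 cents per
# dollar of earnings above $3,000, capped at $1,000 per qualifying child.
def actc_phase_in(earned_income, num_children, threshold=3000,
                  rate=0.15, per_child_max=1000):
    """Return the phased-in credit amount for the given earnings."""
    if earned_income <= threshold:
        return 0.0
    return min(rate * (earned_income - threshold),
               per_child_max * num_children)

# A family with two children and $15,000 of earnings:
# 0.15 * (15,000 - 3,000) = $1,800, below the $2,000 two-child cap.
credit = actc_phase_in(15000, 2)
```

Because this subsidy operates over roughly the same earnings range as the EITC phase-in, the two incentives overlap, which is why researchers find the ACTC's separate employment effect hard to isolate.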
The conversion of the CTC into the broader partially refundable CTC/ACTC may affect judgments about vertical equity by changing the income distribution of tax credit benefits from what it would be under the CTC alone. The ACTC concentrates more of the benefits of the CTC/ACTC among lower income households. Because the ACTC is refundable and the refundability threshold has been reduced to $3,000, more lower income filers with no or very low tax liability can qualify for the ACTC than qualify for the CTC. As figure 11 shows, the ACTC significantly increases the availability of the tax benefit for lower income taxpayers with children. However, according to our analysis of IRS data, the combined CTC/ACTC does not provide as great a share of benefits to lower income taxpayers as the EITC. About 22 percent of the CTC/ACTC is claimed by taxpayers with less than $20,000 in income whereas 62 percent of EITC is claimed by taxpayers in this income range. The difference may be due in part to differences in the phase-in rates and ranges. The ACTC phases in at 15 percent beginning when earnings exceed $3,000 while the EITC has no phase-in threshold and can have a phase-in rate as high as 45 percent depending on the number of children. The EITC benefits are more front-loaded for lower income taxpayers than the CTC/ACTC benefits. Views differ on the effect of the CTC/ACTC on horizontal equity. Some argue that these families should get this tax relief because the additional children reduce their ability to pay relative to families or individuals without children. Others, however, regard children as a choice that parents make about how they use their resources and horizontal equity requires that people with the same income pay similar taxes. Their view is that parents have children because they get satisfaction from this choice and that subsidies are no more warranted for this choice (on an ability to pay basis) than any other purchase the parents make. 
This disagreement highlights that, although the credit may promote a social good by providing assistance to families with children, the equity of this approach is still a matter of judgment. The CTC/ACTC shares the complexity of the EITC and other tax provisions directed toward children and families, which derives from the rules for determining whether a child qualifies for the tax benefit. Like the EITC, the CTC/ACTC has relationship, age, and residency requirements that contribute to complexity. Applying the rules can be complicated because the CTC/ACTC rules may be similar to, but not always the same as, those for the EITC. For example, the EITC requires that qualifying children be under 19 years old (or under 24 and in school), while the CTC/ACTC requires that qualifying children be under 17 years old. To further complicate matters, the CTC/ACTC adds a support test to the age, residency, and relationship requirements. Furthermore, these family-centered provisions are currently structured very differently, and the amounts of the tax benefits change with changing circumstances. The benefits can change when a parent marries, has an additional child, the child gets older, or income changes. The AOTC provides financial assistance to students from middle-income families (like its predecessor, the Hope credit) who may not benefit from other forms of traditional student aid, like Pell Grants. But the AOTC, through its refundability provisions, also expands financial assistance to students from lower income families. Under the AOTC, claimants can receive up to $2,500 per student in credits for qualifying education expenses, with up to $1,000 of the credit being refundable. The AOTC was claimed on about 10 million returns in 2013. The Protecting Americans from Tax Hikes Act of 2015 made the AOTC a permanent feature of the tax code, replacing the nonrefundable Hope credit. 
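The per-student AOTC amounts cited above can be illustrated with a rough sketch. The text states only the $2,500 maximum and the $1,000 refundable portion; the two-tier computation used below (100 percent of the first $2,000 of qualified expenses plus 25 percent of the next $2,000, with 40 percent of the credit refundable) reflects the statutory formula and is included as an assumption for illustration. Income phase-outs and eligibility rules are omitted.

```python
def aotc(qualified_expenses, refundable_share=0.40):
    """Sketch of the per-student AOTC computation, assuming the statutory
    two-tier formula: 100% of the first $2,000 of qualified expenses plus
    25% of the next $2,000, for a maximum credit of $2,500, of which up to
    40% ($1,000) is refundable. Phase-outs and eligibility are omitted."""
    credit = min(qualified_expenses, 2000) \
        + 0.25 * min(max(qualified_expenses - 2000, 0), 2000)
    credit = min(credit, 2500)
    return credit, credit * refundable_share  # (total, refundable portion)

# $4,000 or more of expenses yields the full $2,500 credit, of which
# $1,000 is refundable, matching the amounts cited in the text.
```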
The effectiveness of the AOTC in getting financial assistance to its target population depends in part on the incidence of the credit. The AOTC’s benefits may be shifted to educational institutions if colleges and universities respond to the availability of the AOTC by increasing their tuition. We identified no current research on this institutional response to the AOTC, but there is evidence that institutions did not raise tuition in response to the Hope and Lifetime Learning Credits. However, recent research indicates that colleges may react by reducing other forms of financial aid provided by the colleges, so that the credit claimants receive no net benefit from the credits. In contrast to the other education credits, the AOTC may also affect tuition if its refundability makes it more available to lower income claimants. If these students attend schools, like community colleges, that have more scope to raise tuitions because their tuition is initially relatively low, they may face increased tuition and a reduced effective value of their AOTC. In this case, if tuitions rise, the cost of college for students ineligible for the AOTC would also go up. To the extent that the AOTC reduces the after-tax cost of education, it provides a benefit that may influence decisions about college attendance. A goal of education tax benefits like the Hope Credit has been to increase college attendance, and the AOTC shares some of the cost-reducing features of this credit that could increase attendance. Research on education credits has not focused on the AOTC because, due to its relatively recent enactment, data are less available for the AOTC than for other education credits like the Hope and Lifetime Learning Credits. Studies have shown some, but not a large, impact on college attendance due to these credits and other education tax incentives. 
For example, a study found that tax-based aid increases full-time enrollment in the first 2 years of college for 18- to 19-year-olds by 7 percent and that the price sensitivity of enrollment suggests that college enrollment increases 0.3 percentage points per $100 of tax-based aid. The AOTC shares features with other education credits related to the timing of the credit that may limit its effectiveness in promoting college attendance. The AOTC may be received months after education expenses are incurred, making it less useful for families with limited resources to pay education expenses. However, the refundability of the AOTC has made it more accessible to lower income households, where it may have a greater impact on college attendance than the Hope Credit. Research indicates that students from lower income households are more sensitive to changes in the price of a college education than those from higher income households when deciding whether to attend college. If the AOTC can be shown to influence attendance decisions, it may also affect efficiency by increasing an activity with a positive externality. Education would have a positive externality if the benefit to society of the increased productivity and innovation that is due to a more educated populace is greater than the benefit to the individuals who make the college attendance decision and consider only their private benefit. When this is the case, the result may be under-investment in education from a social perspective. By lowering costs, the credit may increase the private return to investment in education, bringing it closer to the social return. The conversion of the Hope Credit into the partially refundable AOTC may affect judgments about vertical equity by changing the income distribution of tax credit benefits. The refundability of the AOTC has increased the share of the credit’s benefits received by lower income filers when compared to its predecessor, the Hope Credit. 
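The price-sensitivity estimate above implies straightforward arithmetic. In this sketch, the 0.3-percentage-point-per-$100 figure is from the cited study; the linear extrapolation and the function itself are illustrative assumptions, since the study reports only a local estimate.

```python
def enrollment_effect(aid_dollars, pp_per_100=0.3):
    """Back-of-the-envelope version of the cited estimate: enrollment
    rises about 0.3 percentage points per $100 of tax-based aid.
    Assumes (for illustration only) that the effect scales linearly."""
    return aid_dollars / 100 * pp_per_100

# Under this linear assumption, $1,000 of tax-based aid would imply
# roughly a 3-percentage-point increase in enrollment.
```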
According to our analysis of IRS data, about 20 percent of the AOTC in 2013 was claimed by filers making less than $20,000 per year. In the case of the Hope Credit in 2008 (the last year this credit was in effect), only about 6.8 percent of the credit was claimed by taxpayers earning less than $20,000 per year. As mentioned above, this shift to lower income taxpayers also has the potential to make the credit more effective and efficient. The effect on horizontal equity, as in the case of the child credits described above, depends on judgments about whether taxpayers should pay different taxes based on decisions about whether or not to attend college. The complexity of the AOTC derives largely from its relationship to other education tax preferences. The AOTC is one of a variety of education tax benefits that students or their families can claim, which include the Lifetime Learning Credit and the tuition and fees deduction. These tax preferences differ in terms of their eligibility criteria, benefit levels, and income phase-outs. The value of the tax benefit also depends on the amount of student aid taxpayers or their children receive. Evidence indicates that, due to this complexity, taxpayers may not know which education tax preference provides the most benefit until they file their taxes—and calculating the tax benefit of each provision can “place substantial demands on the knowledge and skills of millions of students and families.” In addition, as described in our 2012 report, filing for the AOTC is complex enough to raise concerns that some taxpayers choose not to claim a tax benefit like the AOTC or are not claiming the tax provision that provides the greatest benefit. 
In addition to the contact named above, Kevin Daly, Assistant Director, Susan Baker, Russell Burnett, Jehan Chase, Adrianne Cline, Nina Crocker, Sara Daleski, Catrin Jones, Diana Lee, Robert MacKay, Ed Nannenhorn, Jessica Nierenberg, Karen O’Conor, Robert Robinson, Max Sawicky, Stewart Small, and Sonya Vartivarian made major contributions to this report. | Refundable tax credits are policy tools available to encourage certain behavior, such as entering the workforce or attending college. GAO was asked to review the design and administration of three large RTCs (the EITC, AOTC, and ACTC). The ACTC is sometimes combined with its nonrefundable counterpart, the Child Tax Credit. For this report GAO described RTC claimants and how IRS administers the RTCs. GAO also assessed the extent to which IRS addresses RTC noncompliance and reviewed proposed changes to the RTCs. GAO reviewed and analyzed IRS data, forms and instructions for claiming the credits, and planning and performance documents. GAO also interviewed IRS officials, tax preparers, and other subject-matter experts. The Earned Income Tax Credit (EITC), the Additional Child Tax Credit (ACTC), and the American Opportunity Tax Credit (AOTC) provide tax benefits to millions of taxpayers—many of whom are low-income—who are working, raising children, or pursuing higher education. These credits are refundable in that, in addition to offsetting tax liability, any excess credit over the tax liability is refunded to the taxpayer. In 2013, the most recent year available, taxpayers claimed $68.1 billion of the EITC, $55.1 billion of the CTC/ACTC, and $17.8 billion of the AOTC. Eligibility rules for refundable tax credits (RTCs) contribute to compliance burden for taxpayers and administrative costs for the Internal Revenue Service (IRS). These rules are often complex because they must address complicated family relationships and residency arrangements to determine who is a qualifying child. 
Compliance with the rules is also difficult for IRS to verify due to the lack of available third party data. The relatively high overclaim error rates for these credits are a result, in part, of this complexity. The average dollar amounts overclaimed per year for 2009 to 2011, the most recent years available, are $18.1 billion for the EITC, $6.4 billion for the CTC/ACTC, and $5.0 billion for the AOTC. IRS uses audits and automated filters to detect errors before a refund is sent, and it uses education campaigns and other methods to address RTC noncompliance. IRS is working on a strategy to address EITC noncompliance, but this strategy does not include the other RTCs. Without a comprehensive compliance strategy that includes all RTCs, IRS may be limited in its ability to assess and improve resource allocations. A lack of reliable collections data also hampers IRS's ability to assess allocation decisions. IRS is also missing opportunities to use available data to identify potential noncompliance. For example, tracking the number of returns erroneously claiming the ACTC and AOTC and evaluating the usefulness of certain third party data on educational institutions could help IRS identify common errors and detect noncompliance. Proposals to change the design of RTCs, such as changing eligibility rules, will involve trade-offs in effectiveness, efficiency, equity, and simplicity. GAO recommends that IRS (1) develop a comprehensive compliance strategy that includes all RTCs, (2) use available data to identify potential sources of noncompliance, (3) ensure the reliability of collections data and use them to inform allocation decisions, and (4) assess the usefulness of third-party data to detect AOTC noncompliance. IRS agreed with three of GAO's recommendations, but raised concerns about the cost of studying collections data for post-refund enforcement activities. GAO recognizes that gathering collections data has costs. 
However, a significant amount of enforcement activity is occurring in the post-refund environment, and use of these data could better inform resource allocation decisions and improve the overall efficiency of enforcement efforts. |
FMCSA was established within DOT in January 2000 and was tasked with promoting safe commercial motor vehicle operations and preventing large truck and bus crashes, injuries, and fatalities. The commercial motor carrier industry is a vital part of the U.S. economy and, as of December 2015, FMCSA estimated that there were 551,150 active carriers and approximately 6 million commercial drivers operating in the United States. The domestic commercial motor carrier industry covers a range of businesses, including private and for-hire freight transportation, passenger carriers, and specialized transporters of hazardous materials. These carriers also range from small carriers with only one vehicle that is owned and operated by a single individual, to large corporations that own thousands of vehicles. In carrying out its mission, FMCSA is responsible for four key safety service areas. Registration Services: Motor carriers are required to register with FMCSA; have insurance; and attest that they are fit, willing, and able to follow safety standards. Vehicles must be properly registered and insured with the state of domicile and are subject to random and scheduled inspections by both state and FMCSA agents. Drivers must have a valid commercial driver’s license issued by their state of residence and must pass a physical examination every 2 years, as evidenced by a current valid medical card. In calendar year 2015, there were 57,358 active interstate new entrant carriers that registered with FMCSA. Inspection Services: Conducting roadside inspections is central to FMCSA’s mission. States and, to a lesser extent, FMCSA staff, perform roadside inspections of vehicles to check for driver and maintenance violations and then provide the data from those inspections to the agency for analysis and determinations about a carrier’s safety performance. 
FMCSA also obtains data from the reports filed by state and local law enforcement officers when investigating commercial motor vehicle accidents or regulatory violations. The agency provides grants to states that may be used to offset the costs of conducting roadside inspections and improve the quality of the crash data the states report to it. In addition, the field offices in each state, known as divisions, have investigators who conduct compliance reviews of carriers identified by state inspection and other data as unsafe or at risk of being unsafe. FMCSA and its state partners conduct about 3.4 million inspections a year. Compliance Services: FMCSA monitors and ensures compliance with regulations governing both safety and commerce. The compliance review process is performed by safety auditors and investigators who collect safety compliance data by visiting a motor carrier’s location to review safety and personnel records. When new carriers enter the commercial market, FMCSA audits them within 12 months of their beginning service. In 2015, FMCSA conducted 14,656 investigations and 30,000 new entrant safety audits, and sent about 21,000 warning letters. FMCSA uses data collected from motor carriers, federal and state agencies, and other sources to monitor motor carrier compliance with the Federal Motor Carrier Safety Regulations and Hazardous Materials Regulations. These data are also used to evaluate the safety performance of motor carriers, drivers, and vehicle fleets. The agency uses the data to characterize and evaluate the safety experience of motor carrier operations to help federal safety investigators focus their enforcement resources by identifying the highest-risk carriers, drivers, and vehicles. Enforcement Services: FMCSA is responsible for bringing legal action against companies that are not in compliance with motor carrier safety policies. In fiscal year 2015, FMCSA closed 4,766 enforcement cases. 
FMCSA’s estimated budget for fiscal year 2017 is approximately $794.2 million. The agency employs more than 1,000 staff members who are located in its Washington, D.C., headquarters, 4 regional service centers, and 52 division offices. FMCSA’s Chief Information Officer (CIO) oversees the development, implementation, and maintenance of the IT systems and infrastructure that serve as the key enabler in executing FMCSA’s mission. The CIO reports directly to the Chief Safety Officer within FMCSA’s Office of Information Technology. This office supports a highly mobile workforce by operating the agency’s field IT network of regional and state service centers, and ensuring that inspectors have the tools and mobile infrastructure necessary to perform their roadside duties. In addition, the office supports FMCSA headquarters, regional, and state service centers, which depend on the agency’s IT infrastructure, including servers, laptops, desktops, printers, and mobile devices. Currently, the Office of Information Technology is undergoing a reorganization to establish an Office of the CIO. While a revised structure has been proposed, it has not yet been approved. Of this total, FMCSA’s expected IT budget for fiscal year 2017 is $58 million, of which approximately 60 percent ($34.4 million) is to be spent on the O&M of existing systems. In fiscal year 2013, the Office of Information Technology led an effort to establish a new IT portfolio that was intended to provide FMCSA with the ability to look across the investments in these portfolios and identify the linkages of business processes and strategic improvement opportunities to enhance mission effectiveness. To do so, the office implemented a product development team to integrate activities within and across the portfolio, interacting with business and program stakeholders. 
Specifically, it established four key safety process areas—registration, inspection, compliance, and enforcement—and two operations process areas—mission support systems and infrastructure. The registration portfolio includes systems that process and review applications for operating authority. The inspection portfolio includes systems that aid inspectors in conducting roadside inspections of large trucks and buses and ensure inspection data are available and useable. The compliance portfolio includes systems that help investigators to identify and investigate carriers for safe operations and maintain high safety standards to remain in the industry. The enforcement portfolio includes systems to assist the agency in ensuring that carriers and drivers are operating in compliance with regulations. The mission support portfolio includes systems and services that crosscut multiple portfolios. The infrastructure portfolio includes those systems that provide support services, hardware, software, licenses, and tools. As of August 2016, FMCSA had identified and categorized 40 investments in its IT portfolio, as described in table 1. According to the Acting CIO, by creating the IT portfolio, the agency determined that the functionality of these investments was not redundant, but that the aging legacy systems were in need of modernization. Further, the Acting CIO stated that the agency is planning to consolidate many of the systems that are in O&M, which, as of fiscal year 2016, had a combined cost of $2.9 million. FMCSA has acknowledged the need to upgrade its aging systems to improve data processing and data quality, and reduce system maintenance costs. Accordingly, in 2013, it began a modernization effort that includes both developing new systems and retiring legacy systems for each of its four key safety process areas—registration, inspection, compliance, and enforcement. 
To modernize its registration systems, in 2013, the agency began developing the URS system to streamline and strengthen the registration process. When fully implemented, URS is intended to replace the current registration systems with a single, online federal system. Program officials stated that the Licensing and Insurance system, Operations Authority Management system, and the registration function in MCMIS are to be retired upon URS’s deployment. The Acting CIO stated that the agency has not determined when URS will be fully deployed. To modernize its inspection systems, FMCSA began planning efforts in 2014 to develop the Integrated Inspection Management System (IIMS), which is intended to provide inspectors with a single system to perform checks. As of May 2017, the agency was still in the planning stage of this effort, as it was assessing the current state of its inspection processes and data management systems, and planning to issue a report detailing the actions the agency needs to take. According to officials from the Office of Information Technology, subsequent to this report, a detailed analysis will be conducted, including development of acquisition and development plans. According to agency officials, its six operational inspection systems—Query Central, Safety and Fitness Electronic Records, SAFETYNET, Aspen, Inspection Selection System, and Commercial Driver’s License Information System Access—are intended to be retired upon deployment of IIMS. To modernize its compliance systems, FMCSA began developing Sentri 2.1. According to the Acting CIO, the agency’s three legacy compliance systems—ProVu, National Registry of Certified Medical Examiners, and Compliance Analysis and Performance Review Information—are to be retired upon deployment of Sentri 2.1. As of May 2017, however, officials from the Office of Information Technology stated that they had stopped the development of Sentri 2.1. 
To modernize its enforcement systems, FMCSA intends to migrate the functionality of its current enforcement systems into an existing mission support system. Specifically, the functionality of FMCSA’s three operational enforcement systems—CaseRite, Electronic Management Information System, and Uniform Fine Assessment—is to be migrated into its Portal system, which is a website that provides users a single sign-on to access applications. The agency did not provide a date for when this effort is expected to be completed. A federal agency’s ability to effectively and efficiently maintain and modernize its existing IT environment depends, in large part, on how well it employs certain IT management controls, including strategic planning. Strategic planning is essential for an agency to define what it seeks to accomplish, identify strategies to efficiently achieve the desired results, and effectively guide modernization efforts. Key elements of IT strategic planning include establishing a plan with well-defined goals, strategies, measures, and timelines to guide these efforts. Our prior work stressed that an IT strategic plan should define the agency’s vision and provide a road map to help align information resources with business strategies and investment decisions. Additionally, as we have previously reported, effective modernization planning is essential. Such planning includes defining the scope of the modernization effort, an implementation strategy, and a schedule, as well as establishing results-oriented goals and measures. However, FMCSA lacks complete plans to guide its systems modernization efforts. Specifically, the agency’s IT strategic plan lacks key elements. 
While the agency has an IT strategic plan that describes the technical strategy, vision, mission, and direction for managing its IT modernization programs, and defines the strategic goals and objectives to support its mission, the plan lacks timelines to guide its goals and strategies related to integrated project planning and execution, IT security, and innovative IT business solutions, among others. For example, there were no identified milestones for achieving efficient, consolidated, and reliable IT solutions for IT modernization that meet the changing business needs of users and improve safety. The Acting CIO acknowledged that the strategic plan is not complete and that a date by which a revised plan will be completed has not been established. The official further acknowledged that updating the current strategic plan has not been a priority. However, until the agency establishes a complete strategic plan, it is likely to face challenges in aligning its information resources with its business strategies and investment decisions. In addition, FMCSA has not yet developed an effective modernization plan that defines the overall scope, implementation strategy, and schedule for its efforts. According to the Acting CIO, the agency has recognized the need for such a plan and has recently awarded a contract to develop one by June 2017. If FMCSA develops an effective modernization plan and uses it to guide its efforts, it should be better positioned to successfully modernize its aging legacy systems. GAO’s IT investment management framework comprises five progressive stages of maturity that mark an agency’s level of sophistication with regard to its IT investment management capabilities. Such capabilities are essential to the governance of an agency’s IT investments. 
At the Stage 2 level of maturity, an agency lays the foundation for sound IT investment management to help it attain successful, predictable, and repeatable investment governance processes at the project level. These processes focus on the agency’s ability to select, oversee, and review IT projects by defining and developing its IT governance board(s) and documented processes for directing the governance boards’ operations. According to the framework, Stage 2 includes the following three processes: Instituting the investment board: As part of this process, an agency is to establish an investment review board composed of senior executives, including the agency’s head or a designee, the CIO or other senior executive representing the CIO’s interests, and heads of business units that are responsible for defining and implementing the department’s IT investment governance process. The agency’s IT investment process guidance should lay out the roles of investment review boards, working groups, and individuals involved in the agency’s IT investment processes. Selecting investments that meet business needs: As part of the process for selecting and reselecting investments, an agency is to establish and implement policies and procedures, approved by senior executives, for selecting investments that meet the agency’s needs. This includes selecting projects by identifying and analyzing their risks and returns before committing any significant funds to them and selecting those that will best support the agency’s mission needs. Providing investment oversight: This process includes establishing and implementing policies and procedures for overseeing IT projects by reviewing the performance of projects against expectations and taking corrective action when these expectations are not being met. FMCSA has partially addressed the three processes associated with having a sound governance structure to manage its modernization efforts. 
Table 2 provides a summary of the extent to which the agency’s IT investment management structure implemented the key processes. With regard to establishing an IT investment review board, FMCSA recently restructured its governance boards. Specifically, in January 2017, FMCSA finalized its IT governance order to have three major governance boards that are to serve as the decision-making structure for how IT investment decisions are made and escalated—the Executive Management Team, the Technical Review Board, and the Change Control Board. At the highest level, the Executive Management Team is to provide strategic direction and decision making for major IT investments. The team, which is to meet at least quarterly, is chaired by the FMCSA Deputy Administrator. Below this team, the Technical Review Board is to provide oversight for all IT investments and is chaired by the Director of the Office of Information Technology Policy, Plans, and Oversight. According to the governance order, this team is to meet monthly. Further, underneath the Technical Review Board is the Change Control Board that has responsibility for reviewing and approving system change requests associated with a new system, a major release or modification to an existing system, a change in contract funding, or a change in contract scope. This board, which also is to meet monthly, is chaired by the Enterprise Architect of the Office of Information Technology Policy, Plans, and Oversight. Figure 1 depicts the agency’s governance structure. Nevertheless, FMCSA has not yet clearly defined roles and responsibilities of all working groups and individuals involved in the agency’s IT governance process. For example, FMCSA’s governance order calls for the Office of Information Technology Policy, Plans, and Oversight to adopt specific IT performance measures, but does not define the manner in which these measures should be tracked. 
Moreover, in August 2016, the agency finalized an order that established 10 integrated functional areas of IT management and the development of an Office of the CIO. However, FMCSA has not yet finalized a new structure for the Office of the CIO or clearly defined how this office and the CIO will manage, direct, and oversee the implementation of these areas as it relates to the agency’s IT governance process. Further, FMCSA officials have not identified time frames for doing so. Without clearly defined roles and responsibilities for the agency’s working groups and individuals involved in the governance process, FMCSA has less assurance that its modernization investments will be reviewed by those with the appropriate authority and aligned with agency goals. With regard to selecting and reselecting IT investments, FMCSA’s January 2017 governance order requires participation and collaboration of the IT system owner, business owner, IT planning staff, and governance boards during the select phases for all investments. However, the agency lacks procedures for selecting new modernization investments and for reselecting investments that are already operational (which makes up the majority of the agency’s IT portfolio) for continued funding. For example, the order calls for the Executive Management Team, comprised of senior executives, to make decisions regarding the funding of the IT portfolio, among other things, and for the Technical Review Board to provide recommendations to the team on the prioritization of IT investments including the allocation of funds. However, the order does not specify the procedures for approving the movement of funds within the IT and capital planning and investment control portfolio. According to the Acting CIO, FMCSA is currently drafting procedures for selecting new investments and reselecting investments that are already operational and intends to finalize the procedures by the end of May 2017. 
Upon establishing and implementing such procedures, FMCSA’s decision makers should have a common understanding of the process and the cost, benefit, schedule, and risk criteria that will be used to reselect IT projects. With regard to IT investment oversight, the agency’s order established policies and procedures to ensure that governance bodies review investments and track corrective actions to closure. However, the policies and procedures for reviewing and tracking actions have not yet been fully implemented by the three governance bodies. For example, the boards have not met regularly to review the performance of IT investments, including those investments that are part of its modernization efforts, against expectations. In particular, in calendar year 2016, the Executive Management Team met once and the Technical Review Board met four times. The Change Control Board was not formally approved until January 2017 and, thus, has held no meetings. Also, while the Technical Review Board met four times in calendar year 2016, none of the meetings discussed the cost, schedule, performance, and risks for FMCSA’s major IT modernization investments, systems in development, or existing systems. For example, in February 2016, the IT Director presented to the board members an overview of the statutory provisions commonly referred to as the Federal Information Technology Acquisition Reform Act and their implications for FMCSA. In April 2016, the board members were provided with an overview of OMB’s regulatory guidance for the budget process. In addition, in August 2016, the Technical Review Board met to discuss the planned fiscal year 2017 budget for its IT investments and, in November 2016, the Director of the Office of Information Technology discussed with board members the status of the planning efforts for the IIMS project. The Acting CIO did not attend any of the four meetings. 
Further, neither the Executive Management Team nor the Technical Review Board discussed with its members the transition of FMCSA’s investments into the cloud environment, to include identifying any key risks. For example, in November 2016, over 70 issues regarding the migration effort were identified by the contractor and an FMCSA official, but none were discussed at the Technical Review Board or Executive Management Team board meetings. As a result, program officials stated that there were delays to the program’s transition to the cloud environment because additional time was needed to securely migrate data from multiple legacy platforms into a new central database and conduct further testing. Action items have been noted in meeting minutes, but have not been fully addressed or updated to closure. For example, in August 2016, the Capital Planning and Investment Control Coordinator, within the Office of Information Technology, provided an overview of the fiscal year 2017 budget to the Technical Review Board members. As part of this discussion, the Director of the Office of Information Technology stated that, during the next board meeting, additional details would be provided on the planned budget for fiscal year 2018. However, the meeting minutes from November 2016 did not include any evidence that this subject was discussed at the next meeting. These weaknesses were due, in part, to the agency not adhering to its IT orders and governance board charters, which establish FMCSA’s governance structure, as described above. As a result, the agency lacks adequate visibility into and oversight of IT investment decisions and activities, and cannot ensure that its investments are meeting cost and schedule expectations and that appropriate actions are taken if these expectations are not being met. According to OMB guidance, the O&M phase is often the longest phase of an investment and can consume more than 80 percent of the total life-cycle costs.
Thus, it is essential that agencies effectively manage this phase to ensure that the investments continue to meet agency needs. As such, OMB and DOT direct agencies to monitor all O&M investments through operational analyses, which should be performed annually. These analyses should include assessments of four key factors: costs, schedules, investment performance (i.e., structured assessments of performance goals), and customer and business needs (i.e., whether the investment is still meeting customer and business needs, and identifying any areas for innovation in customer satisfaction). FMCSA had not fully ensured that the selected systems—Aspen, MCMIS, Sentri 2.0, and URS—were effectively meeting the needs of the agency. Specifically, none of the program offices conducted the required operational analyses for the four systems. The program offices stated that, in lieu of conducting these analyses, they assessed the key factors of costs, schedules, investment performance, and customer and business needs as part of the capital planning and investment control process. Nonetheless, only one program office (URS) partially met the four key factors. Table 3 provides a summary of the extent to which the four selected systems implemented the key operational analysis factors. Aspen: The Aspen program office had partially implemented one of the required operational analysis factors and had not implemented the three other factors. Specifically, as part of its plans to modernize this system, FMCSA had taken steps to assess customer and business needs. For example, it reached out to users and found that 33 states use Aspen and the remaining states use their own in-house developed programs or third-party vendor-based systems.
However, while the agency collected feedback from users via phone calls and meetings, it had not yet assessed this feedback, including identifying any opportunities for innovation in the areas of customer satisfaction, strategic and business results, and financial performance. In addition, the program office did not assess current costs against life-cycle costs, perform a structured schedule assessment, or compare current performance against cost baseline and estimates developed when the investment was being planned. MCMIS: The MCMIS program office had not implemented any of the required operational analysis factors. Specifically, program officials did not assess current costs against life-cycle costs, perform structured assessments of schedule and performance goals, or identify whether the investment supports business and customer needs and is delivering the services it was designed to, including identifying whether the system overlaps with other systems. This is particularly concerning given that all seven users we interviewed stated that the system does not interact well with other systems and users have to access other systems to gather information that they cannot obtain in MCMIS. Sentri 2.0: Sentri’s program office partially implemented one of the required operational analysis factors and did not implement the three other factors for the component that has been operational since May 2010, also known as Sentri 2.0. Specifically, the program had partially implemented assessments of customer and business needs by reviewing Sentri 2.0 user needs as it develops the business and user requirements for development of Sentri 2.1. However, while all five users we interviewed stated that their feedback regarding Sentri was provided to FMCSA, they were not sure whether the feedback was being implemented. 
Moreover, the program office had not identified whether the investment supports customer processes, as designed, and is delivering the goods and services it was intended to deliver. In addition, the program did not assess current costs against life-cycle costs or perform structured schedule and performance goal assessments. URS: The URS program office partially implemented four of the required operational analysis factors for functionality of the system that was delivered in December 2015. Specifically, the program office developed a business case that outlines costs, schedules, investment performance goals, and customer and business needs. Additionally, the program office communicated with stakeholders through meetings, conferences, webinars, and call centers. For example, it has hosted over 30 webinars to better understand how the system is working for the users. Nevertheless, the program office had not yet conducted an analysis to assess current costs against life-cycle costs, performed a structured assessment of the schedule or performance goals, or ensured the functionality delivered is operating as intended and is meeting user needs. The need for conducting an analysis is particularly pressing for this program since all four system users we interviewed stated that URS is difficult to use and does not work as intended: they stated that they are unable to complete filings, carrier registration, and request changes to DOT numbers. With regard to the deficiencies we identified, the Acting CIO stated that the agency does not yet have FMCSA-specific guidance to assist programs to conduct operational analyses on an annual basis. The Acting CIO stated that FMCSA has drafted guidance, including templates, to assist programs in conducting these analyses and officials in the Office of Information Technology stated that the agency planned to have the guidance finalized by end of June 2017. 
While finalizing this guidance is a positive step to assist programs in conducting operational analyses, FMCSA does not adequately ensure its systems are effective at meeting user needs. Until FMCSA fully reviews its O&M investments as part of its annual operational analyses, the agency will lack assurance that these systems meet mission needs, and the associated spending could be wasteful. While FMCSA has recognized the need to develop an effective modernization plan and has awarded a contract to do so, it has not completed an IT strategic plan needed for modernizing its existing legacy systems. In addition, while the agency has established governance boards for overseeing IT systems, these boards do not exhibit key processes of a sound governance approach, such as ensuring corrective actions are executed and tracked to closure. Further, FMCSA does not have the processes in place for ensuring that systems currently in use are meeting agency needs or for overseeing its IT portfolio. The four systems we reviewed did not have completed operational analyses that show if a system is, among other things, effective at meeting users’ needs. Until the agency addresses shortcomings in strategic planning, IT governance, and oversight, its progress in modernizing its systems will likely be limited and the agency will be unable to ensure that the systems are working effectively. To help improve the modernization of FMCSA’s IT systems, we are recommending that the Secretary of Transportation direct the FMCSA Administrator to take the following five actions: Update FMCSA’s IT strategic plan to include well-defined goals, strategies, measures, and timelines for modernizing its systems. Ensure that the IT investment process guidance lays out the roles and responsibilities of all working groups and individuals involved in the agency’s governance process. 
Finalize the restructure of the Office of Information Technology, including fully defining the roles and responsibilities of the CIO. Ensure that appropriate governance bodies review all IT investments and track corrective actions to closure. Ensure that required operational analyses are performed for Aspen, MCMIS, Sentri 2.0, and URS on an annual basis. We provided a draft of this report to the Department of Transportation for review and comment. In its written comments, reproduced in appendix II, the department concurred with our five recommendations. The department also described actions that FMCSA has completed or is finalizing to improve its IT strategic planning and investment governance processes. These actions include updating the FMCSA IT strategic plan and finalizing investment review board charters to better define all stakeholders’ roles and responsibilities. Effective implementation of these actions should help FMCSA improve the modernization of its IT systems. In addition to the written comments, the department provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Administrator of FMCSA, and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions on information discussed in this report, please contact me at (202) 512-4456 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The Fixing America’s Surface Transportation Act included a provision for us to conduct a comprehensive analysis of the information technology (IT) and data collection management systems of the Federal Motor Carrier Safety Administration (FMCSA) by June 4, 2017.
Our objectives were to (1) assess the extent to which the agency has plans to modernize its existing systems, (2) assess the extent to which FMCSA has implemented an IT governance structure, and (3) determine the extent to which FMCSA has ensured selected IT systems are effective. To address the first objective, we obtained and evaluated FMCSA IT systems modernization documentation that discuss future changes to ensure user needs are met, including its IT strategic plan for fiscal years 2014 to 2016 and systems modernization plans. We analyzed whether these plans complied with best practices that we have previously identified. These practices call for developing a strategic plan that includes defining the agency’s vision and providing a road map to help align information resources with business strategies and investment decisions. We also interviewed agency officials including those from the Office of Information Technology; Enforcement and Compliance, Information Security, and Privacy divisions to discuss the agency’s plans to modernize existing systems, including any actions the agency is taking to identify redundancies among the systems and explore the feasibility of consolidating data collection and processing systems. To corroborate this information, we reviewed the FMCSA’s budgetary data (i.e., its fiscal year 2016 IT portfolio summary) submitted to the Office of Management and Budget (OMB) that identifies all of the agency’s IT investments to identify whether it included any potentially redundant systems. Specifically, we reviewed the name and narrative description of each investment’s purpose to identify any similarities among related investments and discussed any potential redundancies with the Acting Chief Information Officer (CIO). 
For the second objective, we compared agency documentation, including executive board meeting minutes and briefings from fiscal years 2015 and 2016, FMCSA IT governance orders, and charters, against critical processes associated with Stage 2 of GAO’s IT investment management framework. In particular, Stage 2 of the framework includes the following key processes for effective governance: instituting the investment board; selecting and reselecting investments that meet business needs; and providing investment oversight. We also interviewed agency officials to better understand FMCSA’s governance structure, which included identifying whether the agency is taking appropriate steps with respect to IT governance. To address the third objective, we selected four existing IT systems to review. In selecting these investments, we analyzed FMCSA’s fiscal year 2016 IT portfolio summary submitted to OMB which included the agency’s existing IT, data collection, processing systems, data correction procedures, and data management systems and programs. To assess the reliability of the OMB budget data, we reviewed related documentation, such as OMB guidance on budget preparation and capital planning. In addition, we corroborated with FMCSA that the data was accurate and reflected the data it had reported to OMB. We determined that the budget data was reliable for our purposes of selecting these systems. Specifically, we used the following criteria to select four systems to review: At least one investment must have been identified as a major IT investment, as defined by OMB. FMCSA had only identified one major IT investment in fiscal year 2016. The remaining non-major systems must have had planned operations and maintenance (O&M) spending in fiscal year 2017. The system is mission critical. The program must not have been included in a recent GAO or inspector general review that examined the program’s effectiveness. Using the above criteria, we selected the following four systems: 1. 
Aspen: A non-major desktop application that collects commercial driver/vehicle inspection details, performs some immediate data analysis, creates and prints a vehicle inspection report, and transfers inspection data into the FMCSA information systems. 2. Motor Carrier Management Information System (MCMIS): A non- major information system that captures FMCSA inspection, crash, compliance review, safety audit, and registration data. It is FMCSA’s authoritative source for the safety performance records for all commercial motor carriers and hazardous materials shippers. 3. Safety Enforcement Tracking and Investigation System (Sentri): A non-major application used to facilitate safety audits and interventions by FMCSA and state users. It is intended to combine roadside inspection, investigative, and enforcement functions into a single interface. 4. Unified Registration System (URS): A major system that is intended to replace the existing registration systems with a single comprehensive, online system and provide FMCSA-regulated entities a more efficient means of submission and management of data pertaining to registration applications. We then assessed the agency’s efforts to determine the effectiveness of these systems in meeting the needs of the agency by reviewing documentation from the four selected systems and compared it to key factors identified in OMB’s guidance on conducting annual operational analysis, which are a key method for examining the performance of investments with O&M funding. More specifically, we assessed whether FMCSA had conducted an operational analysis on each of the systems. For those systems that did not have an analysis performed, we reviewed FMCSA’s IT documentation on the performance of these systems (i.e., business cases and performance management reviews) to determine whether key factors of an operational analysis were conducted. 
For example, we assessed whether the agency assessed cost, schedule, and investment performance, including its interaction with other systems; and customer and business needs, including adaptability of the system in order to make necessary future changes to ensure user needs are met and areas for innovation in the areas of customer satisfaction. We also conducted interviews with 22 selected system users to obtain insight into whether the identified systems are meeting their needs and any challenges users face in using these systems, including whether the systems are adaptable to future needs and methods to improve user interface. We selected these users based on recommendations from FMCSA program officials and industry stakeholder representatives. Based on these recommendations, we then selected users based on the type of users, including FMCSA users, state agencies, law enforcement officials, and private sector individuals involved in the motor carrier industry. While these user interviews are illustrative, they cannot be used to make generalizable statements about users’ experience as a whole. Based on our work to determine selected programs’ effectiveness, we made recommendations regarding deficiencies identified in the report. We did not make recommendations regarding methods to improve user interfaces since two of the selected systems (Aspen and MCMIS) are planned to be modernized and the remaining two systems (Sentri and URS) have components still under development, as discussed in our report. We conducted this performance audit from April 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact name above, the following staff also made key contributions to this report: Eric Winter (Assistant Director), Niti Tandon (Analyst in Charge), Rebecca Eyler, Lisa Maine, and Tyler Mountjoy. | FMCSA, established within the Department of Transportation in January 2000, is charged with reducing crashes involving commercial motor carriers (i.e., large trucks and buses) and saving lives. IT systems and infrastructure serve as a key enabler for FMCSA to achieve its mission. The agency reported spending about $46 million for its IT investments in fiscal year 2016. In December 2015, the Fixing America's Surface Transportation Act was enacted and required GAO to review the agency's IT, data collection, and management systems. GAO's objectives were to (1) assess the extent to which the agency has plans to modernize its existing systems, (2) assess the extent to which FMCSA has implemented an IT governance structure, and (3) determine the extent to which FMCSA has ensured selected IT systems are effective. To do so, GAO analyzed FMCSA's strategic plan and modernization plans; compared governance documentation to best practices; selected four investments based on operations and maintenance spending for fiscal year 2016, among other factors, and compared assessments for the investments against OMB criteria; and interviewed officials. The Federal Motor Carrier Safety Administration (FMCSA) initiated a modernization effort in 2011 and developed an information technology (IT) strategic plan that describes the technical strategy, vision, mission, direction, and goals and objectives to support the agency's mission; however, the plan lacks timelines to guide FMCSA's goals and strategies. In addition, the agency has not completed a modernization plan for its existing IT systems that includes scope, an implementation strategy, schedule, results-oriented goals, and measures, although it has recently awarded a contract to develop such a plan. 
The Acting Chief Information Officer (CIO) said that updating FMCSA's IT strategic plan had not been a priority for the agency. However, without a complete IT strategic plan, FMCSA will be less likely to move toward its ultimate goal of modernizing its aging legacy systems. FMCSA has begun to address leading practices of IT governance, but its investment governance framework does not adequately establish an investment board, select and reselect investments, and provide investment oversight. Specifically, regarding the practice of establishing an IT investment review board, FMCSA has not yet clearly defined roles and responsibilities for key working groups and individuals, including the Office of the CIO. Regarding selecting and reselecting IT investments, FMCSA requires participation and collaboration during the select phases for all IT investments; however, it lacks procedures for selecting new investments and reselecting investments that are already operational for continued funding. According to the Acting CIO, the agency is currently drafting these procedures and intends to finalize them by the end of May 2017. Regarding the practice of IT investment oversight, the agency has policies and procedures to ensure that corrective actions and related efforts are executed and tracked, but they have not yet been fully implemented by the three boards. These weaknesses are due to the agency not adhering to its IT orders that establish its governance structure. As a result, FMCSA lacks adequate visibility into and oversight of IT investment decisions and activities, which could ultimately hinder its modernization efforts. FMCSA had not fully ensured that the four systems GAO selected to review are effectively meeting the needs of the agency because none of the program offices completed operational analyses as required by the Office of Management and Budget (OMB). 
However, as part of its capital planning and investment control process, FMCSA assessed the four key factors of an operational analysis—costs, schedules, investment performance, and customer and business needs. One of the selected programs had partially implemented all four of these factors; two programs had partially implemented one factor, and one program had not addressed any of these factors. This was due to FMCSA not having guidance for conducting operational analyses for investments in operations and maintenance. Until FMCSA fully reviews its operational investments, the agency will lack assurance that these systems meet mission needs. GAO is making five recommendations to FMCSA to improve its IT strategic planning, oversight, and operational analyses. The Department of Transportation concurred with all of the recommendations. |
Consistent with the premise that physicians play a central role in the generation of most health care expenditures, some health care purchasers employ physician profiling to promote efficiency. We selected 10 health care purchasers that profiled physicians in their networks—that is, compared physicians’ performance to an efficiency standard to identify those who practiced inefficiently. To measure efficiency, the purchasers we spoke with generally compared actual spending for physicians’ patients to the expected spending for those same patients, given their clinical and demographic characteristics. Most purchasers said they also evaluated physicians on quality. The purchasers linked their efficiency profiling results and other measures to a range of physician-focused strategies to encourage the efficient provision of care. Some of the purchasers said their profiling efforts produced savings. The 10 health care purchasers we examined used two basic profiling approaches to identify physicians whose medical practices were inefficient. One approach focused on the costs associated with treating a specific episode of illness—such as a stroke or heart attack. The other approach focused on costs, within a specific period, associated with the patients in a physician’s practice. Both approaches used information from medical claims data to measure resource use and account for differences in patients’ health status. In addition, both approaches assessed physicians (or physician groups) based on the costs associated with services that they may not have provided directly, such as costs associated with a hospitalization or services provided by a different physician. Although the methods used by purchasers to predict patient spending varied, all used patient demographics and diagnoses. The methods they used generally computed efficiency measures as the ratio of actual to expected spending for patients of similar health status. 
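The actual-to-expected spending ratio described above can be sketched in a few lines. This is a minimal illustration with invented physician IDs, risk groups, and spending figures; real purchasers used far richer case-mix adjustment than the group-mean expected spending assumed here.

```python
from collections import defaultdict

# Hypothetical claims: (physician_id, risk_group, actual_spending).
# Risk groups stand in for the clinical and demographic adjustment
# the purchasers described.
claims = [
    ("dr_a", "low", 900), ("dr_a", "low", 1100), ("dr_a", "high", 5200),
    ("dr_b", "low", 2000), ("dr_b", "high", 4800), ("dr_b", "high", 9000),
]

# Expected spending per patient = mean spending within the risk group.
group_totals = defaultdict(lambda: [0.0, 0])
for _, group, cost in claims:
    group_totals[group][0] += cost
    group_totals[group][1] += 1
expected = {g: total / n for g, (total, n) in group_totals.items()}

# Efficiency ratio per physician: actual / expected. A ratio above 1.0
# means the physician's patients cost more than patients of similar
# health status treated by peers.
actual_by_doc = defaultdict(float)
expected_by_doc = defaultdict(float)
for doc, group, cost in claims:
    actual_by_doc[doc] += cost
    expected_by_doc[doc] += expected[group]

ratios = {doc: actual_by_doc[doc] / expected_by_doc[doc]
          for doc in actual_by_doc}
for doc in sorted(ratios):
    print(doc, round(ratios[doc], 2))
```

With these made-up figures, dr_a's patients cost 20 percent less than expected (ratio 0.8) while dr_b's cost about 13 percent more (ratio 1.13), illustrating how the measure separates physicians treating comparable case mixes.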
In addition, all of the purchasers we interviewed profiled specialists and all but one also profiled primary care physicians. Several purchasers said they would only profile physicians who treated an adequate number of cases, since such analyses typically require a minimum sample size to be valid. The health care purchasers we examined directly tied the results of their profiling methods to incentives that encourage physicians in their networks to practice efficiently. The incentives varied widely in design, application, and severity of consequences. Purchasers used incentives that included educating physicians to encourage more efficient care, designating in their physician directories those physicians who met efficiency and quality standards, dividing physicians into tiers based on efficiency and giving enrollees financial incentives to see physicians in particular tiers, providing bonuses or imposing penalties based on efficiency and quality, and excluding inefficient physicians from the network. Evidence from our interviews with the health care purchasers suggests that physician profiling programs may have the potential to generate savings for health care purchasers. Three of the 10 purchasers reported that the profiling programs produced savings and provided us with estimates of savings attributable to their physician-focused efficiency efforts. For example, 1 of those purchasers reported that growth in spending fell from 12 percent to about 1 percent in the first year after it restructured its network as part of its efficiency program, and an actuarial firm hired by the purchaser estimated that about three quarters of the reduction in expenditure growth was most likely a result of the efficiency program. Three other purchasers suggested their programs might have achieved savings but did not provide savings estimates, while four said they had not attempted to measure savings at the time of our interviews.
Having considered the efforts of other health care purchasers in profiling physicians for efficiency, we conducted our own profiling analysis of physician practices in Medicare and found individual physicians who were likely to practice medicine inefficiently in each of 12 metropolitan areas studied. We focused our analysis on generalists—physicians who described their specialty as general practice, internal medicine, or family practice. We did not include specialists in our analysis. We selected areas that were diverse geographically and in terms of Medicare spending per beneficiary. Under our methodology, we computed the percentage of overly expensive patients in each physician’s Medicare practice. To identify overly expensive patients, we grouped the Medicare beneficiaries in the 12 locations according to their health status, using diagnosis and demographic information. Patients whose total Medicare expenditures— for services provided by all health providers, not just physicians—far exceeded those of other patients in their same health status grouping were classified as overly expensive. Once these patients were identified and linked to the physicians who treated them, we were able to determine which physicians treated a disproportionate share of these patients compared with their generalist peers in the same location. We classified these physicians as outliers—that is, physicians whose proportions of overly expensive patients would occur by chance less than 1 time in 100. We concluded that these outlier physicians were likely to be practicing medicine inefficiently. Based on 2003 Medicare claims data, our analysis found outlier generalist physicians in all 12 metropolitan areas we studied. In two of the areas, outlier generalists accounted for more than 10 percent of the area’s generalist physician population. In the remaining areas, the proportion of outlier generalists ranged from 2 percent to about 6 percent of the area’s generalist population. 
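One way to read the 1-in-100 outlier threshold described above is as an upper-tail binomial test of each physician's share of overly expensive patients against the area-wide rate. The sketch below uses that reading with invented panel counts and an assumed 6 percent baseline rate; the actual statistical procedure used in the analysis may differ.

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p) — the chance of seeing k or
    more overly expensive patients in a panel of n by luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical panels: physician -> (patients treated, patients whose
# total spending far exceeded peers in the same health status group).
panels = {"dr_a": (200, 12), "dr_b": (150, 30), "dr_c": (80, 5)}

# Assumed area-wide share of overly expensive patients; in the analysis
# this would come from all generalists' patients in the same location.
baseline = 0.06

# Flag outliers: physicians whose proportion of overly expensive
# patients would occur by chance less than 1 time in 100.
outliers = [doc for doc, (n, k) in panels.items()
            if binom_tail(k, n, baseline) < 0.01]
print(outliers)
```

Here dr_b (30 overly expensive patients where about 9 would be expected) is flagged, while dr_a and dr_c, whose counts sit near the expected rate, are not.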
Medicare’s data-rich environment is conducive to identifying physicians who are likely to practice medicine inefficiently. Fundamental to this effort is the ability to make statistical comparisons that enable health care purchasers to identify physicians practicing outside of established standards. CMS has the tools to make statistically valid comparisons, including comprehensive medical claims information, sufficient numbers of physicians in most areas to construct adequate sample sizes, and methods to adjust for differences in patient health status. Among the resources available to CMS are the following: Comprehensive source of medical claims information. CMS maintains a centralized repository, or database, of all Medicare claims that provides a comprehensive source of information on patients’ Medicare-covered medical encounters. Using claims from the central database, each of which includes the beneficiary’s unique identification number, CMS can identify and link patients to the various types of services they received and to the physicians who treated them. Data samples large enough to ensure meaningful comparisons across physicians. The feasibility of using efficiency measures to compare physicians’ performance depends, in part, on two factors: the availability of enough data on each physician to compute an efficiency measure and numbers of physicians large enough to provide meaningful comparisons. In 2005, Medicare’s 33.6 million fee-for-service enrollees were served by about 618,800 physicians. These figures suggest that CMS has enough clinical and expenditure data to compute efficiency measures for most physicians billing Medicare. Methods to account for differences in patient health status. Because sicker patients are expected to use more health care resources than healthier patients, the health status of patients must be taken into account to make meaningful comparisons among physicians. Medicare has significant experience with risk adjustment. 
Specifically, CMS has used increasingly sophisticated risk adjustment methodologies over the past decade to set payment rates for beneficiaries enrolled in managed care plans. To conduct profiling analyses, CMS would likely make methodological decisions similar to those made by the health care purchasers we interviewed. For example, the health care purchasers we spoke with made choices about whether to profile individual physicians or group practices; which risk adjustment tool was best suited for a purchaser’s physician and enrollee population; whether to measure costs associated with episodes of care or the costs, within a specific time period, associated with the patients in a physician’s practice; and what criteria to use to identify inefficient practice patterns. Our experience in examining what health care purchasers other than Medicare are doing to improve physician efficiency and in analyzing Medicare claims has enabled us to gain some insights into the potential of physician profiling to improve Medicare program efficiency. A primary virtue of profiling is that, coupled with incentives to encourage efficiency, it can create a system that operates at the individual physician level. In this way, profiling can address a principal criticism of the SGR system, which only operates at the aggregate physician level. Although savings from physician profiling alone would clearly not be sufficient to correct Medicare’s long-term fiscal imbalance, it could be an important part of a package of reforms aimed at future program sustainability. Mr. Chairman, this concludes my prepared remarks. I will be pleased to answer any questions you or the subcommittee members may have. For future contacts regarding this testimony, please contact A. Bruce Steinwald at (202) 512-7101 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Other individuals who made key contributions include James Cosgrove and Phyllis Thorburn, Assistant Directors; Todd Anderson; Alex Dworkowitz; Hannah Fein; Gregory Giusto; Richard Lipinski; and Eric Wedum. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Medicare's current system of spending targets used to moderate spending growth for physician services and annually update physician fees is problematic. This spending target system--called the sustainable growth rate (SGR) system--adjusts physician fees based on the extent to which actual spending aligns with specified targets. In recent years, because spending has exceeded the targets, the system has called for fee cuts. Since 2003, the cuts have been averted through administrative or legislative action, thus postponing the budgetary consequences of excess spending. Under these circumstances, policymakers are seeking reforms that can help moderate spending growth while ensuring that beneficiaries have appropriate access to care. For today's hearing, the Subcommittee on Health, House Committee on Energy and Commerce, which is exploring options for improving how Medicare pays physicians, asked GAO to share the preliminary results of its ongoing study related to this topic. GAO's statement addresses (1) approaches taken by other health care purchasers to address physicians' inefficient practice patterns, (2) GAO's efforts to estimate the prevalence of inefficient physicians in Medicare, and (3) the methodological tools available to identify inefficient practice patterns programwide. 
GAO ensured the reliability of the claims data used in this report by performing appropriate electronic data checks and by interviewing agency officials who were knowledgeable about the data. Consistent with the premise that physicians play a central role in the generation of health care expenditures, some health care purchasers examine the practice patterns of physicians in their network to promote efficiency. GAO selected 10 health care purchasers for review because they assess physicians' performance against an efficiency standard. To measure efficiency, the purchasers we spoke with generally compared actual spending for physicians' patients to the expected spending for those same patients, given their clinical and demographic characteristics. Most purchasers said they also evaluated physicians on quality. The purchasers linked their efficiency analysis results and other measures to a range of strategies--from steering patients toward the most efficient providers to excluding a physician from the purchaser's provider network because of poor performance. Some of the purchasers said these efforts produced savings. Having considered the efforts of other health care purchasers in evaluating physicians for efficiency, GAO conducted its own analysis of physician practices in Medicare. GAO used the term efficiency to mean providing and ordering a level of services that is sufficient to meet patients' health care needs but not excessive, given a patient's health status. GAO focused the analysis on generalists--physicians who described their specialty as general practice, internal medicine, or family practice--and selected metropolitan areas that were diverse geographically and in terms of Medicare spending per beneficiary. GAO found that individual physicians who were likely to practice medicine inefficiently were present in each of 12 metropolitan areas studied. 
The Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare, also has the tools to identify physicians who are likely to practice medicine inefficiently. Specifically, CMS has at its disposal comprehensive medical claims information, sufficient numbers of physicians in most areas to construct adequate sample sizes, and methods to adjust for differences in beneficiary health status. A primary virtue of examining physician practices for efficiency is that the information can be coupled with incentives that operate at the individual physician level, in contrast with the SGR system, which operates at the aggregate physician level. Efforts to improve physician efficiency would not, by themselves, be sufficient to correct Medicare's long-term fiscal imbalance, but such efforts could be an important part of a package of reforms aimed at future program sustainability. |
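The profiling approach the testimony describes, comparing a physician's actual spending to the risk-adjusted spending expected for the same patients, can be illustrated with a small sketch. This is not GAO's or CMS's actual methodology; the base rate, panels, spending figures, and risk weights below are all hypothetical.

```python
# Illustrative sketch of an observed-to-expected spending ratio per
# physician, where expected spending is risk-adjusted by each
# patient's health-status weight. All figures are hypothetical.

def efficiency_ratio(patients):
    """Ratio of actual spending to risk-adjusted expected spending.

    Each patient is (actual_spending, risk_weight); expected spending
    is a base rate scaled by the patient's risk weight, so sicker
    patients (higher weights) are expected to cost more.
    """
    base_rate = 5000  # hypothetical average annual spending per beneficiary
    actual = sum(spend for spend, _ in patients)
    expected = sum(base_rate * weight for _, weight in patients)
    return actual / expected

# Two hypothetical physician panels with identical case mix.
panel_a = [(6000, 1.0), (12000, 2.0), (4500, 0.8)]
panel_b = [(9000, 1.0), (20000, 2.0), (9000, 0.8)]

print(round(efficiency_ratio(panel_a), 2))  # close to expected spending
print(round(efficiency_ratio(panel_b), 2))  # well above expected: flagged
```

A purchaser would then apply a criterion (for example, a ratio well above 1.0) to flag likely inefficient practice patterns, as the purchasers GAO interviewed did.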
As part of a multilayered defense strategy, MTSA required vessels and port facilities to have security plans in place by July 1, 2004, including provisions establishing and controlling access to secure areas of vessels and ports. Given that ports are not only centers for passenger traffic and import and export of cargo, but also sites for oil refineries, power plants, factories, and other facilities important to the nation’s economy, securing sensitive sites of ports and vessels against access from unauthorized persons is critical. But because ports are often large and diverse places, controlling access can be difficult. To facilitate access control, MTSA required the DHS Secretary to issue a biometric identification card to individuals who required unescorted access to secure areas of port facilities or to vessels. These secure areas are to be defined by port facilities and vessels in designated security plans they were to submit to the United States Coast Guard (USCG) in July 2004. About 1 year before the passage of MTSA in 2002, work on a biometric identification card began at the Department of Transportation (DOT), partly in response to provisions in the Aviation and Transportation Security Act and the USA PATRIOT Act that relate to access control in transportation sectors. TSA—then a part of DOT—began to develop a transportation worker identification credential (TWIC) as an identity authentication tool that would ensure individuals with such an identification card had undergone an assessment verifying that they do not pose a terrorism security risk. The credential was designed by TSA to be a universally recognized identification card accepted across all modes of the national transportation system, including airports, seaports, and railroad terminals, for transportation workers requiring unescorted physical access to secure areas in this system. The credential is also to be used to help secure access to computers, networks, and applications. 
As shown in figure 1, ports or facilities could use an identification credential that stored a biometric, such as a fingerprint, to verify a worker’s identity and, through a comparison with data in a local facility database, determine the worker’s authority to enter a secure area. During early planning stages in 2003 and while still a part of DOT, TSA decided that the most feasible approach to issue a worker identification card would be a cost-sharing partnership between the federal government and local entities, with the federal government providing the biometric card and a database to confirm a worker’s identity and local entities providing the equipment to read the identity credential and to control access to a port’s secure areas. In 2003, TSA projected that it would test a prototype of such a card system within the year and issue the first of the cards in August 2004. In March 2003, as part of a governmentwide reorganization, TSA became a part of DHS and was charged with implementing MTSA’s requirement for a maritime worker identification card. TSA decided to use the prototype card system to issue the maritime identification card required under MTSA. At that time, TSA was preparing to test a prototype card system; later, DHS policy officials directed the agency to explore additional options for issuing the identification card required by MTSA. As a result, in addition to testing its prototype card system, TSA is exploring the cost-effectiveness of two other program alternatives: (1) a federal approach: a program wholly designed, financed, and managed by the federal government and (2) a decentralized approach: a program requiring ports and port facilities to design, finance, and manage programs to issue identification cards. According to TSA documents, each approach is to meet federally established standards for technical performance and interoperability across different transportation modes (such as air, surface, or rail). 
Appropriations committee conference reports, for fiscal years 2003 and 2004, directed up to $85 million of appropriated funds for the development and testing of a maritime worker identification card system prototype. With respect to fiscal year 2005 appropriations, $15 million was directed for the card program. The fiscal year 2005 funding was decreased from the $65 million as proposed by the House and the $53 million as proposed by the Senate because of delays in prototyping and evaluating the card system, according to the conference committee report. Several forms of guidance and established best practices apply to the acquisition and management of a major information technology system such as the maritime worker identification card program. For major information technology investments, DHS provided capital planning and investment control guidance as early as May 2003 that established four levels of investments, the top three of which are subject to review by department-level boards, including the Investment Review Board (IRB) and the Enterprise Architecture Board. The guidance also laid out a process for selecting, controlling, and managing investments. For example, DHS guidance suggests that as part of the control process, the agency should consider alternative means of achieving program objectives, such as different methods of providing services and different degrees of federal involvement. The guidance recommends that an alternatives analysis—a comparison of various approaches that demonstrates one approach is more cost-effective than others—should be conducted and a preferred alternative selected on the basis of that analysis. For projects like the maritime worker identification card program, whose costs and benefits extend 3 or more years, OMB also instructs federal agencies, including TSA, to complete an alternative analysis as well as a cost-benefit analysis. 
This analysis is to include intangible and tangible benefits and costs and willingness to pay for those benefits. In addition to DHS and OMB guidance, established industry best practices identify project management and planning best practices for major information technology system acquisition, including the development of a comprehensive plan to guide the project as detailed later in this report. Three main factors, all of which resulted in delays for testing the prototype card system, caused the agency to miss its initial August 2004 target date for issuing maritime worker identification cards. First, program officials said that although they received permission from TSA and DHS information technology officials to test a card system prototype, TSA officials had difficulty obtaining a response from DHS policy officials, contributing to the schedule slippage. Program officials said that although DHS officials reviewed the proposed card system during late 2003, senior officials provided no formal direction to program staff. Senior DHS officials said that while they were consistently briefed throughout the development of the worker identification card system, they did not provide formal direction regarding the prototype test because other important statutory and security requirements required their attention. For example, the creation and consolidation of DHS and the planning and execution of measures to close security gaps in the international aviation arena led to competition for executive-level attention and agency resources. DHS policy officials subsequently approved the test of a card system prototype. Second, while providing this approval, DHS officials also directed TSA, as part of the prototype test, to conduct a cost-benefit analysis and to evaluate the feasibility of other program alternatives for providing a card. 
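An alternatives analysis of the kind OMB regulations and DHS guidance call for, comparing each program alternative's costs and benefits and ranking them by net benefit, can be sketched in minimal form. This is only an illustration: the three alternative names come from the text, but every cost and benefit figure below is hypothetical.

```python
# Minimal sketch of an alternatives analysis: rank program
# alternatives by net benefit (benefit minus cost). The alternative
# names are from the text; all dollar figures are hypothetical,
# in millions of dollars.

alternatives = {
    # name: (estimated_cost, estimated_benefit)
    "federal":       (120.0, 200.0),
    "decentralized": (90.0, 140.0),
    "TWIC":          (100.0, 190.0),
}

def rank_by_net_benefit(options):
    """Return (name, net_benefit) pairs, most cost-effective first."""
    scored = {name: benefit - cost for name, (cost, benefit) in options.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, net in rank_by_net_benefit(alternatives):
    print(f"{name}: net benefit {net:+.1f}M")
```

A real analysis would also quantify intangible benefits and willingness to pay, as the OMB guidance cited above requires; the ranking step itself is this simple.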
TSA had completed these analyses earlier in the project, but DHS officials said they did not provide sufficiently detailed information on the costs and benefits of the various program alternatives. TSA officials said that because of the urgency to establish an identification card program after the terrorist attacks of September 11, 2001, the earlier cost-benefit and alternatives analyses were not completely documented as typically required by OMB regulations and DHS guidance. Working with DHS and OMB officials to identify additional information needed for a cost-benefit analysis and alternatives analysis required additional time, further delaying the prototype test. Third, TSA officials said that before testing the card system prototype, in response to direction from congressional committees, TSA conducted additional tests of various card technologies. Officials assessed the capabilities of various card technologies, such as their reliability, to determine which technology was most appropriate for controlling access in seaports. This technology assessment required 7 months to complete, more time than anticipated, delaying the prototype test. This analysis is typical of good program management and planning and, while it may have delayed the original schedule, the purpose of such assessments is to prevent delays in the future. DHS has not determined when it may begin issuing cards under any of the three proposed program alternatives—the federal, decentralized, or TWIC programs. Because of the delays in the program, some port facilities have made temporary security improvements while waiting for TSA’s maritime worker identification card system. Others, recognizing an immediate need to enhance access control systems, are proceeding with plans for local or regional identification cards that may require additional investment in order to make them compatible with TSA’s system. 
For example, the state of Georgia is implementing a state-based maritime worker identification card, and ports along the eastern seaboard are pursuing plans for a regional identification card. TSA officials indicated that in the near future, as they move forward with developing and operating a maritime worker identification card program, they face a number of challenges, including resolving issues with stakeholders, such as how to share costs of the program; determining the fee for the maritime worker identification card; and obtaining funding for the next phase of the program. Further, in the coming months, regardless of which approach DHS chooses—the federal, decentralized, or TWIC approach—TSA will also face challenges completing key program policies, regulatory processes, and other work as indicated in table 1. While TSA officials acknowledged the importance of completing key program policies, for example, establishing the eligibility requirements a worker must meet before receiving a card and processes for adjudicating appeals and requests for waivers from workers denied a card, officials also said that this work had not yet been completed. A senior TSA official and DHS officials said they plan to base these policies and regulations for the maritime worker identification card on those TSA is currently completing for the hazardous materials endorsement for commercial truck drivers. According to a senior TSA official who was in charge of the card program, TSA placed a higher priority on completing regulations for the hazardous materials endorsement than completing those for the maritime worker identification card. TSA has other work to complete in addition to these policies and regulations. TSA officials said OMB recently directed them and DHS officials to develop the TWIC program card in a way that allows its processes and procedures to also be used for other DHS credentialing programs. 
To develop such a system, DHS expects TSA to standardize, to some degree, eligibility requirements for the maritime worker identification card with those for surface and aviation workers, a task that will be challenging, according to officials. In the near future, TSA will need to produce other work, for instance, it has initiated but not yet finalized cost estimates for the card program and a cost-benefit analysis, which is a necessary part of a regulatory impact analysis required by OMB regulations. Our analysis, however, indicates that TSA faces another significant challenge besides the ones it has identified. This challenge is that TSA is attempting to proceed with the program without following certain industry-established best practices for project planning and management. Two key components of these practices are missing. The first is a comprehensive plan that identifies work to be completed, milestones for completing this work, and project budgets for the project’s remaining life. The second is detailed plans for specific and important components of the project—particularly mitigating risks and assessing alternative approaches—that would support the overall project plan. Failure to develop these plans holds significant potential to adversely affect the card program, putting it at higher risk of cost overruns, missed deadlines, and underperformance. Over the years, we have analyzed information technology systems across a broad range of federal programs and agencies, and these analyses have repeatedly shown that without adequate planning, the risks increase for cost overruns, schedule slippages, and systems that are not effective or usable. According to industry best practices for managing information technology projects like the maritime worker identification card, program managers should develop a comprehensive project plan that governs and defines all aspects of the project, tying them together in a logical manner. 
A documented comprehensive project plan is necessary to achieve the mutual understanding, commitment, and performance of individuals, groups, and organizations that must execute or support the plans. A comprehensive project plan identifies work to be completed, milestones for completing this work, and project budgets, and identifies other specific, detailed plans that are to be completed to support the comprehensive project plan. The comprehensive plan, in turn, needs to be supplemented by specific, detailed plans that support it where necessary. Such plans might be needed to address such matters as the program’s budget and schedule, data to be analyzed, risk management and mitigation, and staffing. For example, a risk mitigation plan would be important in situations where potential problems exist. One purpose of risk management is to identify potential problems before they occur; a risk mitigation plan specifies risk mitigation strategies and when they should be invoked to mitigate adverse outcomes. Effective risk management includes early and aggressive identification of risks because it is typically easier, less costly, and less disruptive to make changes and correct work efforts during the earlier phases of the project. In addition, plans for activities such as cost-benefit and alternatives analyses should be developed to help facilitate data collection and analysis. These types of plans typically describe, among other things, the data to be collected, the source of these data, and how the data will be analyzed. Such plans are important to guide needed data analysis as well as prevent unnecessary data collection, which can be costly. For this program, both risk mitigation and data analysis are key, because the program runs significant risks with regard to ensuring cooperation of stakeholders, and because TSA still faces considerable analytical work in deciding which approach to adopt. 
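A risk mitigation plan of the kind described above, one that names each risk, the strategy to mitigate it, and when that strategy should be invoked, can be represented as simple data. This is a minimal sketch: the stakeholder-support risk echoes one TSA identified, but the trigger conditions and strategies are hypothetical.

```python
# Illustrative sketch of a risk mitigation plan as data: each entry
# names a risk, the condition that triggers action, and the
# mitigation strategy to invoke. Triggers and strategies here are
# hypothetical, not TSA's actual plan.

risk_plan = [
    {
        "risk": "loss of external stakeholder support",
        "trigger": "a key port or labor group withdraws from the prototype test",
        "mitigation": "involve stakeholders in eligibility and policy decisions",
    },
    {
        "risk": "schedule slippage",
        "trigger": "a milestone is missed by more than 30 days",
        "mitigation": "rebaseline the schedule and escalate to the program board",
    },
]

def triggered(plan, observed_events):
    """Return the mitigations whose trigger condition has been observed."""
    return [r["mitigation"] for r in plan if r["trigger"] in observed_events]

events = {"a milestone is missed by more than 30 days"}
print(triggered(risk_plan, events))
```

Encoding the plan this way makes the point in the text concrete: until a trigger and strategy are written down for each known risk, there is nothing to invoke when the risk materializes.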
According to TSA officials, the agency lacks an approved, comprehensive project plan to guide the remaining phases of the project, which include the testing of a maritime worker identification card system prototype and issuance of the cards. While it has initiated some project planning, according to officials, the agency has not completed a comprehensive project plan, which is to identify work to be completed, milestones for completing this work, and project budgets as well as identifying other specific, detailed plans that are to be completed. Officials said that with contractor support they intended to develop a plan to manage the prototype test. However, officials did not intend to develop a plan for the remainder of the project until key policy decisions had been made, such as what type of card program will be selected to issue the cards. Once key policies are determined, TSA may move forward with a comprehensive plan. As a consequence of not having such a plan in place, officials have not documented work to be completed, milestones for completing it, or accountability for ensuring that the work is done. Without a comprehensive project plan and agreement to follow the plan from the appropriate DHS and TSA officials, TSA program staff may have difficulty managing future work, putting the program at higher risk of additional delays and cost overruns. Officials did not provide a timeframe for completing such a project plan. According to TSA planning documents and discussions with officials, TSA lacks a risk management plan that specifies strategies for mitigating known risks, which could limit TSA’s ability to manage these risks. For instance, TSA documents identified failure to sustain the support of external stakeholders, such as labor unions for port workers, as a program risk and indicated a mitigation strategy was needed to address this risk. But TSA has not developed such a strategy to address this specific risk. 
TSA documents also indicated that involving stakeholders in decision making could help mitigate program risks associated with defining the eligibility requirements for the card. However, TSA has not planned for stakeholder involvement in decision-making. Several stakeholders at ports and port facilities told us that while TSA solicited their input on some issues, TSA did not respond to their input or involve them in making decisions regarding eligibility requirements for the card. In particular, some stakeholders said they had not been included in discussions about which felony convictions should disqualify a worker from receiving a card, even though they had expected and requested that DHS and TSA involve them in these decisions. One port security director said TSA promised the port a “large role” in determining the eligibility requirements which has not materialized, and others said that in the absence of TSA defining the eligibility requirements for the card, they recently drafted and sent proposed eligibility requirements to TSA. TSA officials said they have an extensive outreach program to inform external stakeholders about the program, for instance, by frequently attending industry conferences and maritime association meetings. Obtaining stakeholder involvement is important because achieving program goals hinges on the federal government’s ability to form effective partnerships among many public and private stakeholders. If such partnerships are not in place—and equally important, if they do not work effectively—TSA may not be able to test and deliver a program that performs as expected. For example, TSA currently relies on facilities and workers to voluntarily participate in tests of the prototype card system. Without this and other support provided by stakeholders, the prototype card system could not be tested as planned. 
Planning for stakeholder involvement is also important because in the future other groups or organizations, for instance, other federal agencies or states, may be charged with developing biometric identification card programs and emerge as important external stakeholders for the maritime worker identification card program. According to best practices, in order to ensure that the appropriate data are collected to support analyses on which program decisions are made, managers should develop a plan that describes data to be collected, the source of these data, and how the data will be analyzed. During the test of the prototype card system, officials said they are to collect data on the feasibility of the federal and decentralized approaches in order to conduct an alternatives analysis—a comparison of the three possible approaches that demonstrates one approach is more cost-effective than the others. TSA officials acknowledge they have not yet completed a plan; however, they said they intend to do so with contractor support. On the basis of interviews with a number of officials and review of documents, we determined TSA has not identified who would be responsible for collecting the data, the sources for the data, and how the data will be analyzed. These details are needed to ensure the analysis produces a sound result. Completing the cost-benefit and alternatives analyses is important because not only do OMB regulations and DHS guidance instruct agencies to complete them, but DHS officials said the alternatives analysis would guide their decision regarding which approach is the most cost-effective way to provide the card. Without a plan to guide this activity, TSA may not perform the necessary analysis to inform sound decision making, possibly causing further delays. With the passage of MTSA, Congress established a framework for homeland security that relies on a multilayered defense strategy to enhance port security. 
Improving access control by providing ports a maritime worker identification card is an important part of this strategy. Each delay in TSA’s program to develop the card postpones enhancements to port security and complicates port stakeholders’ efforts to make wise investment decisions regarding security infrastructure. Despite delays and the difficulties of a major governmentwide reorganization, DHS and TSA have made some progress in developing a maritime worker identification card. Nevertheless, without developing a comprehensive project plan and its component parts—an established industry best practice for project planning and management—TSA is placing the program’s schedule and performance at higher risk. More delays could occur, for example, unless DHS and TSA agree on a comprehensive project plan to guide the remainder of the project, identify work that TSA and DHS officials must complete, and set deadlines for completing it. Without adequate risk mitigation plans, TSA may not be able to resolve problems that could adversely affect the card program objectives, such as insufficient stakeholder support to successfully develop, test, and implement the card program. Further, without a plan to guide the cost-benefit and alternatives analyses, TSA increases the risk that it may fail to sufficiently analyze the feasibility of various approaches to issue the card, an analysis needed by DHS policy officials to make informed decisions about the program, putting the program at risk for further delays. To help ensure that TSA meets the challenges it is facing in developing and operating its maritime worker identification card program, we are recommending that the Secretary of Homeland Security direct the TSA Administrator to employ industry best practices for project planning and management, by taking the following two actions: Develop a comprehensive project plan for managing the remaining life of the project. 
Develop specific, detailed plans for risk mitigation and cost-benefit and alternatives analyses. We provided a draft of this report to DHS and TSA for their review and comment. DHS and TSA generally concurred with the findings and recommendations that we made in our report and provided technical comments that we incorporated where appropriate. DHS and TSA also provided written comments on a draft of this report (see app. I). In its comments, DHS noted actions that it has recently taken or plans to take to address concerns we raised regarding outstanding regulatory and policy issues. Although DHS and TSA concurred with our recommendations, in their comments, they contend that project plans and program management controls are currently in place to manage their test of the TWIC prototype. However, at the time of our review, the project planning documents identified by DHS and TSA in their comments were incomplete, lacked the necessary approvals from appropriate officials, or were not provided during our audit. Furthermore, project plans and other management controls have not been developed for the remaining life of the project. We are sending copies of this report to other interested Members of Congress. We are also sending copies to the Secretary of Homeland Security. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (415) 904-2200 or at [email protected]. Other major contributors to this report included Jonathan Bachman, Chuck Bausell, Tom Beall, Steve Calvo, Ellen Chu, Matt Coco, Lester Diamond, Geoffrey Hamilton, Rich Hung, Lori Kmetz, Anne Laffoon, Jeff Larson, David Powner, Tomas Ramirez, and Stan Stenerson. 
| As part of a multilayered effort to strengthen port security, the Maritime Transportation Security Act (MTSA) of 2002 calls for the Department of Homeland Security (DHS) to issue a worker identification card that uses biological metrics, such as fingerprints, to control access to secure areas of ports or ships. Charged with the responsibility for developing this card, the Transportation Security Administration (TSA), within DHS, initially planned to issue a Transportation Worker Identification Credential in August 2004 to about 6 million maritime workers. GAO assessed what factors limited TSA's ability to meet its August 2004 target date for issuing cards and what challenges remain for TSA to implement the card. Three main factors, all of which resulted in delays for testing a prototype of the maritime worker identification card system, caused the agency to miss its initial August 2004 target date for issuing the cards: (1) officials had difficulty obtaining timely approval to proceed with the prototype test from DHS, (2) extra time was required to identify data to be collected for a cost-benefit analysis, and (3) additional work to assess card technologies was required. DHS has not determined when it may begin issuing cards. In the future, TSA will face difficult challenges as it moves forward with developing and operating the card program, for example, developing regulations that identify eligibility requirements for the card. An additional challenge--and one that holds potential to adversely affect the entire program--is that TSA does not yet have a comprehensive plan in place for managing the project. Failure to develop such a plan places the card program at higher risk of cost overruns, missed deadlines, and underperformance. Following established, industry best practices for project planning and management could help TSA address these challenges. Best practices suggest managers develop a comprehensive project plan and other, detailed component plans. 
However, while TSA has initiated some project planning, the agency lacks an approved comprehensive project plan to govern the life of the project and has not yet developed other, detailed component plans for risk mitigation or the cost-benefit and alternatives analyses. |
Federal regulations set requirements for a small business to qualify as an SDVOSB. SDVOSB eligibility regulations mandate that a firm must be a small business and at least 51 percent owned by one or more service-disabled veterans who control the management and daily business operations of the firm. Federal statutes and the Federal Acquisition Regulations (FAR) require all prospective contractors to update the ORCA to state whether their firm qualifies as an SDVOSB. Additionally, the SDVOSB, as a contractor, is required to register in CCR. Contracting officials are required to check CCR, which includes information such as a firm’s status as an SDVOSB, prior to awarding most federal contracts, including an SDVOSB set-aside or sole-source contract. Once an SDVOSB receives a contract, SDVOSB regulations also place restrictions on the amount of work that can be subcontracted. Once CVE verifies a business, it sends an approval letter to the firm. Under regulations first promulgated in 2008, firms retained their eligibility status for 1 year from the date of the letter. However, on June 27, 2012, VA issued updated regulations extending the eligibility period to 2 years before reverification is required. Firms that misrepresent their SDVOSB status are required by law to be debarred from contracting with VA for a reasonable period of time, as determined by VA. Additionally, VA regulations state that if a firm or owner is currently debarred or suspended, or is delinquent or in default on significant financial obligations owed to the federal government, then the firm or owner is ineligible for VA’s VetBiz verification program. Federal law has established government-wide goals for specific types of small businesses to receive a percentage of the total value of all prime-contract and subcontract awards for each fiscal year. The statutorily mandated goal for SDVOSB participation is not less than 3 percent of all federal contract dollars awarded each fiscal year.
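The eligibility test above is a conjunction of conditions: small-business status, at least 51 percent service-disabled-veteran ownership, and veteran control of both management and daily operations. A minimal sketch of that rule follows; the field names are illustrative assumptions, not part of any actual VA or SBA system.

```python
from dataclasses import dataclass

@dataclass
class Firm:
    """Hypothetical record of the facts the regulations ask about."""
    is_small_business: bool           # meets SBA size standards
    veteran_ownership_pct: float      # percent owned by service-disabled veterans
    veteran_controls_management: bool
    veteran_controls_daily_ops: bool

def meets_sdvosb_criteria(firm: Firm) -> bool:
    """True only if every regulatory condition holds."""
    return (firm.is_small_business
            and firm.veteran_ownership_pct >= 51.0
            and firm.veteran_controls_management
            and firm.veteran_controls_daily_ops)

print(meets_sdvosb_criteria(Firm(True, 51.0, True, True)))   # True
print(meets_sdvosb_criteria(Firm(True, 50.0, True, True)))   # False: under 51 percent
```

Because every condition is mandatory, failing any single check (for example, a non-veteran controlling daily operations) makes the firm ineligible regardless of the others.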
SBA stated in its most recent report that, in fiscal year 2010, $10.8 billion in small-business obligations were awarded to firms that self-certified in the CCR as SDVOSBs. DOD SDVOSB contracts accounted for $5.3 billion, or 49 percent, of government-wide SDVOSB contracts during fiscal year 2010, and VA SDVOSB contracts accounted for $3.2 billion, or 30 percent, during the same period. Figure 1 summarizes the federal contracts awarded in fiscal year 2010 by federal agencies. Since 2009, GAO has issued nine reports or testimonies on the SDVOSB program, focusing on its vulnerability to fraud and abuse and on agencies’ actions to prevent contracts from going to firms that misrepresent themselves as SDVOSBs. When discussing the SDVOSB program, we have shown that a well-designed fraud-prevention system should consist of three crucial elements: (1) up-front preventive controls, (2) detection and monitoring, and (3) investigations and prosecutions. Figure 2 below outlines the key aspects of an effective fraud-prevention framework. The most effective and most efficient part of a fraud-prevention framework involves the institution of rigorous controls at the beginning of the process. At a minimum, preventive controls for the SDVOSB program should be designed to verify that a firm seeking SDVOSB status is eligible for the program. Even with effective prevention controls, there is residual risk that firms that appeared to meet SDVOSB program requirements initially will violate program rules once they obtain contracts. This fact makes effective monitoring and detection controls essential in a robust fraud-prevention framework. Detection and monitoring efforts include activities such as periodic reviews of suspicious firms and evaluating firms to provide reasonable assurance that they continue to meet program requirements.
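The agency shares cited above can be checked with simple arithmetic; the dollar amounts (in billions) are the SBA fiscal year 2010 figures restated from the text.

```python
# Fiscal year 2010 SDVOSB obligations, in billions of dollars, as reported.
total_sdvosb_obligations = 10.8
dod_obligations = 5.3
va_obligations = 3.2

# Each agency's share of government-wide SDVOSB obligations, rounded
# to whole percent as the report states them.
dod_share = round(100 * dod_obligations / total_sdvosb_obligations)
va_share = round(100 * va_obligations / total_sdvosb_obligations)

print(dod_share)  # 49
print(va_share)   # 30
```

Together DOD and VA account for roughly four-fifths of the government-wide SDVOSB obligations that year, which is why the report concentrates on those two agencies.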
Finally, fraud-prevention controls are not fully effective unless identified fraud is aggressively prosecuted or companies are suspended, debarred, or otherwise held accountable, or both. VA has made numerous conflicting statements about its progress verifying firms listed in VetBiz under the more-thorough process the agency implemented in response to the 2010 Act. These statements indicate that VA has taken an inconsistent approach to prioritizing the verification of firms and has been unable to accurately track the status of its efforts. Specifically, at the close of our audit work, documentation provided by VA indicated that thousands of SDVOSBs listed as eligible in VetBiz received millions of dollars in SDVOSB sole-source and set-aside contract obligations even though they had not been verified under the more-thorough process implemented in response to the 2010 Act. At that time, VA told us it planned to remove all firms whose 1-year verification periods had expired and that had not provided documentation for reverification under the 2010 Act process. Since then, on June 27, 2012, VA implemented an interim final rule that extends the eligibility of verified firms to 2 years, including firms for which the eligibility period had expired but that had not yet been reverified. Extending the eligibility period may allow VA to focus its efforts on more thoroughly verifying firms that were previously verified under VA’s less-stringent 2006 Act process. However, the extension also allows thousands of firms to continue to be eligible for contracts even though they have not undergone the more-thorough process. With regard to our previous work, VA has taken some positive action to enhance its fraud-prevention efforts by establishing processes in response to 6 of 13 recommendations we issued in October 2011. VA has also begun action on some remaining recommendations.
VA has provided a number of conflicting statements and explanations related to the status of its verification program, indicating that it is having difficulty tracking its inventory of firms and whether they were verified under the process implemented to carry out the 2010 Act. As we previously stated, the process VA implemented to review firms under the 2006 Act consisted of checking whether a firm’s owner was listed in VA’s database of service-disabled veterans and conducting searches on publicly available websites such as the EPLS, which lists firms that have been debarred from doing business with the federal government. In contrast, VA stated that it implemented a more-thorough verification process under the 2010 Act that included unannounced and announced site visits and a review and analysis of company documentation. Although the 2010 Act did not include a date by which VA must complete the verification of firms, within 60 days of the law’s enactment VA was required to notify all unverified firms listed in its VetBiz database about the need to apply for verification by submitting documents to establish veteran ownership and control. Firms were required to do so within 90 days of receipt of the notification in order to avoid removal of the firm from VetBiz. VA officials told us that the agency prioritized its verification under the process implemented in response to the 2010 Act by reviewing (1) new applications for firms that had previously only self-certified in VetBiz (i.e., firms that had not been reviewed under the processes VA created for the 2006 Act or 2010 Act); (2) new firms that had initially applied for verification after the 2010 Act, to include reprocessing any firms that were denied through the new requirements and subsequently requested reconsideration; and (3) applications for firms initially verified in VetBiz under the process VA chose to implement for the 2006 Act. 
However, our review of information provided by VA raises concerns about the status of this process and whether VA knows how many of its firms have actually been verified under the processes implemented in response to the 2010 Act. In one communication, VA stated that as of February 2011, VA’s 2006 Act verification process had been discontinued, and all new verifications would use the process implemented in response to the 2010 Act going forward. Because firms would need to reverify 1 year later, this meant that only firms verified under the 2010 Act process should have been in VetBiz as of February 2012. In November 2011, VA reported that it had removed all unverified firms from its database on September 4, 2011. Subsequently, while reviewing new cases involving firms that had received VA SDVOSB contracts, we found instances where firms were not verified under VA’s 2010 Act process, but rather were verified under its 2006 Act process. When we met with VA in February 2012 to discuss our new cases, officials confirmed that there were still firms in VetBiz that had not been through the processes implemented in response to the 2010 Act, but did not explain how many firms still had not gone through the new process. Then, on April 23, 2012, officials told us that they had recently removed thousands of firms from VetBiz because these firms had not supplied the supporting documentation that VA decided was required for verification under the process implemented in response to the 2010 Act; VA indicated that it planned to remove hundreds of additional firms for the same reason. VA has provided conflicting statements about whether these firms received the December 2010 request to supply documentation. Further, over the next month, VA officials provided us with at least seven differing accounts of the number of SDVOSBs verified under the processes implemented for the 2006 Act and 2010 Act, the number of SDVOSBs they planned to remove, and the timing of the removals. 
VA’s conflicting statements create uncertainty about the status of the agency’s efforts to verify firms under the process implemented for the 2010 Act. Without a clear inventory and methods designed to track the verification process firms have undergone, VA cannot provide reasonable assurance that all firms appearing in VetBiz have been verified as owned and controlled by a veteran or service-disabled veteran. In its agency comments, VA explained these inventory issues by noting (1) the lack of a comprehensive case-management system has created the need for aggregate workarounds and resulted in inconsistent aggregate reporting; (2) the limitations of its current case-management system make it difficult to track the inventory of firms; and (3) as the limitations of the case-management system increase over time, the potential of CVE to lose track of how many firms have been verified also increases. VA also noted that its verification priorities have evolved over time. As of the close of our audit work, the information provided by VA indicated that thousands of potentially ineligible firms remain listed in VetBiz because they have not been verified under the more-thorough process implemented for the 2010 Act. Our analysis shows that as of April 1, 2012, 3,717 of the 6,178 SDVOSBs (60 percent) listed as eligible in VetBiz had yet to be verified using the more-thorough verification process. Of these 3,717 firms listed as eligible on April 1, 2012, 134 received a total of $90 million in new VA SDVOSB sole-source or set-aside contract obligations during the 4-month period from November 30, 2011, to April 1, 2012. On May 14, 2012, VA told us that it removed 1,857 of these 3,717 SDVOSBs from April 2 to April 10, 2012, so that they are no longer eligible for VA SDVOSB sole-source and set-aside contracts.
According to VA, the remaining 1,860 firms that had not received a review under the 2010 Act process were projected to be removed in July 2012 unless the firms provided adequate documentation supporting their eligibility. VA also stated that these firms were identified as being in “reverification” and that no such expired firm was eligible for an actual contract award until the reverification decision had been completed. Since then, on June 27, VA implemented an interim final rule that extends the eligibility of verified firms to 2 years. VA told us it interprets “verified” to include any firm that has been verified under either its 2006 or 2010 Act processes. Therefore, according to the interim rule, as long as a firm is verified under either process and is in its 2-year eligibility period, VA is only authorized to initiate a verification examination if it receives credible evidence calling into question a participant’s eligibility. Furthermore, VA considered firms whose prior 1-year eligibility period had recently expired, but who had not yet been through reverification, to be within the scope of the new rule, thus extending their eligibility another year. Extending the eligibility period may allow VA to focus its efforts on more thoroughly verifying firms that were previously verified under its less-stringent 2006 Act process. However, the extension also allows thousands of firms to continue to be eligible for contracts even though they have not undergone the more-thorough process. For example, according to information provided by VA in its comments, as of July 13, 2012, there were 6,079 SDVOSBs and VOSBs listed in VetBiz. Of these, 3,724 were verified under the more-thorough process implemented under the 2010 Act and 2,355 (over 38 percent) were verified under VA’s less-rigorous 2006 Act process.
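The inventory figures VA provided can be reconciled arithmetically; the sketch below only restates numbers already cited above (it is not drawn from any VA system).

```python
# April 1, 2012 snapshot, per the text.
listed_apr_2012 = 6178        # SDVOSBs listed as eligible in VetBiz
not_yet_reverified = 3717     # not verified under the 2010 Act process
removed_early_april = 1857    # removed April 2-10, 2012, per VA
slated_for_july = 1860        # projected for removal in July 2012

# The two removal groups account for every not-yet-reverified firm.
assert removed_early_april + slated_for_july == not_yet_reverified
print(f"{100 * not_yet_reverified / listed_apr_2012:.0f}%")  # 60%

# July 13, 2012 snapshot, after the interim final rule took effect.
listed_jul_2012 = 6079        # SDVOSBs and VOSBs listed in VetBiz
verified_2010_act = 3724
verified_2006_act = 2355
assert verified_2010_act + verified_2006_act == listed_jul_2012
print(f"{100 * verified_2006_act / listed_jul_2012:.1f}%")   # 38.7%, i.e., over 38 percent
```

The reconciliation shows the two snapshots are internally consistent, even though VA gave differing accounts of the counts over time.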
As VA acknowledges in its agency comments, “the retention of firms verified prior to the 2010 Act increases the possibility awards will go to firms that will not be verified when the more rigorous process is applied.” Moreover, past audits show the risk of providing SDVOSB contracts to firms reviewed under VA’s 2006 Act process. For example, in 2011, VA’s own OIG issued a report that reviewed both SDVOSBs and VOSBs listed in VetBiz and found that 10 of 14 SDVOSBs and VOSBs verified under VA’s 2006 Act process and listed as eligible were in fact ineligible for these respective programs. The report identified several reasons for why these firms were ineligible, including improper subcontracting practices, lack of control and ownership, and improper use of SDVOSB status, among others. Further, the report noted VA’s document-review process under the 2006 Act “in many cases was insufficient to establish control and ownership… in effect allowed businesses to self-certify as a veteran-owned or service-disabled veteran-owned small business with little supporting documentation.” The report goes on to state that VA’s failure to maintain “accurate and current” information in the VetBiz database also exacerbated problems in the verification process. VA’s OIG also used statistical sampling methods to project that (1) $500 million of VA SDVOSB and VOSB contracts were awarded annually to ineligible firms and (2) VA will award about $2.5 billion in SDVOSB and VOSB contracts to ineligible firms over the next 5 years if it does not strengthen its oversight and verification procedures. In October 2011, we issued 13 recommendations to VA related to vulnerabilities in the verification process implemented by VA after the 2010 Act; VA generally concurred with our recommendations. As of June 2012, VA has provided us with documentation demonstrating that it has established procedures in response to 6 of these recommendations. 
Figure 3 shows the status of the recommendations; more specific information on each recommendation follows the figure. We have not assessed the effectiveness of any of the procedures that VA has established thus far, as this is beyond the scope of this report. VA has provided additional guidance and training to VA contracting personnel on the use of the VetBiz website. In December 2011, VA issued a guidance memo requiring VA contracting personnel to check VetBiz to ensure that a firm is verified both upon receipt of an offer and prior to award. In November 2011, VA also provided training to contracting personnel on the use of VetBiz. Providing guidance and training to current and new contracting personnel will help ensure that these staff are aware of the need to check VetBiz prior to awarding a contract. VA has established formal procedures for VA staff to refer suspicious applications to the OIG and provided guidance on what types of cases to refer to the OIG. In April 2012, VA issued procedures for VA staff to use if they identify suspicious information or possible misrepresentations on an application for eligibility during their initial review process. These procedures contain step-by-step instructions for how to notify the OIG about suspicious applications. Specifically, CVE’s “risk team” makes a determination as to whether an applicant has intentionally misrepresented its status in an apparent attempt to defraud the government. If the information is credible, the applicant is referred to the VA OIG. If the VA OIG accepts the referral, it conducts preliminary inquiries to determine whether a full investigation into criminal activity is warranted. If the OIG declines the investigation, VA can refer the matter to VA’s Debarment Committee, which VA instituted in September 2010 specifically to debar firms that had violated SDVOSB regulations.
In addition to these procedures, from November 2011 through January 2012, VA provided three training sessions to VA staff on the types of red flags to note during application review. VA has explored the feasibility of validating applicants’ information with third parties. In 2012, VA met with Dun and Bradstreet to explore the feasibility of utilizing their services to validate applicants’ information, such as names and titles of business owners. Validating applicants’ information with third parties may help enhance VA’s ability to assess the accuracy of self-reported information. VA has formalized a process for conducting unannounced site visits to firms identified as high-risk during the verification process. In June 2012, VA issued procedures for conducting unannounced site visits on a sample of 50 percent of high-risk firms identified during the verification process. Formalizing this process with a focus on high-risk firms may help provide reasonable assurance that only eligible firms gain access to the VetBiz database. VA has developed and implemented a process for unannounced site visits to verified companies to obtain greater effectiveness and consistency in the verification process. VA’s aforementioned June 2012 procedures also apply to verified companies. VA developed a process to select, on a weekly basis and based on a combination of random and risk-based factors, verified firms to receive an unannounced site visit. In addition, according to VA, it has started making these unannounced site visits. Conducting these site visits may help provide reasonable assurance to VA that the verification process is effective. VA has developed and implemented specific procedures and criteria for staff to make referrals to the Debarment Committee and VA OIG as a result of misrepresentations identified during initial verification and periodic reviews.
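VA's procedures call for unannounced visits to a 50 percent sample of high-risk firms and a weekly, partly risk-based selection of verified firms. As an illustration only (the firm identifiers and risk scores are invented, and this is not VA's actual selection method), such a selection step might look like:

```python
import random

random.seed(42)  # deterministic for the example

# Hypothetical firm lists; not VA data.
high_risk = [f"firm-{i}" for i in range(10)]

# Unannounced site visits for a 50 percent sample of high-risk firms.
visit_sample = random.sample(high_risk, k=len(high_risk) // 2)

# Weekly selection from verified firms, combining a risk-based pick
# (highest score) with a random draw, echoing the "random and
# risk-based factors" described in the text.
risk_scores = {f"firm-{i}": random.random() for i in range(100, 120)}
top_risk = max(risk_scores, key=risk_scores.get)
others = random.sample([f for f in risk_scores if f != top_risk], k=2)
weekly_selection = [top_risk] + others

print(len(visit_sample))  # 5
```

Mixing a deterministic risk-based pick with random draws keeps high-risk firms under scrutiny while preventing any firm from predicting whether it will be visited.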
VA’s aforementioned April 2012 procedures also apply to false information or misrepresentations identified after VA’s initial review of the application, during the firm’s eligibility period. These procedures may increase VA’s success in pursuing firms that have misrepresented their eligibility for the program. VA has not provided regular fraud-awareness training to CVE and VA contracting personnel. One of the most significant challenges to an effective verification program is to have sufficient human capital with proper training and experience. Although VA has not established regular fraud-awareness training, it has made progress in this area. For example, VA told us that its OIG recently provided training on procurement fraud and that its General Counsel provides weekly training on examination procedures and policies in order to educate staff on fraud prevention. In addition, VA said that it has plans to require all CVE staff to attend a fraud examiners course; several CVE staff were already scheduled to attend fraud training in July 2012. Having sufficient human capital with the proper training and experience would enhance the effectiveness of the verification program. VA has not developed and implemented procedures for conducting unannounced site visits to contract performance locations and interviews with contracting officials to better assess whether verified companies comply with program rules after verification. VA has started conducting announced site visits as part of its subcontracting compliance review program. This program is used to determine if a firm is performing in accordance with percentage of work performance requirements and other subcontracting commitments. However, VA has not developed and implemented procedures for conducting unannounced site visits to contract performance locations and interviews with contracting officials. 
The unannounced site visits and interviews with contracting officials would allow VA to better assess whether verified firms comply with program rules after verification. VA has not developed procedures for risk-based periodic reviews of verified firms receiving contracts to assess compliance with North American Industry Classification System (NAICS) size standards and SDVOSB program rules. In order to be eligible for SDVOSB set-aside and sole-source contracts, a firm must qualify as a small business under NAICS size standards. In draft guidelines, VA included supplemental information for VA staff to review firms’ NAICS code size standards, but these guidelines have yet to be finalized. Moreover, the draft guidelines do not include procedures for periodic reviews of verified firms’ compliance with these standards. Such procedures would help improve continued compliance with SDVOSB program rules. VA has not developed and implemented specific processes and criteria for the Debarment Committee on compliance with the requirement in the 2006 Act to debar, for a reasonable period, firms and related parties that misrepresent their SDVOSB status. According to VA, its Debarment Committee relies on procedures outlined in the FAR and the VA Acquisition Regulations to determine the length of debarments. VA has not developed specific guidelines outlining the Debarment Committee’s decision process to debar firms that misrepresent their SDVOSB status. VA should provide the Debarment Committee with guidelines to aid its decision-making process in determining what constitutes a “misrepresentation” deserving of debarment, as that term is used in the 2006 Act. VA has not developed procedures on removing SDVOSB contracts from ineligible firms. According to the VA Acquisition Regulations, the Deputy Senior Procurement Executive has the authority to determine whether VA should terminate a contract with a debarred firm.
However, VA has not developed procedures to remove SDVOSB contracts from ineligible firms. According to VA, it is in the process of developing a policy on removing SDVOSB contracts from ineligible firms as determined by status protests. In addition, VA is in the process of providing guidance to the acquisition workforce on removing SDVOSB contracts from ineligible firms. Until VA develops procedures on removing SDVOSB contracts from ineligible firms, the SDVOSB program is at risk for ineligible firms to abuse the program and retain contracts obtained through fraud and abuse. VA has not formalized procedures for advertising debarments and prosecutions, though the Debarment Committee, the OIG, and CVE have listed these actions on their websites. No action has been taken to improve government-wide SDVOSB fraud-prevention controls, as the program continues to remain a self-certification program. Because federal law does not require it, SBA does not verify firms’ eligibility status, nor does it require that firms submit supporting documentation. According to SBA, it is only authorized to perform eligibility reviews in a protest situation, including those cases where SBA itself has reason to believe that a firm misrepresented its SDVOSB status. However, without basic checks on firms’ eligibility claims, SBA cannot provide reasonable assurance that legitimate SDVOSBs are receiving government contracts. In fact, five of our new case-study firms received SDVOSB set-aside and sole-source contract obligations, totaling approximately $190 million, of which $75 million were new SDVOSB set-aside and sole-source contract obligations, from October 1, 2009, to December 31, 2011, despite evidence indicating they are ineligible for the program. With regard to our original 10 case-study firms reported in October 2009, some are under investigation by the SBA OIG and punitive actions have been taken against others.
To address vulnerabilities in the government-wide program, we previously suggested that Congress consider providing VA with the authority necessary to expand its SDVOSB eligibility verification process government-wide. Such an action is supported by the fact that VA maintains the database identifying which individuals are service-disabled veterans and is consistent with VA’s mission of service to veterans. However, such action should not be undertaken until VA demonstrates that its verification process is successful in reducing the SDVOSB program’s vulnerability to fraud and abuse. In our previous work, we found that the SDVOSB program did not have effective government-wide fraud-prevention controls in place and was vulnerable to fraud and abuse. Outside of VA, there was no verification in place for SDVOSB contracting. Because federal law does not require it, SBA and agencies awarding contracts—other than VA—do not have a process in place to validate a firm’s eligibility for the program, and rely on the firms self-certifying as a service-disabled veteran-owned business in CCR. We found the only process in place to detect fraud in the government-wide SDVOSB program involved a formal bid-protest process at SBA, whereby interested parties to a contract award could protest another firm’s SDVOSB eligibility or small-business size. However, we reported that this self-policing process did not prevent ineligible firms from receiving SDVOSB contracts. SBA officials have told us that they have limited responsibility over the SDVOSB program, and that the agency’s only statutory obligation is to report on other agencies’ success in meeting SDVOSB contracting goals. Our new case studies highlight instances of the fraud and abuse that resulted from the lack of verification of firms’ SDVOSB status.
In fact, five of our new case-study firms received SDVOSB set-aside and sole-source contract obligations, totaling approximately $190 million, from October 1, 2009, to December 31, 2011, despite evidence indicating they are ineligible for the program. Of this $190 million, $75 million were new SDVOSB set-aside and sole-source contract obligations. In four of the cases we examined, we were able to substantiate informants’ allegations of ineligibility as follows:

Non-SDVOSB joint venture. An SDVOSB entered a joint venture with a non-SDVOSB firm and received about $16 million in new government-wide SDVOSB set-aside contract obligations. Such joint ventures are eligible if the SDVOSB firm manages the joint venture and the contract work. However, the owner, a service-disabled veteran, admitted to our investigators that his SDVOSB firm did not manage the joint venture. Therefore, the joint venture is ineligible. This firm is currently listed as an SDVOSB in CCR, which allows the firm to compete for government-wide SDVOSB contracts.

VA-denied firm. Though VA denied a firm SDVOSB status in 2010 because the firm was not controlled by a service-disabled veteran owner, the firm continued to self-certify in CCR. A VA site visit found the service-disabled veteran worked mostly at another company, and the non-service-disabled veteran vice president controlled the firm. In 2011, when the firm applied for VA verification again, the size of the firm was also questioned as it shared ownership or management with at least four different entities, including companies owned by a non-service-disabled veteran minority owner. The company withdrew its application to be a VA-verified SDVOSB. In total, the firm received about $21 million in SDVOSB set-aside and sole-source contracts from DOD, the General Services Administration (GSA), the Department of the Interior (DOI), the U.S.
Department of Agriculture and the VA, $16 million of which were new SDVOSB set-aside and sole-source contract obligations. After VA denied the firm, the firm continued to self-certify as an SDVOSB in CCR, and GSA and DOI awarded the firm about $860,000 in new SDVOSB set-aside contract obligations. This firm is currently listed as an SDVOSB in CCR, which allows the firm to compete for government-wide SDVOSB contracts.

Multiple firms not veteran-controlled. A service-disabled veteran and two non-service-disabled veteran co-owners owned two firms and a joint venture at the same location. VA found one of the firms ineligible. The operating agreements of two of the firms allowed the two minority owners to control the firms, rather than the service-disabled veteran. Additionally, the joint venture, created by one of the firms, was also ineligible because the service-disabled veteran’s firm did not manage the joint venture and the contract work. Therefore, none of the three firms were eligible for the SDVOSB program. The three firms received over $91 million in SDVOSB set-aside and sole-source contract obligations, about $18 million of which were new SDVOSB set-aside and sole-source contract obligations, from VA and the Department of Health and Human Services. The three firms have been removed from VA VetBiz. However, these firms are currently listed as SDVOSBs in CCR, which allows the firms to compete for government-wide SDVOSB contracts.

Not service-disabled veteran-controlled. This firm is ineligible for the SDVOSB program because the veteran does not control the daily operations. The service-disabled veteran was not the Chief Executive Officer, and the firm’s operating agreement did not give the service-disabled veteran the exclusivity to make decisions for the company.
In addition, the service-disabled veteran owner lived 500 miles away from the firm, received only $12,000 compared to the non-service-disabled veteran minority owner’s $88,000 salary, and failed to meet or communicate with subcontractors. This firm received about $37 million in SDVOSB set-aside contract obligations, $446,000 of which were new SDVOSB set-aside contract obligations, from DOD and DOI. During the course of our work, SBA and VA found this company ineligible for the SDVOSB program. This firm no longer self-certifies as an SDVOSB in CCR. On May 25, 2012, SBA debarred the non-service-disabled veteran and the firm, making them ineligible for further contracts with the federal government. We were unable to substantiate allegations in a fifth case, but found evidence that the firm in question may be ineligible for the SDVOSB program because the service-disabled veteran owner may not spend sufficient time at the SDVOSB. The service-disabled veteran owner worked as an attorney at a legal services organization Monday through Friday, about 40 hours a week, which could prevent the veteran from managing the day-to-day proceedings of the SDVOSB. This firm received about $25 million in new SDVOSB set-aside and sole-source contract obligations from VA and the Department of Transportation. This firm is now listed as verified in VetBiz and is currently listed as an SDVOSB in CCR, which allows the firm to compete for government-wide SDVOSB contracts. The DOD OIG likewise reported that DOD, which awarded about half of government-wide SDVOSB contracts in 2010, did not require adequate verification of contractor status before awarding contracts. After its review of DOD contracts awarded from October 2009 to July 2010, the OIG reported that $1.9 million in SDVOSB contracts went to firms that were not registered in CCR as SDVOSBs and $340.3 million went to contractors that potentially misstated their SDVOSB status.
The OIG also found that DOD awarded 12 SDVOSB set-aside and sole-source contracts for a total of $11.5 million to six firms that VA rejected. The OIG went on to recommend that DOD create an SDVOSB verification program, but the agency disagreed, citing an absence of evidence indicating that such a program would produce a net benefit to eligible SDVOSBs, and that Congress had not provided DOD with either the resources or authority to establish such a system. To address the vulnerabilities within the government-wide program caused by reliance on a self-certification process, we suggested in 2009 that Congress consider providing VA with the authority and resources necessary to expand its SDVOSB eligibility verification process to all contractors seeking to bid on SDVOSB contracts government-wide. Such an action is supported by the fact that VA maintains the database identifying which individuals are service-disabled veterans and is consistent with VA’s mission of service to veterans. In 2011, legislation was also introduced and passed in the Senate requiring all agencies to use VA’s VetBiz for SDVOSB contract awards; this legislation has not become law. However, as shown by our current work, VA’s program remains vulnerable to fraud and abuse because the agency has been unable to accurately track the status of its efforts and because potentially ineligible firms remain listed in VetBiz. Consequently, VA’s ability to show that its process is successful in reducing the SDVOSBs program’s vulnerability to fraud and abuse remains an important factor in any consideration about the potential expansion of VA’s eligibility verification process government-wide. GAO has ongoing work that will, in part, examine some of the key issues that need to be addressed if VA’s verification program were to be implemented government-wide. In 2009, we found that ineligible firms in 10 cases received $100 million in SDVOSB contracts and $300 million in other federal contracts. 
We referred all 10 of these cases to the appropriate agency OIGs. As of April 2012, while none of the firms are currently suspended or debarred by the agencies that received our referrals, some actions have been taken: The SBA OIG is proceeding with six open investigations. In addition, the SBA OIG has joined forces with other agency OIGs to pursue several cases. Specific details cannot be provided until the cases have been fully adjudicated. One individual related to a case is being prosecuted by the U.S. Attorney for wire fraud and fraud against the United States involving a contract valued at $1 million or more related to its misrepresentation as an SDVOSB. In addition, this individual and a related firm were suspended by the Department of Transportation for procurement fraud. One individual related to a case study is being charged by the U.S. Attorney with conspiracy to commit wire fraud and forfeiture of his assets up to $400,000. This individual allegedly conspired to defraud the SBA and other government contractors by falsely representing his business as a service-disabled veteran-owned and operated business. Another case-study firm pled guilty to wire fraud in relation to fraudulently receiving Historically Underutilized Business Zone (HUBZone) federal contracts. Our previous finding that the case was ineligible for the SDVOSB program, in conjunction with the firm's admission that it defrauded the HUBZone program, raises the concern of ineligible firms applying for multiple procurement programs. Actions taken against firms that violate the SDVOSB program requirements should help protect the government's interest and help discourage ineligible firms from abusing the SDVOSB program. As previously discussed, providing more emphasis on debarments and investigations could further help the government deter firms from attempting to fraudulently gain access to the SDVOSB program. 
The SDVOSB program has provided billions of dollars in contracting opportunities to deserving service-disabled veterans. However, our body of work, along with work by the DOD OIG and VA OIG, has found that the program is vulnerable to fraud and abuse, which has allowed millions of dollars to be awarded to ineligible firms. The government-wide program remains particularly vulnerable since it relies on an honor-system-like process whereby firms self-certify their eligibility. VA has the only program within the government dedicated to verifying SDVOSB firms' eligibility; VA also has responsibility for maintaining a database of service-disabled veterans and a listing of firms that are eligible for the SDVOSB program. Given VA's mission of service to veterans, we previously suggested that Congress consider expanding VA's program government-wide to employ more effective fraud-prevention controls over the billions of dollars awarded to SDVOSBs outside of VA. However, such action should not be undertaken until VA demonstrates that its verification process is successful in reducing the SDVOSB program's vulnerability to fraud and abuse. Furthermore, while the results of this most-recent assessment show that VA has made some progress in improving its verification process in response to the 2010 Act, it has made conflicting statements regarding the verification of firms and has been unable to accurately track the status of its efforts. These problems have resulted in thousands of potentially ineligible SDVOSBs receiving millions of dollars in sole-source and set-aside contract obligations. By better managing its inventory of firms, maintaining the accuracy of firms' status in VetBiz, and applying the 2010 Act verification process to all firms, VA can be more confident that the billions of dollars meant to provide VA contracting opportunities to our nation's service-disabled veteran entrepreneurs make it to the intended beneficiaries. 
To minimize potential fraud and abuse in VA's SDVOSB program and provide reasonable assurance that legitimate SDVOSB firms obtain the benefits of this program, we recommend that the Secretary of Veterans Affairs ensure that all firms within VetBiz have undergone its 2010 Act verification process. Specifically, this should include consideration of the following three actions: (1) inventory firms listed in VetBiz to establish a reliable beginning point for the verification status of each firm; (2) establish procedures to maintain the accuracy of the status of all firms listed in VetBiz, including which verification process they have undergone; and (3) expeditiously verify all current VetBiz firms and new applicants under the 2010 Act verification procedures. We provided a draft of our report to VA and SBA for comment. In its written comments, reproduced in appendix I, VA stated that it concurred with our first two recommendations. It concurred "in principle" with the third, to verify all current VetBiz firms and new applicants under the processes implemented under the 2010 Act. With respect to this recommendation, VA noted that it implemented an interim rule on June 27, 2012, that extends the eligibility of verified firms to 2 years. VA told us it interprets "verified" to include any firm that has been verified under either its 2006 or 2010 Act processes. Therefore, according to the interim final rule, as long as a firm is verified under either process and is in its 2-year eligibility period, VA is only authorized to initiate a verification examination if it receives credible evidence calling into question a participant's eligibility. Extending the eligibility period may allow VA to focus its efforts on more thoroughly verifying firms that were previously verified under its less-stringent 2006 Act process. However, the extension also allows thousands of firms to continue to be eligible for contracts even though they have not undergone the more thorough process. 
We acknowledge that VA has latitude under the law to modify its own regulations as necessary. However, the interim final rule in effect removes a backlog of firms and appears to be a self-created impediment delaying verification under the 2010 Act process. We remain convinced that the verification process utilized by VA prior to the 2010 Act process does not provide reasonable assurance that only eligible SDVOSBs participate in the program. Given this ongoing vulnerability to fraud and abuse, we continue to believe that VA should expeditiously verify current VetBiz firms and new applicants under the 2010 Act verification process. Despite these concurrences, VA commented that our report was misleading and inaccurate with respect to (1) our characterizations of a 2011 VA OIG report, (2) conflicting statements made by VA, and (3) VA’s implementation of our previously issued recommendations. We disagree. First, VA stated that our use of the VA OIG’s 2011 report was misleading because the report examined a period when the VetBiz database included self-certified firms in addition to firms verified under the processes implemented under the 2006 Act. VA also claims the VA OIG report contains excessive extrapolations because it examined eligibility requirements beyond ownership and control. Specifically, VA notes that 14 of the 42 firms reviewed for the OIG report had been through the verification process VA used in response to the 2006 Act and claims that only 3 were determined to be ineligible based on ownership and control. VA’s statement is incomplete and misleading. According to the OIG, an additional 7 were determined to be ineligible for reasons that could be identified during a robust verification process. As a result, the OIG found 10 of 14 firms verified under VA’s 2006 Act process to be ineligible—an eligibility failure rate comparable to the overall eligibility failure rate cited in the report. 
With regard to the aforementioned 7 firms, the OIG determined they were ineligible because they were engaged in improper subcontracting practices, such as "pass-through" contracts. Pass-through contracts occur when businesses or joint ventures/partnerships list veterans or service-disabled veterans as majority owners of the business but, contrary to program requirements, the non-veteran-owned business either performed or managed the majority of the work and received a majority of the contracts' funds. Given that the firms being reviewed by the OIG already had existing contracts in place, the OIG was able to identify the pass-through contracts by conducting site visits and reviewing business documentation, the same steps that VA claims are taken during the verification process it implemented in response to the 2010 Act. While we acknowledge that it is difficult to identify pass-through contracts for applicants to the program who do not have any preexisting contracts, VA should be conducting such a review for those firms that have contracts in place. As we have noted in past reports, VA's fraud prevention controls should include detection and monitoring measures to assure that firms are completing the work required of an SDVOSB contract. Second, VA disagrees that it provided numerous conflicting statements to us regarding its verification efforts, stating that the verification process has evolved and that VA faces technical limitations related to its case-management system. While we acknowledge these concerns, it is important to note that VA did not provide us with any explanation as to its evolving priorities during the course of our audit and instead repeatedly sent us contradictory information without any clarification. Moreover, not all of the conflicting statements VA made can be attributed to inadequacies in its case-management system or to evolving priorities. 
Specifically, the information we received during the course of our audit work changed so significantly over such a short period of time that the evidence GAO collected does not support VA's assertion that it "knows how many firms have been verified" and can "track individual firms," as VA claims in its agency comment letter. Examples of the conflicting statements we received include the following: Removal of firms: On April 23, 2012, VA told us that about 900 SDVOSBs and VOSBs listed in VetBiz were targeted for removal because they had not been verified under the 2010 Act process. By April 27, 2012, this number increased to approximately 3,500 SDVOSBs and VOSBs. On May 2, 2012, we received two more differing accounts of SDVOSBs and VOSBs targeted for removal (2,660 firms and 2,646 firms) in the same email. Implementation of the 2010 Act process: On February 16, 2012, VA told us that it continued to verify firms under the process implemented under the 2006 Act between January and May 2011. Then, on April 23, 2012, VA told us that it stopped verifying firms under its 2006 Act process in February 2011 and began verification under its 2010 Act process at the same time. Next, on May 12, 2012, VA told us that it stopped verifying firms under the 2006 Act process in January 2011 and began verifying under the 2010 Act at the end of December 2010. In the same communication, VA told us that no firm was approved under its 2006 Act process after February 2011. But on May 21, 2012, VA sent us a list of firms and verification dates showing that multiple firms were last verified under its 2006 Act process past February 2011, with at least two firms verified under its 2006 Act process as late as May 2011. Finally, VA stated it believed all previous GAO recommendations issued in October 2011 should be closed. For GAO to close a recommendation, it must be implemented or actions must have been taken that essentially meet the recommendation's intent. 
Further, the responsible agency must provide evidence, with sufficient supporting documentation, that the actions are being implemented adequately. By the end of our audit work, we were able to close 6 of the 13 recommendations that we issued to VA in October 2011 based on documentation VA provided demonstrating that the agency had taken specific actions in response to our recommendations. Although VA indicated that it would like to close out the remaining recommendations, it either did not demonstrate that it had taken an action to implement a recommendation or did not provide the supporting documentation needed to show that the recommendation was in fact implemented. We had several discussions with VA staff about our requirements for closing recommendations, the last occurring on June 22, 2012. Moreover, we noted in our report any progress VA has made with respect to each recommendation; the information VA provided in this letter had previously been acknowledged in our report. For the 7 recommendations that remain open after the issuance of this report, we will continue to seek from VA additional documentation necessary to demonstrate that implementation has occurred. At such time, we will close each recommendation, as appropriate. In addition, VA provided technical comments, which we addressed as appropriate. We provide annotated responses to VA’s more detailed comments in appendix I. In written comments received through e-mail, SBA stated that it is committed to eliminating fraud, waste, and abuse in all of its programs including the government-wide SDVOSB program. In addition, SBA stated that it maintains a “robust and thorough” protest and appeal process. However, as noted in our report, SBA’s bid-protest process alone—that is, without upfront eligibility verification and other related measures—cannot provide reasonable assurance that only legitimate firms are awarded SDVOSB contracts. 
In addition, five new case studies developed for this report highlight instances of fraud and abuse. SBA disagreed with the draft report’s portrayal of actions taken against the firms that were the subject of the 10 case studies developed as part of our October 2009 report. We revised our report where appropriate. SBA also stated that it had taken actions against firms in addition to those cited in our case studies, but did not provide specific examples. Finally, SBA stated that it was implementing training to help its staff identify fraud and abuse and working to improve its referral process and collaboration with other agencies. Such efforts could help reduce the SDVOSB program’s vulnerability. However, these efforts would affect only SBA’s investigation and prosecution efforts, and not prevention, detection, and monitoring. If the government-wide program included measures to prevent, detect, and monitor fraud in the SDVOSB program, SBA could be more confident that the billions of dollars meant to provide contracting opportunities to our service-disabled veteran entrepreneurs make it to the intended beneficiaries. We are sending copies of this report to interested congressional committees, the Administrator of SBA, the Secretary of Veterans Affairs, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you have any questions concerning this report, please contact Richard J. Hillman at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 1. We clarified the report to indicate what the Department of Veterans Affairs (VA) Office of Inspector General (OIG) reported on its findings in 2011 and also to indicate that the report includes all firms in VetBiz, not just those verified under the Veterans Benefits, Health Care, and Information Technology Act of 2006 (2006 Act) process. 
The remainder of VA’s comments related to the OIG report are inaccurate, based on our review of the report and discussions with VA’s OIG staff. See the Agency Comments and Our Evaluation section of this report for more detail. 2. In the final report, we deleted the draft report’s discussion of information about the Center for Veterans Enterprise (CVE) being responsible for helping veterans who are interested in forming or expanding their own small businesses. 3. Our report’s characterization of the Veterans Small Business Verification Act (2010 Act), part of the Veterans’ Benefits Act of 2010, is correct and we did not make associated changes to the report. While VA’s recommended change points out that VA removed firms that self-represented or had expired eligibility periods, these categories of firms are included by the “all unverified businesses” language in the existing report language. 4. We deleted the sentence stating that SDVOSBs are required to receive a portion of government-wide contractual dollars annually. 5. We have revised our draft report to note that according to VA, (1) the lack of a comprehensive case-management system has created the need for aggregate workarounds and resulted in inconsistent aggregate reporting, (2) the limitations of the case-management system make it difficult to track the inventory of firms, and (3) as the limitations of the case-management system increase over time, the potential of CVE to lose track of how many firms have been verified also increases. We also acknowledge VA’s assertion that its verification priorities have evolved over time. However, not all of the conflicting statements VA made can be attributed to inadequacies in its case-management system or to evolving priorities. One of the many examples relates to the December 2010 request for documentation mentioned in the 2010 Act. 
Specifically, on April 23, 2012, VA told us that between late March 2012 and early April 2012 it had removed over 3,000 SDVOSBs and VOSBs because these firms had failed to provide requested business documentation. We asked whether the firms removed in April 2012 had been sent this request. In response, VA told us that the firms removed in April 2012 did not receive the December 2010 document. Then on May 12, 2012, VA told us the firms had in fact been sent the December 2010 letter. Later, on June 20, 2012, VA told us that it did not send the December 2010 letter to all firms listed in VetBiz at the time to avoid a flood of applications. In its agency comments, VA states that the 2010 Act did not require it to send all firms listed in VetBiz in December 2010 a request for documentation if the firms had been verified under the 2006 Act and this verification had not yet expired. 6. We revised the text in our draft report to more clearly reflect that thousands of SDVOSBs listed as eligible in VetBiz received millions of dollars in contract obligations even though they had not been verified under the more-thorough process that VA implemented in response to the 2010 Act. VA's recommended changes also suggest that firms that were verified under the 2006 Act process could not be immediately reverified under the more-thorough 2010 Act process because, in addition to resource-allocation priorities, VA was limited by the requirements of 38 C.F.R. § 74.15(c). However, we note that VA has latitude under the law to modify its own regulations as necessary to ensure that only valid SDVOSBs are included in VetBiz. Furthermore, VA's recent decision to amend 38 C.F.R. § 74.15 and extend the VetBiz eligibility term from 1 year to 2 years appears to be a self-created impediment to ensuring all firms expeditiously undergo the more-thorough 2010 Act process. 7. 
We revised the text in our report to reflect that the 2010 Act required VA to notify all unverified firms about the need to apply for verification. 8. The language VA objects to concerning VA's prioritization of verifications under the 2010 Act process is taken directly from documentation provided by VA during the course of our audit. Accordingly, we made no changes to the report. 9. The language VA objects to concerning removal of firms is taken directly from oral and written statements made by VA during the course of our audit. Accordingly, we made no changes to the report. 10. The firms mentioned in this footnote are related to one of the new cases we reviewed as a result of allegations we received from confidential informants. These firms were not verified under the process implemented under the 2010 Act, and we determined that they were in fact ineligible for the SDVOSB program because the firms' operating agreements allowed the two minority owners to control the firms, rather than the service-disabled veteran. These firms received approximately $16 million in VA SDVOSB set-aside and sole-source contract obligations from October 2010 to December 2011. Accordingly, we made no changes to the report. 11. We revised our report to make clear that we were referring to verification using the processes implemented under the 2010 Act. 12. We received conflicting statements from VA as to which firms received the December 2010 notification letter and have revised the text to clearly reflect this fact. 13. We revised the text in our report to more clearly reflect that thousands of potentially ineligible firms remain listed in VetBiz because they have not been verified under the more-thorough process implemented for the 2010 Act. While these firms have been verified under the 2006 Act process, past audits show the potential risk of providing SDVOSB contracts to firms reviewed under this process. 
VA's recommended change does not acknowledge this risk and is therefore incomplete. Moreover, our statements that were related to the number of firms not verified under the requirements of the 2010 Act, the dollar amounts those firms received, and the number of firms VA planned to remove were all supported by evidence and were accurate at the close of our audit work. We have clarified the report to indicate that fact and included information on the requirements of the interim final rule VA implemented on June 27, 2012. Specifically, in our final report we have noted that the rule extends a firm's eligibility period to 2 years. We also note that VA interprets "verified" to include any firm that has been verified under either the 2006 Act or 2010 Act processes, meaning that this rule will allow thousands of firms to remain eligible for contracts even though they have not undergone the more-thorough process implemented under the 2010 Act. See the Agency Comments and Our Evaluation section of this report for a more-thorough discussion of this issue. 14. To this point, VA has not provided sufficient documentation to close the 7 recommendations that remain open. GAO will continue to work with VA to confirm the status of its efforts to address our recommendations and will close recommendations as long as necessary supporting evidence is provided. 15. Our report states that VA has made progress in the area of fraud-awareness training. However, VA has not provided any documentation to show that fraud-awareness training is being provided on a regular basis, as we recommended. Our recommendation will remain open until necessary evidence to close it is provided. Accordingly, we have not changed the language in our report. 16. 
The FAR and the VA Acquisition Regulations do not provide the Debarment Committee with specific processes and criteria for complying with the requirement in the 2006 Act to debar, for a reasonable period of time, firms and related parties that misrepresented their SDVOSB status. VA should provide additional guidance to the Debarment Committee on the specific process and criteria to use to debar firms as required by the 2006 Act. Accordingly, we have not changed the language in our report. 17. The recommendation requested that VA develop specific guidelines outlining the Debarment Committee's decision process to debar firms that misrepresent their SDVOSB status. VA needs to provide supporting documentation demonstrating that VA provided the Debarment Committee with the guidance outlining the decision process to debar firms that misrepresent their SDVOSB status. Accordingly, we have not changed the language in our report. 18. VA cites provisions of the FAR and the VA Acquisition Regulations containing guidance for continuing current contracts to firms that were found ineligible through the debarment process. However, our recommendation asked VA to develop procedures to remove SDVOSB contracts from ineligible firms. Accordingly, we have not changed the language in our report. 19. Our report acknowledges that VA advertises the debarments and prosecutions on the Debarment Committee, VA OIG, and CVE websites. However, our recommendation specifically asked for VA to formalize procedures to advertise debarments and prosecutions, and we have not received any documentation related to such procedures. Accordingly, we have not changed the language in our report. Service-Disabled Veteran-Owned Small Business Program: Governmentwide Fraud Prevention Control Weaknesses Leave Program Vulnerable to Fraud and Abuse, but VA Has Made Progress in Improving Its Verification Process. GAO-12-443T. Washington, D.C.: February 7, 2012. 
Service-Disabled Veteran-Owned Small Business Program: Additional Improvements to Fraud Prevention Controls Are Needed. GAO-12-205T. Washington, D.C.: November 30, 2011. Service-Disabled Veteran-Owned Small Business Program: Additional Improvements to Fraud Prevention Controls Are Needed. GAO-12-152R. Washington, D.C.: October 26, 2011. Service-Disabled Veteran-Owned Small Business Program: Preliminary Information on Actions Taken by Agencies to Address Fraud and Abuse and Remaining Vulnerabilities. GAO-11-589T. Washington, D.C.: July 28, 2011. Department of Veterans Affairs: Agency Has Exceeded Contracting Goals for Veteran-Owned Small Businesses, but It Faces Challenges with Its Verification Program. GAO-10-458. Washington, D.C.: May 28, 2010. Service-Disabled Veteran-Owned Small Business Program: Fraud Prevention Controls Needed to Improve Program Integrity. GAO-10-740T. Washington, D.C.: May 24, 2010. Service-Disabled Veteran-Owned Small Business Program: Case Studies Show Fraud and Abuse Allowed Ineligible Firms to Obtain Millions of Dollars in Contracts. GAO-10-306T. Washington, D.C.: December 16, 2009. Service-Disabled Veteran-Owned Small Business Program: Case Studies Show Fraud and Abuse Allowed Ineligible Firms to Obtain Millions of Dollars in Contracts. GAO-10-255T. Washington, D.C.: November 19, 2009. Service-Disabled Veteran-Owned Small Business Program: Case Studies Show Fraud and Abuse Allowed Ineligible Firms to Obtain Millions of Dollars in Contracts. GAO-10-108. Washington, D.C.: October 23, 2009.

The SDVOSB program provides federal contracting opportunities to business-owning veterans who incurred or aggravated disabilities in the line of duty. SBA administers the government-wide program, while VA maintains databases of veterans and SDVOSBs and oversees its own contracts. GAO has reported several times since 2009 that both programs were vulnerable to fraud and abuse and recommended improvements. 
In October 2010, Congress passed the Veterans Small Business Verification Act (2010 Act), part of the Veterans Benefits Act of 2010, to provide tools to VA to more thoroughly validate firms' eligibility before listing them in VetBiz, the database used by VA contracting officials to award SDVOSB contracts. GAO was asked to assess (1) VA's progress in addressing remaining vulnerabilities to fraud and abuse in its SDVOSB program and (2) actions taken by SBA or other federal agencies to improve government-wide SDVOSB fraud-prevention controls. GAO reviewed agency documentation and interviewed agency officials. GAO also investigated cases of alleged fraud and abuse. GAO did not project the extent of fraud and abuse in the program. The Department of Veterans Affairs (VA) Service-Disabled Veteran-Owned Small Business (SDVOSB) program remains vulnerable to fraud and abuse. VA has made inconsistent statements about its progress verifying firms listed in VetBiz using the more-thorough process the agency implemented in response to the Veterans Small Business Verification Act (2010 Act). In one communication, VA stated that as of February 2011, all new verifications would use the 2010 Act process going forward. However, as of April 1, 2012, 3,717 of the 6,178 SDVOSB firms (60 percent) listed as eligible in VetBiz had not been verified under the 2010 Act process. Of these 3,717 firms, 134 received $90 million in new VA SDVOSB set-aside or sole-source contract obligations from November 30, 2011, to April 1, 2012. While the 2010 Act did not include a deadline for verification using the more-thorough process, the presence of firms that have only been subjected to the less-stringent process that VA previously used represents a continuing vulnerability. 
VA's Office of Inspector General (OIG) reported that the less-stringent process was in many cases insufficient to establish control and ownership and in effect allowed businesses to self-certify as SDVOSBs with little supporting documentation. VA has taken some positive action to enhance its fraud prevention efforts by establishing processes in response to 6 of 13 recommendations GAO issued in October 2011, including conducting unannounced site visits to high-risk firms and developing procedures for referring suspicious SDVOSB applications to the OIG. VA has also begun action on some remaining recommendations, such as providing fraud awareness training and removing contracts from ineligible firms, though these procedures need to be finalized. Regarding the government-wide SDVOSB program, no action has been taken by agencies to improve fraud-prevention controls. Relying almost solely on firms' self-certification, the program continues to lack controls to prevent fraud and abuse. The Small Business Administration (SBA) does not verify firms' eligibility status, nor does it require that they submit supporting documentation. While SBA is under no statutory obligation to create a verification process, five new cases of potentially ineligible firms highlight the danger of taking no action. These firms received approximately $190 million in SDVOSB contract obligations. In one case, a firm found ineligible by VA continued to self-certify as an SDVOSB and received about $860,000 from the General Services Administration and the Department of the Interior. Further, the Department of Defense (DOD) OIG reported in 2012 that DOD provided $340 million to firms that potentially misstated their SDVOSB status. To address these vulnerabilities, GAO previously suggested that Congress consider providing VA with the authority necessary to expand its SDVOSB eligibility verification process government-wide. 
Such an action is supported by the fact that VA maintains the database identifying which individuals are service-disabled veterans and is consistent with VA's mission of service to veterans. However, the problems GAO identified with VA's verification process indicate that an expansion of VA's authority to address government-wide program problems should not be undertaken until VA demonstrates that its process is successful in reducing its own SDVOSB program's vulnerability to fraud and abuse. GAO recommends that VA take steps to ensure that all firms within VetBiz have undergone the 2010 Act verification process. VA generally concurred with the recommendation but expressed concern about how specific report language characterized its program. GAO made some changes to the report but continues to believe that the program remains vulnerable to fraud and abuse.