121

Future naval ship procurement : a case study of the Navy's next-generation destroyer / DDG 1000

Jaglom, Peter Stampfl January 2006 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2006. / Includes bibliographical references. / Cost growth and inefficiencies are a serious problem in almost all major U.S. defense procurement programs, and have existed for many years despite repeated efforts to control them. These problems are particularly virulent in the design and acquisition of new naval warships. If the Navy cannot bring its costs under control, it will not be able to afford the capabilities it needs to execute the nation's national security strategy. Several factors influence the cost growth of weapons procurement programs. Intentionally low estimates can help convince Congress to commit to programs that are actually very expensive. Bureaucratic politics can cause the Navy to spend money on superfluous features unjustified by strategic requirements. Private industry can push new, expensive technology on the Navy. Members of Congress can include pork-barrel provisions to bring more money to their constituents, often without national interest justifications. This thesis evaluates the development of the DDG 1000, the Navy's next-generation destroyer, and the dramatic change that occurred to the design of that ship during its development. Based on that analysis, it makes recommendations for the future of the DDG 1000 and for naval ship procurement more generally. / (cont.) The thesis finds that though a new ship was justified in the post-Cold War world, the actual design of that ship was determined by bureaucratic politics and the ship's procurement plan was determined by pork-barrel politics, neither of which properly served the nation's strategic interests. The thesis recommends that the DDG 1000 be used solely as a technology demonstration platform, reducing procurement spending while salvaging its technological advances; that the DDG 1000 be procured from a single shipyard; that the Navy design a smaller and cheaper warship to serve the needs of the future fleet; and that the nation implement specific measures to reduce the influence of bureaucratic politics and pork-barrel politics on resource allocation and procurement. / by Peter Stampfl Jaglom. / S.M.
122

The economic effects of surface transport deregulation

Li, Yong, 1960- January 2002 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2002. / Includes bibliographical references (p. 71-72). / Over the past two decades, the deregulation of surface transport at both national and international levels has gathered momentum, particularly within the United States and European Union. The structural and performance changes associated with transport deregulation generated substantial redistribution of wealth among carriers, labor, shippers, and final customers and dramatically altered the costs and organization of transportation services. Many of these consequences were anticipated in the debate over deregulation; others have emerged during the regulatory transition. In general, economic deregulation has led to net social benefits. This thesis will discuss the origin of transportation regulation and the forces for regulatory reform. The effects of the removal of economic controls are assessed. It also examines the issues emerging after deregulation and the possibility of re-regulation in an effort to enhance safety and reduce the environmental impact of surface transport. / by Yong Li. / S.M.
123

Regulating mercury with the Clear Skies Act : the resulting impacts on innovation, human health, and the global community

Sweeney, Meghan (Meghan Kathleen) January 2006 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2006. / Includes bibliographical references (leaves 81-87). / The 1990 Clean Air Act Amendments require the U.S. EPA to control mercury emission outputs from coal-burning power plants through implementation of MACT, Maximum Achievable Control Technology, standards. However, in 2003 the Bush Administration revealed an alternative and controversial regulatory strategy for mercury, developing a cap and trade emissions credit trading program under the Clear Skies Initiative. Although emissions trading was proven to be a successful regulatory strategy for sulfur dioxide through the 1992 Acid Rain Program, the uniquely dangerous properties of mercury make this market-based regulation risky for certain vulnerable segments of the population. Since its unveiling, the Clear Skies cap and trade approach has been criticized for being too industry-friendly and inadequately setting limits on mercury emissions. Current challenges to the Clear Skies approach to the regulation of mercury claim that not only is it illegal under the Clean Air Act, but that it inhibits innovation and undermines an international strategy to reduce anthropogenic mercury emissions. This thesis evaluates the critiques of Clear Skies and the reasoning given by the EPA in defense of the regulation. / (cont.) Recent academic studies and a comparison case study with the Acid Rain Program are used to discuss the probable effects of Clear Skies on mercury reduction. The main questions addressed in the thesis are: 1) what is the motivation for Clear Skies? 2) what is the legal basis for the Initiative? 3) what are the potential failures of Clear Skies in protecting against mercury exposure? 4) what will be the resulting impact of Clear Skies on technological innovation? and 5) how does Clear Skies compare with international mercury reduction strategies? / by Meghan Sweeney. / S.M.
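
The trade-off at the heart of this debate, lower aggregate abatement cost versus possible local concentration of a persistent toxin, can be sketched with a toy two-plant trading model. The sketch below is illustrative only and is not drawn from the thesis; the abatement-cost curves and all numbers are assumptions.

```python
# Toy two-plant cap-and-trade model (illustrative assumptions, not from the thesis).
# Each plant can remove q tons of mercury at quadratic cost c * q**2; a cap forces
# a fixed total abatement, met either by a uniform rule or by allowance trading.
C1, C2 = 2.0, 8.0              # assumed abatement-cost coefficients
BASELINE = (10.0, 10.0)        # assumed uncontrolled emissions per plant (tons)
CAP = 12.0                     # assumed total allowed emissions under the cap

def total_cost(q1, q2):
    return C1 * q1**2 + C2 * q2**2

required = sum(BASELINE) - CAP          # total abatement the cap forces: 8 tons

# Uniform (technology-standard-like) rule: both plants abate the same amount.
uniform = (required / 2, required / 2)

# Trading outcome: marginal costs equalize (2*C1*q1 = 2*C2*q2), so the cheap
# abater does more of the work and sells allowances to the expensive one.
q1 = required * C2 / (C1 + C2)
trading = (q1, required - q1)

for label, (a1, a2) in [("uniform rule", uniform), ("trading     ", trading)]:
    e1, e2 = BASELINE[0] - a1, BASELINE[1] - a2
    print(f"{label} total cost = {total_cost(a1, a2):6.1f}, "
          f"remaining emissions: plant 1 = {e1:.1f} t, plant 2 = {e2:.1f} t")

# Trading cuts the aggregate cost (102.4 vs 160.0 here) but leaves the high-cost
# plant emitting more locally -- the "hotspot" concern raised for mercury.
```
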
124

Estimating the economic cost of sea-level rise

Sugiyama, Masahiro January 2007 (has links)
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2007. / Includes bibliographical references (p. 74-79). / To improve the estimate of economic costs of future sea-level rise associated with global climate change, the thesis generalizes the sea-level rise cost function originally proposed by Fankhauser, and applies it to a new database on coastal vulnerability, Dynamic Interactive Vulnerability Assessment (DIVA). With the new cost function, a new estimate of the cost present values over the 21st century is produced. An analytic expression for the generalized sea-level rise cost function is obtained to explore the effect of various spatial distributions of capital and nonlinear sea-level rise scenarios. With its high spatial resolution, DIVA shows that capital is usually highly spatially concentrated along a nation's coastline, and that previous studies, which assumed linear marginal capital loss for lack of this information, probably overestimated the fraction of a nation's coastline to be protected and protection cost. In addition, the new function can treat a sea-level rise that is nonlinear in time. As a nonlinear sea-level rise causes more costs in the future than an equivalent linear sea-level rise scenario, using the new equation with a nonlinear scenario also reduces the estimated damage and protection fraction through discounting of the costs in later periods. Numerical calculations are performed, applying the cost function to DIVA and socio-economic scenarios from the MIT Emissions Prediction and Policy Analysis (EPPA) model. / (cont.) In the case of a classical linear sea-level rise of one meter per century, the use of DIVA generally decreases the protection fraction of the coastline, and results in a smaller protection cost because of high spatial concentration of capital. As in past studies, wetland loss continues to be dominant for most regions, and the total cost does not decline appreciably where wetland loss remains about the same. The total cost for the United States is about $320 billion (in 1995 U.S. dollars), an estimate comparable with other studies. Nevertheless, capital loss and protection cost may not be negligible for developing countries, in light of their small gross domestic product. Using realistic sea-level rise scenarios based on the Integrated Global System Model (IGSM) simulations substantially reduces the cost of sea-level rise for two reasons: a smaller rise of sea level in 2100 and a nonlinear form of the path of sea-level rise. As in many of the past studies, the thesis employs conventional but rather unrealistic assumptions: perfect information about future sea-level rise and neglect of the stochastic nature of storm surges. The author suggests that future work should tackle uncertain and stochastic sea-level rise damages. / by Masahiro Sugiyama. / S.M.
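
One step in that argument, that a sea-level path which is nonlinear in time carries a lower present value of cost than a linear path reaching the same level in 2100, can be illustrated with a small discounting sketch. This is a hedged toy calculation, not the thesis's model: the 3% discount rate, the quadratic path, and the assumption that annual cost is proportional to the sea level in that year are illustrative choices, not DIVA or Fankhauser values.

```python
# Illustrative sketch (assumptions, not thesis results): why a nonlinear
# sea-level rise path that reaches the same 1 m in 2100 has a lower present
# value of cost than a linear path, once future costs are discounted.

YEARS = range(2001, 2101)
RATE = 0.03                      # assumed constant discount rate
DAMAGE_PER_M = 1.0               # assumed cost per metre of rise in a year (arbitrary units)

def linear(year):                # 1 m per century, evenly spread
    return (year - 2000) / 100.0

def quadratic(year):             # same 1 m by 2100, but back-loaded
    return ((year - 2000) / 100.0) ** 2

def present_value(path):
    pv = 0.0
    for year in YEARS:
        annual_cost = DAMAGE_PER_M * path(year)         # toy damage model
        pv += annual_cost / (1 + RATE) ** (year - 2000)  # discount back to 2000
    return pv

print("linear path PV   :", round(present_value(linear), 2))
print("quadratic path PV:", round(present_value(quadratic), 2))
# The quadratic (accelerating) path concentrates rise late in the century,
# where discounting shrinks its weight, so its present value is smaller.
```
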
125

Why did the solar power sector develop quickly in Japan?

Rogol, Michael G January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2007. / Includes bibliographical references (leaves 175-181). / The solar power sector grew quickly in Japan during the decade 1994 to 2003. During this period, annual installations increased 32-fold from 7MW in 1994 to 223MW in 2003, and annual production increased 22-fold, from 16MW in 1994 to 364MW in 2003. Over these years, the growth of Japan's solar power sector outpaced the global industry's growth, which is puzzling because Japan was in a recession during this period. At the same time, the U.S. was experiencing considerable economic expansion, yet the U.S. solar industry's growth was significantly slower than Japan's. This thesis focuses on the rapid development of Japan's solar power sector in order to address the central question, "Why did the solar power sector develop quickly in Japan?" To address this question, this thesis develops two comparative case studies: (1) Japan's solar power sector: 1994 to 2003 and (2) U.S. solar power sector: 1994 to 2003. These case studies provide detailed descriptions of the historical development of the solar power sectors in Japan and the U.S. based on data collected from International Energy Agency's PVPS program, Japan's New Energy Development Organization and the U.S. Energy Information Administration, among other sources. / (cont.) A comparative analysis of these cases suggests that the rapid growth of Japan's solar power sector was enabled by interplay among (a) decreasing gross system prices, (b) increasing installations, (c) increasing production and (d) decreasing costs. The second-order explanation for this interplay is that a mosaic of factors led to (a) decreasing prices, (b) increasing installations, (c) increasing production and (d) decreasing costs. This mosaic included the extrinsic setting (solar resource, interest rate, grid price), industrial organization (including the structure of the electric power sector and the structure within the solar power sector), demand-side incentives that drove down the "gap" and provided a "trigger" for supply-side growth, and supply-side expansion that enabled significant cost reductions and price reductions that more than offset the decline in demand-side incentives. Within this complex interplay of numerous factors, roadmapping and industry coordination efforts played an important role by shaping the direction of Japan's solar power sector. This thesis concludes with "lessons learned" from Japan's solar power sector development, how these lessons may be applicable in a U.S. context and open questions for further research. / by Michael G. Rogol. / S.M.
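
The growth figures quoted above imply compound annual growth rates of roughly 40-50% per year, which is the arithmetic behind the "32-fold" and "22-fold" descriptions. A minimal check, treating 1994-2003 as nine years of growth (an assumption about how the endpoints are counted):

```python
# Quick check of the growth figures quoted in the abstract (1994 -> 2003).
def cagr(start_mw, end_mw, years):
    """Compound annual growth rate implied by the endpoints."""
    return (end_mw / start_mw) ** (1 / years) - 1

installations = cagr(7, 223, 9)    # annual installations, MW
production = cagr(16, 364, 9)      # annual production, MW

print(f"installations: {223 / 7:.1f}-fold, roughly {installations:.0%} per year")
print(f"production:    {364 / 16:.1f}-fold, roughly {production:.0%} per year")
```
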
126

Westinghouse PWR : the rise and fall of a dominant design in the electric power industry / Westinghouse Pressurized Water Reactor : the rise and fall of a dominant design in the electric power industry

Barrientos, Carlos J. (Carlos Jose), 1966- January 2002 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2002. / Includes bibliographical references (p. [91]-94). / In the early 1950s the electric power industry was shaken by the introduction of a new type of energy source that promised to be the most cost-efficient way of producing electricity. Counting on the promise of huge profits -attributed to large economies of scale- governments and utilities rushed to develop and construct nuclear power plants. From 1952 to 1985, four hundred units were built all across the industrialized world, from the United States to the Soviet Union and from Sweden to Taiwan. Nuclear power accounted for the most impressive capacity build-up in 100 years of electricity history. As was the case with most of the strategic technologies developed in the 20th century, the first nuclear prototypes were developed in the United States, with the sponsorship of the federal government. The funding ended up mainly in R&D expenditures administered by the Atomic Energy Commission (AEC). As a result of such a strong commitment to nuclear, the United States quickly became the technological leader in the world. Such leadership meant not only building most of the nuclear power plants in its territory -125 out of 480- but establishing industry standards in many areas: from design, licensing and construction to commercial operation, and decommissioning. This research explores one of the most substantial legacies of the U.S. nuclear power undertaking: the PWR reactor developed by Westinghouse. The research examines the story of the PWR from its origins in the drawing rooms of the Bettis Laboratories in the 1950s to its rapid adoption in the 1960s as the dominant design in the industry. The main goal of the research is to describe the dynamics of the process while building a workable framework of analysis. The first part of the research digs down into plants' data. Using a large database of reactors' records, the dominant design hypothesis is tested thoroughly. The analysis confirms that the early design proposed by Westinghouse quickly became the standard of the industry. Nearly two-thirds of all reactors built in the world have their roots in the early Westinghouse design. The reasons for the emergence of such a dominant design are numerous: (a) the influence of military nuclear programs, such as the nuclear submarine; (b) the monopoly of the AEC regarding nuclear secrecy, technology transfer and industry partnership; (c) the role of the cold war as a driving force in nuclear and space policy; and (d) the obscure alliance between Westinghouse and GE regarding competition on electrical components, notably the large steam-turbines used in nuclear power plants. The emergence and consolidation of a dominant design in any industry has many consequences. In nuclear power, some of the relevant issues are standardization, learning effects, economies of scale, and regulation. All these issues are important to study not only in the United States context. The international consequences for nuclear power programs and policies of countries such as Japan and France are vast. Since most dominant designs have a rise and a fall, the research includes an analysis of why in the mid-1980s the Westinghouse PWR collapsed, along with the entire nuclear power industry. The thesis is that incumbent firms that have successful dominant designs in the market very often fail to be aware of subtle but disruptive shifts in customer needs. Westinghouse was busy building large and complex units in order to increase efficiency and profits, while customer needs were moving in a radically different direction, towards less investment risk. / by Carlos J. Barrientos. / S.M.
127

Prospects for increased low-grade bio-fuels use in home and commercial heating applications

Pendray, John Robert January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2007. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 108-111). / Though we must eventually find viable alternatives for fossil fuels in large segments of the energy market, there are economically attractive fossil fuel alternatives today for niche markets. The easiest fossil fuels to replace are those with the highest cost and that provide the lowest-grade energy. Stationary heating with oil is one example of low-quality use of a high-quality fuel. Solid biomass fuels such as wood pellets, switchgrass pellets, and corn can displace up to 2% of the U.S. petroleum market by displacing oil used in home and commercial heating. Current technologies are inexpensive enough to enable consumers to save money by heating with solid bio-fuels instead of oil. Although these systems are currently difficult to operate, future systems can increase usability and potentially further reduce costs. Key developments for future adoption are fuel handling and ash cleaning automation as well as emissions reductions. These technologies exist in other industries, such as agriculture, but have not yet been integrated into U.S. solid bio-fuel heating systems. Solid bio-fuel heating is more effective at reducing environmental damage and increasing energy security than corn-ethanol. Net CO2 emissions from solid bio-fuel heating are 75% lower than oil heating, in contrast to the nearly equivalent CO2 emissions between corn-ethanol and gasoline. / (cont.) The total solid bio-fuel system evaluated included fuel feedstock cultivation, harvesting, processing, and processed fuel distribution. Solid bio-fuel heating also enables the use of cellulosic feedstocks today. Solid bio-fuel heating also displaces twice the oil of corn-ethanol for the same amount of corn consumed, displacing 7 to 11 times the petroleum consumed during solid bio-fuel production and distribution. Solid bio-fuels are also less likely to negatively impact the food supply, because heating oil demand matches biomass fuel supply more closely than transportation fuel demand. This decreases the likelihood of price shocks in the food supply. This paper does not advocate using food for fuel, but does show that burning corn for heat is a more energy- and cost-effective use of the limited food supply than corn-ethanol. Low-grade biomass fuels provide the ecological benefits of alternative fuels while economically benefiting consumers. Solid bio-fuel heating is economically competitive with heating oil, utilizes existing infrastructures and technologies, and provides measurable reductions in oil consumption and greenhouse gas emissions. / by John Robert Pendray. / S.M.
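
The "burning corn beats converting it to ethanol" comparison is essentially an energy-accounting argument. The back-of-envelope sketch below is illustrative only and is not the thesis's analysis: the energy contents, yields, and appliance efficiencies are round-number assumptions, and it compares only gross delivered energy, whereas the thesis also nets out the fossil energy used to produce and distribute each fuel, which is what pushes its ratio toward two.

```python
# Rough energy accounting for one bushel of corn: burn it for heat vs. convert
# it to ethanol. All numbers are round illustrative assumptions, not thesis data.
CORN_BTU_PER_BU = 390_000        # shelled corn, moderate moisture
ETHANOL_GAL_PER_BU = 2.8
ETHANOL_BTU_PER_GAL = 76_000
GASOLINE_BTU_PER_GAL = 115_000
HEATING_OIL_BTU_PER_GAL = 138_500
CORN_STOVE_EFF = 0.80            # assumed appliance efficiencies
OIL_FURNACE_EFF = 0.85

# Route 1: burn the corn in a stove; how much heating oil delivers the same useful heat?
useful_heat = CORN_BTU_PER_BU * CORN_STOVE_EFF
oil_displaced_gal = useful_heat / (HEATING_OIL_BTU_PER_GAL * OIL_FURNACE_EFF)

# Route 2: ferment to ethanol and displace gasoline on an energy basis.
ethanol_btu = ETHANOL_GAL_PER_BU * ETHANOL_BTU_PER_GAL
gasoline_displaced_gal = ethanol_btu / GASOLINE_BTU_PER_GAL

print(f"heating route : ~{oil_displaced_gal:.1f} gal of heating oil per bushel")
print(f"ethanol route : ~{gasoline_displaced_gal:.1f} gal of gasoline per bushel")
print(f"gross ratio   : ~{oil_displaced_gal / gasoline_displaced_gal:.1f}x")
# Netting out the fossil inputs to ethanol production widens this gap further.
```
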
128

Weather forecasting : the next generation : the potential use and implementation of ensemble forecasting / Potential use and implementation of ensemble forecasting

Goto, Susumu January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2007. / Includes bibliographical references (p. 102-112). / This thesis discusses ensemble forecasting, a promising new weather forecasting technique, from various viewpoints relating not only to its meteorological aspects but also to its user and policy aspects. Ensemble forecasting was developed to overcome the limitations of conventional deterministic weather forecasting. However, despite the achievements of ensemble forecasting techniques and efforts to put them into operation, the implementation and utilization of ensemble forecasting seem limited in society. This thesis studies meteorological aspects, potential uses and value, and policy issues to give an overall picture of ensemble forecasting and suggests directions for measures to increase its utilization. Conventional weather forecasting cannot achieve perfect forecasts due to the chaotic nature of the atmosphere and imperfect analyses of the current atmosphere. The imperfect description of numerical weather prediction models in the forecasting process is another source of the disparity between forecasts and the real atmosphere. Conventional weather forecasting offers only a single scenario, which sometimes fails to predict the actual weather; ensemble forecasting provides probabilistic weather forecasts based on multiple weather scenarios. / (cont.) This thesis also illustrates potential uses and values of ensemble forecasting. Ensemble forecasting could help disaster management officers prepare for probable hazardous conditions. It is also useful for risk management in business. Using concepts of information values and real options, this thesis demonstrates that ensemble forecasting can be valuable in decision making. Potential uses of ensemble forecasting in the agriculture and wind electricity sectors are also discussed. Implementation of ensemble forecasting entails substantial costs, so collaboration within weather sectors and with non-weather sectors is key. Relationships between public, private, and academic sectors in the weather world are analyzed in this thesis. The public-private relationship seems characterized by dilemmas in both sectors. As for the public-academic relationship, there are different situations in the US and in Japan due to differences in research environments and policies. International collaboration and partnerships between weather sectors and non-weather sectors are also discussed. If all these collaborations among the sectors work well, then ensemble forecasting can give rise to a new generation of weather forecasting. / by Susumu Goto. / S.M.
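
One standard way to make the "value of probabilistic forecasts" argument concrete is the textbook cost-loss decision model, in which a user who can pay a cost C to protect against a potential loss L should protect whenever the forecast probability exceeds C/L. The sketch below is a generic illustration of that idea, not a reproduction of the thesis's case studies; the costs, the uniform distribution of forecast probabilities, and the assumption of a perfectly calibrated forecast are all simplifying assumptions.

```python
# Illustrative cost-loss sketch (standard textbook model, not the thesis's case study).
# A user pays cost C to protect against an adverse weather event, or risks loss L
# if the event occurs unprotected. With a probabilistic (ensemble) forecast p,
# the expected-cost-minimizing rule is: protect whenever p > C / L.
import random

random.seed(0)
C, L = 10.0, 100.0            # assumed protection cost and potential loss
THRESHOLD = C / L             # 0.1

def expected_cost(decide, trials=100_000):
    """Average cost when decide(p) chooses protection given forecast probability p."""
    total = 0.0
    for _ in range(trials):
        p = random.random()            # forecast probability issued for this day
        event = random.random() < p    # assume the forecast is well calibrated
        if decide(p):
            total += C
        elif event:
            total += L
    return total / trials

deterministic = lambda p: p > 0.5          # single-scenario yes/no forecast, approximated
probabilistic = lambda p: p > THRESHOLD    # uses the full probability information

print("deterministic forecast, average cost:", round(expected_cost(deterministic), 2))
print("probabilistic forecast, average cost:", round(expected_cost(probabilistic), 2))
# The probability-based rule protects more often for this low C/L user and is
# cheaper on average -- one way ensemble output creates decision value.
```
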
129

Environmental technology and policy development in a regional system : transboundary water management and pollution prevention in southeastern Europe

Electris, Christi January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2007. / Includes bibliographical references (p. 256-270). / In order to surmount the barriers to transboundary integration and coordination of environmental technology and regulatory policy in Southeastern Europe, the environmental capabilities and needs of the region are discussed, and a regional cooperation and coordination systems framework is developed. The thesis presents a case study of transboundary water resource management in the Mesta/Nestos River Basin, shared by Bulgaria and Greece, in order to understand the coordination problems between a particular locality's level of integration in environmental technology development and use, and environmental regulatory policy, as well as the barriers to cooperation between two localities sharing a transboundary resource. For the case study, the physical characteristics and environmental stresses on the basin are described in detail. Next, the policy governing local water resource management and environmental technology development is reviewed in terms of national laws and regulations, the bilateral diplomatic agreements, and the EU framework that drives much of the current activity in the basin today. Finally, the gaps in current policy and the barriers to coordinating water resource-related technology policy and environmental regulatory policy development are analyzed. The end result is a set of recommendations pertaining to the particular basin, but which can be generalized to other basins in the region. The focus is primarily on coordination in both countries at the local and transboundary levels, but coordination is also explored within the context of the nation-wide and region-wide levels. / (cont.) Through this narrow case study, insight is gained as to how environmental technology policy can be coordinated with regulatory policy to surmount the obstacles faced in water resource management and the broader context, and how the institutional and legal framework in place affects the regulatory scheme and in turn the technology placement in both countries. / by Christi Electris. / S.M.
130

Securing the safety net : applying manufacturing systems methods towards understanding and redesigning a hospital emergency department / Applying manufacturing systems methods towards understanding and redesigning a hospital emergency department

Peck, Jordan S. (Jordan Shefer) January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2008. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 85-88). / Emergency Departments have been referred to as the "Safety Net" of our Healthcare system. This is because of their ability to catch all patients who would otherwise slip through the system due to lack of funds, insurance, time, transportation, knowledge, etc. Because of this, as demand for health treatment increases, the occurrence of crowding in our nation's emergency departments is also increasing. At the same time hospitals are being expected to perform more with lower funding. Observation of a hospital emergency department yields similarities between the emergency department and a manufacturing system. This is not a completely new concept, yet there have been barriers to adopting manufacturing system practices in healthcare systems due to differences in culture, economics, politics, and the nature of the system itself. The focus of this thesis is to select manufacturing systems methods and apply them to an emergency department. This application is done with an understanding of the fundamental differences between the two systems. The first applied method is Axiomatic Design, a system design method that clearly maps out the functional requirements of a system to design solutions more efficiently. After Axiomatic Design is applied to show that it can be used to discover and describe problems in an Emergency Department, the specific problem of patient flow is selected. Discrete Event Simulation is used in order to analyze patient flow in the Emergency Department. This results in actionable changes in the operations of an emergency department fast track. One significant actionable change is the creation of a new index for assigning patients a level based on their expected time in the Emergency Room, to be used in conjunction with the current index, which is based on acuity level. The purpose of this exercise is to show that manufacturing methods can be applied in an emergency department/healthcare system while taking the differences between the two systems into account. / by Jordan S. Peck. / S.M.
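
Discrete Event Simulation of a fast track can be prototyped in a few dozen lines. The sketch below is a generic single-queue, multi-bay model written for illustration; the arrival rate, treatment-time distribution, staffing levels, and the time-in-system metric are assumptions, not the thesis's data or its actual simulation.

```python
# Minimal discrete-event simulation of an ED fast track (illustrative assumptions only).
# Patients arrive at random, wait for a free treatment bay, are treated, and leave.
# Varying the number of bays shows the kind of what-if analysis a DES model supports.
import heapq
import random

def simulate(n_bays, arrival_rate=5.0, mean_treatment=0.5, horizon=1000.0, seed=1):
    """Exponential interarrival times (patients/hour) and treatment times (hours)."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]  # (time, kind) heap
    waiting = []            # arrival times of patients not yet in a bay
    busy = 0                # bays currently occupied
    total_time, served = 0.0, 0

    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            waiting.append(t)
        else:               # a treatment bay frees up
            busy -= 1
        while waiting and busy < n_bays:   # start treatment whenever a bay is free
            arrived = waiting.pop(0)
            busy += 1
            finish = t + rng.expovariate(1.0 / mean_treatment)
            heapq.heappush(events, (finish, "departure"))
            total_time += finish - arrived   # waiting time plus treatment time
            served += 1
    return total_time / served

for bays in (3, 4, 5):
    print(f"{bays} bays: average time in system ~{simulate(bays):.2f} hours")
```
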
