131

Quality capability self-diagnosis : a multicriteria evaluation approach

Huang, Kuei-Jung 24 January 1994
Quality Capability Self-diagnosis is a convenient and economical way to assess the performance of an operating quality system, as well as a basis for initiating necessary corrective actions essential to quality improvement and preparation for certification. This paper describes the research leading to the development of a cost-effective and systematic methodology for performing quality capability self-diagnosis. ISO 9000 series standards and the methods used to implement multicriteria system evaluation are employed to provide a sound basis for the development of this quality capability self-diagnosis scheme (QCSDS). The QCSDS has been developed to assist manufacturers in the conduct of quality assurance audits using internal personnel. The methodological structure of QCSDS is presented in two major parts: a regular model and a refined model. The regular model includes: (1) development of quality system auditing criteria, (2) selection of a suitable checklist developed from ISO 9000 series requirements, (3) development of importance weights for applicable criteria, (4) performance measurement, (5) quality system rating, (6) analysis of quality auditing results, and (7) suggestions for improvement. The refined model is developed to strengthen the capability and reliability of the model for confirming the effectiveness of an operating quality system, using quality cost analysis, utility theory and regression analysis. A decision support system (QCSDDSS) based on the Quattro Pro spreadsheet is incorporated to facilitate the application of QCSDS. The QCSDDSS development is based on the regular model using ISO 9002 to provide both tabular and graphical displays for performance demonstration and improvement analysis. / Graduation date: 1994
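As an editor's illustration of the weighted rating step in the regular model (items 3-5 above), the following sketch combines per-criterion audit scores with importance weights into a single quality-system rating. The criterion names, weights and scores are hypothetical and are not taken from the thesis.

# Illustrative sketch of the QCSDS "regular model" rating step (items 3-5):
# per-criterion audit scores are combined with importance weights into a
# single quality-system rating. Criterion names and numbers are hypothetical.

criteria = {
    # criterion: (importance weight, audit score on a 0-10 scale)
    "Management responsibility": (0.25, 7.5),
    "Process control":           (0.30, 6.0),
    "Inspection and testing":    (0.20, 8.0),
    "Corrective action":         (0.15, 5.5),
    "Internal quality audits":   (0.10, 9.0),
}

total_weight = sum(w for w, _ in criteria.values())
rating = sum(w * s for w, s in criteria.values()) / total_weight

print(f"Overall quality-system rating: {rating:.2f} / 10")
for name, (w, s) in criteria.items():
    print(f"  {name}: weight {w:.2f}, score {s:.1f}")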
132

Developing Multi-Criteria Performance Estimation Tools for Systems-on-Chip

Vander Biest, Alexis GJE 23 March 2009
The work presented in this thesis targets the analysis and implementation of multi-criteria performance prediction methods for Systems-on-Chip (SoC). These new SoC architectures offer the opportunity to integrate complete heterogeneous systems into a single chip and can be used to design battery-powered handhelds, security-critical systems, consumer electronics devices, etc. However, this variety of applications usually comes with many different performance objectives such as power consumption, yield, design cost, production cost, silicon area and many others. These performance requirements are often very difficult to meet simultaneously, so that SoC design usually relies on making the right design choices and finding the best performance compromises. In parallel with this architectural paradigm shift, new Very Deep Submicron (VDSM) silicon processes have more and more impact on performance and deeply modify the way a VLSI system is designed, even at the first stages of a design flow. In such a context, where many new technological and system-related variables enter the game, early exploration of the impact of design choices becomes crucial to estimate the performance of the system to be designed and to reduce its time-to-market. This thesis therefore presents: - A study of state-of-the-art tools and methods used to estimate the performance of VLSI systems, and an original classification based on several features and concepts that they use. Based on this comparison, we highlight their weaknesses and gaps in order to identify new opportunities in performance prediction. - The definition of new concepts to enable the automatic exploration of large design spaces based on flexible performance criteria and degrees of freedom representing design choices. - The implementation of two new tools of our own: Nessie, a tool enabling the hierarchical representation of an application along with its platform, which automatically performs the mapping and the estimation of their performance; and Yeti, a C++ library enabling the definition and value estimation of closed-form expressions and table-based relations. Yeti provides the user with input and model sensitivity analysis capability, simulation scripting, run-time building and automatic plotting of the results. Additionally, Yeti can work in standalone mode to provide the user with an independent framework for model estimation and analysis. To demonstrate the use and interest of these tools, we provide in this thesis several case studies whose results are discussed and compared with the literature. Using Yeti, we successfully reproduced the results of a model estimating multi-core computation power and extended them thanks to the representation flexibility of our tool. We also built several models from the ground up to help the dimensioning of interconnect links and clock frequency optimization. Thanks to Nessie, we were able to reproduce the NoC power consumption results of an H.264/AVC decoding application running on a multi-core platform. These results were then extended to the case of a 3D die-stacked architecture, and the performance benefits are discussed. We conclude by highlighting the advantages of our technique and discussing future opportunities for performance prediction tools to explore.
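As an editor's illustration of the kind of closed-form model a tool like Yeti evaluates, the sketch below uses an Amdahl-style estimate of multi-core computation power and sweeps the parallel fraction as a crude sensitivity analysis. The formula and the parameter values are assumptions for illustration; they are not the model reproduced in the thesis.

# A minimal stand-in for the kind of closed-form model Yeti evaluates:
# an Amdahl-style estimate of multi-core computation power, swept over the
# number of cores to mimic a simple input-sensitivity analysis.

def speedup(cores: int, parallel_fraction: float) -> float:
    """Amdahl's law: relative throughput of `cores` cores versus one core."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.80, 0.90, 0.95):            # sensitivity to the parallel fraction
    row = [f"{speedup(n, p):5.2f}" for n in (1, 2, 4, 8, 16, 32)]
    print(f"p = {p:.2f}: " + "  ".join(row))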
133

Multi-Criteria Planning of Local Energy Systems with Multiple Energy Carriers

Løken, Espen January 2007
Background and Motivation

Unlike what is common in Europe and the rest of the world, Norway has traditionally met most of its stationary energy demand (including heating) with electricity, because of abundant access to hydropower. However, after the deregulation of the Norwegian electricity market in the 1990s, the increase in electricity generation capacity has been smaller than the increase in load demand. This is due to the relatively low electricity prices during the period, together with the fact that Norway's energy companies no longer have any obligation to meet the load growth. The country's generation capacity is currently not sufficient to meet demand, and accordingly, Norway is now a net importer of electricity, even in normal hydrological years. The situation has led to an increased focus on alternative energy solutions. It has been common that different energy infrastructures – such as electricity, district heating and natural gas networks – have been planned and commissioned by independent companies. However, such an organization of the planning means that the synergistic effects of a combined energy system are to a large extent neglected. During the last decades, several traditional electricity companies have started to offer alternative energy carriers to their customers. This has led to a need for a more comprehensive and sophisticated energy-planning process, where the various energy infrastructures are planned in a coordinated way. The use of multi-criteria decision analysis (MCDA) appears to be well suited for coordinated planning of energy systems with multiple energy carriers. MCDA is a generic term for different methods that help people make decisions according to their preferences in situations characterized by multiple conflicting criteria. The thesis focuses on two important stages of a multi-criteria planning task:
- The initial structuring and modelling phase
- The decision-making phase

The Initial Structuring and Modelling Phase

It is important to spend sufficient time and resources on the problem definition and structuring, so that all disagreements among the decision-maker(s) (DM(s)) and the analyst regarding the nature of the problem and the desired goals are eliminated. After the problem has been properly identified, the next step of a multi-criteria energy-planning process is the building of an energy system model (impact model). The model is used to calculate the operational attributes necessary for the multi-criteria analysis; in other words, to determine the various alternatives' performance values for some or all of the criteria being considered. It is important that the model accounts for both the physical characteristics of the energy system components and the complex relationships between the system parameters. However, it is not propitious to choose or build an energy system model with a greater level of detail than needed to achieve the aims of the planning project. In my PhD research, I have chosen to use the eTransport model as the energy system model. This model is especially designed for planning of local and regional energy systems, where different energy carriers and technologies are considered simultaneously. However, eTransport can currently provide information only about costs and emissions directly connected to the energy system's operation. Details about the investment plans' performance on the remaining criteria must be found from other information sources. Guidelines should be identified regarding the extent to which different aspects should be accounted for, and the ways these impacts can be assessed for each investment plan under consideration. However, it is important to realize that there is no single solution for how to do this that is valid for all kinds of local energy-planning problems. It is therefore necessary for the DM(s) and the analyst to discuss these issues before entering the decision-making phase.

The Decision-Making Phase

Two case studies have been undertaken to examine to what extent the use of MCDA is suitable for local energy-planning purposes. In the two case studies, two of the most well-known MCDA methods, Multi-Attribute Utility Theory (MAUT) and the Analytical Hierarchy Process (AHP), have been tested. Other MCDA methods, such as goal programming (GP) or the outranking methods, could also have been applied. However, I chose to focus on value measurement methods such as AHP and MAUT, and have not tested other methods. Accordingly, my research cannot determine whether value measurement methods are better suited for energy-planning purposes than GP or outranking methods are. Although all MCDA methods are constructed to help DMs explore their 'true values' – which theoretically should be the same regardless of the method used to elicit them – our experiments showed that different MCDA methods do not necessarily provide the same results. Some of the differences are caused by the two methods' different ways of asking questions, as well as the DMs' inability to express their value judgements clearly using one or both of the methods. In particular, the MAUT preference-elicitation procedure was difficult to understand and accept for DMs without previous experience with the utility concept. An additional explanation of the differences is that the external uncertainties included in the problem formulation are better accounted for in MAUT than in AHP. There are also a number of essential weaknesses in the theoretical foundation of the AHP method that may have influenced the results obtained with that method. However, the AHP method seems to be preferred by DMs, because it is straightforward and easier to use and understand than the relatively complex MAUT method. It was found that the post-interview process is essential for a good decision outcome. For example, the results from the preference aggregation may indicate that, according to the DM's preferences, a modification of one of the alternatives might be propitious. In such cases, it is important to realize that MCDA is an iterative process. The post-interview process also includes presentation and discussion of results with the DMs. Our experiments showed that the DMs might discover inconsistencies in the results; that the results do not reflect the DM's actual preferences for some reason; or that the results simply do not feel right. In these cases, it is again essential to return to an earlier phase of the MCDA process and conduct a new analysis where these problems or discrepancies are taken into account. The results from an MAUT analysis are usually presented to the DMs in the form of expected total utilities given on a scale from zero to one. Expected utilities are convenient for ranking and evaluating alternatives. However, they do not have any direct physical meaning, which quite obviously is a disadvantage from an application point of view. In order to improve the understanding of the differences between the alternatives, the Equivalent Attribute Technique (EAT) can be applied. EAT was tested in the first of the two case studies. In this case study, the cost criterion was considered important by the DMs, and the utility differences were therefore converted to equivalent cost differences. In the second case study, the preference elicitation interviews showed, quite surprisingly, that cost was not considered among the most important criteria by the DMs, and none of the other attributes were suitable to be used as the equivalent attribute. Therefore, in this case study, the use of EAT could not help the DMs interpret the differences between the alternatives.

Summarizing

For MCDA to be really useful for actual local energy planning, it is necessary to find or design an MCDA method which: (1) is easy to use and has a transparent logic; (2) presents results in a way easily understandable for the DM; (3) is able to elicit and aggregate the DMs' real preferences; and (4) can handle external uncertainties in a consistent way.
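As an editor's illustration of the additive aggregation underlying MAUT and of the idea behind EAT, the sketch below scores three hypothetical energy-supply alternatives on three criteria with assumed linear single-attribute utility functions and swing weights. The alternatives, criteria, weights and numbers are invented for illustration; they are not the case-study data.

# A minimal sketch of additive MAUT aggregation: each alternative's attribute
# levels are mapped to single-attribute utilities in [0, 1], then combined
# with swing weights. All names, ranges and values are illustrative.

def linear_utility(x, worst, best):
    """Map an attribute level to [0, 1]; works whether more or less is better."""
    return (x - worst) / (best - worst)

weights = {"cost": 0.5, "emissions": 0.3, "security_of_supply": 0.2}
ranges  = {"cost": (120, 80), "emissions": (50, 10), "security_of_supply": (0, 10)}
# (worst, best) per attribute; cost in MNOK, emissions in kt CO2, supply on a 0-10 scale

alternatives = {
    "Grid reinforcement":     {"cost": 100, "emissions": 40, "security_of_supply": 8},
    "District heating + CHP": {"cost": 110, "emissions": 20, "security_of_supply": 6},
    "Gas network":            {"cost": 90,  "emissions": 45, "security_of_supply": 5},
}

for name, attrs in alternatives.items():
    u = sum(weights[a] * linear_utility(attrs[a], *ranges[a]) for a in weights)
    print(f"{name}: expected utility {u:.3f}")

# EAT, in spirit: with this (assumed linear) cost scale spanning 40 MNOK, a
# total-utility gap dU between two alternatives corresponds to an equivalent
# cost difference of roughly dU / weights["cost"] * 40 MNOK, which is easier
# for decision-makers to interpret than the utility gap itself.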
134

Deterministic/probabilistic evaluation in composite system planning

Mo, Ran 06 October 2003
The reliability of supply in a bulk electricity system is directly related to the availability of the generation and transmission facilities. In a conventional vertically integrated system these facilities are usually owned and operated by a single company. In the new deregulated utility environment, these facilities could be owned and operated by a number of independent organizations. In this case, the overall system reliability is the responsibility of an independent system operator (ISO). The load point and system reliabilities are a function of the capacities and availabilities of the generation and transmission facilities and the system topology. This research examines the effect of equipment unavailability on the load point and system reliability of two test systems. The unavailabilities of specific generation and transmission facilities have major impacts on the load point and system reliabilities. These impacts are not uniform throughout the system and are highly dependent on the overall system topology and the operational philosophy of the system. Contingency evaluation is a basic planning and operating procedure and different contingencies can have quite different system and load point impacts. The risk levels associated with a given contingency cannot be estimated using deterministic criteria. The studies presented in this thesis estimate the risk associated with each case using probability techniques and rank the cases based on the predicted risk levels. This information should assist power system managers and planners to make objective decisions regarding reliability and cost. Composite system preventive maintenance scheduling is a challenging task. The functional separation of generation and transmission in the new market environment creates operational and scheduling problems related to maintenance. Maintenance schedules must be coordinated through an independent entity (ISO) to assure reliable and economical service. The methods adopted by an ISO to coordinate planned outages are normally based on traditional load flow and stability analysis and deterministic operating criteria. A new method designated as the maintenance coordination technique (MCT) is proposed in this thesis to coordinate maintenance scheduling. The research work illustrated in this thesis indicates that probabilistic criteria and techniques for composite power system analysis can be effectively utilized in both vertically integrated and deregulated utility systems. The conclusions and the techniques presented in this thesis should prove valuable to those responsible for system planning and maintenance coordination.
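As an editor's illustration of the probabilistic contingency ranking described above, the sketch below assigns each contingency a risk index equal to its outage-state probability times the load curtailed if it occurs, then ranks the contingencies by that index. The contingency list, probabilities and curtailments are invented and do not come from the test systems in the thesis.

# A minimal sketch of probabilistic contingency ranking: risk index =
# probability of the outage state * load curtailed in that state.
# Component names and numbers are illustrative only.

contingencies = [
    # (name, annual probability of the outage state, load curtailed in MW)
    ("Line L3 out",        0.020,  50),
    ("Generator G2 out",   0.050,  30),
    ("L3 and G2 out",      0.001, 180),
    ("Transformer T1 out", 0.010,  90),
]

ranked = sorted(contingencies, key=lambda c: c[1] * c[2], reverse=True)
for name, prob, curtailed in ranked:
    print(f"{name}: risk index = {prob * curtailed:.2f} MW expected curtailment")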
136

Social Multi-Criteria Evaluation in practice: Two real-world case studies

Gamboa Jiménez, Gonzalo 11 January 2008
The following dissertation presents two case studies in which I have applied Social Multi-Criteria Evaluation (SMCE), together with some lessons learned from these experiences. The first case presents a conflict around the construction of an industrial complex (an aluminium smelter plant and its associated infrastructures) in the Chilean Patagonia. Here, I analyse the advantages of SMCE compared with the Environmental Impact Assessment systems (EIAS) commonly used in public decision-making, and propose the former in order to overcome some recognized pitfalls of the latter. Then, I explore the problems and conflicts around the construction of windfarms, and I analyse the main mechanisms aimed at their implementation. There exist different levels and dimensions of social acceptance of windfarms: socio-political, market and community acceptance. I argue that market-based mechanisms are not enough for public policy implementation, and that SMCE is appropriate for dealing with community acceptance; that is, with issues related to distributional justice, procedural justice and trust at the local level. Finally, I develop some ideas and lessons learned from the practical application of participatory approaches in combination with a multi-criteria analysis structure, and I delineate some areas and issues for further research.
137

Perceptual Criteria on Image Compression

Moreno Escobar, Jesús Jaime 01 July 2011
Nowadays, digital images are used in many areas of everyday life, but they tend to be big. This increasing amount of information leads us to the problem of image data storage. For example, it is common to represent a color pixel as a 24-bit number, where the red, green, and blue channels employ 8 bits each. As a consequence, this kind of color pixel can specify one of 2^24 ≈ 16.78 million colors, and an image at a resolution of 512 × 512 that allocates 24 bits per pixel occupies 786,432 bytes. That is why image compression is important. An important feature of image compression is that it can be lossy or lossless. A compressed image is acceptable provided the losses of image information are not perceived by the eye; it is possible to assume that a portion of this information is redundant. Lossless image compression is defined as mathematically decoding the same image that was encoded. Lossy image compression needs to identify two features inside the image: the redundancy and the irrelevancy of information. Thus, lossy compression modifies the image data in such a way that, when they are encoded and decoded, the recovered image is similar enough to the original one. How similar the recovered image is in comparison to the original is defined prior to the compression process, and it depends on the implementation to be performed. In lossy compression, current image compression schemes remove information considered irrelevant by using mathematical criteria. One of the problems of these schemes is that although the numerical quality of the compressed image is low, it shows a high visual image quality, i.e. it does not show many visible artifacts. This is because the mathematical criteria used to remove information do not take into account the visual information perceived by the Human Visual System.
Therefore, the aim of an image compression scheme designed to obtain images that do not show artifacts, although their numerical quality can be low, is to eliminate the information that is not visible to the Human Visual System. Hence, this Ph.D. thesis proposes to exploit the visual redundancy existing in an image by reducing those features that are unperceivable by the Human Visual System. First, we define an image quality assessment, which is highly correlated with the psychophysical experiments performed by human observers. The proposed CwPSNR metric weights the well-known PSNR by using a particular perceptual low-level model of the Human Visual System, the Chromatic Induction Wavelet Model (CIWaM). Second, we propose an image compression algorithm (called Hi-SET), which exploits the high correlation and self-similarity of pixels in a given area or neighborhood by means of a fractal function. Hi-SET possesses the main features that modern image compressors have; that is, it is an embedded coder, which allows a progressive transmission. Third, we propose a perceptual quantizer (½SQ), which is a modification of the uniform scalar quantizer. The ½SQ is applied to a pixel set in a certain Wavelet sub-band, that is, a global quantization. Unlike this, the proposed modification allows a local pixel-by-pixel forward and inverse quantization, introducing into this process a perceptual distortion which depends on the surrounding spatial information of the pixel. Combining the ½SQ method with the Hi-SET image compressor, we define a perceptual image compressor, called ©SET. Finally, a coding method for Region of Interest areas is presented, ½GBbBShift, which perceptually weights pixels in these areas and maintains only the more important perceivable features in the rest of the image. Results presented in this report show that CwPSNR is the best-ranked image quality method when it is applied to the most common image compression distortions such as JPEG and JPEG2000. CwPSNR shows the best correlation with the judgement of human observers, which is based on the results of psychophysical experiments obtained for relevant image quality databases such as TID2008, LIVE, CSIQ and IVC. Furthermore, the Hi-SET coder obtains better results, both for compression ratios and perceptual image quality, than the JPEG2000 coder and other coders that use a Hilbert fractal for image compression. Hence, when the proposed perceptual quantization is introduced to the Hi-SET coder, our compressor improves its numerical and perceptual efficiency. When the ½GBbBShift method applied to Hi-SET is compared against the MaxShift method applied to the JPEG2000 standard and to Hi-SET, the images coded by our ROI method get the best results when the overall image quality is estimated. Both the proposed perceptual quantization and the ½GBbBShift method are generalized algorithms that can be applied to other Wavelet-based image compression algorithms such as JPEG2000, SPIHT or SPECK.
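As an editor's illustration of the quantity that CwPSNR builds on, the sketch below computes plain PSNR between an original and a distorted image, plus a weighted variant in which each pixel's squared error is scaled by a perceptual weight map. The uniform weight map and the synthetic test images are placeholders; the CIWaM-based weighting used in the thesis is not reproduced here.

# A minimal PSNR / weighted-PSNR sketch. The uniform weight map is a
# placeholder for a perceptual weighting such as the one CwPSNR derives
# from CIWaM; it is not the thesis's actual model.

import numpy as np

def psnr(original, distorted, peak=255.0):
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def weighted_psnr(original, distorted, weights, peak=255.0):
    err = (original.astype(float) - distorted.astype(float)) ** 2
    wmse = np.sum(weights * err) / np.sum(weights)
    return float("inf") if wmse == 0 else 10.0 * np.log10(peak ** 2 / wmse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)          # synthetic "original"
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255).astype(np.uint8)
w = np.ones_like(img, dtype=float)                                   # placeholder perceptual weights

print(f"PSNR : {psnr(img, noisy):.2f} dB")
print(f"wPSNR: {weighted_psnr(img, noisy, w):.2f} dB")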
138

Social Multi-Criteria Evaluation and renewable energy policies. Two case-studies

Russi, Daniela 21 March 2007
Social Multi-Criteria Evaluation (SMCE) is a kind of multi-criteria analysis that combines the technical evaluation of different options according to various assessment criteria with the analysis of the social actors' conflicting values and interests. Two main ideas are at the basis of SMCE: technical incommensurability (i.e. in a complex environment one cannot express all impacts of a policy using only one unit of measurement; in other words, an inter/multidisciplinary analysis is needed) and social incommensurability (i.e. the social actors have different and legitimately conflicting values and interests, which must be taken into account when evaluating a policy or a project). SMCE was applied to two case studies. In the first one, the problem at hand was how to provide some isolated rural households in a natural park near Barcelona with electricity, whether by extending the grid or by installing stand-alone photovoltaic systems. The issue caused a conflict between 1995 and 2000 between the Park administration (in favour of solar energy) and the household inhabitants and owners, together with the Mayor (in favour of traditional electricity). A retrospective SMCE was performed in order to explain the positions of the involved stakeholders and the factors that help the diffusion of off-grid photovoltaic systems in rural areas. The second part of the thesis deals with the opportunity for the Italian government to support large-scale biofuel production. The pros and cons of satisfying part of the energy needs of the transport sector with biodiesel were analyzed through a variety of assessment criteria, taking into account different scales and dimensions.
139

The Maastricht Convergence Criteria and Monetary and Fiscal Policies for the EMU Accession Countries

Lipinska, Anna 09 October 2008
My PhD dissertation concentrates on the theoretical analysis of the way monetary and fiscal policies should be conducted in the European Monetary Union (EMU) accession countries. Importantly, fiscal and monetary policies in these countries are required to satisfy the membership requirements of the EMU summarized in the Maastricht Treaty. My interest lies in identifying the implications of different monetary and fiscal policies for compliance with the Maastricht criteria. I characterize the optimal monetary policy and also the optimal interaction between monetary and fiscal policy in the EMU accession countries. I study how the Maastricht criteria affect the design of optimal policies and their ability to stabilize business cycle fluctuations. In order to address all these issues I perform the analysis in the framework of a two-sector small open economy model incorporating frictions such as price stickiness and distortionary taxation. The model is calibrated to match the moments of the economy of the Czech Republic. In Chapter 1 I study the ability of different monetary regimes to satisfy the Maastricht convergence criteria. I analyze regimes that reflect the policy choices observed in the EMU accession countries, i.e. a peg regime, a managed float and a flexible exchange rate regime with CPI inflation targeting. I find that there exists a significant trade-off between compliance with the CPI inflation criterion and the nominal interest rate criterion. Under the benchmark parameterization none of the regimes satisfies all the criteria. The sensitivity analysis reveals that the probability that some of the regimes will satisfy all the criteria increases with the openness of the economy and the degree of substitution between home and foreign traded goods. However, the ultimate choice of the regime which satisfies all the criteria depends on the degree of exchange rate pass-through. Chapter 2 focuses on the characterization of optimal monetary policy for EMU accession countries in the framework of the already developed model. I find that the optimal monetary policy in a two-sector small open economy should target not only inflation rates in the domestic sectors and aggregate output fluctuations, but also domestic and international terms of trade. Under the chosen parameterization, optimal monetary policy does not satisfy the CPI inflation and nominal interest rate criteria. The optimal constrained policy induces smaller variability of the CPI inflation rate and of the nominal interest rate. At the same time, it is also characterized by a deflationary bias, which results in targeting a CPI inflation rate and a nominal interest rate that are 0.7% p.a. lower than their equivalents in the reference countries. In Chapter 3 I incorporate fiscal policy by endogenising tax and debt decisions and restricting taxes to distortionary ones only. I find that the targets of the unconstrained optimal monetary and fiscal policy are similar to those of optimal monetary policy alone. Under the chosen parameterization, the optimal policy violates three Maastricht criteria: on the CPI inflation rate, the nominal interest rate and the deficit-to-GDP ratio. Since the monetary criteria play a dominant role in shaping the stabilization process of the constrained policy, CPI inflation and the nominal interest rate are characterized by a smaller variability at the expense of a higher variability of the deficit-to-GDP ratio.
The constrained policy is characterized by a deflationary bias, which results in targeting a CPI inflation rate and a nominal interest rate that are 1.3% p.a. lower than their equivalents in the countries taken as a reference. The constrained policy is also characterized by targeting a surplus-to-GDP ratio of around 3.7%.
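As an editor's illustration of the compliance question discussed in these chapters, the sketch below tests the average outcomes implied by a candidate policy regime against Maastricht-style limits (a reference value plus 1.5 percentage points for CPI inflation, plus 2 percentage points for the nominal interest rate, and a 3% of GDP ceiling for the deficit). The numerical inputs are illustrative and are not results from the dissertation.

# A minimal Maastricht-style compliance check. Inputs are hypothetical
# average outcomes (per cent, annual) under some candidate regime.

def maastricht_check(cpi_inflation, interest_rate, deficit_gdp,
                     ref_inflation, ref_interest):
    return {
        "inflation": cpi_inflation <= ref_inflation + 1.5,   # reference + 1.5 p.p.
        "interest":  interest_rate <= ref_interest + 2.0,    # reference + 2 p.p.
        "deficit":   deficit_gdp   <= 3.0,                   # 3% of GDP ceiling
    }

result = maastricht_check(cpi_inflation=3.1, interest_rate=5.8, deficit_gdp=2.4,
                          ref_inflation=1.8, ref_interest=4.5)
for criterion, ok in result.items():
    print(f"{criterion}: {'satisfied' if ok else 'violated'}")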
140

Multiple Criteria Decision Analysis: Classification Problems and Solutions

Chen, Ye January 2006 (has links)
Multiple criteria decision analysis (MCDA) techniques are developed to address challenging classification problems arising in engineering management and elsewhere. MCDA consists of a set of principles and tools to assist a decision maker (DM) to solve a decision problem with a finite set of alternatives compared according to two or more criteria, which are usually conflicting. The three types of classification problems to which original research contributions are made are:
(1) Screening: Reduce a large set of alternatives to a smaller set that most likely contains the best choice.
(2) Sorting: Arrange the alternatives into a few groups in preference order, so that the DM can manage them more effectively.
(3) Nominal classification: Assign alternatives to nominal groups structured by the DM, so that the number of groups, and the characteristics of each group, seem appropriate to the DM.
Research on screening is divided into two parts: the design of a sequential screening procedure that is then applied to water resource planning in the Region of Waterloo, Ontario, Canada; and the development of a case-based distance method for screening that is then demonstrated using a numerical example.
Sorting problems are studied extensively under three headings. Case-based distance sorting is carried out with Model I, which is optimized for use with cardinal criteria only, and Model II, which is designed for both cardinal and ordinal criteria; both sorting approaches are applied to a case study in Canadian municipal water usage analysis. Sorting in inventory management is studied using a case-based distance method designed for multiple criteria ABC analysis, and then applied to a case study involving hospital inventory management. Finally, sorting is applied to bilateral negotiation using a case-based distance model to assist negotiators, which is then demonstrated on a negotiation regarding the supply of bicycle components.
A new kind of decision analysis problem, called multiple criteria nominal classification (MCNC), is addressed. Traditional classification methods in MCDA focus on sorting alternatives into groups ordered by preference. MCNC is the classification of alternatives into nominal groups, structured by the DM, who specifies multiple characteristics for each group. The features, definitions and structures of MCNC are presented, emphasizing criterion and alternative flexibility. An analysis procedure is proposed to solve MCNC problems systematically and applied to a water resources planning problem.
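As an editor's illustration of the case-based distance idea that recurs in the screening and sorting models above, the sketch below represents each group by the centroid of its example cases in criterion space and assigns a new alternative to the nearest group. The groups, criterion values and distance metric are illustrative assumptions, not the models developed in the thesis.

# A minimal case-based distance classification sketch: assign an alternative
# to the group whose case centroid is nearest in (normalized) criterion space.

import math

group_cases = {
    "A (high priority)": [(0.9, 0.8, 0.7), (0.8, 0.9, 0.8)],
    "B (medium)":        [(0.5, 0.6, 0.5), (0.6, 0.5, 0.4)],
    "C (low priority)":  [(0.2, 0.1, 0.3), (0.1, 0.2, 0.2)],
}

def centroid(cases):
    return tuple(sum(xs) / len(xs) for xs in zip(*cases))

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

centroids = {g: centroid(cs) for g, cs in group_cases.items()}

alternative = (0.55, 0.7, 0.6)   # an item scored on three normalized criteria
best_group = min(centroids, key=lambda g: distance(alternative, centroids[g]))
print(f"Alternative {alternative} assigned to group: {best_group}")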
