181

Rights at Risk : Ethical Issues in Risk Management

Hermansson, Hélène January 2007
The subject of this thesis is ethical aspects of decision-making concerning social risks. It is argued that a model for risk management must acknowledge several ethical aspects and, most crucial among these, the individual's right not to be unfairly exposed to risks. Article I takes as its starting point the demand frequently expressed in the risk literature for consistent risk management. It is maintained that a model focusing on cost-benefit analysis does not respect the rights of the individual. Two alternative models are outlined. They revolve around the separateness of individuals, rights, and fair risk taking. It is claimed that a model that focuses on a fair procedure for risk decisions seems most fruitful to develop. Article II discusses the NIMBY (Not In My Backyard) conflict. The ethical premises behind the negative characterization of the NIMBY concept are investigated. It is argued that a collective weighing of risks and benefits ignores individuals' rights not to be unfairly exposed to risks in siting scenarios. Article III presents a three-party model for ethical risk analysis. The focus of such an analysis is a discussion of the three parties involved in risk decisions: the risk-exposed, the beneficiary, and the decision-maker. Seven crucial ethical questions are discerned by combining these parties pairwise. Article IV discusses a model of procedural justice for risk decisions. Two theories of deliberative democracy are explored. The first focuses on a hypothetical contract, the second argues for the actual inclusion of affected parties. It is maintained that hypothetical reasoning should mainly serve as a guide for risk issues that affect people who cannot be included in the decision-making process; otherwise an interactive, dialogical reasoning is to be preferred. Article V explores the claim that there are no real, objective risks, only subjective descriptions of them. It is argued that even though every risk can be described in different ways, and descriptions involve value judgements and emotions, the ideal of objectivity should not be abandoned.
182

Varför gick jag på det där? : Konsumentens behov av att vara konsekvent / Why did I fall for that? : The consumer's need to be consistent

Thomsen, Linda January 2009
Individuals differ in their degree of Preference for Consistency (PFC), which contributes to how they perceive and act in consumer contexts. The study examined whether high-PFC individuals were more positive toward an offer designed to evoke a strong sense of consistency. Three conditions with varying degrees of manipulation were used, and data were collected from 74 students. A questionnaire with an accompanying offer was presented to the participants, who answered a number of questions and completed a PFC-B scale. The study did not find support for the hypothesis. However, an illusion of invulnerability and a third-person effect were observed. The study most likely failed to construct an instrument strong enough to create the intended sense of consistency, which contributed to the participants not being influenced by the offer to the expected degree.
183

High consistency refining of mechanical pulps during varying refining conditions : High consistency refiner conditions effect on pulp quality

Muhic, Dino January 2008
The correlation between pulp properties and operating conditions in high consistency (HC) refiners at Holmen Paper AB was studied. Two types of HC refiners were investigated: the Andritz RTS refiner at the Hallstavik Mill and the Sprout-Bauer Twin 60 refiner at the Braviken Mill. The objective of the study was to clarify the relationship between pulp properties and refining conditions such as electrical energy input, housing and feed pressure, and plate wear in high consistency refining. The results of this project show that worn segments reduce the maximum operating energy input and degrade pulp and handsheet properties, giving lower tensile and tear indices and shorter average fibre length. Energy input is an important factor in the refining process and influences Canadian Standard Freeness and tensile index, as is evident from the probability residuals. Housing pressure and feed pressure influence pulp quality and should be adjusted in order to optimise the refining process, although their effect is not as great as that of energy input or plate wear. The results of the study indicate that the Braviken Mill is operating at its optimum for the parameters measured in this project. Hallstavik's goal, to avoid fibre shortening and to obtain a better tensile index, can be reached by making slight changes in the pressure conditions.
184

Stable and High-Order Finite Difference Methods for Multiphysics Flow Problems / Stabila finita differensmetoder med hög noggrannhetsordning för multifysik- och flödesproblem

Berg, Jens January 2013
Partial differential equations (PDEs) are used to model various phenomena in nature and society, ranging from the motion of fluids and electromagnetic waves to the stock market and traffic jams. There are many methods for numerically approximating solutions to PDEs. Some of the most commonly used ones are the finite volume method, the finite element method, and the finite difference method. All methods have their strengths and weaknesses, and it is the problem at hand that determines which method is suitable. In this thesis, we focus on the finite difference method, which is conceptually easy to understand, has high-order accuracy, and can be efficiently implemented in computer software. We use the finite difference method on summation-by-parts (SBP) form, together with a weak implementation of the boundary conditions called the simultaneous approximation term (SAT). Together, SBP and SAT provide a technique for overcoming most of the drawbacks of the finite difference method. The SBP-SAT technique can be used to derive energy stable schemes for any linearly well-posed initial boundary value problem. The stability is not restricted by the order of accuracy, as long as the numerical scheme can be written in SBP form. The weak boundary conditions can be extended to interfaces which are used either in domain decomposition for geometric flexibility, or for coupling of different physics models. The contributions in this thesis are twofold. The first part, papers I-IV, develops stable boundary and interface procedures for computational fluid dynamics problems, in particular for problems related to the Navier-Stokes equations and conjugate heat transfer. The second part, papers V-VI, utilizes duality to construct numerical schemes which are not only energy stable, but also dual consistent. Dual consistency alone ensures superconvergence of linear integral functionals from the solutions of SBP-SAT discretizations. By simultaneously considering well-posedness of the primal and dual problems, new advanced boundary conditions can be derived. The new duality-based boundary conditions are imposed by SATs, which by construction of the continuous boundary conditions ensure energy stability, dual consistency, and functional superconvergence of the SBP-SAT schemes.
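
To make the SBP-SAT construction above concrete, here is a minimal sketch (not code from the thesis) of an energy-stable semi-discretization of the model advection problem u_t + a u_x = 0, a > 0, with the standard second-order SBP first-derivative operator and a SAT penalty that weakly imposes the inflow boundary condition. The penalty choice tau = -a, the grid, the pulse initial data, and all names are textbook-style assumptions for illustration only.

```python
import numpy as np

def sbp_advection_rhs(u, t, a, h, g):
    """Semi-discrete right-hand side for u_t + a u_x = 0 (a > 0) on a uniform grid,
    using the second-order SBP first-derivative operator D = H^{-1} Q and a SAT term
    that weakly imposes the inflow condition u(0, t) = g(t)."""
    n = u.size
    Du = np.empty(n)
    Du[1:-1] = (u[2:] - u[:-2]) / (2.0 * h)   # central stencil in the interior
    Du[0] = (u[1] - u[0]) / h                 # one-sided SBP boundary closures
    Du[-1] = (u[-1] - u[-2]) / h
    rhs = -a * Du
    # SAT penalty at the inflow node; H[0,0] = h/2 for this operator, and any
    # tau <= -a/2 gives an energy estimate -- tau = -a is a common choice.
    tau = -a
    rhs[0] += tau * (2.0 / h) * (u[0] - g(t))
    return rhs

# Tiny driver: advect a Gaussian pulse with classical RK4 time stepping.
n, a = 201, 1.0
x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.3) ** 2)
g = lambda t: 0.0                             # homogeneous inflow data (an assumption)
dt = 0.4 * h / a
for step in range(150):
    t = step * dt
    k1 = sbp_advection_rhs(u, t, a, h, g)
    k2 = sbp_advection_rhs(u + 0.5 * dt * k1, t + 0.5 * dt, a, h, g)
    k3 = sbp_advection_rhs(u + 0.5 * dt * k2, t + 0.5 * dt, a, h, g)
    k4 = sbp_advection_rhs(u + dt * k3, t + dt, a, h, g)
    u += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```

The same SAT mechanism extends to interfaces: a penalty on the difference between the two sides' solution values couples subdomains or physics models while preserving the energy estimate.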
185

A Model for Managing Data Integrity

Mallur, Vikram 22 September 2011
Consistent, accurate and timely data are essential to the functioning of a modern organization. Managing the integrity of an organization's data assets in a systematic manner is a challenging task in the face of continuous update, transformation and processing to support business operations. Classic approaches to constraint-based integrity focus on logical consistency within a database and reject any transaction that violates consistency, but leave unresolved how to fix or manage violations. More ad hoc approaches focus on the accuracy of the data and attempt to clean data assets after the fact, using queries to flag records with potential violations and manual effort to repair them. Neither approach satisfactorily addresses the problem from an organizational point of view. In this thesis, we provide a conceptual model of constraint-based integrity management (CBIM) that flexibly combines both approaches in a systematic manner to provide improved integrity management. We perform a gap analysis that examines the criteria that are desirable for efficient management of data integrity. Our approach involves creating a Data Integrity Zone and an On Deck Zone in the database for separating the clean data from data that violates integrity constraints. We provide tool support for specifying constraints in a tabular form and generating triggers that flag violations of dependencies. We validate this approach by performing case studies on two systems used to manage healthcare data: PAL-IS and iMED-Learn. Our case studies show that using views to implement the zones does not cause any significant increase in the running time of a process.
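
The zone idea can be sketched in a few lines of Python with SQLite. This is only an illustration of the concept under assumed names (the tables, the single age-range constraint, and the helper function are invented for the example); the thesis's tool instead specifies constraints in tabular form and generates triggers and views inside the database.

```python
import sqlite3

# Two zones: clean rows go to the Data Integrity Zone, violating rows are parked
# in the On Deck Zone together with a note on the violated constraint.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient_clean  (id INTEGER PRIMARY KEY, age INTEGER);            -- Data Integrity Zone
    CREATE TABLE patient_ondeck (id INTEGER PRIMARY KEY, age INTEGER, issue TEXT); -- On Deck Zone
""")

def insert_patient(record_id, age):
    """Route a record according to a single illustrative constraint: 0 <= age <= 120."""
    if age is not None and 0 <= age <= 120:
        conn.execute("INSERT INTO patient_clean VALUES (?, ?)", (record_id, age))
    else:
        conn.execute("INSERT INTO patient_ondeck VALUES (?, ?, ?)",
                     (record_id, age, "age out of range"))

insert_patient(1, 42)    # satisfies the constraint: Data Integrity Zone
insert_patient(2, 999)   # violation: flagged and held in the On Deck Zone for later repair
print(conn.execute("SELECT COUNT(*) FROM patient_ondeck").fetchone()[0])  # -> 1
```

Unlike classic constraint enforcement, the violating row is not rejected; it is retained, flagged, and can be repaired later, which is the behaviour the CBIM model aims to combine with after-the-fact cleaning.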
187

Efficient openMP over sequentially consistent distributed shared memory systems

Costa Prats, Juan José 20 July 2011
Nowadays clusters are one of the most widely used platforms in High Performance Computing, and most programmers use the Message Passing Interface (MPI) library to program their applications on these distributed platforms and get the maximum performance, although it is a complex task. On the other hand, OpenMP has been established as the de facto standard for programming applications on shared memory platforms because it is easy to use and obtains good performance without too much effort. So, could it be possible to join both worlds? Could programmers use the ease of OpenMP on distributed platforms? A lot of researchers think so. One of the ideas developed to this end is distributed shared memory (DSM), a software layer on top of a distributed platform that gives the applications an abstract shared memory view. Even though it seems a good solution, it also has some drawbacks: memory coherence between the nodes in the platform is difficult to maintain (complex management, scalability issues, high overhead, among others), and the latency of remote-memory accesses can be orders of magnitude greater than on a shared bus because of the interconnection network. This research therefore improves the performance of OpenMP applications executed on distributed memory platforms using a DSM with sequential consistency, evaluating the results thoroughly on the NAS Parallel Benchmarks. The vast majority of existing DSMs use a relaxed consistency model because it avoids some major problems in the area. In contrast, we use a sequential consistency model because we think that exposing potential problems that are otherwise hidden may allow solutions to be found and then applied to both models. The main idea behind this work is that both runtimes, OpenMP and the DSM layer, should cooperate to achieve good performance; otherwise they interfere with each other and degrade the final performance of applications. We develop three different contributions to improve the performance of these applications: (a) a technique to avoid false sharing at runtime, (b) a technique to mimic the MPI behaviour, where produced data is forwarded to its consumers, and, finally, (c) a mechanism to avoid the network congestion caused by the DSM coherence messages. The NAS Parallel Benchmarks are used to test the contributions. The results of this work show that the severity of the false-sharing problem depends on the application. Another result is that moving the data flow out of the critical path, and using techniques that forward data as early as possible, similar to MPI, benefits the final application performance. Additionally, this data movement is usually concentrated at a few points and affects application performance due to the limited bandwidth of the network; it is therefore necessary to provide mechanisms that allow this data to be distributed over the computation time using an otherwise idle network. Finally, the results show that the proposed contributions improve the performance of OpenMP applications in this kind of environment.
188

A new approach to stochastic frontier estimation: DEA+

Gstach, Dieter January 1996 (PDF)
The outcome of a production process might deviate from a theoretical maximum not only due to inefficiency, but also because of non-controllable influences. This raises the issue of the reliability of Data Envelopment Analysis (DEA) in noisy environments. I propose to assume an i.i.d. data generating process with a bounded noise component, so that the following approach is feasible: use DEA to estimate a pseudo-frontier first (nonparametric shape estimation). Next, apply an ML technique to the DEA-estimated efficiencies to estimate the scalar value by which this pseudo-frontier must be shifted downward to get the true production frontier (location estimation). I prove that this approach yields consistent estimates of the true frontier. (author's abstract) / Series: Department of Economics Working Paper Series
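
As a toy illustration of the first, nonparametric stage, the sketch below computes DEA efficiencies for the simplest single-input, single-output, constant-returns-to-scale case; the second, ML-based location step that shifts the pseudo-frontier downward is only indicated in a comment. The data and function names are assumptions for the example, not material from the paper.

```python
import numpy as np

def dea_crs_efficiency(x, y):
    """DEA efficiencies for single-input, single-output data under constant returns
    to scale: the pseudo-frontier is the ray through the unit with the highest
    output/input ratio, and each efficiency is that unit's ratio relative to the maximum."""
    ratios = y / x
    return ratios / ratios.max()

# Hypothetical observations (input x, output y), affected by inefficiency and bounded noise.
x = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
y = np.array([3.0, 5.0, 6.0, 7.0, 8.0])
eff = dea_crs_efficiency(x, y)
print(np.round(eff, 3))   # pseudo-frontier efficiencies (shape estimation only)
# In DEA+, an ML step applied to these estimated efficiencies would then determine the
# scalar by which the pseudo-frontier is shifted downward to recover the true frontier.
```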
189

Cost-effective Designs for Supporting Correct Execution and Scalable Performance in Many-core Processors

Romanescu, Bogdan Florin January 2010
Many-core processors offer new levels of on-chip performance by capitalizing on the increasing rate of device integration. Harnessing the full performance potential of these processors requires that hardware designers not only exploit the advantages, but also consider the problems introduced by the new architectures. Such challenges arise from both the processor's increased structural complexity and the reliability issues of the silicon substrate. In this thesis, we address these challenges in a framework that targets correct execution and performance on three coordinates: 1) tolerating permanent faults, 2) facilitating static and dynamic verification through precise specifications, and 3) designing scalable coherence protocols.

First, we propose CCA, a new design paradigm for increasing the processor's lifetime performance in the presence of permanent faults in cores. CCA chips rely on a reconfiguration mechanism that allows cores to replace faulty components with fault-free structures borrowed from neighboring cores. In contrast with existing solutions for handling hard faults that simply shut down cores, CCA aims to maximize the utilization of defect-free resources and increase the availability of on-chip cores. We implement three-core and four-core CCA chips and demonstrate that they offer a cumulative lifetime performance improvement of up to 65% for industry-representative utilization periods. In addition, we show that CCA benefits systems that employ modular redundancy to guarantee correct execution by increasing their availability.

Second, we target the correctness of the address translation system. Current processors often exhibit design bugs in their translation systems, and we believe one cause for these faults is a lack of precise specifications describing the interactions between address translation and the rest of the memory system, especially memory consistency. We address this aspect by introducing a framework for specifying translation-aware consistency models. As part of this framework, we identify the critical role played by address translation in supporting correct memory consistency implementations. Consequently, we propose a set of invariants that characterizes address translation. Based on these invariants, we develop DVAT, a dynamic verification mechanism for address translation. We demonstrate that DVAT is efficient in detecting translation-related faults, including several that mimic design bugs reported in processor errata. By checking the correctness of the address translation system, DVAT supports dynamic verification of translation-aware memory consistency.

Finally, we address the scalability of translation coherence protocols. Current software-based solutions for maintaining translation coherence adversely impact performance and do not scale. We propose UNITD, a hardware coherence protocol that supports scalable performance and architectural decoupling. UNITD integrates translation coherence within the regular cache coherence protocol, such that TLBs participate in the cache coherence protocol similar to instruction or data caches. We evaluate snooping and directory UNITD coherence protocols on processors with up to 16 cores and demonstrate that UNITD reduces the performance penalty of translation coherence to almost zero. / Dissertation
190

Bayesian Nonparametric Modeling and Theory for Complex Data

Pati, Debdeep January 2012
The dissertation focuses on solving some important theoretical and methodological problems associated with Bayesian modeling of infinite dimensional 'objects', popularly called nonparametric Bayes. The term 'infinite dimensional object' can refer to a density, a conditional density, a regression surface or even a manifold. Although Bayesian density estimation as well as function estimation are well-justified in the existing literature, there has been little or no theory justifying the estimation of more complex objects (e.g. conditional density, manifold, etc.). Part of this dissertation focuses on exploring the structure of the spaces on which the priors for conditional densities and manifolds are supported, while studying how the posterior concentrates as increasing amounts of data are collected.

With the advent of new acquisition devices, there has been a need to model complex objects associated with complex data-types, e.g. millions of genes affecting a bio-marker, 2D pixelated images, a cloud of points in 3D space, etc. A significant portion of this dissertation has been devoted to developing adaptive nonparametric Bayes approaches for learning low-dimensional structures underlying higher-dimensional objects, e.g. a high-dimensional regression function supported on a lower dimensional space, closed curves representing the boundaries of shapes in 2D images, and closed surfaces located on or near point cloud data. Characterizing the distribution of these objects has a tremendous impact in several application areas, ranging from tumor tracking for targeted radiation therapy, to classifying cells in the brain, to model-based methods for 3D animation, and so on.

The first three chapters are devoted to Bayesian nonparametric theory and modeling in unconstrained Euclidean spaces, e.g. mean regression and density regression; the next two focus on Bayesian modeling of manifolds, e.g. closed curves and surfaces; and the final one on nonparametric Bayes modeling of spatial point pattern data when the sampling locations are informative of the outcomes. / Dissertation
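
As a generic point of reference for the nonparametric Bayes machinery discussed above (not one of the models developed in the dissertation), the sketch below draws a truncated sample from a Dirichlet process prior via stick breaking; a DP mixture built on such a random measure is the canonical nonparametric Bayes prior for a density. The base measure, concentration parameter, and truncation level are assumptions.

```python
import numpy as np

def dp_stick_breaking(alpha, base_sampler, n_atoms, rng):
    """Truncated stick-breaking draw from a Dirichlet process prior: returns atom
    locations from the base measure and weights w_k = beta_k * prod_{j<k}(1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * remaining        # sums to ~1; the truncation leaves a small remainder
    atoms = base_sampler(n_atoms, rng)
    return atoms, weights

rng = np.random.default_rng(0)
atoms, weights = dp_stick_breaking(
    alpha=2.0,
    base_sampler=lambda n, rng: rng.normal(0.0, 1.0, size=n),   # base measure G0 = N(0, 1)
    n_atoms=100,
    rng=rng,
)
# A DP mixture smooths this discrete random measure into a density estimate:
# f(x) = sum_k weights[k] * Normal(x | atoms[k], sigma^2) for some kernel scale sigma.
```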
