  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

An Information Integration Research on the Fairness Measurement of the Policy Outcomes: the Case of Dengue Fever Prevention in Taiwan

Chen, Cheng-Liaou 24 January 2006 (has links)
Cognitive conflicts often arise among citizens when they evaluate the policy outcome performance of dengue fever prevention, because both the input and the outcome dimensions involve multiple participants, multiple goals, multiple values, and subjective judgement (Wang Ming-Shen and Chen Cheng-Liaou, 2004: 11). We adopted the unfairness measurement approach of Information Integration Theory (IIT) to explore the policy outcome performance of dengue fever prevention, constructing an algebraic model, the "Dengue Fever Basic Unfairness Measurement Model," with accurate empirical tests and validity criteria. Through the model's factorial graph patterns and statistical interaction tests, the multiple sources of information about policy outcomes can be analyzed and integrated (Farkas, 1991: 61; Anderson, 1996: 33). The main findings are: 1. Citizens integrated information about dengue fever prevention outcomes according to an 'averaging' model; the experimental data support this model, which has important policy implications. 2. The effort that governors devoted to prevention tasks (an implicit factor) outweighed budget expenditures (an explicit factor) as the most important factor in outcome performance evaluation, which has an important cognitive implication. 3. Evaluations of policy outcome performance varied between persons according to differences in the quality of the information available about dengue fever prevention. 4. Inhabitants' modes of life were highly correlated with prevention outcome performance, implying that the implicit social meaning of dengue fever prevention lies in interpersonal interaction. 5. Citizens perceived the policy outcome performance of dengue fever prevention as unfair and unsatisfactory, and they were sensitive to interpersonally unfair situations.
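
As a sketch of the 'averaging' integration rule referred to in finding 1, the following Python fragment implements an Anderson-style weighted-averaging model; the weights and scale values are invented for illustration and are not the thesis's estimates.

```python
import numpy as np

# Anderson-style weighted-averaging rule from Information Integration Theory:
# the integrated judgement R is a weighted average of subjective scale values,
#   R = (w0*s0 + sum_i w_i*s_i) / (w0 + sum_i w_i),
# where (w0, s0) is the initial impression. All numbers here are invented.

def iit_average(weights, scale_values, w0=1.0, s0=5.0):
    """Integrated response under the IIT averaging model."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(scale_values, dtype=float)
    return (w0 * s0 + np.sum(w * s)) / (w0 + np.sum(w))

# Two hypothetical cues about prevention outcomes: governors' effort
# (implicit factor, weighted heavily) and budget expenditure (explicit factor).
print(iit_average(weights=[2.0, 1.0], scale_values=[8.0, 4.0]))
```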
72

Solutions Of The Equations Of Change By The Averaging Technique

Dalgic, Meric 01 May 2008 (has links) (PDF)
Area averaging is one of the techniques used to solve problems encountered in the transport of momentum, heat, and mass. The application of this technique simplifies the mathematical solution of the problem. However, it necessitates expressing the local value of the dependent variable and/or its derivative(s) on the system boundaries in terms of the averaged variable. In this study, these expressions are obtained by the two-point Hermite expansion, and this approximate method is applied to some specific problems, such as unsteady flow in a concentric annulus, unequal cooling of a long slab, unsteady conduction in a cylindrical rod with internal heat generation, diffusion of a solute into a slab from a limited volume of a well-mixed solution, convective mass transport between two parallel plates with a wall reaction, convective mass transport in a cylindrical tube with a wall reaction, and unsteady conduction in a two-layer composite slab. The analytical and approximate solutions are shown to be in good agreement for a wide range of the dimensionless parameters characterizing each system.
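
For orientation, the kind of closure a two-point Hermite expansion supplies can be illustrated by the standard corrected-trapezoid identity on a unit interval, which ties the area average of a profile to its boundary values and boundary slopes (stated here in my own normalization, not quoted from the thesis):

```latex
% Two-point Hermite (corrected trapezoid) identity on [0,1]: the area
% average of f is expressed through boundary values and boundary slopes,
% which is exactly the closure needed after area averaging.
\[
\langle f \rangle \equiv \int_0^1 f(x)\,\mathrm{d}x
= \frac{f(0)+f(1)}{2} + \frac{f'(0)-f'(1)}{12} + O\!\left(f^{(4)}\right)
\]
```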
73

Design And Implementation Of Coupled Inductor Cuk Converter Operating In Continuous Conduction Mode

Ayhan, Mustafa Tufan 01 December 2011 (has links) (PDF)
The study involves the following stages. First, the coupled-inductor and integrated magnetic structures used in Cuk converter circuit topologies are analyzed, the information about these elements needed for circuit design is gathered, and the benefits of using these magnetic elements are presented. Second, the steady-state model, dynamic model, and transfer functions of the coupled-inductor Cuk converter topology are obtained via the state-space averaging method. The third stage deals with determining the design criteria to be fulfilled by the implemented circuit; the selection of the circuit components and the design of the coupled inductor providing a ripple-free input current waveform are performed at this stage. The fourth stage presents the experimental results of the implemented circuit operating in open-loop mode; in addition, the controller design is carried out and the closed-loop performance of the implemented circuit is presented.
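
A minimal sketch of the state-space averaging step, applied to an ideal, uncoupled Cuk stage rather than the thesis's coupled-inductor design; the component values are invented, and the averaged matrix is the usual duty-cycle-weighted combination A = d*A_on + (1-d)*A_off:

```python
import numpy as np

# State-space averaging sketch for an ideal (uncoupled) Cuk converter.
# States x = [iL1, vC1, iL2, vC2]; the two switched networks are averaged
# with the duty cycle d, and the DC operating point solves A x = -B.
# All component values below are invented for illustration.
L1, C1, L2, C2, R = 1e-3, 10e-6, 1e-3, 100e-6, 10.0
Vg, d = 12.0, 0.4

A = np.array([
    [0.0,       -(1 - d)/L1,  0.0,    0.0],
    [(1 - d)/C1, 0.0,        -d/C1,   0.0],
    [0.0,        d/L2,        0.0,   -1/L2],
    [0.0,        0.0,         1/C2,  -1/(R*C2)],
])
B = np.array([Vg/L1, 0.0, 0.0, 0.0])

x_dc = np.linalg.solve(A, -B)      # DC operating point of the averaged model
print(x_dc[3], d/(1 - d) * Vg)     # vC2 matches the ideal gain d/(1-d)*Vg
```

Running this reproduces the textbook DC conversion ratio of the ideal Cuk stage, which is the kind of steady-state result the averaging method delivers before the dynamic model and transfer functions are extracted.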
74

A 1Gsample/s 6-bit flash A/D converter with a combined chopping and averaging technique for reduced distortion in 0.18 µm CMOS

Stefanou, Nikolaos 29 August 2005 (has links)
Hard disk drive applications require a high Spurious Free Dynamic Range (SFDR), 6-bit Analog-to-Digital Converter (ADC) at conversion rates of 1 GHz and beyond. This work proposes a robust, fault-tolerant scheme to achieve high SFDR in an averaging flash A/D converter using comparator chopping. Chopping of comparators in a flash A/D converter had not previously been implemented because of the infeasibility of implementing multiple, uncorrelated, high-speed random number generators. This work proposes a novel array of uncorrelated, truly binary random number generators working at 1 GHz to chop all comparators. Chopping randomizes the residual offset left after averaging, further extending the dynamic range of the converter. This enables higher accuracy and a lower bit-error rate for high-speed disk-drive read channels. Power consumption and area are reduced because of the relaxed design requirements for the same linearity. The technique has been verified in Matlab simulations for a 6-bit 1 Gsample/s flash ADC in the presence of process gradients with non-zero mean offsets as high as 60 mV and potentially serious spot offset errors as high as 1 V for a 2 V peak-to-peak input signal. The proposed technique exhibits an improvement of over 15 dB compared to pure averaging flash converters in all cases. The circuit-level simulation results, for a 1 V peak-to-peak input signal, demonstrate superior performance. The reported ADC was fabricated in a TSMC 0.18 µm CMOS process. It occupies 8.79 mm² and consumes about 400 mW from a 1.8 V power supply at 1 GHz. The targeted SFDR performance for the fabricated chip is at least 45 dB for a 256 MHz input sine wave sampled at 1 GHz, about a 10 dB improvement on the 6-bit flash ADCs in the literature.
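
A toy Monte Carlo sketch of why chopping helps, assuming a simple static-offset model (all parameters below are invented, not the fabricated design): each comparator's residual offset is multiplied by a random +/-1 every sample, so the systematic component that produces distortion averages toward zero.

```python
import numpy as np

# Each comparator has a static offset with a non-zero mean, mimicking a
# process gradient left after averaging. Chopping flips the offset polarity
# randomly every sample, whitening the systematic error.
rng = np.random.default_rng(0)
n_comp, n_samples = 63, 10_000
offsets = rng.normal(loc=0.03, scale=0.01, size=n_comp)    # volts, 30 mV mean

static_error = offsets.mean()                              # without chopping
chop = rng.choice([-1.0, 1.0], size=(n_samples, n_comp))
chopped_error = (chop * offsets).mean(axis=1).mean()       # with chopping

print(f"systematic offset: {static_error*1e3:.2f} mV -> "
      f"{chopped_error*1e3:.4f} mV")
```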
75

Bayesian Hierarchical Model for Combining Two-resolution Metrology Data

Xia, Haifeng 14 January 2010 (has links)
This dissertation presents a Bayesian hierarchical model to combine two-resolution metrology data for inspecting the geometric quality of manufactured parts. The high-resolution data points are scarce and thus sparsely scattered over the surface being measured, while the low-resolution data are pervasive but less accurate or less precise. Combining the two datasets should yield a better prediction of the geometric surface of a manufactured part than using a single dataset. One challenge in combining the metrology datasets is the misalignment that exists between the low- and high-resolution data points. This dissertation provides a Bayesian hierarchical model that can handle such misaligned datasets, with the following components: (a) a Gaussian process for modeling metrology data at the low-resolution level; (b) a heuristic matching and alignment method that produces a pool of candidate matches and transformations between the two datasets; (c) a linkage model, conditioned on a given match and its associated transformation, that connects a high-resolution data point to a set of low-resolution data points in its neighborhood and makes a combined prediction; and finally (d) Bayesian model averaging of the predictive models in (c) over the pool of candidate matches found in (b). This Bayesian model averaging procedure assigns weights to different matches according to how much they support the observed data, and then produces the final combined prediction of the surface based on the data of both resolutions. The proposed method improves upon methods that use a single dataset, as well as a combined prediction that does not address the misalignment problem. This dissertation demonstrates the improvements over alternative methods using both simulated data and datasets from a milled sine-wave part measured by two coordinate measuring machines of different resolutions.
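
A hedged sketch of component (d), Bayesian model averaging over the pool of candidate matches; the log marginal likelihoods and point predictions below are toy numbers, not values from the dissertation.

```python
import numpy as np

# Each candidate match/transformation m yields a predictive mean mu_m and a
# log marginal likelihood logml_m; the combined surface prediction weights
# each candidate by its posterior probability. Values below are invented.
logml = np.array([-120.3, -118.9, -125.0])     # support for each match
mu = np.array([0.52, 0.49, 0.61])              # predictions at one location

w = np.exp(logml - logml.max())                # subtract max for stability
w /= w.sum()                                   # posterior model weights
print("weights:", w, "BMA prediction:", float(w @ mu))
```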
76

Turbulent dispersion of bubbles in poly-dispersed gas-liquid flows in a vertical pipe

Shi, Jun-Mei, Prasser, Horst-Michael, Rohde, Ulrich 31 March 2010 (has links) (PDF)
Turbulent dispersion is a phenomenon of practical importance in many multiphase flow systems. It has a strong effect on the distribution of the dispersed phase. Physically, this phenomenon is a result of interactions between individual particles of the dispersed phase and the turbulence eddies of the continuous phase. In a Lagrangian simulation, a particle-eddy interaction sub-model can be introduced and the effect of turbulent dispersion is automatically accounted for during particle tracking. Nevertheless, tracking particle-turbulence interactions is extremely expensive because of the small time steps required. For this reason, the Lagrangian method is restricted to small-scale dilute flow problems. In contrast, the Eulerian approach based on continuum modeling of the dispersed phase is more efficient for densely laden flows. In the Eulerian frame, the effect of turbulent dispersion appears as a turbulent diffusion term in the scalar transport equations and as the so-called turbulent dispersion force in the momentum equations. The former vanishes if the Favre (mass-weighted) averaged velocity is adopted for the transport equation system. The latter accounts for the total effect of turbulence on the interfacial forces. In many cases, only the fluctuating effect of the drag force is important; therefore, many models available in the literature consider only the drag contribution. A new, more general derivation of the FAD (Favre Averaged Drag) model in the multi-fluid modeling framework is presented and validated in this report.
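
For orientation, the drag-only turbulent dispersion force of the FAD model is commonly written in the form below; the notation and sign convention are my own gloss of the Burns et al. (2004) formulation, not an equation quoted from this report.

```latex
% Turbulent dispersion force on the dispersed phase (FAD form): the
% fluctuating part of the drag, Favre-averaged, yields a diffusion-like
% force driven by gradients of the phase volume fractions alpha_d, alpha_c.
\[
\mathbf{M}_{d}^{TD}
= -\frac{3}{4}\,\frac{C_D}{d_b}\,\alpha_d\,\rho_c\,
  \lvert \mathbf{u}_r \rvert\,
  \frac{\nu_t^{c}}{\sigma_t}
  \left( \frac{\nabla \alpha_d}{\alpha_d} - \frac{\nabla \alpha_c}{\alpha_c} \right)
\]
```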
77

Forecasting the Equity Premium and Optimal Portfolios

Bjurgert, Johan, Edstrand, Marcus January 2008 (has links)
The expected equity premium is an important parameter in many financial models, especially within portfolio optimization. A good forecast of the future equity premium is therefore of great interest. In this thesis we seek to forecast the equity premium, use it in portfolio optimization, and then give evidence on how sensitive the results are to estimation errors and how their impact can be minimized.

Linear prediction models are commonly used by practitioners to forecast the expected equity premium, with mixed results. Simply choosing the model that performs best in-sample does not take model uncertainty into account. Our approach is to still use linear prediction models, but to take model uncertainty into consideration by applying Bayesian model averaging. The predictions are used in the optimization of a portfolio with risky assets to investigate how sensitive portfolio optimization is to estimation errors in the mean vector and covariance matrix. This is performed using a Monte Carlo based heuristic called portfolio resampling.

The results show that the predictive ability of linear models is not substantially improved by taking model uncertainty into consideration. This could mean that the main problem with linear models is not model uncertainty, but rather too low predictive ability. However, we find that our approach gives better forecasts than just using the historical average as an estimate. Furthermore, we find some predictive ability in the GDP, the short-term spread, and the volatility for the five years to come. Portfolio resampling proves to be useful when the input parameters in a portfolio optimization problem suffer from vast uncertainty.
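
A minimal sketch of portfolio resampling under invented inputs; the point estimates, sample length, and risk aversion are all assumptions, and the unconstrained mean-variance rule w = (1/lambda) * inv(Sigma) * mu stands in for whatever optimizer the thesis actually uses.

```python
import numpy as np

# Portfolio resampling: draw many plausible (mean, covariance) inputs around
# the point estimates, solve a mean-variance problem for each draw, and
# average the resulting weights. All numbers here are toy values.
rng = np.random.default_rng(1)
mu_hat = np.array([0.06, 0.04, 0.05])          # estimated excess returns
cov_hat = np.diag([0.04, 0.02, 0.03])          # estimated covariance
n_obs, n_draws, risk_aversion = 120, 2_000, 3.0

weights = []
for _ in range(n_draws):
    sample = rng.multivariate_normal(mu_hat, cov_hat, size=n_obs)
    mu_s, cov_s = sample.mean(axis=0), np.cov(sample, rowvar=False)
    w = np.linalg.solve(risk_aversion * cov_s, mu_s)   # w = (1/λ) Σ⁻¹ μ
    weights.append(w)

print("resampled weights:", np.mean(weights, axis=0))
```

Averaging over the draws damps the extreme allocations that a single noisy estimate would produce, which is the property that makes the heuristic useful under vast input uncertainty.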
78

Short circuit modeling of wind turbine generators

2013 August 1900 (has links)
Modeling of wind farms to determine their short circuit contribution in response to faults is a crucial part of system impact studies performed by power utilities. Short circuit calculations are necessary to determine protective relay settings and equipment ratings and to provide data for protection coordination. The plethora of factors that influence the response of wind farms to short circuits makes short circuit modeling of wind farms an interesting, complex, and challenging task. Low voltage ride through (LVRT) requirements make it necessary for the latest generation of wind generators to be capable of providing reactive power support without disconnecting from the grid during and after voltage sags. If the wind generator must stay connected to the grid, a facility has to be provided to bypass the high rotor current that occurs during voltage sags and to prevent damage to the rotor-side power electronic circuits. This is done through crowbar circuits, which are of two types, active and passive, based on the power electronic device used in the crowbar triggering circuit. Power electronics-based converters and controls have become an integral part of wind generator systems such as the Type 3 doubly fed induction generator based wind generators. The proprietary nature of the design of these power electronics makes it difficult to obtain the information from the manufacturer necessary to model them accurately. Also, the use of power electronic controllers has led to phenomena such as sub-synchronous control interactions (SSCI) in series compensated Type 3 wind farms, which are characterized by non-fundamental frequency oscillations. SSCI affects fault current magnitude significantly and is a crucial factor that cannot be ignored while modeling series compensated Type 3 wind farms. These factors have led to disagreement and inconsistency about which techniques are appropriate for short circuit modeling of wind farms. Fundamental-frequency models such as the voltage-behind-transient-reactance model are incapable of representing the majority of critical wind generator fault characteristics, such as sub-synchronous interactions. Detailed time-domain models, though accurate, demand high levels of computation and modeling expertise. Voltage-dependent current source models based on look-up tables are not stand-alone models and provide only a black-box type of solution. The short circuit modeling methodology developed in this research work for representing a series compensated Type 3 wind farm is based on the generalized averaging theory, in which the system variables are represented as time-varying Fourier coefficients known as dynamic phasors. The modeling technique is also known as dynamic phasor modeling. The Type 3 wind generator has become the most popular type of wind generator, making it an ideal candidate for such a modeling method. The dynamic phasor model provides a generic model and achieves a middle ground between the conventional electromechanical models and the cumbersome electromagnetic time-domain models. The essence of this scheme for modeling a periodically driven system, such as power converter circuits, is to retain only particular Fourier coefficients based on the behavior of interest of the system under study, making it computationally efficient while including the required frequency components, even if non-fundamental in nature. The capability to model non-fundamental frequency components is critical for representing sub-synchronous interactions.
A 450 MW Type 3 wind farm consisting of 150 generator units was modeled using the proposed approach. The method is shown to be highly accurate in representing faults at the point of interconnection of the wind farm to the grid, for balanced and unbalanced faults as well as for the non-fundamental frequency components present in fault currents during sub-synchronous interactions. Further, the model is shown to be accurate for different degrees of transmission line compensation and different transformer configurations used in the test system.
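
The generalized averaging (dynamic phasor) representation described in the abstract rests on two standard definitions, transcribed here in my own notation, with omega_s the fundamental frequency and T = 2*pi/omega_s the averaging window:

```latex
% Dynamic phasors: over a sliding window of length T, a waveform is
% represented by time-varying Fourier coefficients <x>_k(t); retaining only
% the indices k of interest gives the computational savings described above.
\begin{align}
x(\tau) &\approx \sum_{k} \langle x \rangle_k(t)\, e^{\,j k \omega_s \tau},
\qquad
\langle x \rangle_k(t) = \frac{1}{T}\int_{t-T}^{t} x(\tau)\, e^{-j k \omega_s \tau}\,\mathrm{d}\tau, \\
\frac{\mathrm{d}}{\mathrm{d}t}\langle x \rangle_k
&= \Big\langle \frac{\mathrm{d}x}{\mathrm{d}\tau} \Big\rangle_k
   - j k \omega_s \langle x \rangle_k .
\end{align}
```

The second relation, the differentiation rule, is what turns the original switched differential equations into a set of equations for the slowly varying coefficients, including non-fundamental indices k when sub-synchronous components matter.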
79

Bayesian Hierarchical Models for Model Choice

Li, Yingbo January 2013 (has links)
With the development of modern data collection approaches, researchers may collect hundreds to millions of variables, yet may not need to utilize all available explanatory variables in predictive models. Hence, choosing models that consist of a subset of variables often becomes a crucial step. In linear regression, variable selection not only reduces model complexity, but also prevents over-fitting. From a Bayesian perspective, prior specification of model parameters plays an important role in model selection as well as parameter estimation, and often prevents over-fitting through shrinkage and model averaging.

We develop two novel hierarchical priors for selection and model averaging, for Generalized Linear Models (GLMs) and normal linear regression, respectively. They can be considered "spike-and-slab" prior distributions or, more appropriately, "spike-and-bell" distributions. Under these priors we achieve dimension reduction, since their point masses at zero allow predictors to be excluded with positive posterior probability. In addition, these hierarchical priors have heavy tails to provide robustness when MLEs are far from zero.

Zellner's g-prior is widely used in linear models. It preserves the correlation structure among predictors in its prior covariance and yields closed-form marginal likelihoods, which leads to huge computational savings by avoiding sampling in the parameter space. Mixtures of g-priors avoid fixing g in advance and can resolve consistency problems that arise with fixed g. For GLMs, we show that the mixture of g-priors using a Compound Confluent Hypergeometric distribution unifies existing choices in the literature and maintains their good properties, such as tractable (approximate) marginal likelihoods and asymptotic consistency for model selection and parameter estimation under specific values of the hyperparameters.

While the g-prior is invariant under rotation within a model, a potential problem is that it inherits the instability of ordinary least squares (OLS) estimates when predictors are highly correlated. We build a hierarchical prior based on scale mixtures of independent normals, which incorporates invariance under rotations within models like ridge regression and the g-prior, but has heavy tails like the Zellner-Siow Cauchy prior. We find this method outperforms the gold-standard mixture of g-priors and other methods in the case of highly correlated predictors in Gaussian linear models. We incorporate a non-parametric structure, the Dirichlet Process (DP), as a hyperprior to allow more flexibility and adaptivity to the data. / Dissertation
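
For reference, Zellner's g-prior and the closed-form marginal likelihood it yields in Gaussian linear regression can be stated as follows; these are standard results from the mixtures-of-g-priors literature (e.g. Liang et al.), written in my own notation for a model M with p_M predictors and coefficient of determination R_M^2.

```latex
% Zellner's g-prior on the regression coefficients and the resulting
% closed-form marginal likelihood (up to a constant common to all models),
% for a Gaussian linear model with intercept and p_M predictors.
\begin{align}
\beta \mid g, \sigma^2 &\sim
  \mathcal{N}\!\big(0,\; g\,\sigma^2 (X^\top X)^{-1}\big), \\
p(y \mid g, M) &\propto
  (1+g)^{(n-1-p_M)/2}\,\big[\,1 + g\,(1 - R_M^2)\,\big]^{-(n-1)/2}.
\end{align}
```

Mixtures of g-priors then place a prior on g and integrate it out, which is what avoids fixing g in advance while keeping the marginal likelihood tractable.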
80

Volume-Preserving Coordinate Gauges in Linear Perturbation Theory

Herman, David Leigh 21 December 2012 (has links)
The main goal of this thesis is to present cosmological perturbation theory (based on the standard Friedmann cosmological model) in volume-preserving coordinates, which then provides a suitable basis for studies in cosmological averaging. We review perturbation theory to second order, allowing for averaging to second order in future research. To solve the averaging problem we need a method of covariantly and gauge-invariantly averaging tensorial objects on a background manifold, which is a very difficult problem. However, the definition of an average takes on a particularly simple form when written in a system of volume-preserving coordinates. We therefore develop a three-dimensional and a four-dimensional volume-preserving coordinate gauge in this thesis that can be used for averaging in cosmological perturbation theory.
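
The simplification the abstract alludes to can be made concrete for a scalar field Psi averaged over a spatial domain D with 3-metric determinant h; this is my paraphrase of the standard covariant average, not an equation taken from the thesis.

```latex
% Covariant spatial average of a scalar over a domain D. In volume-preserving
% coordinates sqrt(h) is constant, so the covariant average collapses to a
% plain coordinate average, which is the simplification exploited here.
\[
\langle \Psi \rangle_{\mathcal{D}}
= \frac{\int_{\mathcal{D}} \Psi\,\sqrt{h}\,\mathrm{d}^3x}
       {\int_{\mathcal{D}} \sqrt{h}\,\mathrm{d}^3x}
\;\xrightarrow{\;\sqrt{h}\,=\,1\;}\;
\frac{1}{V_{\mathcal{D}}}\int_{\mathcal{D}} \Psi\,\mathrm{d}^3x .
\]
```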
