241

Analytical Computation of Proper Orthogonal Decomposition Modes and n-Width Approximations for the Heat Equation with Boundary Control

Fernandez, Tasha N. 01 December 2010 (has links)
Model reduction is a powerful and ubiquitous tool used to reduce the complexity of a dynamical system while preserving its input-output behavior. It has been applied throughout many different disciplines, including controls, fluid dynamics, and structural dynamics. Model reduction via proper orthogonal decomposition (POD) is utilized for the control of partial differential equations. In this thesis, analytical expressions for the POD modes are derived for the heat equation. The autocorrelation function of the latter is viewed as the kernel of a self-adjoint compact operator, and the POD modes and corresponding eigenvalues are computed by solving homogeneous integral equations of the second kind. The computed POD modes are compared to the modes obtained from snapshots for both the one-dimensional and the two-dimensional heat equation. Boundary feedback control is obtained through reduced-order POD models of the heat equation, and the effectiveness of the reduced-order control is compared to that of the full-order control. Moreover, the explicit computation of the POD modes and eigenvalues is shown to allow the computation of different n-width approximations for the heat equation, including the linear, Kolmogorov, Gelfand, and Bernstein n-widths.
242

Visualization and quantification of hydrodynamics and dose in UV reactors by 3D laser induced fluorescence

Gandhi, Varun N. 13 November 2012 (has links)
The validation of UV reactors is currently accomplished by biodosimetry, in which the reactor is treated as a "black box" and which therefore cannot account for the dependence of dose delivery on the complex hydrodynamics and the spatial variation in UV intensity. Alternative methods, such as fluorescent microspheres as non-biological surrogates and computational fluid dynamics (CFD) simulations, have been developed; however, each method has its shortcomings. In this study, a novel technique based on three-dimensional laser-induced fluorescence (3DLIF) is developed for the spatial and temporal assessment of the hydrodynamics and the UV dose delivered in a lab-scale UV reactor, and of the link between these two factors. This tool can also be utilized for the optimization of UV reactors and to provide data for the validation of CFD-based simulation techniques. Identified improvements include the region around the UV lamp where short-circuiting occurred, a longer inlet approach section that enhances reactor performance by reducing short-circuiting paths, and a longer outlet region that provides greater mixing. 3DLIF allows real-time characterization of mixing and dose delivery in a single-lamp UV reactor placed perpendicular to the flow by capturing fluorescence images emitted from a laser dye, Rhodamine 6G, using a high-speed CCD camera. In addition to three-dimensional mixing, the technique successfully visualized two-dimensional, transient mixing behaviors such as the recirculation zone and the von Kármán vortices, as well as the fluence delivery within the reactor, which has not been possible with traditional tracer-test techniques. Finally, a decomposition technique was applied to the flow and fluence-delivery concentration data to reveal similar structures that affect these phenomena. Based on this analysis, changing the flow in the reactor, i.e. the Reynolds number, will directly affect the fluence delivery.
243

White Dwarfs in the Solar Neighborhood

Subasavage, Jr., John P. 03 August 2007 (has links)
The study of white dwarfs (WDs) provides insight into understanding WD formation rates, evolution, and space density. Individually, nearby WDs are excellent candidates for astrometric planetary searches because the astrometric signature is greater than for an identical, more distant WD system. As a population, a complete volume-limited sample is necessary to provide unbiased statistics; however, their intrinsic faintness has allowed some to escape detection. The aim of this dissertation is to identify nearby WDs, accurately characterize them, and target a subset of potentially interesting WDs for follow-up analyses. The most unambiguous method of identifying new WDs is by their proper motions. After evaluating all previous southern hemisphere proper motion catalogs and selecting viable candidates, we embarked on our own southern hemisphere proper motion survey, the SuperCOSMOS-RECONS (SCR) survey. A number of interesting objects were discovered during the survey, including the 24th nearest star system -- an M dwarf with a brown dwarf companion. After a series of spectroscopic observations, a total of 56 new WD systems was identified (18 from the SCR survey and 38 from other proper motion surveys). CCD photometry was obtained for most of the 56 new systems in an effort to model the physical parameters and obtain distance estimates via spectral energy distribution fitting. An independent distance estimate was also obtained by deriving a color-M_V relation for several colors based on WDs with known distances. Any object whose distance estimate was within 25 pc was targeted for a trigonometric parallax via our parallax program, CTIOPI. Currently, there are 62 WD systems on CTIOPI. A subset of 53 systems has enough data for at least a preliminary parallax (24 are definitive). Of those 53 systems, nine are previously known WDs within 10 pc that we are monitoring for perturbations from unseen companions, and an additional 29 have distances within 25 pc.
Previously, there were 109 known WDs with parallaxes placing them within 25 pc; therefore, our effort has already increased the nearby sample by 27%. In addition, at least two objects show hints of perturbations from unseen companions and need follow-up analyses.
244

Rigid Designation, the Modal Argument, and the Nominal Description Theory

Isenberg, Jillian January 2005 (has links)
In this thesis, I describe and evaluate two recent accounts of naming. These accounts are motivated by Kripke's response to Russell's Description Theory of Names (DTN). In particular, I consider Kripke's Modal Argument (MA) and various arguments that have been given against it, as well as Kripke's responses to these arguments. Further, I outline a version of MA that has recently been presented by Scott Soames, and consider how he responds to the criticisms that the argument faces. In order to evaluate the claim that MA is decisive against all description theories, I outline the Nominal Description Theory (NDT) put forth by Kent Bach and consider whether it constitutes a principled response to MA. I do so by exploring how Bach both responds to Kripke's arguments against descriptivism and highlights the problems with rigid designation as a purely semantic thesis. Finally, I consider the relative merits of the accounts put forth by Bach and Soames. Upon doing so, I argue that MA is not as decisive against description theories as it has long been thought to be. In fact, NDT seems to provide a better account of our uses of proper names than the rigid designation thesis as presented by Kripke and Soames.
245

Multi-Scale Thermal Modeling Methodology for High Power-Electronic Cabinets

Burton, Ludovic Nicolas 24 August 2007 (has links)
Future generations of all-electric ships will be highly dependent on electric power, since every system aboard, such as the drive propulsion, the weapon system, and the communication and navigation systems, will be electrically powered. Power conversion modules (PCMs) will be used to transform and distribute the power as desired in various zones within the ships. As power densities increase at both the component and system levels, high-fidelity thermal models of those PCMs are indispensable for reaching high-performance, energy-efficient designs. Efficient system-level thermal management requires modeling and analysis of complex turbulent fluid flow and heat transfer processes across several decades of length scales. In this thesis, a methodology for thermal modeling of complex PCM cabinets used in naval applications is offered. High-fidelity computational fluid dynamics and heat transfer (CFD/HT) models are created in order to analyze the heat dissipation from the chip to the multi-cabinet level and to optimize turbulent convection cooling inside the cabinet enclosure. Conventional CFD/HT modeling techniques for such complex, multi-scale systems are severely limited as design or optimization tools: the large size of such models and the complex physics involved result in extremely slow processing times. A multi-scale approach has been developed to accurately predict the overall airflow conditions at the cabinet level as well as the airflow around components, which dictates the chip temperatures in detail. Models at different length scales are linked together by matching boundary conditions. The advantage is that this allows high-fidelity models at each length scale, and more detailed simulations are obtained than could be accomplished with a single-model methodology. It was found that, under the prescribed design parameters, the power cabinets experience operating-point airflow rates that are much lower than the design requirements.
The flow is unevenly distributed through the various bays: approximately 90% of the cold plenum inlet flow rate goes exclusively through Bay 1 and Bay 2. Recirculation and reverse flow are observed in regions lacking flow motion. As a result, high air temperatures, and consequently high component temperatures, are also experienced in the upper bays of the cabinet. A proper orthogonal decomposition (POD) methodology has been applied to develop reduced-order compact models of the PCM cabinets. The reduced-order modeling approach based on POD reduces numerical models containing 35 × 10^9 DOF down to fewer than 20 DOF, while still retaining great accuracy. The reduced-order models developed yield predictions of the full-field 3-D cabinet temperatures within 30 seconds, as opposed to the CFD/HT simulations, which take more than 3 hours on a high-power computer cluster. The reduced-order modeling methodology developed could be a useful tool to quickly and accurately characterize the thermal behavior of any electronic system, and it provides a good basis for thermal design and optimization.
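The drastic DOF reduction described in this abstract rests on energy-based mode truncation: keep only the POD modes whose squared singular values capture nearly all of the snapshot energy. A minimal sketch on synthetic "temperature" snapshots follows; the snapshot construction and the 99.9% energy threshold are illustrative assumptions, not the thesis data.

```python
import numpy as np

# Energy-based POD truncation. Synthetic temperature snapshots built
# from three smooth spatial patterns plus small noise stand in for the
# CFD/HT fields of the thesis.
rng = np.random.default_rng(0)
n_dof, n_snap = 5000, 40
x = np.linspace(0.0, 1.0, n_dof)

patterns = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x), x * (1 - x)])
amps = rng.normal(size=(n_snap, 3)) * np.array([10.0, 3.0, 1.0])
T = amps @ patterns + 0.01 * rng.normal(size=(n_snap, n_dof))

# SVD of the (n_dof x n_snap) snapshot matrix.
U, s, Vt = np.linalg.svd(T.T, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)  # modes for 99.9% energy

# Rank-r reconstruction: thousands of DOF collapse to r coefficients
# per snapshot, with a small relative error by construction.
T_r = (U[:, :r] * s[:r]) @ Vt[:r]
rel_err = np.linalg.norm(T.T - T_r) / np.linalg.norm(T.T)
```

Because the cumulative-energy criterion bounds the discarded singular values, the relative reconstruction error is guaranteed to stay below the square root of the discarded energy fraction.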
246

Dynamical Modeling Of The Flow Over Flapping Wing By Applying Proper Orthogonal Decomposition And System Identification

Durmaz, Oguz 01 September 2011 (has links) (PDF)
In this study, dynamical modeling of the unsteady flow over a flapping wing is considered. The technique is based on collecting instantaneous velocity field data of the flow using Particle Image Velocimetry (PIV), applying image processing to these snapshots to locate the airfoil, filling the airfoil and its surface with proper velocity data, applying Proper Orthogonal Decomposition (POD) to the post-processed images to compute the POD modes and time coefficients, and finally fitting a discrete-time state-space dynamical model to the trajectories of the time coefficients using subspace system identification (N4SID). The procedure is applied in MATLAB to data obtained for a NACA 0012 airfoil, an SD 7003 airfoil, an elliptic airfoil, and a flat plate, and the results show that the dynamical model obtained can represent the flow dynamics with acceptable accuracy.
247

A Study on English Article Acquisition by Mandarin-Chinese Speakers

Shao, Yea-chyi 27 August 2009 (has links)
The study aims to examine how the English article system is acquired by Mandarin-Chinese speakers in two domains, the semantic domain and the sentence level, by analyzing oral story-telling data produced by forty 19-to-20-year-old college students in Taiwan (20 males and 20 females), divided into low- and high-proficiency levels based on their results on the Michigan Listening Comprehension Test. The production data were classified into four semantic types marked by a combination of two universal semantic concepts, specificity and definiteness, for the purpose of examining the Fluctuation Hypothesis (FH) proposed by Ionin (2004), who argued for L2 access to Universal Grammar by predicting that L2 learners whose L1 lacks an article system will fluctuate between the two parameter settings of specificity and definiteness. It is found that overuse of the did occur in [+specificity, -definiteness] contexts where the target form is a, particularly for low-level learners. In addition, to probe more closely into how L1 Mandarin-Chinese speakers use articles in L2 grammar within Ionin's framework, a model of the linguistic properties marking specificity and definiteness in Chinese is proposed, comparing the English article system with the Chinese classifier system and arguing that L1 interference may take place in the semantic domain for L1 Mandarin-Chinese speakers. The substitution of nage for the definite article the in [+specificity, +definiteness] contexts, and of the numeral yige ('one') for the indefinite article a only in [+specificity, -definiteness] and [-specificity, -definiteness] contexts, sheds light on the possibility of L1 transfer in the semantic domain. As for article use in sentential positions, due to the definiteness effect and the subject indefiniteness effect in Chinese, it is predicted that L1 Mandarin-Chinese speakers would drop articles more often in preverbal positions than in postverbal positions.
The results showed that low-level learners did drop more articles in preverbal positions than in postverbal positions, whereas advanced learners showed the opposite pattern, which implies that beginners are easily governed by the definiteness effect; that is, L1 is at play at the initial state of L2 grammar. Overall, the advanced learners used articles more accurately than the low-level learners did, suggesting that advanced Mandarin-Chinese L2 English learners may gradually reset the parameters of the L2 grammar in acquiring the English article system. Furthermore, the different error types produced by the participants were classified in the study and given theoretical discussion. A surprising finding is that the low-level learners frequently misused the Cinderella for Cinderella in the data. Such errors may provide evidence of L2 access to UG, since the Cinderella cannot be used in English and there is no determiner the projected in Chinese proper names; the overuse of the further illustrates the existence of a projecting D for L1 Chinese learners. The acquisition rate of article use was measured by SOC (Suppliance in Obligatory Contexts) and TLU (Target-Like Use). The results showed that the most difficult article for both proficiency levels is the zero article Ø. The advanced learners can use the more accurately than the low-proficiency learners, owing to the high rate of overgeneralization of the by the low-proficiency group. In general, the results of the current study bear on the issue of the accessibility of UG and the possibility of parameter (re-)setting. It is also shown that L1 plays a significant role in L2 article use not only in the semantic domain but also at the sentential level for L1 Mandarin-Chinese speakers, especially those at the initial state of L2 grammar.
248

Energy efficient thermal management of data centers via open multi-scale design

Samadiani, Emad 20 August 2009 (has links)
Data centers are computing infrastructure facilities that house arrays of electronic racks containing high-power data processing and storage equipment whose temperature must be maintained within allowable limits. In this research, the sustainable and reliable operation of the electronic equipment in data centers is shown to be possible through the Open Engineering Systems paradigm. A design approach is developed to bring adaptability and robustness, two main features of open systems, to multi-scale convective systems such as data centers. The presented approach is centered on the integration of three constructs: a) Proper Orthogonal Decomposition (POD) based multi-scale modeling, b) the compromise Decision Support Problem (cDSP), and c) robust design, to overcome the challenges of thermal-fluid modeling, multiple objectives, and inherent variability management, respectively. Two new POD-based reduced-order thermal modeling methods are presented to simulate the multi-parameter-dependent temperature field in multi-scale thermal/fluid systems such as data centers. The methods are verified to achieve an adaptable, robust, and energy-efficient thermal design of an air-cooled data center cell subject to an annual increase in power consumption over the next ten years. Also, a simpler reduced-order modeling approach centered on the POD technique with modal coefficient interpolation is validated against experimental measurements in an operational data center facility.
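The modal-coefficient-interpolation idea mentioned at the end of this abstract can be sketched as follows: POD modes are extracted from snapshots at several parameter values, and the modal coefficients are interpolated to predict the field at an unseen parameter value. The parameter-dependent 1-D profile below is a hypothetical stand-in, not data from the thesis.

```python
import numpy as np

# POD with modal coefficient interpolation over one design parameter.
x = np.linspace(0.0, 1.0, 300)

def field(p):
    # Hypothetical parameter-dependent "temperature" profile.
    return np.exp(-p * x) + 0.5 * np.sin(np.pi * p * x / 4.0)

params = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
snaps = np.stack([field(p) for p in params], axis=1)   # (nx, n_param)

# POD basis from the snapshot set; coefficients by projection.
U, s, Vt = np.linalg.svd(snaps, full_matrices=False)
r = 4
coeffs = U[:, :r].T @ snaps                            # (r, n_param)

# Interpolate each modal coefficient over the parameter, then
# reconstruct the field at an unseen parameter value.
p_new = 2.5
c_new = np.array([np.interp(p_new, params, coeffs[k]) for k in range(r)])
T_pred = U[:, :r] @ c_new
rel_err = (np.linalg.norm(T_pred - field(p_new)) /
           np.linalg.norm(field(p_new)))
```

The appeal of this variant is that the expensive simulations are done only at the sampled parameter values; predictions at new values cost a few dot products and a 1-D interpolation.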
249

Modelling temporal aspects of healthcare processes with Ontologies

Afzal, Muhammad January 2010 (has links)
This thesis presents an ontological model of time aspects for a healthcare organization. It provides information about activities that take place at different intervals of time at Ryhov Hospital. These activities are series of actions that may happen in a predefined sequence at predefined times, or at any time, in a general ward or the emergency ward of Ryhov Hospital.

To achieve this objective, our supervisor conducted a workshop at the start of the thesis work, in which domain experts explained the main ideas behind ward activities. From this workshop the author gained substantial knowledge about activities and time aspects. The author then carried out a literature review to acquire further knowledge about ward activities, time aspects, and the methodology steps essential for building the ontological model. After the ontological model of time aspects was developed, our supervisor conducted a second workshop, in which the author presented the model for evaluation.
250

$L_\infty$-Norm Computation for Descriptor Systems

Voigt, Matthias 15 July 2010 (has links) (PDF)
In many applications from industry and technology, computer simulations are performed using models that can be formulated as systems of differential equations. Often the equations are subject to additional algebraic constraints; in this context we speak of descriptor systems. Very important characteristic values of such systems are the $L_\infty$-norms of the corresponding transfer functions. The main goal of this thesis is to extend a numerical method for the computation of the $L_\infty$-norm from standard state-space systems to descriptor systems. For this purpose we develop a numerical method to check whether the transfer function of a given descriptor system is proper or improper, and additionally use this method to reduce the order of the system to decrease the cost of the $L_\infty$-norm computation. When computing the $L_\infty$-norm it is necessary to compute the eigenvalues of certain skew-Hamiltonian/Hamiltonian matrix pencils composed from the system matrices. We show how to extend these matrix pencils to skew-Hamiltonian/Hamiltonian matrix pencils of larger dimension to obtain more reliable and accurate results. We also consider discrete-time systems, apply the extension strategy to the arising symplectic matrix pencils, and transform these to more convenient structures in order to apply structure-exploiting eigenvalue solvers to them. We also investigate a new structure-preserving method for the computation of the eigenvalues of skew-Hamiltonian/Hamiltonian matrix pencils and use it to increase the accuracy of the computed eigenvalues even further. In particular, we ensure the reliability of the $L_\infty$-norm algorithm by means of this new eigenvalue solver. Finally, we describe the implementation of the algorithms in Fortran and test them on two real-world examples.
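As a point of reference for the quantity this abstract is about: for a proper descriptor system $G(s) = C(sE - A)^{-1}B + D$, the $L_\infty$-norm is the supremum over frequencies of the largest singular value of $G(i\omega)$. A dense frequency sweep gives a crude estimate that can serve as a sanity check on a pencil-based algorithm; it is in no way a substitute for the structured eigenvalue method of the thesis. The lightly damped second-order example below is an illustrative assumption.

```python
import numpy as np

# L-infinity norm of G(s) = C (sE - A)^{-1} B + D estimated by a dense
# sweep of the largest singular value of G(i*omega). Here E = I, so the
# descriptor system reduces to a standard state-space system: a lightly
# damped oscillator with transfer function 1 / (s^2 + 0.2 s + 1).
E = np.eye(2)
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

def sigma_max(w):
    """Largest singular value of the transfer function at s = i*w."""
    G = C @ np.linalg.solve(1j * w * E - A, B) + D
    return np.linalg.svd(G, compute_uv=False)[0]

omegas = np.linspace(0.0, 5.0, 20001)
norm_est = max(sigma_max(w) for w in omegas)

# For a second-order system with damping ratio zeta = 0.1, the exact
# peak gain is 1 / (2 * zeta * sqrt(1 - zeta**2)), about 5.025.
```

A sweep like this can miss sharp peaks between grid points, which is precisely why guaranteed methods based on Hamiltonian-structured eigenvalue computations, as developed in the thesis, are preferred.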
