About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Computational Tools for Chemical Data Assimilation with CMAQ

Gou, Tianyi 15 February 2010 (has links)
The Community Multiscale Air Quality (CMAQ) system is the Environmental Protection Agency's main modeling tool for atmospheric pollution studies. CMAQ-ADJ, the adjoint model of CMAQ, offers new analysis capabilities such as receptor-oriented sensitivity analysis and chemical data assimilation. This thesis presents the construction, validation, and properties of new adjoint modules in CMAQ, and illustrates their use in sensitivity analyses and data assimilation experiments. The new discrete adjoint module for advection is implemented with the aid of the automatic differentiation tool TAMC and is fully validated by comparing the adjoint sensitivities with finite difference values. In addition, adjoint sensitivities with respect to boundary conditions and boundary condition scaling factors are developed and validated in CMAQ. To investigate numerically the impact of the continuous and discrete advection adjoints on data assimilation, various four-dimensional variational (4D-Var) data assimilation experiments are carried out with the 1D advection PDE, and with CMAQ advection using synthetic and real observation data. The results show that the optimization procedure gives better estimates of the reference initial condition and converges faster when using gradients computed by the continuous adjoint approach. This counter-intuitive result is explained using the nonlinearity properties of the piecewise parabolic method (the numerical discretization of advection in CMAQ). Data assimilation experiments with real observations are carried out over a domain encompassing Texas for the simulation period August 30 to September 1, 2006, and are used to improve both initial and boundary conditions. These experiments further validate the tools developed in this thesis. / Master of Science
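The validation strategy described above — checking adjoint-computed sensitivities against finite difference values — can be sketched on a toy 1D advection problem. The scheme, domain size, and tolerances below are illustrative choices, not those of CMAQ-ADJ or the thesis:

```python
import numpy as np

def step(u, c=0.5):
    # first-order upwind step for du/dt + a du/dx = 0 (periodic), CFL number c
    return u - c * (u - np.roll(u, 1))

def forward(u0, nsteps=20):
    u = u0.copy()
    for _ in range(nsteps):
        u = step(u)
    return u

def cost(u0, obs, nsteps=20):
    # 4D-Var-style misfit between the final model state and observations
    d = forward(u0, nsteps) - obs
    return 0.5 * d @ d

def adjoint_gradient(u0, obs, nsteps=20, c=0.5):
    # the adjoint of the linear upwind step is its transpose, applied in reverse
    lam = forward(u0, nsteps) - obs
    for _ in range(nsteps):
        lam = lam - c * (lam - np.roll(lam, -1))  # transpose of step()
    return lam

# validate the adjoint against one-sided finite differences
rng = np.random.default_rng(0)
u0 = rng.standard_normal(32)
obs = rng.standard_normal(32)
g_adj = adjoint_gradient(u0, obs)
eps = 1e-6
g_fd = np.array([(cost(u0 + eps * np.eye(32)[i], obs) - cost(u0, obs)) / eps
                 for i in range(32)])
assert np.allclose(g_adj, g_fd, atol=1e-4)
```

For a linear scheme like this one the discrete adjoint matches finite differences to truncation error; the interesting discrepancies the thesis analyses arise for nonlinear discretizations such as the piecewise parabolic method.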
92

High-sensitivity Full-field Quantitative Phase Imaging Based on Wavelength Shifting Interferometry

Chen, Shichao 06 September 2019 (has links)
Quantitative phase imaging (QPI) is a category of imaging techniques that retrieve the phase information of a sample quantitatively. QPI features label-free contrast and non-contact detection, and has thus gained rapidly growing attention in biomedical imaging. Capable of resolving biological specimens at tissue or cell level, QPI has become a powerful tool to reveal their structural, mechanical, physiological and spectroscopic properties. Over the past two decades, QPI has seen a broad spectrum of evolving implementations. However, only a few have seen successful commercialization. The challenges are manifold. A major problem for many QPI techniques is the necessity of a custom-made system that is hard to interface with existing commercial microscopes. For such techniques, the cost is high and the integration of different imaging modes requires nontrivial hardware modifications. Another limiting factor is insufficient sensitivity. In QPI, sensitivity characterizes the system repeatability and determines the quantification resolution of the system. With more emerging applications in cell imaging, the requirement for sensitivity becomes more stringent. In this work, a category of highly sensitive full-field QPI techniques based on wavelength shifting interferometry (WSI) is proposed. On the one hand, the full-field implementations, unlike point-scanning, spectral-domain QPI techniques, require no mechanical scanning to form a phase image. On the other hand, WSI preserves the integrity of the interferometer and is compatible with multi-modal imaging requirements. The techniques proposed here therefore have the potential to be readily integrated into ubiquitous lab microscopes and equip them with quantitative imaging functionality.
In WSI, the shifts in wavelength can be applied in fine steps, termed swept source digital holographic phase microscopy (SS-DHPM), or in a multi-wavelength-band manner, termed low coherence wavelength shifting interferometry (LC-WSI). SS-DHPM brings an additional capability to perform spectroscopy, whilst LC-WSI achieves a faster imaging rate, which has been demonstrated with live sperm cell imaging. In an attempt to integrate WSI with existing commercial microscopes, we also discuss the possibility of demodulation for low-cost sources and a common path implementation. Besides experimentally demonstrating the high sensitivity (limited only by shot noise) of the proposed techniques, a novel sensitivity evaluation framework is introduced for the first time in QPI. This framework examines the Cramér-Rao bound (CRB), algorithmic sensitivity and experimental sensitivity, and facilitates the diagnosis of algorithm efficiency and system efficiency. The framework can be applied not only to the WSI techniques we proposed, but also to a broad range of QPI techniques. Several popular phase shifting interferometry techniques as well as off-axis interferometry are studied. The comparisons between them provide insights into algorithm optimization and the energy efficiency of sensitivity. / Doctor of Philosophy / The most common imaging systems nowadays capture the image of an object with the irradiance perceived by the camera. Based on the intensity contrast, morphological features such as edges, humps, and grooves can be inferred to qualitatively characterize the object. Nevertheless, in scientific measurements and research applications, a quantitative characterization of the object is desired.
Quantitative phase imaging (QPI) is such a category of imaging techniques: it retrieves the phase information of the sample by properly designing the irradiance capturing scheme and post-processing the data, converting them to quantitative metrics such as surface height, material density and so on. The imaging process of QPI neither harms the sample nor leaves exogenous residue. It has thus gained rapidly growing attention in biomedical imaging. Over the past two decades, QPI has seen a broad spectrum of evolving implementations, but only a few have seen successful commercialization. The challenges are manifold, but one stands out: expensive optical setups that are often incompatible with existing commercial microscope platforms. The setups are also complicated enough that, without a solid optics background, it is difficult to operate the system for imaging applications. Another limiting factor is an insufficient understanding of sensitivity. In QPI, sensitivity characterizes the system repeatability and determines its quantification resolution. With more emerging applications in cell imaging, the requirement for sensitivity becomes more stringent. In this work, a category of highly sensitive full-field QPI techniques based on wavelength shifting interferometry (WSI) is proposed. WSI images the full field of the sample simultaneously, unlike techniques that scan a probe point across the sample. It also preserves the integrity of the interferometer, the key structure enabling highly sensitive measurement in QPI methods. The techniques proposed here therefore have the potential to be readily integrated into ubiquitous lab microscopes and equip them with quantitative imaging functionality.
Two WSI implementations are proposed: swept source digital holographic phase microscopy (SS-DHPM) and low coherence wavelength shifting interferometry (LC-WSI). SS-DHPM brings an additional capability to perform spectroscopy, whilst LC-WSI achieves a faster imaging rate, which has been demonstrated with live sperm cell imaging. In an attempt to integrate WSI with existing commercial microscopes, we also discuss the possibility of demodulation for low-cost sources and a common path implementation. Besides experimentally demonstrating the high sensitivity of the proposed techniques, a novel sensitivity evaluation framework is introduced for the first time in QPI. This framework not only examines the realistic sensitivity obtained in experiments, but also compares it to theoretical values. The framework can be applied to a broad range of QPI techniques, providing insights into algorithm optimization and the energy efficiency of sensitivity.
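As a generic illustration of how interferometric QPI recovers a full-field phase map from intensity frames, the sketch below uses the classic four-step phase-shifting estimator on synthetic data. This is not the thesis's WSI demodulation (which works with wavelength shifts and is more involved); it only shows the shared principle of computing phase per pixel from shifted interferograms:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    # classic 4-step phase-shifting estimator: frames shifted by pi/2 each,
    # I_k = A + B*cos(phi + k*pi/2); then phi = atan2(I3 - I1, I0 - I2)
    return np.arctan2(i3 - i1, i0 - i2)

# synthetic full-field test: a smooth phase map imaged with four shifted frames
x = np.linspace(-1.0, 1.0, 64)
phi_true = 1.2 * np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))  # within (-pi, pi)
frames = [5.0 + 2.0 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)
assert np.allclose(phi_est, phi_true, atol=1e-8)
```

The per-pixel arctangent makes the estimate independent of the background A and fringe contrast B, which is one reason phase-shifting schemes achieve high sensitivity in practice.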
93

Permanent Coexistence for Omnivory Models

Vance, James Aaron 06 September 2006 (has links)
One of the basic questions of concern in mathematical biology is the long-term survival of each species in a set of populations. This question is particularly puzzling for natural systems with omnivory because simple mathematical models of omnivory are prone to species extinction. Omnivory is defined as the consumption of resources from more than one trophic level. In this work, we investigate three omnivory models of increasing complexity. We use the notion of permanent coexistence, or permanence, to study the long-term survival of three interacting species governed by a mixture of competition and predation. We show the permanence of our models under certain parameter restrictions and give the biological interpretations of these restrictions. Sensitivity analysis is used to obtain important information for meaningful parameter data collection. Examples are also given that demonstrate the ubiquity of omnivory in natural systems. / Ph. D.
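The kind of model in question can be sketched as a three-species omnivory system in which the predator consumes both the resource and the intermediate consumer. The parameters below are illustrative choices (not those of the thesis) that admit an all-positive equilibrium, together with the invasibility check that typically underlies permanence arguments:

```python
import numpy as np

# illustrative parameters (not taken from the thesis) admitting coexistence
r, K = 1.0, 1.0               # resource growth rate and carrying capacity
a1, a2, a3 = 1.0, 0.5, 0.5    # attack rates: C on R, P on R, P on C
b1, b2, b3 = 0.5, 0.4, 0.4    # conversion efficiencies
d1, d2 = 0.2, 0.18            # consumer and predator death rates

def rhs(state):
    # omnivory: predator P feeds on both the resource R and the consumer C
    R, C, P = state
    dR = r * R * (1 - R / K) - a1 * R * C - a2 * R * P
    dC = b1 * a1 * R * C - a3 * C * P - d1 * C
    dP = b2 * a2 * R * P + b3 * a3 * C * P - d2 * P
    return np.array([dR, dC, dP])

# an interior (all-positive) equilibrium exists for these parameters
eq = np.array([0.6, 0.3, 0.2])
assert np.allclose(rhs(eq), 0.0, atol=1e-12)

# invasibility check, a typical ingredient of permanence arguments:
# at the predator-free R-C equilibrium the predator's per-capita growth is > 0
R_b = d1 / (b1 * a1)              # consumer null-cline gives the boundary R
C_b = r * (1 - R_b / K) / a1      # resource null-cline gives the boundary C
invasion_rate = b2 * a2 * R_b + b3 * a3 * C_b - d2
assert invasion_rate > 0
```

Permanence proofs go well beyond this pointwise check (they must control the behavior near the whole boundary), but positive invasion rates at boundary equilibria are the standard first diagnostic.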
94

Modeling and Analysis for Optimization of Unsteady Aeroelastic Systems

Ghommem, Mehdi 06 December 2011 (has links)
Simulating the complex physics and dynamics associated with unsteady aeroelastic systems is often attempted with high-fidelity numerical models. While these high-fidelity approaches are powerful in terms of capturing the main physical features, they may not discern the role of underlying phenomena that are interrelated in a complex manner. This often makes it difficult to characterize the relevant causal mechanisms of the observed features. Besides, the extensive computational resources and time associated with the use of these tools can limit the capability of assessing different configurations for design purposes. These shortcomings motivate the development of simplified and reduced-order models that embody the relevant physical aspects and elucidate the underlying phenomena. In this work, different fluid and aeroelastic systems are considered and reduced-order models governing their behavior are developed. In the first part of the dissertation, a methodology based on the method of multiple scales is implemented to show its usefulness and effectiveness in characterizing the physics underlying the system, implementing control strategies, and identifying high-impact system parameters. In the second part, the unsteady aerodynamic aspects of flapping micro air vehicles (MAVs) are modeled. This modeling is required for evaluating the performance requirements associated with flapping flight. The extensive computational resources and time associated with high-fidelity simulations limit the ability to perform optimization and sensitivity analyses in the early stages of MAV design. To overcome this and enable rapid and reasonably accurate exploration of a large design space, a medium-fidelity aerodynamic tool (the unsteady vortex lattice method) is implemented to simulate flapping wing flight.
This model is then combined with uncertainty quantification and optimization tools to test and analyze the performance of flapping wing MAVs under varying conditions. This analysis provides guidance and a baseline for assessing MAV performance in the early stages of decision making on flapping kinematics, flight mechanics, and control strategies. / Ph. D.
95

Barriers to the development of smart cities in Indian context

Rana, Nripendra P., Luthra, S., Mangla, S.K., Islam, R., Roderick, S., Dwivedi, Y.K. 26 September 2020 (has links)
Smart city development is gaining considerable recognition in the scholarly literature and in international policies throughout the world. The study aims to identify the key barriers to smart cities from a review of the existing literature and the views of experts in this area. It further attempts to prioritise these barriers, recognising the most important barrier category and ranking the specific barriers within each category for the development of smart cities in India. From the existing literature, this work identified 31 barriers to smart city development and divided them into six categories. The fuzzy Analytic Hierarchy Process (AHP) technique was employed to prioritise the selected barriers. Findings reveal that 'Governance' is the most significant category of barriers for smart city development, followed by 'Economic', 'Technology', 'Social', 'Environmental', and 'Legal and Ethical'. The authors also performed a sensitivity analysis to validate the findings of the study. This research is useful to governments and policymakers for removing potential obstacles to smart city development initiatives in developing countries like India.
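The prioritisation step can be sketched with the crisp AHP core, where a pairwise comparison matrix is turned into priority weights by the geometric-mean method; the fuzzy variant used in the paper replaces the judgements with triangular fuzzy numbers and defuzzifies before ranking. The 3×3 comparison matrix below is hypothetical, not the paper's actual expert judgements:

```python
import numpy as np

def ahp_weights(M):
    # geometric-mean prioritisation of a pairwise-comparison matrix:
    # weight_i proportional to the geometric mean of row i
    g = np.prod(M, axis=1) ** (1.0 / M.shape[0])
    return g / g.sum()

# hypothetical judgements comparing three barrier categories:
# Governance vs Economic vs Technology (1 = equal, 3/5 = moderately/strongly more important)
M = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])
w = ahp_weights(M)
assert abs(w.sum() - 1.0) < 1e-12
assert w[0] > w[1] > w[2]   # Governance ranked first, mirroring the study's finding
```

The reciprocal structure (M[j][i] = 1/M[i][j]) is what makes the geometric-mean weights a consistent summary of the judgements.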
96

Improving Runoff Estimation at Ungauged Catchments

Zelelew, Mulugeta January 2012 (has links)
Water infrastructures have been implemented to support the vital activities of human society. At the same time, these developments have altered natural catchment response characteristics, challenging society to implement effective water resources planning and management strategies. The Telemark area in southern Norway has seen a large number of water infrastructure developments, particularly hydropower, over more than a century. Recent developments in decision support tools for flood control and reservoir operation have raised the need to compute inflows from local catchments, most of which are regulated or have no observed data. This motivated this PhD thesis work, which aims at improving runoff estimation at ungauged catchments; the research results are presented in four scientific papers.  The inverse distance weighting, inverse distance squared weighting, ordinary kriging, universal kriging and kriging with external drift methods were applied to analyse precipitation variability and estimate daily precipitation in the study area. Geostatistically based univariate and multivariate map-correlation concepts were applied to analyse and physically understand regional hydrological response patterns. The Sobol variance-based sensitivity analysis (VBSA) method was used to investigate the significance of the HBV hydrological model parameterization for model response variations and to evaluate the model's reliability as a prediction tool. The transferability of the HBV hydrological model space to ungauged catchments was also studied.  The analyses showed that the inverse distance weighting variants are the preferred spatial interpolation methods in areas with a relatively dense precipitation station network. In mountainous areas and in areas where the precipitation station network is relatively sparse, the kriging variants are preferred.
The regional hydrological response correlation analyses suggested that geographic proximity alone cannot explain all hydrological response correlations in the study area. Moreover, when the multivariate map-correlation analysis was applied, two distinct regional hydrological response patterns, the radial and elliptical types, were identified. The presence of these patterns influenced the location of the reference streamgauges best correlated with the ungauged catchments. The nearest streamgauge was found to be the best correlated in areas where the radial-type hydrological response pattern is dominant. In areas where the elliptical-type pattern is dominant, the nearest reference streamgauge was not necessarily the best correlated. The VBSA verified that varying a minimum of four to six influential HBV model parameters can sufficiently simulate the catchments' response characteristics when emphasis is given to fitting the high flows. Varying a minimum of six influential model parameters is necessary to sufficiently simulate the catchments' responses and maintain model performance when emphasis is given to fitting the low flows. However, varying more than nine of the fifteen HBV model parameters makes no significant change in model performance.  The hydrological model space transfer study indicated that estimation of representative runoff at ungauged catchments cannot be guaranteed by transferring model parameter sets from a single donor catchment. On the other hand, applying an ensemble-based model space transfer approach, utilizing model parameter sets from multiple donor catchments, improved model performance at the ungauged catchments. The results also suggested that high model performance can be achieved by integrating model parameter sets from two to six donor catchments.
Objectively minimizing the HBV model parametric dimensionality and only sampling the sensitive model parameters, maintained the model performance and limited the model prediction uncertainty.
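The inverse distance weighting family of interpolators compared above can be sketched in a few lines; stations, precipitation values, and the query point below are illustrative, not the Telemark data:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    # inverse distance weighting: weights ~ 1/d^power;
    # power=2 corresponds to the inverse distance squared variant
    d = np.linalg.norm(xy_obs - xy_query, axis=1)
    if np.any(d == 0):                 # query coincides with a station: exact
        return float(z_obs[np.argmin(d)])
    w = 1.0 / d ** power
    return float(w @ z_obs / w.sum())

# four illustrative stations on a unit square with daily precipitation (mm)
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
precip = np.array([10.0, 14.0, 12.0, 16.0])
z = idw(stations, precip, np.array([0.5, 0.5]))
assert min(precip) < z < max(precip)      # IDW never extrapolates beyond the data
assert abs(z - precip.mean()) < 1e-9      # equidistant query -> plain average
```

Kriging differs from this sketch in that its weights come from a fitted spatial covariance (variogram) model rather than from distance alone, which is why it tends to win in sparse or mountainous networks.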
97

Análise de Sensibilidade Topológica / Topological Sensitivity Analysis

Novotny, Antonio André 13 February 2003 (has links)
Conselho Nacional de Desenvolvimento Cientifico e Tecnologico / The Topological Sensitivity Analysis results in a scalar function, denoted the Topological Derivative, that supplies, for each point of the domain of definition of the problem, the sensitivity of a given cost function when a small hole is created there. However, when a hole is introduced, it is no longer possible to establish a homeomorphism between the domains. Due to this mathematical difficulty, the Topological Derivative may become restrictive to compute, despite being extremely general. Thus, in the present work a new method to calculate the Topological Derivative via Shape Sensitivity Analysis is proposed. This result, formally proved through a theorem, leads to a simpler and more general methodology than others found in the literature. The Topological Sensitivity Analysis is performed for several Engineering problems, and the results obtained are used to improve the design of mechanical devices by introducing holes. The same theory developed to calculate the Topological Derivative is used to determine the sensitivity of the cost function when a small inclusion is introduced at each position of the domain, resulting in a novel concept denoted Configurational Sensitivity Analysis; possible applications are discussed in the context of Inverse Problems and the modelling of phenomena that undergo changes in the physical properties of the medium. Thus, the methodology developed in the present work results in a framework with potential applications in Topology Optimization, Inverse Problems and Mechanical Modelling, which may be seen, from now on, not only as a method to calculate the Topological Derivative, but as a promising research area in Computational Modelling.
99

Robust design : Accounting for uncertainties in engineering

Lönn, David January 2008 (has links)
This thesis concerns the optimization of structures considering various uncertainties. The overall objective is to find methods to create solutions that are optimal both in the sense of handling the typical load case and of minimising the variability of the response, i.e. robust optimal designs. Traditionally optimized structures may be sensitive to small perturbations in the design or loading conditions, which of course are inevitable. To create robust designs, it is necessary to account for all conceivable variations (or at least the influential ones) in the design process. The thesis is divided into two parts. The first part serves as a theoretical background to the second part, the two appended articles; it covers the concept of robust design, basic statistics, optimization theory and metamodelling. The first appended paper is an application of existing methods to a large industrial example problem. A sensitivity analysis is performed on a Scania truck cab subjected to impact loading in order to identify the variables with the greatest influence on the crash responses. The second paper presents a new method that may be used in robust optimizations, that is, optimizations that account for variations and uncertainties. The method is demonstrated on both an analytical example and a Finite Element example of an aluminium extrusion subjected to axial crushing. / ROBDES
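The robust-design idea — optimizing mean performance while penalising variability — can be sketched with a toy Monte Carlo example. The response function, uncertainty model, and mean-plus-k-sigma objective below are illustrative stand-ins, not the thesis's crash models or its actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

def response(x, p):
    # toy structural response: deterministic part plus a term whose
    # magnitude depends on the uncertain parameter p
    return (x - 1.0) ** 2 + p * np.sin(4.0 * x)

def robust_objective(x, p_samples, k=3.0):
    # mean + k*std: penalise both poor average performance and variability
    vals = response(x, p_samples)
    return vals.mean() + k * vals.std()

p_samples = rng.normal(1.0, 0.3, size=2000)      # uncertain load/parameter
grid = np.linspace(-1.0, 3.0, 401)
scores = np.array([robust_objective(x, p_samples) for x in grid])
x_robust = grid[scores.argmin()]
x_nominal = grid[np.argmin([response(x, 1.0) for x in grid])]
# the robust optimum never scores worse than the nominal optimum
# under the robust (mean + k*std) criterion
assert robust_objective(x_robust, p_samples) <= robust_objective(x_nominal, p_samples)
```

In practice the expensive Finite Element response is replaced by a metamodel (as in the first part of the thesis) before such a sampling loop becomes affordable.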
100

Computational Journalism: from Answering Questions to Questioning Answers and Raising Good Questions

Wu, You January 2015 (has links)
Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim "cherry-picking"? This dissertation proposes a Query Response Surface (QRS) based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks, such as reverse-engineering vague claims and countering questionable claims, as computational problems. Within the QRS framework, we take one step further and propose a problem, along with efficient algorithms, for finding high-quality claims of a given form from data, i.e. raising good questions, in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim finding problem, lead-finding can be tailored towards specific claim quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for "one-of-the-few" claims, yielding interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g. NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying the results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input of a 2D scatter plot with a heatmap, evaluating only a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all the problems, we present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results. / Dissertation
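The key insight above — perturb a claim's parameters and see how its conclusion changes — can be sketched for a windowed "the average exceeds a threshold" claim over a time series. The data, window, and robustness score below are illustrative, not the dissertation's actual QRS formulation:

```python
import numpy as np

def claim_strength(series, start, length):
    # parameterized claim: mean of the series over the window [start, start+length)
    return series[start:start + length].mean()

def robustness(series, start, length, threshold, wiggle=2):
    # perturb the claim's parameters slightly and measure how often
    # its conclusion ("window mean > threshold") survives
    outcomes = []
    for ds in range(-wiggle, wiggle + 1):
        for dl in range(-wiggle, wiggle + 1):
            s, l = start + ds, length + dl
            if s >= 0 and l >= 1 and s + l <= len(series):
                outcomes.append(claim_strength(series, s, l) > threshold)
    return sum(outcomes) / len(outcomes)

# a flat series with a single spike: the classic cherry-picking setup
series = np.zeros(200)
series[100] = 8.0
# a window hugging the spike makes the "mean > 3" claim true...
assert claim_strength(series, 100, 2) > 3.0
# ...but the claim collapses under small perturbations of its parameters
assert robustness(series, 100, 2, 3.0) < 0.5
```

A low robustness score flags the claim as fragile: its conclusion depends on an exact choice of window, exactly the "cherry-picking" quality the QRS framework is designed to expose.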
