21

Data Visualization to Evaluate and Facilitate Targeted Data Acquisitions in Support of a Real-time Ocean Forecasting System

Holmberg, Edward A, IV 13 August 2014 (has links)
A robust evaluation toolset has been designed for the Naval Research Laboratory's Real-Time Ocean Forecasting System (RELO) with the purpose of facilitating an adaptive sampling strategy and providing more educated guidance for routing underwater gliders. The major challenges are to integrate with the existing operational system and to provide a bridge between the modeling and operational environments. Visualization is the selected approach, and the developed software is divided into three packages: the first verifies that the glider is actually following its waypoints and predicts the glider's position for the next cycle's instructions; the second helps ensure that the delivered waypoints are both useful and feasible; the third provides confidence levels for the suggested path. The software is implemented in Python for portability and modularity, allowing easy expansion with new visuals.
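The two tasks of the first package, checking waypoint adherence and projecting the glider's next position, can be illustrated with a short sketch. This is a minimal stand-in rather than the RELO toolset itself; the coordinates, distance tolerance, and dead-reckoning horizon below are hypothetical.

```python
import numpy as np

def waypoint_adherence(track, waypoints, tolerance_km=2.0):
    """For each planned waypoint, report the closest approach of the recorded track."""
    track = np.asarray(track, dtype=float)          # shape (n, 2): lat, lon in degrees
    report = []
    for wp in np.asarray(waypoints, dtype=float):
        # crude equirectangular distance; adequate for short glider legs
        d_km = 111.0 * np.linalg.norm(track - wp, axis=1)
        report.append((tuple(wp), float(d_km.min()), bool(d_km.min() <= tolerance_km)))
    return report

def predict_next_position(track, hours_ahead=6.0, dt_hours=1.0):
    """Dead-reckon the next position from the mean velocity of the last few fixes."""
    track = np.asarray(track, dtype=float)
    velocity = np.mean(np.diff(track[-4:], axis=0), axis=0) / dt_hours  # degrees per hour
    return track[-1] + velocity * hours_ahead

track = [(29.00, -88.00), (29.02, -88.03), (29.05, -88.05), (29.07, -88.08)]
waypoints = [(29.05, -88.05), (29.15, -88.20)]
print(waypoint_adherence(track, waypoints))
print(predict_next_position(track))
```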
22

Tools for landscape-scale automated acoustic monitoring to characterize wildlife occurrence dynamics

Balantic, Cathleen Michelle 01 January 2019 (has links)
In a world confronting climate change and rapidly shifting land uses, effective methods for monitoring natural resources are critical to support scientifically-informed management decisions. By taking audio recordings of the environment, scientists can acquire presence-absence data to characterize populations of sound-producing wildlife over time and across vast spatial scales. Remote acoustic monitoring presents new challenges, however: monitoring programs are often constrained in the total time they can record, automated detection algorithms typically produce a prohibitive number of detection mistakes, and there is no streamlined framework for moving from raw acoustic data to models of wildlife occurrence dynamics. In partnership with a proof-of-concept field study in the U.S. Bureau of Land Management’s Riverside East Solar Energy Zone in southern California, this dissertation introduces a new R software package, AMMonitor, alongside a novel body of work: 1) temporally-adaptive acoustic sampling to maximize the detection probabilities of target species despite recording constraints, 2) values-driven statistical learning tools for template-based automated detection of target species, and 3) methods supporting the construction of dynamic species occurrence models from automated acoustic detection data. Unifying these methods with streamlined data management, the AMMonitor software package supports the tracking of species occurrence, colonization, and extinction patterns through time, introducing the potential to perform adaptive management at landscape scales.
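As a rough illustration of the first contribution, temporally adaptive acoustic sampling can be reduced to allocating a constrained recording budget to the time slots where target species are most likely to vocalize. The sketch below is a generic greedy allocation in Python, not the AMMonitor R implementation, and the hourly detection probabilities are invented for the example.

```python
import numpy as np

def allocate_recording_schedule(detection_prob, budget_hours):
    """Greedily assign limited recording hours to the slots where at least one target
    species is most likely to be detected (a stand-in for temporally adaptive sampling)."""
    detection_prob = np.asarray(detection_prob, dtype=float)  # shape (n_species, 24)
    # probability that *any* target species is vocal in each hourly slot
    p_any = 1.0 - np.prod(1.0 - detection_prob, axis=0)
    chosen = np.argsort(p_any)[::-1][:budget_hours]
    return sorted(chosen.tolist())

# hypothetical hourly detection probabilities for two species (dawn- and dusk-active)
hours = np.arange(24)
species_a = np.exp(-0.5 * ((hours - 6) / 2.0) ** 2) * 0.8
species_b = np.exp(-0.5 * ((hours - 20) / 2.0) ** 2) * 0.6
print(allocate_recording_schedule(np.vstack([species_a, species_b]), budget_hours=4))
```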
23

Bayesian collaborative sampling: adaptive learning for multidisciplinary design

Lee, Chung Hyun 14 November 2011 (has links)
A Bayesian adaptive sampling method is developed for highly coupled multidisciplinary design problems. The method addresses a major challenge in aerospace design: exploration of a design space with computationally expensive analysis tools such as computational fluid dynamics (CFD) or finite element analysis. With a limited analysis budget, it is often impossible to optimize directly or to explore a design space with off-line design of experiments (DoE) and surrogate models. This difficulty is magnified in multidisciplinary problems with feedbacks between disciplines because each design point may require iterative analyses to converge on a compatible solution between different disciplines. Bayesian Collaborative Sampling (BCS) is a bi-level architecture for adaptive sampling that simultaneously (i) concentrates disciplinary analyses in regions of the design space that are favorable to a system-level objective and (ii) guides analyses to regions where the interdisciplinary coupling variables are likely to be compatible. BCS uses Bayesian models and sequential sampling techniques along with elements of the collaborative optimization (CO) architecture for multidisciplinary optimization. The method is tested with the aero-structural design of a glider wing and the aero-propulsion design of a turbojet engine nacelle.
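The single-discipline building block of such a method, a Bayesian surrogate driving sequential sampling of an expensive analysis, can be sketched as follows. This is a generic Gaussian-process loop with an expected-improvement criterion, not the bi-level BCS architecture itself, and the one-dimensional objective merely stands in for a CFD or finite element analysis.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_analysis(x):                       # stand-in for a CFD/FEA evaluation
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(4, 1))              # small initial design
y = expensive_analysis(X).ravel()
candidates = np.linspace(-2, 2, 201).reshape(-1, 1)

for _ in range(10):                              # limited analysis budget
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_new = candidates[np.argmax(ei)].reshape(1, 1)        # sample where EI is largest
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_analysis(x_new).ravel())

print("best design found:", X[np.argmin(y)].item(), "objective:", y.min())
```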
24

Pokročilé simulační metody pro spolehlivostní analýzu konstrukcí / Advanced simulation methods for reliability analysis of structures

Gerasimov, Aleksei January 2019 (has links)
The thesis applies to reliability problems the Voronoi tessellation approach typically used for evaluating sample designs and for reweighting Monte Carlo samples. It is shown that the estimate produced by this general technique converges to that of the Importance Sampling method, even though it does not rely on Importance Sampling's auxiliary density. Consequently, reliability analysis can be divided into the sampling itself and the assessment of the simulation results. As an extension of this idea, adaptive statistical sampling using the QHull library was attempted.
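The core idea, weighting Monte Carlo samples by the volume of their Voronoi cells so that the estimate no longer depends on the sampling density, can be sketched as follows. The cell volumes are approximated here by assigning a dense uniform reference cloud to nearest samples with scipy's cKDTree rather than the QHull library mentioned in the thesis, and the limit-state function is a made-up example.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

def fails(x):                        # limit state: failure when the coordinate sum is large
    return x.sum(axis=1) > 1.5

# samples drawn from an arbitrary, non-uniform density over the unit square;
# the Voronoi weights compensate for the sampling density
samples = rng.beta(0.7, 0.7, size=(200, 2))

# dense uniform reference cloud: each reference point belongs to the Voronoi cell of its
# nearest sample, so the share of reference points approximates the cell volume
reference = rng.uniform(0, 1, size=(200_000, 2))
_, owner = cKDTree(samples).query(reference)
cell_volume = np.bincount(owner, minlength=len(samples)) / len(reference)

p_fail = cell_volume[fails(samples)].sum()       # volume-weighted failure estimate
print(f"estimated failure probability: {p_fail:.4f}")
print(f"crude Monte Carlo reference:   {fails(reference).mean():.4f}")
```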
25

Computation of estimates in a complex survey sample design

Maremba, Thanyani Alpheus January 2019 (has links)
Thesis (M.Sc. (Statistics)) -- University of Limpopo, 2019 / This research study has demonstrated the complexity involved in complex survey sample design (CSSD). Furthermore, the study has proposed methods to account for each step taken in sampling and at the estimation stage, using the theory of survey sampling, CSSD-based case studies, and practical implementation based on census attributes. CSSD methods are designed to improve statistical efficiency, reduce costs and improve precision for sub-group analyses relative to a simple random sample (SRS). They are commonly used by statistical agencies as well as development and aid organisations. CSSDs provide one of the most challenging fields for applying a statistical methodology. Researchers encounter a vast diversity of unique practical problems in the course of studying populations. These include, inter alia: non-sampling errors, specific population structures, contaminated distributions of study variables, unsatisfactory sample sizes, incorporation of the auxiliary information available on many levels, simultaneous estimation of characteristics in various sub-populations, integration of data from many waves or phases of the survey, and incompletely specified sampling procedures accompanying published data. While the study has not exhausted all the available real-life scenarios, it has outlined potential problems, illustrated them with examples, and suggested appropriate approaches at each stage. Dealing with the attributes of CSSDs mentioned above brings about the need to formulate sophisticated statistical procedures dedicated to the specific conditions of a sample survey. CSSD methodologies give rise to a wide variety of approaches and procedures that borrow strength from virtually all branches of statistics. The application of various statistical methods from sample design to weighting and estimation ensures that optimal estimates of a population and its various domains are obtained from the sample data. CSSDs are probability sampling methodologies from which inferences are drawn about the population. The methods used in the process of producing estimates include adjustment for unequal probability of selection (resulting from stratification, clustering and probability proportional to size (PPS)), non-response adjustments, and benchmarking to auxiliary totals. When estimates of survey totals, means and proportions are computed using various methods, the results do not differ, provided the estimates are calculated for planned domains that are taken into account in the sample design and benchmarking. In contrast, when measures of precision such as standard errors and coefficients of variation are produced, they yield different results depending on the extent to which the design information is incorporated during estimation. The literature has revealed that most statistical computer packages assume an SRS design when estimating variances. The replication method was therefore used to calculate measures of precision that take into account all the sampling parameters and weighting adjustments computed in the CSSD process. The creation of replicate weights and the estimation of variances were done using WesVar, a statistical computer package capable of producing statistical inference from data collected through CSSD methods. Keywords: Complex sampling, Survey design, Probability sampling, Probability proportional to size, Stratification, Area sampling, Cluster sampling.
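The point made in the closing sentences, design-based precision computed through replicate weights, can be illustrated with a small delete-one-PSU jackknife. The strata, clusters, weights and income values below are hypothetical, and the sketch is a generic stratified jackknife rather than WesVar's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical stratified cluster sample: 2 strata x 3 clusters (PSUs) x 20 respondents
strata   = np.repeat([0, 0, 0, 1, 1, 1], 20)
clusters = np.repeat(np.arange(6), 20)
weights  = np.where(strata == 0, 35.0, 80.0)           # unequal selection probabilities
income   = rng.normal(loc=np.where(strata == 0, 30.0, 55.0), scale=8.0)

def weighted_mean(w, y):
    return np.sum(w * y) / np.sum(w)

theta = weighted_mean(weights, income)                 # full-sample point estimate

# delete-one-PSU jackknife: drop each cluster, reweight the rest of its stratum, re-estimate
psu_ids = np.unique(clusters)
psu_stratum = np.array([strata[clusters == c][0] for c in psu_ids])
replicates = []
for c, h in zip(psu_ids, psu_stratum):
    n_h = np.sum(psu_stratum == h)                     # number of PSUs in that stratum
    w_rep = np.where(clusters == c, 0.0,
                     np.where(strata == h, weights * n_h / (n_h - 1), weights))
    replicates.append(weighted_mean(w_rep, income))
replicates = np.array(replicates)

# stratified jackknife variance: sum over strata of (n_h - 1)/n_h * sum of squared deviations
var_jk = sum((np.sum(psu_stratum == h) - 1) / np.sum(psu_stratum == h)
             * np.sum((replicates[psu_stratum == h] - theta) ** 2)
             for h in np.unique(strata))

print(f"weighted mean income: {theta:.2f}  jackknife SE: {np.sqrt(var_jk):.2f}")
```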
26

Adaptive Sampling Methods for Stochastic Optimization

Daniel Andres Vasquez Carvajal (10631270) 08 December 2022 (has links)
This dissertation investigates the use of sampling methods for solving stochastic optimization problems using iterative algorithms. Two sampling paradigms are considered: (i) adaptive sampling, where, before each iterate update, the sample size for estimating the objective function and the gradient is adaptively chosen; and (ii) retrospective approximation (RA), where iterate updates are performed using a chosen fixed sample size for as long as progress is deemed statistically significant, at which time the sample size is increased. We investigate adaptive sampling within the context of a trust-region framework for solving stochastic optimization problems in $\mathbb{R}^d$, and retrospective approximation within the broader context of solving stochastic optimization problems on a Hilbert space.

In the first part of the dissertation, we propose Adaptive Sampling Trust-Region Optimization (ASTRO), a class of derivative-based stochastic trust-region (TR) algorithms developed to solve smooth stochastic unconstrained optimization problems in $\mathbb{R}^{d}$ where the objective function and its gradient are observable only through a noisy oracle or using a large dataset. Efficiency in ASTRO stems from two key aspects: (i) adaptive sampling to ensure that the objective function and its gradient are sampled only to the extent needed, so that small sample sizes are chosen when the iterates are far from a critical point and large sample sizes are chosen when iterates are near a critical point; and (ii) quasi-Newton Hessian updates using BFGS. We prove three main results for ASTRO and for general stochastic trust-region methods that estimate function and gradient values adaptively, using sample sizes that are stopping times with respect to the sigma algebra of the generated observations. The first asserts strong consistency when the adaptive sample sizes have a mild logarithmic lower bound, assuming that the oracle errors are light-tailed. The second and third results characterize the iteration and oracle complexities in terms of certain risk functions. Specifically, the second result asserts that the best achievable $\mathcal{O}(\epsilon^{-1})$ iteration complexity (of squared gradient norm) is attained when the total relative risk associated with the adaptive sample size sequence is finite, and the third result characterizes the corresponding oracle complexity in terms of the total generalized risk associated with the adaptive sample size sequence. We report encouraging numerical results in certain settings.

In the second part of this dissertation, we consider the use of RA as an alternate adaptive sampling paradigm to solve smooth stochastic constrained optimization problems in infinite-dimensional Hilbert spaces. RA generates a sequence of subsampled deterministic infinite-dimensional problems that are approximately solved within a dynamic error tolerance. The bottleneck in RA becomes solving this sequence of problems efficiently. To this end, we propose a progressive subspace expansion (PSE) framework to solve smooth deterministic optimization problems in infinite-dimensional Hilbert spaces with a TR Sequential Quadratic Programming (SQP) solver. The infinite-dimensional optimization problem is discretized, and a sequence of finite-dimensional problems is solved in which the problem dimension is progressively increased. Additionally, (i) we solve this sequence of finite-dimensional problems only to the extent necessary, i.e., we spend just enough computational work to solve each problem within a dynamic error tolerance, and (ii) we use the solution of the current optimization problem as the initial guess for the subsequent problem. We prove two main results for PSE. The first establishes convergence to a first-order critical point of a subsequence of iterates generated by the PSE TR-SQP algorithm. The second characterizes the relationship between the error tolerance and the problem dimension, and provides an oracle complexity result for the total amount of computational work incurred by PSE. This amount of computational work is closely connected to three quantities: the convergence rate of the finite-dimensional spaces to the infinite-dimensional space, the rate of increase of the cost of making oracle calls in finite-dimensional spaces, and the convergence rate of the solution method used. We also show encouraging numerical results on an optimal control problem supporting our theoretical findings.
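The adaptive sampling idea in the first part, spending only as many oracle calls as the current iterate warrants, can be sketched with a standard norm-type test that inflates the sample size until the sampling error is a fixed fraction of the estimated gradient. This is a plain gradient iteration for illustration, not the ASTRO trust-region algorithm, and the quadratic objective and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_grad(x, n):
    """Average n noisy oracle calls for the gradient of f(x) = 0.5 * ||x||^2."""
    samples = x + rng.normal(scale=1.0, size=(n, x.size))   # unbiased, noisy gradients
    return samples.mean(axis=0), samples.std(axis=0, ddof=1) / np.sqrt(n)

x = np.array([5.0, -3.0])
n = 4                                     # start with a small sample size
for k in range(60):
    g, se = noisy_grad(x, n)
    # norm test: enlarge the sample until sampling error is a fraction of the signal,
    # so the sample size stays small far from a critical point and grows near one
    while np.linalg.norm(se) > 0.5 * np.linalg.norm(g) and n < 10_000:
        n *= 2
        g, se = noisy_grad(x, n)
    x = x - 0.3 * g                       # plain gradient step; ASTRO takes a TR step here

print(f"final iterate {x}, final sample size {n}")
```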
27

PhenoBee: Drone-Based Robot for Advanced Field Proximal Phenotyping in Agriculture

Ziling Chen (8810570) 19 December 2023 (has links)
The increasing global need for food security and sustainable agriculture underscores the urgency of advancing field phenotyping for enhanced plant breeding and crop management. Soybean, a major global protein source, is at the forefront of these advancements. Proximal sensing in soybean phenotyping offers a higher signal-to-noise ratio and resolution but has been underutilized in large-scale field applications due to low throughput and high labor costs. Moreover, there is an absence of automated solutions for in vivo proximal phenotyping of dicot plants. This thesis addresses these gaps by introducing a comprehensive, technologically sophisticated approach to modern field phenotyping.

Fully Automated Proximal Hyperspectral Imaging System: The first chapter presents the development of a cutting-edge hyperspectral imaging system integrated with a robotic arm. This system surpasses traditional imaging limitations, providing enhanced close-range data for accurate plant health assessment.

Robust Leaf Pose Estimation: The second chapter discusses the application of deep learning for accurate leaf pose estimation. This advancement is crucial for in-depth plant analysis, fostering better insights into plant health and growth, thereby contributing to increased crop yield and disease resistance.

PhenoBee – A Drone Mobility Platform: The third chapter introduces 'PhenoBee,' a drone-based platform designed for extensive field phenotyping. This innovative technology significantly broadens the capabilities of field data collection, showcasing its viability for widespread aerial phenotyping.

Adaptive Sampling for Dynamic Waypoint Planning: The final chapter details an adaptive sampling algorithm for efficient, real-time waypoint planning. This strategic approach enhances field scouting efficiency and precision, ensuring optimal data acquisition.

By integrating deep learning, robotic automation, aerial mobility, and intelligent sampling algorithms, the proposed solution revolutionizes the adaptation of in vivo proximal phenotyping on a large scale. The findings of this study highlight the potential to automate agriculture activities with high scalability and to identify nutrient deficiencies, diseases, and chemical damage in crops earlier, thereby preventing yield loss, improving food quality, and expediting the development of agricultural products. Collectively, these advancements pave the way for more effective and efficient plant breeding and crop management, directly contributing to the enhancement of global food production systems. This study not only addresses current limitations in field phenotyping but also sets a new standard for technological innovation in agriculture.
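The abstract does not spell out the waypoint-planning algorithm, so the sketch below is only a generic illustration of adaptive waypoint selection: plots are visited greedily by a score that trades off predicted stress, prediction uncertainty, and travel distance. The field grid, scores, and weighting are all hypothetical, not PhenoBee outputs.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical field grid with a per-plot stress score predicted from prior imagery,
# plus its uncertainty; both are placeholders for illustration only
grid = np.array([(r, c) for r in range(10) for c in range(10)], dtype=float)
predicted_stress = rng.uniform(0, 1, len(grid))
uncertainty = rng.uniform(0, 1, len(grid))

def plan_waypoints(position, budget, travel_weight=0.05):
    """Greedy adaptive waypoint planning: repeatedly fly to the plot with the highest
    (stress + uncertainty) score, discounted by travel distance from the current position."""
    remaining = np.ones(len(grid), dtype=bool)
    route = []
    for _ in range(budget):
        dist = np.linalg.norm(grid - position, axis=1)
        score = predicted_stress + uncertainty - travel_weight * dist
        score[~remaining] = -np.inf
        nxt = int(np.argmax(score))
        route.append(tuple(grid[nxt]))
        remaining[nxt] = False
        position = grid[nxt]
    return route

print(plan_waypoints(position=np.array([0.0, 0.0]), budget=5))
```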
28

PERFORMANCE MACRO-MODELING TECHNIQUES FOR FAST ANALOG CIRCUIT SYNTHESIS

WOLFE, GLENN A. January 2004 (has links)
No description available.
29

Efficient adaptive sampling applied to multivariate, multiple output rational interpolation models, with applications in electromagnetics-based device modelling

Lehmensiek, Robert 12 1900 (has links)
Thesis (PhD) -- Stellenbosch University, 2001. / ENGLISH ABSTRACT: A robust and efficient adaptive sampling algorithm for multivariate, multiple output rational interpolation models, based on convergents of Thiele-type branched continued fractions, is presented. A variation of the standard branched continued fraction method is proposed that uses approximation to establish a non-rectangular grid of support points. Starting with a low order interpolant, the technique systematically increases the order by optimally choosing new support points in the areas of highest error, until the desired accuracy is achieved. In this way, accurate surrogate models are established by a small number of support points, without assuming any a priori knowledge of the microwave structure under study. The technique is illustrated and evaluated on several passive microwave structures; however, it is general enough to be applied to many modelling problems. / AFRIKAANSE OPSOMMING: A robust and effective adaptive sampling algorithm for multivariate, multiple-output rational interpolation models, based on convergents of Thiele-type branched continued fraction expansions, is described. A variation on the conventional continued fraction expansion method is proposed that uses a non-rectangular grid of support points in the function approximation. Starting from a low-order interpolant, the algorithm systematically increases the order of the interpolant by optimally choosing improved support points where the largest error occurs, until the desired accuracy is reached. In this way, accurate surrogate models are built up despite few initial support points, and without prior knowledge of the microwave structure in question. The algorithm is demonstrated and evaluated on several passive microwave structures, but is versatile enough to find application in more general modelling problems.
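The sampling loop described in the abstract, growing the support-point set where the estimated error is largest until successive models agree, can be sketched as follows. A barycentric polynomial interpolant stands in for the Thiele-type rational interpolants of the thesis, and the frequency response being modelled is invented.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def response(f_ghz):                       # stand-in for an expensive EM simulation
    return 1.0 / (1.0 + (f_ghz - 5.0) ** 2) + 0.3 / (1.0 + 4.0 * (f_ghz - 8.0) ** 2)

candidates = np.linspace(1.0, 10.0, 401)
available = np.ones(len(candidates), dtype=bool)

# low-order starting model on three support points
start_idx = [0, 200, 400]
support, values = list(candidates[start_idx]), list(response(candidates[start_idx]))
for i in start_idx:
    available[i] = False
model_prev = BarycentricInterpolator(support, values)

next_idx = 100                             # first refinement point (mid-gap guess)
for _ in range(30):
    support.append(candidates[next_idx])
    values.append(response(candidates[next_idx]))
    available[next_idx] = False
    model = BarycentricInterpolator(support, values)
    # error estimate: disagreement between successive interpolants on the candidate grid
    disagreement = np.abs(model(candidates) - model_prev(candidates))
    est_err = disagreement.max()
    if est_err < 1e-3:                     # successive models agree: stop sampling
        break
    disagreement[~available] = -1.0        # never reuse a support point
    next_idx = int(np.argmax(disagreement))
    model_prev = model

print(f"stopped with {len(support)} support points; estimated max error {est_err:.2e}")
```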
30

Statistical Yield Analysis and Design for Nanometer VLSI

Jaffari, Javid January 2010 (has links)
Process variability is the pivotal factor impacting the design of high yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly impact the performance and power consumption of the fabricated devices, severely impacting the manufacturing yield. However, the large number of the transistors on a single chip adds even more challenges for the analysis of the variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in various design phases are key to predict the yield before entering such an expensive fabrication process.

In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. The variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation.

At the circuit level, the variability analysis of three crucial sub-blocks of today's system-on-chips, namely, digital circuits, memory cells, and analog blocks, are targeted. The accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of the digital circuits, the extremely high yield requirement for memory cells, and the time-consuming analog circuit simulation are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods have been proposed for these three types of circuits to significantly improve the run-time of the traditional Monte Carlo method, without compromising accuracy. The proposed sampling-based yield analysis methods benefit from the very appealing feature of the MC method, that is, the capability to consider any complex circuit model. However, through the use and engineering of advanced variance reduction and sampling methods, ultra-fast yield estimation solutions are provided for different types of VLSI circuits. Such methods include control variate, importance sampling, correlation-controlled Latin Hypercube Sampling, and Quasi Monte Carlo.

At the device level, a methodology is proposed which introduces a variation-aware design perspective for designing MOS devices in aggressively scaled geometries. The method introduces a yield measure at the device level which targets the saturation and leakage currents of an MOS transistor. A statistical method is developed to optimize the advanced doping profiles and geometry features of a device for achieving a maximum device-level yield.

Finally, a statistical thermal analysis framework is proposed. It accounts for the process and thermal variations simultaneously, at the micro-architectural level. The analyzer is developed, based on the fact that the process variations lead to uncertain leakage power sources, so that the thermal profile, itself, would have a probabilistic nature. Therefore, by a co-process-thermal-leakage analysis, a more reliable full-chip statistical leakage power yield is calculated.
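As a small illustration of one of the sampling methods named above, the sketch below estimates parametric yield with plain Monte Carlo and with Latin Hypercube Sampling via scipy.stats.qmc (plain LHS, not the correlation-controlled variant developed in the thesis). The gate-delay model and timing specification are hypothetical.

```python
import numpy as np
from scipy.stats import qmc, norm

rng = np.random.default_rng(5)

def delay_ps(vth_shift, leff_shift):
    """Hypothetical gate-delay model under threshold-voltage and channel-length variation."""
    return 80.0 + 25.0 * vth_shift + 15.0 * leff_shift + 10.0 * vth_shift * leff_shift

SPEC = 95.0            # timing specification in picoseconds
N = 2000

# plain Monte Carlo: standard-normal process parameters
mc = rng.standard_normal((N, 2))
yield_mc = np.mean(delay_ps(mc[:, 0], mc[:, 1]) <= SPEC)

# Latin Hypercube Sampling: stratified uniforms mapped through the normal inverse CDF
lhs = norm.ppf(qmc.LatinHypercube(d=2, seed=5).random(N))
yield_lhs = np.mean(delay_ps(lhs[:, 0], lhs[:, 1]) <= SPEC)

print(f"yield (plain MC): {yield_mc:.3f}")
print(f"yield (LHS):      {yield_lhs:.3f}")
```

Across repeated runs, the LHS estimate typically shows lower variance than the plain Monte Carlo estimate at the same sample size, which is the variance-reduction effect the thesis exploits.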
