131

Adaptive Range Counting and Other Frequency-Based Range Query Problems

Wilkinson, Bryan T. January 2012
We consider variations of range searching in which, given a query range, our goal is to compute some function based on frequencies of points that lie in the range. The most basic such computation involves counting the number of points in a query range. Data structures that compute this function solve the well-studied range counting problem. We consider adaptive and approximate data structures for the 2-D orthogonal range counting problem under the w-bit word RAM model. The query time of an adaptive range counting data structure is sensitive to k, the number of points being counted. We give an adaptive data structure that requires O(n loglog n) space and O(loglog n + log_w k) query time. Non-adaptive data structures, on the other hand, require Ω(log_w n) query time (Pătraşcu, 2007). Our specific bounds are interesting for two reasons. First, when k=O(1), our bounds match the state of the art for the 2-D orthogonal range emptiness problem (Chan et al., 2011). Second, when k=Θ(n), our data structure is tight to the aforementioned Ω(log_w n) query time lower bound.

We also give approximate data structures for 2-D orthogonal range counting whose bounds match the state of the art for the 2-D orthogonal range emptiness problem. Our first data structure requires O(n loglog n) space and O(loglog n) query time. Our second data structure requires O(n) space and O(log^ε n) query time for any fixed constant ε>0. These data structures compute an approximation k' such that (1-δ)k≤k'≤(1+δ)k for any fixed constant δ>0.

The range selection query problem in an array involves finding the kth lowest element in a given subarray. Range selection in an array is very closely related to 3-sided 2-D orthogonal range counting. An extension of our technique for 3-sided 2-D range counting yields an efficient solution to adaptive range selection in an array. In particular, we present an adaptive data structure that requires O(n) space and O(log_w k) query time, exactly matching a recent lower bound (Jørgensen and Larsen, 2011).

We next consider a variety of frequency-based range query problems in arrays. We give efficient data structures for the range mode and least frequent element query problems, and also exhibit the hardness of these problems by reducing Boolean matrix multiplication to the construction and use of a range mode or least frequent element data structure. We also give data structures for the range α-majority and α-minority query problems. An α-majority is an element whose frequency in a subarray is greater than an α fraction of the size of the subarray; any other element is an α-minority. Surprisingly, geometric insights prove to be useful even in the design of our 1-D range α-majority and α-minority data structures.
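As a concrete reference point, here is a brute-force Python sketch of the range α-majority query defined above; the function name and interface are illustrative only, and the thesis's data structures answer such queries in sublinear time rather than by a linear scan.

```python
from collections import Counter

def range_alpha_majorities(arr, i, j, alpha):
    """Naive O(j - i) scan: return the alpha-majorities of arr[i..j] inclusive,
    i.e., elements whose frequency exceeds alpha times the subarray size."""
    window = arr[i:j + 1]
    threshold = alpha * len(window)
    return [x for x, freq in Counter(window).items() if freq > threshold]

# Example: 3 occurrences of 'a' in a window of 5 exceed 0.5 * 5 = 2.5,
# so 'a' is a 1/2-majority; 'b' and 'c' are 1/2-minorities.
print(range_alpha_majorities(['a', 'b', 'a', 'a', 'c'], 0, 4, 0.5))  # ['a']
```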
132

Evaluation of Pushover Analysis Procedures for Frame Structures

Oguz, Sermin 01 May 2005
Pushover analysis involves certain approximations and simplifications, so some amount of variation is always expected in its seismic demand predictions. In the literature, improved pushover procedures have been proposed to overcome certain limitations of traditional ones. This study evaluated the effects and the accuracy of the invariant lateral load patterns utilised in pushover analysis to predict the behavior imposed on a structure by randomly selected individual ground motions causing elastic and various levels of nonlinear response. For this purpose, pushover analyses using various invariant lateral load patterns and Modal Pushover Analysis were performed on reinforced concrete and steel moment-resisting frames covering a broad range of fundamental periods. Response parameters predicted by each pushover procedure were compared with the 'exact' results obtained from nonlinear dynamic analysis. The primary observations from the study showed that the accuracy of the pushover results depends strongly on the load path, the properties of the structure, and the characteristics of the ground motion. Pushover analyses were performed with both DRAIN-2DX and SAP2000; the two programs produced similar results provided that a similar approach was used in modeling the nonlinear properties of members as well as their structural features. The accuracy of approximate procedures utilised to estimate the target displacement was also studied on frame structures; the accuracy of these predictions was observed to depend on the approximations involved in the theory of each procedure, the structural properties, and the ground motion characteristics.
133

Development of an optimal spatial decision-making system using approximate reasoning

Bailey, David Thomas January 2005
There is a recognised need for continued improvement of both the techniques and the technology for spatial decision support in infrastructure site selection. Many authors have noted that current methodologies are inadequate for real-world site selection decisions carried out by heterogeneous groups of decision-makers under uncertainty. Nevertheless, despite the numerous limitations inherent in current spatial problem-solving methods, spatial decision support systems have been shown to increase decision-maker effectiveness when used. However, due to the real or perceived difficulty of using these systems, few applications are actually in use to support siting decisions. The most common difficulties involve standardising criterion ratings and communicating results. This research focused on the use of Approximate Reasoning to improve the techniques and technology of spatial decision support, and to make them easier to use and understand. The algorithm developed in this research (ARAISS) is based on the use of natural language to describe problem variables such as suitability, certainty, risk and consensus, and uses a method based on type-II fuzzy sets to represent these variables. ARAISS was subsequently incorporated into a new Spatial Decision Support System (InfraPlanner) and validated through use in a real-world site selection problem at Australia's Brisbane Airport. Results indicate that Approximate Reasoning is a promising method for spatial infrastructure planning decisions. Natural-language inputs and outputs, combined with an easily understandable multiple-decision-maker framework, created an environment conducive to information sharing and consensus building among parties. Future research should focus on the use of Genetic Algorithms and other Artificial Intelligence techniques to broaden the scope of the existing work.
134

A study of Holmgren's uniqueness theorem

Bonafim, Júnior César [UNESP] 17 June 2011
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The aim of this work is to present the classical Holmgren uniqueness theorem and to show an application in the control theory of partial differential equations through a relatively simple example.
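For orientation, one standard formulation of the theorem (following Hörmander's classical treatment; the precise hypotheses used in the thesis may differ):

```latex
\begin{theorem}[Holmgren's uniqueness theorem]
Let $P(x,D) = \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha$ be a linear partial
differential operator whose coefficients $a_\alpha$ are real-analytic near a
point $x_0$, and let $S$ be a $C^1$ hypersurface through $x_0$ that is
non-characteristic for $P$ at $x_0$. If $u$ solves $P(x,D)\,u = 0$ in a
neighborhood of $x_0$ and $u$ vanishes on one side of $S$, then $u$ vanishes
in a full neighborhood of $x_0$.
\end{theorem}
```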
135

Approximate control for nonlinear systems of ordinary differential equations

Denadai, Daiani. January 2011
Advisor: Adalberto Spezamiglio / Committee: Maria Aparecida Bená / Committee: Andréa Cristina P. Arita / Abstract: In this work we prove the existence of approximate control for certain nonlinear systems of ordinary differential equations of single-input single-output and multi-input multi-output type. We use global implicit functions or mappings as the main technique. / Master's
136

Mixed-integer optimal control of fast dynamical systems

Stellato, Bartolomeo January 2017
Many applications in engineering, computer science and economics involve mixed-integer optimal control problems. Solving these problems in real-time is a challenging task because of the explosion of integer combinations to evaluate. This thesis focuses on the development of new algorithms for mixed-integer programming with an emphasis on optimal control problems of fast dynamical systems with discrete controls.

The first part proposes two reformulations to reduce the computational complexity. The first reformulation avoids integer variables altogether. By considering a sequence of switched dynamics, we analyze the switching time optimization problem. Even though it is a continuous smooth problem, it is non-convex and the cost function and derivatives are hard to compute. We develop a new efficient method to compute the cost function and its derivatives. Our technique brings up to two orders of magnitude speedups with respect to state-of-the-art tools. The second approach reduces the number of integer decisions. In hybrid model predictive control (MPC) the computational complexity grows exponentially with the horizon length. Using approximate dynamic programming (ADP) we reduce the horizon length while maintaining good control performance by approximating the tail cost offline. This approach allows, for the first time, the application of such control techniques to fast dynamical systems with sampling times of only a few microseconds.

The second part investigates embedded branch-and-bound algorithms for mixed-integer quadratic programs (MIQPs). A core component of these methods is the solution of continuous quadratic programs (QPs). We develop OSQP, a new robust and efficient general-purpose QP solver based on the alternating direction method of multipliers (ADMM) and able, for the first time, to detect infeasible problems. We include OSQP into a custom branch-and-bound algorithm suitable for embedded systems. Our extension requires only a single matrix factorization and exploits warm-starting, thereby greatly reducing the number of ADMM iterations required. Numerical examples show that our algorithm solves small to medium scale MIQPs more quickly than commercial solvers.
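For reference, a minimal sketch of solving a small QP with the OSQP solver described above; the problem data are illustrative and not taken from the thesis.

```python
import numpy as np
from scipy import sparse
import osqp

# Minimize (1/2) x'Px + q'x  subject to  l <= Ax <= u.
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u)
res = prob.solve()
print(res.x, res.info.status)  # optimizer, or an infeasibility certificate

# In a branch-and-bound loop, prob.update() with tightened bounds l, u plus
# warm-starting reuses the same matrix factorization across tree nodes,
# which is the saving the abstract refers to.
```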
137

Statistical tools and community resources for developing trusted models in biology and chemistry

Daly, Aidan C. January 2017
Mathematical modeling has been instrumental to the development of the natural sciences over the last half-century. Through iterated interactions between modeling and real-world experimentation, these models have furthered our understanding of the processes in biology and chemistry that they seek to represent. In certain application domains, such as the field of cardiac biology, communities of modelers with common interests have emerged, leading to the development of many models that attempt to explain the same or similar phenomena. As these communities have developed, however, reporting standards for modeling studies have been inconsistent, often focusing on the final parameterized result and obscuring the assumptions and data used during their creation. These practices make it difficult for researchers to adapt existing models to new systems or newly available data, and also to assess the identifiability of said models - the degree to which their optimal parameters are constrained by data - which is a key step in building trust that their formulation captures truth about the system of study. In this thesis, we develop tools that allow modelers working with biological or chemical time series data to assess identifiability in an automated fashion, and embed these tools within a novel online community resource that enforces reproducible standards of reporting and facilitates exchange of models and data.

We begin by exploring the application of Bayesian and approximate Bayesian inference methods, which parameterize models while simultaneously assessing uncertainty about these estimates, to assess the identifiability of models of the cardiac action potential. We then demonstrate how the side-by-side application of these Bayesian and approximate Bayesian methods can be used to assess the information content of experiments where system observability is limited to "summary statistics" - low-dimensional representations of full time-series data.

We next investigate how a posteriori methods of identifiability assessment, such as the above inference techniques, compare against a priori methods based on model structure. We compare these two approaches over a range of biologically relevant experimental conditions, and highlight the cases under which each strategy is preferable. We also explore the concept of optimal experimental design, in which measurements are chosen in order to maximize model identifiability, and compare the feasibility of established a priori approaches against a novel a posteriori approach.

Finally, we propose a framework for representing and executing modeling experiments in a reproducible manner, and use this as the foundation for a prototype "Modeling Web Lab" where researchers may upload specifications for and share the results of the types of inference explored in this thesis. We demonstrate the Modeling Web Lab's utility across multiple modeling domains by re-creating the results of a contemporary modeling study of the hERG ion channel model, as well as the results of an original study of electrochemical redox reactions. We hope that this work serves to highlight the importance of both reproducible standards of model reporting and identifiability assessment, which are inherently linked by the desire to foster trust in community-developed models in disciplines across the natural sciences.
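To make the approximate Bayesian machinery concrete, here is a from-scratch sketch of ABC rejection sampling against summary statistics, in the spirit of (though far simpler than) the methods applied in the thesis; the toy model, summaries, and tolerance are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Toy "model": Gaussian observations with unknown mean theta.
    return rng.normal(theta, 1.0, size=n)

def summary(data):
    # Low-dimensional summary statistics standing in for a full time series.
    return np.array([data.mean(), data.std()])

def abc_rejection(observed, prior_sample, n_draws=10000, eps=0.1):
    """Keep prior draws whose simulated summaries land within eps of the
    observed summaries; the kept draws approximate the posterior."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if np.linalg.norm(summary(simulate(theta)) - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

observed = simulate(theta=2.0)
posterior = abc_rejection(observed, prior_sample=lambda: rng.uniform(-5, 5))
# The spread of the accepted draws is a direct read-out of identifiability:
# a wide posterior means the data poorly constrain the parameter.
print(posterior.mean(), posterior.std(), posterior.size)
```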
138

Analysis, characterization and classification of fetal signals

Voicu, Iulian 13 December 2011
The objective of this work is to monitor fetal activity, by combining several sources of information, in order to assess fetal well-being or distress. Currently, the parameters that characterize fetal distress, derived from the heart rate and from fetal movements, are evaluated by the physician and combined in the Manning score. This score has two major disadvantages: (a) its evaluation is time-consuming, taking about one hour; (b) inter- and intra-operator variations lead to different interpretations of the patient's medical record. To overcome these disadvantages, we assess fetal well-being objectively by computing an automatic score. To achieve this goal, we developed a multi-sensor ultrasound technology with 12 sensors, allowing some sixty Doppler signals to be collected from the heart and from the lower and upper limbs.
139

Efficient deterministic approximate Bayesian inference for Gaussian process models

Bui, Thang Duc January 2018
Gaussian processes are powerful nonparametric distributions over continuous functions that have become a standard tool in modern probabilistic machine learning. However, the applicability of Gaussian processes in the large-data regime and in hierarchical probabilistic models is severely limited by analytic and computational intractabilities. It is, therefore, important to develop practical approximate inference and learning algorithms that can address these challenges. To this end, this dissertation provides a comprehensive and unifying perspective of pseudo-point based deterministic approximate Bayesian learning for a wide variety of Gaussian process models, which connects previously disparate literature, greatly extends them and allows new state-of-the-art approximations to emerge.

We start by building a posterior approximation framework based on Power-Expectation Propagation for Gaussian process regression and classification. This framework relies on a structured approximate Gaussian process posterior based on a small number of pseudo-points, which is judiciously chosen to summarise the actual data and enable tractable and efficient inference and hyperparameter learning. Many existing sparse approximations are recovered as special cases of this framework, and can now be understood as performing approximate posterior inference using a common approximate posterior. Critically, extensive empirical evidence suggests that new approximation methods arisen from this unifying perspective outperform existing approaches in many real-world regression and classification tasks.

We explore the extensions of this framework to Gaussian process state space models, Gaussian process latent variable models and deep Gaussian processes, which also unify many recently developed approximation schemes for these models. Several mean-field and structured approximate posterior families for the hidden variables in these models are studied. We also discuss several methods for approximate uncertainty propagation in recurrent and deep architectures based on Gaussian projection, linearisation, and simple Monte Carlo. The benefit of the unified inference and learning frameworks for these models are illustrated in a variety of real-world state-space modelling and regression tasks.
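For concreteness, a minimal NumPy sketch of one classical pseudo-point approximation (DTC), which is among the sparse methods recovered as special cases of the framework above; the kernel choice and all numerical values are illustrative, not the thesis's Power-EP method.

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    # Squared-exponential kernel matrix between row-wise point sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def dtc_predict(X, y, Z, Xs, noise=0.1):
    """Predictive mean/variance using M pseudo-inputs Z (M << N), costing
    O(N M^2) instead of the O(N^3) of exact GP regression."""
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))   # jitter for stability
    Kmn = rbf(Z, X)
    Ksm = rbf(Xs, Z)
    Sigma = np.linalg.inv(Kmm + Kmn @ Kmn.T / noise**2)
    mu = Ksm @ Sigma @ Kmn @ y / noise**2
    cov = rbf(Xs, Xs) - Ksm @ (np.linalg.inv(Kmm) - Sigma) @ Ksm.T
    return mu, np.diag(cov) + noise**2

# Toy 1-D regression: 5 pseudo-points summarise 200 noisy sine observations.
X = np.random.default_rng(1).uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(2).normal(size=200)
Z = np.linspace(-3, 3, 5)[:, None]
mu, var = dtc_predict(X, y, Z, np.array([[0.0]]))
print(mu, var)  # predictive mean near sin(0) = 0
```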
140

Approximate inference : new visions

Li, Yingzhen January 2018
Nowadays, machine learning (especially deep learning) techniques are being incorporated into many intelligent systems affecting the quality of human life. The ultimate purpose of these systems is to perform automated decision making, and in order to achieve this, predictive systems need to return estimates of their confidence. Powered by the rules of probability, Bayesian inference is the gold standard method for coherent reasoning under uncertainty. It is generally believed that intelligent systems following the Bayesian approach can better incorporate uncertainty information for reliable decision making, and be less vulnerable to attacks such as data poisoning. Critically, the success of Bayesian methods in practice, including the recent resurgence of Bayesian deep learning, relies on fast and accurate approximate Bayesian inference applied to probabilistic models. These approximate inference methods perform (approximate) Bayesian reasoning at a relatively low cost in terms of time and memory, thus allowing the principles of Bayesian modelling to be applied to many practical settings. However, more work needs to be done to scale approximate Bayesian inference methods to big systems such as deep neural networks and to large-scale datasets such as ImageNet. In this thesis we develop new algorithms towards addressing the open challenges in approximate inference.

In the first part of the thesis we develop two new approximate inference algorithms, drawing inspiration from the well-known expectation propagation and message passing algorithms. Both approaches provide a unifying view of existing variational methods from different algorithmic perspectives. We also demonstrate that they lead to better-calibrated inference results for complex models such as neural network classifiers and deep generative models, and scale to large datasets containing hundreds of thousands of data points.

In the second theme of the thesis we propose a new research direction for approximate inference: developing algorithms for fitting posterior approximations of arbitrary form, by rethinking the fundamental principles of Bayesian computation and the necessity of algorithmic constraints in traditional inference schemes. We specify four algorithmic options for the development of such new-generation approximate inference methods, with one of them further investigated and applied to Bayesian deep learning tasks.
