About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Inference Of Switching Networks By Using A Piecewise Linear Formulation

Akcay, Didem 01 December 2005 (has links) (PDF)
Inference of regulatory networks has received the attention of researchers from many fields. The challenge this problem poses is that it is a typical modeling problem under insufficient information about the process. Hence, we need to derive the a priori unavailable information from empirical observations. Modeling by inference consists of selecting or defining the most appropriate model structure and inferring its parameters. An appropriate model structure should have the following properties. The model parameters should be inferable: given the observations and the model class, all parameters used in the model should have a unique solution (restriction of the solution space). The forward model should be accurately computable (restriction of the solution space). The model should be capable of exhibiting the essential qualitative features of the system (limit of the restriction). The model should be relevant to the process (limit of the restriction). A piecewise linear formulation, described by a switching state transition matrix and a switching state transition vector, with a Boolean function indicating the switching conditions, is proposed for the inference of gene regulatory networks. This thesis mainly concerns using a formulation of switching networks obeying all of the above requirements and developing an inference algorithm for estimating the parameters of the formulation. The methodologies used or developed during this study are applicable to various fields of science and engineering.
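A minimal numerical sketch of such a switching formulation: the state evolves as x[t+1] = A_s x[t] + b_s, where the active mode s is selected by a Boolean condition on the current state. All matrices, vectors, and the threshold below are invented for illustration; they are not taken from the thesis.

```python
import numpy as np

# Toy two-gene switching network: each mode s has its own state
# transition matrix A[s] and transition vector b[s]; a Boolean
# condition on the state picks the active mode at each step.
A = {0: np.array([[0.9, 0.0], [0.2, 0.8]]),
     1: np.array([[0.5, -0.3], [0.0, 0.9]])}
b = {0: np.array([0.1, 0.0]),
     1: np.array([0.0, 0.2])}

def switch(x):
    # Boolean switching condition: mode 1 once gene 0 exceeds a threshold.
    return int(x[0] > 0.5)

def simulate(x0, steps):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        s = switch(xs[-1])
        xs.append(A[s] @ xs[-1] + b[s])
    return np.array(xs)

traj = simulate([0.0, 0.0], 50)
```

Inference in this setting would mean recovering A, b, and the switching condition from an observed trajectory like `traj`; the simulation above only illustrates the forward model.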
32

A piecewise linear generalized poisson regression approach to modeling longitudinal frequency data

Borgesi, Jennifer Jo. January 2004 (has links)
Thesis (M.S.)--Duquesne University, 2004. / Title from document title page. Abstract included in electronic submission form. Includes bibliographical references (p. 30).
33

Ciclos limites de sistemas lineares por partes / Limit cycles of piecewise linear systems

Moraes, Jaime Rezende de [UNESP] 22 February 2011 (has links) (PDF)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / We consider two main cases of bifurcation of non-hyperbolic periodic orbits that give rise to limit cycles. Our study concerns piecewise linear systems with three zones in their most general form, which includes situations without symmetry. We obtain estimates for both the amplitude and the period of the limit cycles, and we present an application of interest in engineering: control systems.
34

Estudo de ciclos limites em sistemas diferenciais lineares por partes / Study of limit cycles in piecewise linear differential systems

Moretti Junior, Adimar [UNESP] 28 February 2012 (has links) (PDF)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / The main goal of this work is to study the number and distribution of limit cycles in piecewise linear differential systems. In particular, we consider the planar piecewise linear differential system ẋ = −y − εφ(x), ẏ = x, where ε ≠ 0 is a small parameter and φ is an odd piecewise linear periodic function of period 4. We prove that, given an arbitrary positive integer n, the system above has exactly n limit cycles in the strip |x| ≤ 2(n + 1). Consequently, there are piecewise linear differential systems containing an infinite number of limit cycles in the real plane. First we obtain a lower bound on the number of limit cycles in the strip |x| ≤ 2(n + 1) via averaging theory. Then, using the theory of rotated vector fields, we show that the above system has exactly n limit cycles in the strip |x| ≤ 2(n + 1).
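The system in this abstract can be explored numerically. The sketch below assumes φ is the odd, period-4 triangle wave (φ(x) = x on [−1, 1], φ(x) = 2 − x on [1, 3]), one natural function satisfying the stated conditions; the parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

# Integrate  x' = -y - eps*phi(x),  y' = x  with classical RK4,
# where phi is the odd, period-4 triangle wave described above.
def phi(x):
    x = ((x + 1) % 4) - 1          # reduce to the fundamental period [-1, 3)
    return x if x <= 1 else 2 - x

def rk4_step(state, eps, h):
    def f(s):
        x, y = s
        return np.array([-y - eps * phi(x), x])
    k1 = f(state)
    k2 = f(state + h / 2 * k1)
    k3 = f(state + h / 2 * k2)
    k4 = f(state + h * k3)
    return state + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Run one trajectory; for small eps it spirals slowly, and limit
# cycles of the full system live in strips |x| <= 2(n + 1).
state = np.array([0.5, 0.0])
for _ in range(10000):
    state = rk4_step(state, eps=0.1, h=0.01)
```

Plotting several such trajectories from different initial radii would show convergence toward the isolated periodic orbits the theorem counts.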
35

Convex regression and its extensions to learning a Bregman divergence and difference of convex functions

Siahkamari, Ali 26 January 2022 (has links)
Nonparametric convex regression has been extensively studied over the last two decades. It has been shown that any Lipschitz convex function can be approximated with arbitrary accuracy by a maximum of linear functions. Using this framework, in this thesis we generalize convex regression to learning an arbitrary Bregman divergence and to learning a difference of convex functions. We provide approximation guarantees and sample complexity bounds for both of these extensions. Furthermore, we provide algorithms to solve the resulting optimization problems based on the 2-block alternating direction method of multipliers (ADMM). For this algorithm, we provide convergence guarantees with iteration complexity O(n√d/ε) for a dataset X ∈ ℝ^{n×d} and arbitrary positive ε. Finally, we provide experiments for both Bregman divergence learning and difference-of-convex-functions learning on UCI datasets that demonstrate state-of-the-art performance on regression and classification tasks.
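A minimal sketch (not the thesis algorithm itself, which uses ADMM) of the max-of-linear-functions idea behind convex regression: fit a convex target as the max of K affine pieces by alternating between assigning points to their max-attaining piece and refitting each piece by least squares. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = X[:, 0] ** 2 + 0.01 * rng.standard_normal(200)   # noisy convex target

K = 5
W = np.zeros((K, 1))            # slopes, one row per affine piece
c = np.zeros(K)                 # intercepts

def refit(assign):
    # Least-squares fit of each affine piece on its assigned points.
    for k in range(K):
        mask = assign == k
        if mask.sum() >= 2:
            A = np.hstack([X[mask], np.ones((mask.sum(), 1))])
            sol, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
            W[k], c[k] = sol[:-1], sol[-1]

refit(rng.integers(0, K, size=len(y)))     # initialize from a random partition
for _ in range(20):
    assign = (X @ W.T + c).argmax(axis=1)  # piece attaining the max
    refit(assign)

pred = (X @ W.T + c).max(axis=1)
mse = float(np.mean((pred - y) ** 2))
```

Increasing K refines the approximation, mirroring the arbitrary-accuracy guarantee for max-of-linear approximations mentioned in the abstract.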
36

Stability of a Fuzzy Logic Based Piecewise Linear Hybrid System

Seyfried, Aaron W. 01 June 2013 (has links)
No description available.
37

On the Construction of Linear Prewavelets over a Regular Triangulation.

Xue, Qingbo 16 August 2002 (has links) (PDF)
In this thesis, all possible semi-prewavelets over uniform refinements of regular triangulations are studied. A corresponding theorem is given to ensure the linear independence of a set of different prewavelets obtained by summing pairs of these semi-prewavelets. This provides efficient multiresolutions of the spaces of functions over various regular triangulation domains, since the bases of the orthogonal complements of the coarse spaces can be constructed very easily.
38

Theoretical and Experimental Investigation of Vibro-impacts of Drivetrains Subjected to External Torque Fluctuations

Donmez, Ata 07 September 2022 (has links)
No description available.
39

Supervised Learning of Piecewise Linear Models

Manwani, Naresh January 2012 (has links) (PDF)
Supervised learning of piecewise linear models is a well-studied problem in the machine learning community. The key idea in piecewise linear modeling is to properly partition the input space and learn a linear model for every partition. Decision trees and regression trees are classic examples of piecewise linear models for classification and regression problems. Existing approaches for learning decision/regression trees can be broadly classified into two classes: fixed structure approaches and greedy approaches. In fixed structure approaches, the tree structure is fixed beforehand by fixing the number of non-leaf nodes, the height of the tree, and the paths from the root node to every leaf node. Mixture of experts and hierarchical mixture of experts are examples of fixed structure approaches for learning piecewise linear models. Parameters of such models are found using, e.g., maximum likelihood estimation, for which the expectation maximization (EM) algorithm can be used. Fixed structure piecewise linear models can also be learnt using risk minimization under an appropriate loss function. Learning an optimal decision tree with a fixed structure approach is a hard problem: constructing an optimal binary decision tree is known to be NP-complete. Greedy approaches, on the other hand, do not assume any parametric form or fixed structure for the decision tree classifier. Most greedy approaches learn tree-structured piecewise linear models in a top-down fashion, built by binary or multi-way recursive partitioning of the input space. The main issue in top-down decision tree induction is choosing an appropriate objective function to rate the split rules; the objective function should be easy to optimize. Top-down decision trees are easy to implement and understand, but there are no optimality guarantees due to their greedy nature. Regression trees are built in a similar way to decision trees; in regression trees, every leaf node is associated with a linear regression function.

All piecewise linear modeling techniques deal with two main tasks: partitioning the input space and learning a linear model for every partition. However, these are not independent problems: simultaneous optimal estimation of the partitions and their linear models is a combinatorial problem and hence computationally hard. Nevertheless, piecewise linear models provide better insight into the classification or regression problem by giving an explicit representation of the structure in the data. The information captured by piecewise linear models can be summarized in terms of simple rules, so that they can be used to analyze the properties of the domain from which the data originates. These properties make piecewise linear models, like decision trees and regression trees, extremely useful in many data mining applications and place them among the top data mining algorithms. In this thesis, we address the problem of supervised learning of piecewise linear models for classification and regression. We propose novel algorithms for learning piecewise linear classifiers and regression functions, and we also address noise-tolerant learning of classifiers in the presence of label noise.

We propose a novel algorithm for learning polyhedral classifiers, which are the simplest form of piecewise linear classifiers. Polyhedral classifiers are useful when the points of the positive class fall inside a convex region and all the negative class points are distributed outside it; the region of the positive class can then be well approximated by a simple polyhedral set. The key challenge in optimally learning a fixed structure polyhedral classifier is to identify the subproblems, each of which is a linear classification problem. This is a hard problem, and identifying polyhedral separability is known to be NP-complete. The goal of any polyhedral learning algorithm is to efficiently handle the underlying combinatorial problem while achieving good classification accuracy. Existing methods for learning a fixed structure polyhedral classifier are based on solving non-convex constrained optimization problems; these approaches do not efficiently handle the combinatorial aspect of the problem and are computationally expensive. We propose a method of model-based estimation of the posterior class probability to learn polyhedral classifiers. We solve an unconstrained optimization problem using a simple two-step algorithm (similar to the EM algorithm) to find the model parameters. To the best of our knowledge, this is the first attempt to formulate an unconstrained optimization problem for learning polyhedral classifiers. We then modify our algorithm to also find the number of required hyperplanes automatically. We show experimentally that our approach outperforms existing polyhedral learning algorithms in terms of training time, performance, and complexity.

Most often, class conditional densities are multimodal. In such cases, each class region may be represented as a union of polyhedral regions, and a single polyhedral classifier is not sufficient; a generic decision tree is required. Learning an optimal fixed structure decision tree is computationally hard, while top-down decision trees have no optimality guarantees due to their greedy nature. However, top-down decision tree approaches are widely used because they are versatile and easy to implement. Most existing top-down decision tree algorithms (CART, OC1, C4.5, etc.) use impurity measures to assess the goodness of hyperplanes at each node of the tree; these measures do not properly capture the geometric structure in the data. We propose a novel decision tree algorithm that, at each node, selects hyperplanes based on an objective function which takes the geometric structure of the class regions into consideration. The resulting optimization problem turns out to be a generalized eigenvalue problem and hence is efficiently solved. We show through empirical studies that our approach leads to smaller trees and better performance than other top-down decision tree approaches, and we provide some theoretical justification for the proposed method of learning decision trees.

Piecewise linear regression is similar to the corresponding classification problem. For example, in regression trees each leaf node is associated with a linear regression model, so the problem is once again that of (simultaneous) estimation of optimal partitions and a linear model for each partition. Regression trees, the hinging hyperplanes method, and mixture of experts are some of the approaches to learning continuous piecewise linear regression models; many of these algorithms are computationally intensive. We present a method of learning piecewise linear regression models which is computationally simple and is capable of learning discontinuous functions as well. The method is based on the idea of K-plane regression, a simple algorithm motivated by the philosophy of k-means clustering that can identify a set of linear models given the training data. However, this simple algorithm has several problems: it does not give a model function with which to predict the target value for any given input, and it is very sensitive to noise. We propose a modified K-plane regression algorithm which can learn continuous as well as discontinuous functions. The proposed algorithm retains the spirit of the k-means algorithm and improves the objective function after every iteration. It learns a proper piecewise linear model that can be used for prediction, and it is more robust to additive noise than K-plane regression.

When learning classifiers, one normally assumes that the class labels in the training data are noise free. However, in many applications, such as spam filtering and text classification, the training data can be mislabeled due to subjective errors. In such cases, standard learning algorithms (SVM, AdaBoost, decision trees, etc.) start overfitting on the noisy points, leading to poor test accuracy. Analyzing the vulnerability of classifiers to label noise has therefore attracted growing interest in the machine learning community. Existing noise-tolerant learning approaches first try to identify the noisy points and then learn a classifier on the remaining points. In this thesis, we address the issue of developing learning algorithms which are inherently noise tolerant: an algorithm is inherently noise tolerant if the classifier it learns from noisy samples has the same performance on test data as one learnt from noise-free samples. Algorithms with such robustness (under suitable assumptions on the noise) are attractive for learning with noisy samples. Here, we consider non-uniform label noise, a generic noise model in which the probability of an example's class label being incorrect is a function of the example's feature vector (we assume this probability is less than 0.5 for all feature vectors); this can account for most cases of noisy data sets. There is no provably optimal algorithm for learning noise-tolerant classifiers in the presence of non-uniform label noise. We propose a novel characterization of the noise tolerance of an algorithm, and we analyze the noise tolerance properties of the risk minimization framework, since risk minimization is a common strategy for classifier learning. We show that risk minimization under the 0-1 loss has the best noise tolerance properties; none of the standard convex loss functions have such properties. Empirical risk minimization under the 0-1 loss is a hard problem, as the 0-1 loss function is not differentiable. We propose a gradient-free stochastic optimization technique to minimize the risk under the 0-1 loss for noise-tolerant learning of linear classifiers. We show (under some conditions) that the algorithm converges asymptotically to the global minimum of the risk under the 0-1 loss, and we illustrate the noise tolerance of our algorithm through simulation experiments.
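A hypothetical sketch of the basic K-plane regression loop this abstract builds on (not the thesis's modified algorithm): alternately assign each point to the plane with the smallest residual, then refit each plane by least squares on its points, in a k-means-style alternation. Names and defaults are illustrative.

```python
import numpy as np

def k_plane_regression(X, y, K, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A = np.hstack([X, np.ones((n, 1))])      # affine design matrix
    W = rng.standard_normal((K, d + 1))      # one (w, b) row per plane
    for _ in range(iters):
        resid = (A @ W.T - y[:, None]) ** 2  # (n, K) squared residuals
        assign = resid.argmin(axis=1)        # nearest plane per point
        for k in range(K):
            mask = assign == k
            if mask.sum() > d:               # enough points to refit
                W[k], *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    return W, assign

# Example: y = |x| is piecewise linear with two pieces.
X = np.linspace(-1.0, 1.0, 100)[:, None]
y = np.abs(X[:, 0])
W, assign = k_plane_regression(X, y, K=2)
```

Note the two weaknesses the abstract points out: the returned planes and assignments do not by themselves define a predictor for a new input (which plane applies?), and the argmin assignment makes the fit sensitive to noisy points, which is what the thesis's modification addresses.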
40

Estabilidade estrutural dos campos vetoriais seccionalmente lineares no plano / Structural stability of piecewise-linear vector fields in the plane

Jacóia, Bruno de Paula 15 August 2013 (has links)
We study a class of piecewise-linear vector fields in the plane, denoted by X. Such vector fields appear frequently in mathematical models applied to engineering. Based on the paper by Jorge Sotomayor and Ronaldo Garcia [SG03], we impose conditions on singularities, periodic orbits, and separatrices to define a set of vector fields that are structurally stable in X. We prove that this set is open, dense, and has full Lebesgue measure in X, which is a finite-dimensional vector space.
