11. Automated Real-time Objects Detection in Colonoscopy Videos for Quality Measurements
Kumara, Muthukudage Jayantha, 08 1900
The effectiveness of colonoscopy depends on the quality of the inspection of the colon, yet no automated method existed to measure that quality. This thesis addresses the issue by investigating an automated post-procedure quality measurement technique and proposing a novel approach that automatically determines the percentage of stool area in images from digitized colonoscopy video files. It classifies image pixels based on their color features using a new method of planes in RGB (red, green, and blue) color space. The limitation of post-procedure measurement is that the results become available long after the procedure is done and the patient has been released. A better approach is to flag any sub-optimal inspection immediately, so that the endoscopist can improve the quality in real time during the procedure. This thesis therefore also proposes an extension of the post-procedure method that detects stool, bite-block, and blood regions in real time using color features in HSV color space; these three objects play a major role in colonoscopy quality measurement. The proposed method partitions the very large set of positive examples of each object into a number of groups, each formed by intersecting the positive examples with a hyperplane, called a 'positive plane'. Convex hulls are used to model the positive planes. Comparisons with traditional classifiers such as K-nearest neighbor (K-NN) and support vector machines (SVM) demonstrate the soundness of the proposed method in terms of accuracy and speed, both critical for the targeted real-time quality measurement system.
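A minimal sketch of the positive-plane idea, assuming (hypothetically) that positive pixel samples are sliced along the V channel of HSV and each slice is modeled by the convex hull of its (H, S) points; the slicing scheme, slice count, and test data are illustrative, not the thesis's exact construction:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_positive_planes(hsv_samples, n_slices=16):
    """hsv_samples: (N, 3) array of positive-class pixels in HSV, each in [0, 1]."""
    planes = {}
    bins = np.minimum((hsv_samples[:, 2] * n_slices).astype(int), n_slices - 1)
    for b in range(n_slices):
        pts = hsv_samples[bins == b][:, :2]    # (H, S) points in this V-slice
        if len(pts) >= 3:                      # need >= 3 points for a 2D hull
            planes[b] = Delaunay(pts)          # triangulation encodes hull membership
    return planes

def classify_pixel(hsv_pixel, planes, n_slices=16):
    """True if the pixel falls inside the convex hull of its V-slice."""
    b = min(int(hsv_pixel[2] * n_slices), n_slices - 1)
    tri = planes.get(b)
    return tri is not None and tri.find_simplex(hsv_pixel[:2]) >= 0

# Example with random stand-in data (not real stool-pixel statistics):
rng = np.random.default_rng(0)
stool_like = rng.uniform([0.05, 0.4, 0.2], [0.15, 0.9, 1.0], size=(500, 3))
planes = build_positive_planes(stool_like)
print(classify_pixel(np.array([0.1, 0.6, 0.5]), planes))   # likely True
```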

12. Evaluation of Flatness Tolerance and Datums in Computational Metrology
Chepuri, Shambaiah, January 2000
No description available.

13. Finite Disjunctive Programming Methods for General Mixed Integer Linear Programs
Chen, Binyuan, January 2011
In this dissertation, a finitely convergent disjunctive programming procedure, the Convex Hull Tree (CHT) algorithm, is proposed to obtain the convex hull of a general mixed-integer linear program with bounded integer variables. The CHT algorithm constructs a linear program that has the same optimal solution as the associated mixed-integer linear program. The standard notion of sequential cutting planes is then combined with ideas underlying the CHT algorithm to help guide the choice of disjunctions to use within a new cutting plane method, the Cutting Plane Tree (CPT) algorithm. We show that the CPT algorithm converges to an integer optimal solution of the general mixed-integer linear program with bounded integer variables in finitely many steps. We also enhance the CPT algorithm with several techniques, including a "round-of-cuts" approach and an iterative method for solving the cut generation linear program (CGLP). Two normalization constraints are discussed in detail for solving the CGLP. For moderately sized instances, our study shows that the CPT algorithm provides significant gap closures with a pure cutting plane method.
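As a rough illustration of what a CGLP looks like for a single split disjunction x_j <= floor(x*_j) or x_j >= floor(x*_j) + 1 over a relaxation {x : Ax <= b}: a hedged sketch with a simple sum-of-multipliers normalization, not the thesis's CPT implementation or its exact normalization constraints:

```python
import numpy as np
from scipy.optimize import linprog

def split_cut(A, b, x_star, j):
    """Find a cut a^T x <= beta valid for both sides of the split on x_j
    and maximally violated at x*, via the CGLP; None if no violated cut."""
    m, n = A.shape
    f = np.floor(x_star[j])
    e = np.zeros(n); e[j] = 1.0
    # CGLP variables z = [a (n, free), beta (free), u (m), u0, v (m), v0]
    N = n + 1 + 2 * m + 2
    c = np.zeros(N); c[:n] = -x_star; c[n] = 1.0      # minimize beta - a^T x*
    A_eq = np.zeros((2 * n + 1, N)); b_eq = np.zeros(2 * n + 1)
    A_eq[:n, :n] = np.eye(n)                          # a - A^T u - u0 e_j = 0
    A_eq[:n, n + 1:n + 1 + m] = -A.T
    A_eq[:n, n + 1 + m] = -e
    A_eq[n:2 * n, :n] = np.eye(n)                     # a - A^T v + v0 e_j = 0
    A_eq[n:2 * n, n + 2 + m:n + 2 + 2 * m] = -A.T
    A_eq[n:2 * n, n + 2 + 2 * m] = e
    A_eq[2 * n, n + 1:] = 1.0; b_eq[2 * n] = 1.0      # normalization: multipliers sum to 1
    A_ub = np.zeros((2, N))                           # beta dominates both side bounds
    A_ub[0, n] = -1; A_ub[0, n + 1:n + 1 + m] = b; A_ub[0, n + 1 + m] = f
    A_ub[1, n] = -1; A_ub[1, n + 2 + m:n + 2 + 2 * m] = b; A_ub[1, n + 2 + 2 * m] = -(f + 1)
    bounds = [(None, None)] * (n + 1) + [(0, None)] * (2 * m + 2)
    res = linprog(c, A_ub, np.zeros(2), A_eq, b_eq, bounds)
    if res.status != 0 or res.fun >= -1e-9:
        return None
    return res.x[:n], res.x[n]

# Example: P = {x >= 0, x1 + x2 <= 1.5} with fractional point x* = (1.5, 0);
# the split on x1 admits the valid cut x1 <= 1, and the CGLP returns some
# maximally violated valid cut under this normalization.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.5, 0.0, 0.0])
print(split_cut(A, b, np.array([1.5, 0.0]), 0))
```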

14. Geometric Computing over Uncertain Data
Zhang, Wuzhou, January 2015
Entering the era of big data, we face an unprecedented amount of geometric data, and many computational challenges arise in processing this new deluge. A critical one is data uncertainty: the data is inherently noisy and inaccurate, and often incomplete. The past few decades have witnessed the influence of geometric algorithms in fields including GIS, spatial databases, and computer vision, yet most existing geometric algorithms assume the data is precise and cannot properly handle data in the presence of uncertainty. This thesis explores a few algorithmic challenges in what we call geometric computing over uncertain data.

We study the nearest-neighbor searching problem, which returns the nearest neighbor of a query point in a set of points, in a probabilistic framework. This thesis investigates two different nearest-neighbor formulations: expected nearest neighbor (ENN), where we consider the expected distance between each input point and a query point, and probabilistic nearest neighbor (PNN), where we estimate the probability of each input point being the nearest neighbor of a query point.

For the ENN problem, we consider a probabilistic framework in which the location of each input point and/or query point is specified as a probability density function, and the goal is to return the point that minimizes the expected distance. We present methods for computing an exact ENN or an ε-approximate ENN, for a given error parameter 0 < ε < 1, under different distance functions. These methods build an index of near-linear size and answer ENN queries in polylogarithmic or sublinear time, depending on the underlying function. As far as we know, these are the first nontrivial methods for answering exact or ε-approximate ENN queries with provable performance guarantees. Moreover, we extend our results to answer exact or ε-approximate k-ENN queries. Notably, when only the query points are uncertain, we obtain state-of-the-art results for top-k aggregate (group) nearest-neighbor queries in the L1 metric using the weighted SUM operator.

For the PNN problem, we consider a probabilistic framework in which the location of each input point is specified as a probability distribution function. We present efficient algorithms for (i) computing all points that are nearest neighbors of a query point with nonzero probability; (ii) estimating, within a specified additive error, the probability of a point being the nearest neighbor of a query point; and (iii) using these estimates to return the point that maximizes the probability of being the nearest neighbor, or all points whose probability of being the nearest neighbor exceeds a given threshold. We also present experimental results that demonstrate the effectiveness of our approach.

We study the convex-hull problem, which asks for the smallest convex set containing a given point set, in a probabilistic setting. In our framework, the uncertainty of each input point is described by a probability distribution over a finite number of possible locations, including a null location that accounts for non-existence of the point. Our results include both exact and approximation algorithms for computing the probability of a query point lying inside the convex hull of the input, time-space tradeoffs for membership queries, a connection between Tukey depth and membership queries, and a new notion of β-hull that may be a useful representation of uncertain hulls.

We study contour trees of terrains, which encode the topological changes of the level set at height ℓ as we raise ℓ from −∞ to +∞, in a probabilistic setting. We consider a terrain defined by linearly interpolating each triangle of a triangulation. In our framework, the uncertainty lies in the height of each vertex of the triangulation, which we assume is described by a probability distribution. We first show that the probability of a vertex being a critical point, and the expected number of nodes (resp. edges) of the contour tree, can be computed exactly and efficiently. We then present efficient sampling-based methods for estimating, with high probability, (i) the probability that two points lie on a common edge of the contour tree, within additive error; and (ii) the expected distance between two points p and q, and the probability that their distance is at least ℓ on the contour tree, within additive and/or relative error, where the distance between p and q on a contour tree is the difference between the maximum and minimum heights on the unique path from p to q.
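As a point of reference for the membership question, here is a naive Monte Carlo baseline, assuming each uncertain point is uniform over a small set of candidate locations plus a null outcome; the data, probabilities, and trial count are invented for illustration, and the thesis's exact and approximation algorithms are of course far more efficient:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

# Three uncertain 2D points; each row of `locs` lists possible locations
# and `probs` their probabilities (last entry = probability of absence).
locs = [np.array([[0, 0], [0, 1]]),
        np.array([[2, 0], [2, 1]]),
        np.array([[1, 2], [1, 3]])]
probs = [np.array([0.5, 0.4, 0.1]),
         np.array([0.6, 0.3, 0.1]),
         np.array([0.7, 0.2, 0.1])]

def membership_probability(q, locs, probs, trials=5000):
    """Estimate Pr[q lies in the convex hull of the realized points]."""
    hits = 0
    for _ in range(trials):
        pts = []
        for L, p in zip(locs, probs):
            k = rng.choice(len(p), p=p)
            if k < len(L):                  # last index means the point is absent
                pts.append(L[k])
        if len(pts) >= 3:                   # a full-dimensional hull needs 3 points in 2D
            tri = Delaunay(np.array(pts))
            hits += tri.find_simplex(q) >= 0
    return hits / trials

print(membership_probability(np.array([1.0, 0.8]), locs, probs))
```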

15. Extremal Polyominoes
Steffanová, Veronika, January 2015
Title: Extremal Polyominoes Author: Veronika Steffanová Department: Department of Applied Mathematics Supervisor: Doc. RNDr. Pavel Valtr, Dr. Abstract: The thesis is focused on polyominoes and other planar figures consisting of regular polygons, namely polyiamonds and polyhexes. We study their basic geometric properties: the perimeter, the convex hull, and the bounding rectangle/hexagon. We maximise and minimise these parameters for a fixed size of the polyomino, denoted by n. We compute the extremal values of a chosen parameter and then try to enumerate all polyominoes of size n that have the extremal property. Some of the problems were solved by other authors; we summarise their results. Others we solved ourselves, namely the maximal bounding rectangle/hexagon and the maximal convex hull of polyiamonds. Several topics remain open; we summarise the literature and offer our observations for researchers who follow. Keywords: polyomino, convex hull, extremal questions, plane
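A small sketch of two of the parameters above, for a polyomino given as a set of unit cells on the integer grid; the figure used in the demo is ours, not from the thesis:

```python
import numpy as np
from scipy.spatial import ConvexHull

def perimeter(cells):
    """Each edge shared by two cells removes 2 from the free perimeter 4n."""
    cells = set(cells)
    shared = sum((x + 1, y) in cells for x, y in cells) \
           + sum((x, y + 1) in cells for x, y in cells)
    return 4 * len(cells) - 2 * shared

def hull_area(cells):
    """Area of the convex hull taken over all cell corners."""
    corners = {(x + dx, y + dy) for x, y in cells
               for dx in (0, 1) for dy in (0, 1)}
    return ConvexHull(np.array(sorted(corners))).volume   # 2D 'volume' is area

# The L-tromino: 3 cells, perimeter 8, convex hull area 3.5.
L = [(0, 0), (1, 0), (0, 1)]
print(perimeter(L), hull_area(L))
```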

16. Detecção de anomalias utilizando métodos paramétricos e múltiplos classificadores / Anomaly detection using parametric methods and multiple classifiers
Costa, Gabriel de Barros Paranhos da, 25 August 2014
Anomalies or outliers are examples, or groups of examples, whose behaviour differs from the expected. In practice, these examples may represent diseases in individuals or populations, as well as other events such as fraud in banking operations and system failures. Several existing techniques seek to identify these anomalies, including adaptations of classification methods, statistical methods, and methods based on information theory. The main challenges are the unbalanced number of examples in each class, the cases where anomalies are disguised among normal examples, and the definition of normal behaviour together with the formalization of a model for that behaviour. In this dissertation we propose the use of a new space in which to perform detection, called a parameter space. A parameter space is created using parameters estimated from the concatenation (chaining) of two examples. We then present a new framework that performs anomaly detection through the fusion of detectors that use convex hulls in multiple parameter spaces. The method is considered a framework because the user can choose which parameter spaces the method uses, according to the behaviour of the target data set. In our experiments, two parameter sets were used (mean and standard deviation; mean, variance, skewness and kurtosis), and the results were compared with some commonly used anomaly detection methods. The results achieved were comparable to or better than those obtained by the other methods. Furthermore, we believe the use of parameter spaces gives the proposed method great flexibility, since the user can choose a parameter space suited to the application. Both the flexibility and the extensibility provided by parameter spaces, together with the good performance of the proposed method in our experiments, make parameter spaces and, more specifically, the proposed methods appealing for solving anomaly detection problems.
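A hedged sketch of one such detector in the (mean, standard deviation) parameter space: parameters are estimated from the concatenation of two examples, the convex hull of parameter points from normal pairs models normal behaviour, and a test example is flagged when its parameter points fall outside the hull. The data, the random pairing scheme, and the "all pairs outside" decision rule are our illustrative assumptions, not the dissertation's exact procedure:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 16))        # normal training signals

def params(a, b):
    z = np.concatenate([a, b])                       # concatenation of two examples
    return np.array([z.mean(), z.std()])

# Parameter space built from random pairs of normal examples.
idx = rng.integers(0, len(normal), size=(500, 2))
P = np.array([params(normal[i], normal[j]) for i, j in idx])
hull = Delaunay(P)                                   # hull membership via triangulation

def is_anomaly(x):
    """Pair x with a few normal examples; anomalous if every pair leaves the hull."""
    mates = normal[rng.integers(0, len(normal), size=5)]
    return all(hull.find_simplex(params(x, m)) < 0 for m in mates)

print(is_anomaly(rng.normal(0.0, 1.0, 16)))          # expected: False
print(is_anomaly(rng.normal(3.0, 2.0, 16)))          # expected: True
```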

17. Introduction et analyse des schémas de cotation en avance de phase / Introduction and analysis of dimensioning and tolerancing schemes in the early design phase
Socoliuc, Michel, 09 July 2010
Not long ago, I read that "the Roman bridges of antiquity could be considered inefficient by today's standards: they used too much stone, and enormous labour was needed for their construction. Over the years, to answer an equivalent problem, we have learned to use less material and to reduce the workload." We find the same issues in mechanical design, where we continually try to propose ever more efficient systems that must be designed in less time, be cheaper to produce, and deliver performance at least equivalent to what has already been designed. In a classical design process, designers define a geometry free of any defects and then, since production means cannot yield such ideal final parts, they specify the dimensioning schemes that define the acceptable deviations guaranteeing the correct functioning of the system. However, this is done after the detailed drawings have been produced, that is, too late. To address this problem, I will present the integration, very early in the design life cycle, of an optimized validation process based on a digital mock-up directly linked to its functional representation (functional mock-up), allowing standardized 3D dimensioning schemes to be validated. I will first describe what is meant by "functional mock-up" and, above all, what this new definition adds to the digital definition. Once this point is covered, I will detail the links that ensure uniqueness of information within the working environment, as well as the processes that connect the functional and digital representations. I will then detail the processes based on these concepts, whose purpose is to validate the choices made in the early design phase at the level of the dimensioning schemes. To do so, I will begin by presenting worst-case analysis (notably using deviation-domain models), which guarantees the correct functioning of the mechanical assembly when all deviations lie inside their respective zones (defined by the tolerances). Finally, I will conclude by introducing what a statistical layer, coupled with worst-case analysis using convex hulls, can bring in an industrial context, particularly under time constraints.
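A minimal sketch of the worst-case composition step described above, assuming each part's deviation domain is a convex 2D polytope given by its vertices; the cumulative domain of a stack is then the Minkowski sum of the domains, computable as the convex hull of pairwise vertex sums. The domain shapes and requirement zone below are illustrative, not from the thesis:

```python
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(vertices_a, vertices_b):
    """Vertices of the Minkowski sum of two convex 2D polytopes."""
    sums = (vertices_a[:, None, :] + vertices_b[None, :, :]).reshape(-1, 2)
    hull = ConvexHull(sums)
    return sums[hull.vertices]

# Two square deviation domains of half-widths t1 and t2 (e.g. translational
# deviations in x and y): the stack-up domain is a square of half-width t1 + t2.
t1, t2 = 0.05, 0.03
d1 = np.array([[t1, t1], [-t1, t1], [-t1, -t1], [t1, -t1]])
d2 = np.array([[t2, t2], [-t2, t2], [-t2, -t2], [t2, -t2]])
stack = minkowski_sum(d1, d2)

# Worst-case check: the assembly works iff the stack-up domain fits inside
# the functional requirement zone (here a square of half-width 0.1).
req = 0.1
print("worst-case OK:", np.all(np.abs(stack) <= req))
```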

18. Developing Parsimonious and Efficient Algorithms for Water Resources Optimization Problems
Asadzadeh Esfahani, Masoud, 13 November 2012
In the current water resources literature, a wide variety of engineering design problems are solved in a simulation-optimization framework. These problems can have single or multiple objective functions, and their decision variables can take discrete or continuous values. The majority of the literature on water resources systems optimization reports using heuristic global optimization algorithms, including evolutionary algorithms, with great success. These algorithms have multiple parameters that control their behavior, both in terms of computational efficiency and the ability to find near globally optimal solutions. Values of these parameters are generally obtained by trial and error and are case-study dependent. On the other hand, water resources simulation-optimization problems often involve computationally intensive simulation models that can require seconds to hours for a single run. Furthermore, analysts may have a limited computational budget for solving these problems, and thus may not be able to spend part of that budget fine-tuning algorithm settings and parameter values. So, in general, parsimony in the number of algorithm parameters is an important factor in the applicability and performance of optimization algorithms for computationally intensive problems.
A major contribution of this thesis is the development of a highly efficient, single objective, parsimonious optimization algorithm for solving problems with discrete decision variables. The algorithm, called Hybrid Discrete Dynamically Dimensioned Search (HD-DDS), is designed based on Dynamically Dimensioned Search (DDS), which was developed by Tolson and Shoemaker (2007) for solving single objective hydrologic model calibration problems with continuous decision variables. The motivation for developing HD-DDS comes from the parsimony and high performance of the original version of DDS. Like DDS, HD-DDS has a single parameter with a robust default value. HD-DDS is successfully applied to several benchmark water distribution system design problems in which the decision variables are pipe sizes chosen from the available options. Results show that HD-DDS exhibits superior performance in specific comparisons to state-of-the-art optimization algorithms. A sketch of the underlying DDS search engine follows.
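A minimal sketch of the original (continuous) DDS engine that HD-DDS and PA-DDS build on, following the published description in Tolson and Shoemaker (2007); the neighborhood parameter r = 0.2 is its single parameter (the robust default), and the bound handling here (clipping rather than reflection) is a simplification:

```python
import numpy as np

def dds(objective, lo, hi, max_evals=1000, r=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x_best = rng.uniform(lo, hi)
    f_best = objective(x_best)
    n = len(lo)
    for i in range(1, max_evals):
        # Probability of perturbing each dimension decays with iterations,
        # dynamically narrowing the search from global to local.
        p = 1.0 - np.log(i) / np.log(max_evals)
        mask = rng.random(n) < p
        if not mask.any():
            mask[rng.integers(n)] = True          # always perturb at least one dimension
        x = x_best.copy()
        step = rng.normal(0.0, r * (hi - lo))     # perturbation scaled to variable range
        x[mask] += step[mask]
        x = np.clip(x, lo, hi)                    # DDS reflects at bounds; clipping simplifies
        f = objective(x)
        if f < f_best:                            # greedy acceptance
            x_best, f_best = x, f
    return x_best, f_best

# Example: minimize the sphere function in 5 dimensions.
lo, hi = np.full(5, -5.0), np.full(5, 5.0)
x, f = dds(lambda x: float(np.sum(x ** 2)), lo, hi)
print(x, f)
```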
The parsimony and efficiency of the original and discrete versions of DDS, and their successful application to single objective water resources optimization problems with discrete and continuous decision variables, motivated the development of a multi-objective optimization algorithm based on DDS, called Pareto Archived Dynamically Dimensioned Search (PA-DDS). Algorithm parsimony was a major factor in the design of PA-DDS, which inherits a single parameter from its DDS search engine. In each iteration, PA-DDS selects one archived non-dominated solution and perturbs it to search for new solutions. The solution perturbation scheme follows the original or discrete version of DDS, depending on whether the decision variable is continuous or discrete, so PA-DDS can handle both types of decision variables. PA-DDS is applied with great success to several benchmark mathematical problems, water distribution system design problems, and water resources model calibration problems.
It is shown that hypervolume contribution (HVC1), as defined in Knowles et al. (2003), is the superior selection metric for PA-DDS when solving multi-objective optimization problems whose Pareto fronts have a general (unknown) shape. However, one of the main contributions of this thesis is a selection metric specifically designed for multi-objective optimization problems with a known or expected convex Pareto front, such as water resources model calibration problems. The metric, called convex hull contribution (CHC), makes the optimization algorithm sample solely from the subset of archived solutions that form the convex approximation of the Pareto front (see the sketch below). Although CHC is generally applicable to any stochastic search optimization algorithm, it is applied here to PA-DDS for solving six water resources calibration case studies with two or three objective functions. These case studies are solved by PA-DDS with CHC and HVC1 selection using 1,000 solution evaluations, and by PA-DDS with CHC selection and two popular multi-objective optimization algorithms, AMALGAM and ε-NSGAII, using 10,000 solution evaluations. Results are compared based on the best-case and worst-case performances (out of multiple optimization trials) of each algorithm, to measure each algorithm's expected performance range. Comparing best-case performance shows that PA-DDS with CHC selection using 1,000 solution evaluations performs very well in five of the six case studies; comparing worst-case performance shows that, with 1,000 solution evaluations, it performs well in four of the six. Furthermore, PA-DDS with CHC selection using 10,000 solution evaluations performs comparably to AMALGAM and ε-NSGAII. It is therefore concluded that PA-DDS with CHC selection is a powerful algorithm for finding high quality solutions to multi-objective water resources model calibration problems with convex Pareto fronts, especially when the computational budget is limited.
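A hedged sketch of the selection set behind CHC for a bi-objective minimization problem: the archived points lying on the lower-left convex approximation of the Pareto front. The thesis's exact contribution metric may differ; this only illustrates which archived solutions remain sampling candidates:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_front_subset(points):
    """Return archived points on the convex approximation of the front.

    points: (N, 2) array of objective vectors (both objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    hull = ConvexHull(pts)
    # Keep hull vertices on facets whose outward normals point toward
    # smaller objectives, i.e. the lower-left portion of the hull boundary.
    keep = set()
    for eq, simplex in zip(hull.equations, hull.simplices):
        if eq[0] < 0 and eq[1] < 0:           # both normal components negative
            keep.update(simplex)
    return pts[sorted(keep)]

archive = np.array([[1.0, 9.0], [2.0, 5.0], [3.0, 4.5], [4.0, 2.0],
                    [6.0, 1.5], [9.0, 1.0]])
print(convex_front_subset(archive))           # drops the point above the convex hull
```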

19. D-optimal Designs for Linear and Quadratic Polynomial Models
Chen, Ya-Hui, 12 June 2003
This paper discusses the approximate and the exact n-point D-optimal design problems for common multivariate linear and quadratic polynomial regression on some convex design spaces. For linear polynomial regression, the design spaces considered are the q-simplex, the q-ball, and the convex hull of a finite set of points. It is shown that the approximate and the exact n-point D-optimal designs are concentrated on the extreme points of the design space. The structure of the optimal designs on regular polygons and regular polyhedra is also discussed. For quadratic polynomial regression, the design space considered is a q-ball, and the configurations of the approximate and the exact n-point D-optimal designs for the quadratic model in two variables on a disk are investigated.

20. Tightening and Blending Subject to Set-Theoretic Constraints
Williams, Jason Daniel, 17 May 2012
Our work applies techniques for blending and tightening solid shapes represented by sets. We require that the output contain one set and exclude a second set, and then we optimize the boundary separating the two sets. Working within that framework, we present mason, tightening, tight hulls, tight blends, and the medial cover, with details for implementation. Mason uses opening and closing techniques from mathematical morphology to smooth small features. By contrast, tightening uses mean curvature flow to minimize the measure of the boundary separating the opening of the interior of the closed input set from the opening of its complement, guaranteeing a mean curvature bound. The tight hull offers a significant generalization of the convex hull subject to volumetric constraints, introducing developable boundary patches connecting the constraints. Tight blends then use opening to replicate some of the behaviors from tightenings by applying tight hulls. The medial cover provides a means for adjusting the topology of a tight hull or tight blend, and it provides an implementation technique for two-dimensional polygonal inputs. Collectively, we offer applications for boundary estimation, three-dimensional solid design, blending, normal field simplification, and polygonal repair. We consequently establish the value of blending and tightening as tools for solid modeling.
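A small sketch of the morphological smoothing that mason performs: opening removes small positive features, closing fills small negative ones, and together they smooth a solid represented as a voxel set. The shape and structuring element below are illustrative only, not the thesis's implementation:

```python
import numpy as np
from scipy import ndimage

solid = np.zeros((32, 32, 32), dtype=bool)
solid[6:26, 6:26, 6:26] = True               # a solid cube...
solid[16, 16, 26:30] = True                  # ...with a thin spike (small positive feature)
solid[12:14, 12:14, 12:14] = False           # ...and a small internal void (negative feature)

se = ndimage.generate_binary_structure(3, 1)             # 6-connected structuring element
opened = ndimage.binary_opening(solid, structure=se)     # removes the spike
smoothed = ndimage.binary_closing(opened, structure=se)  # fills the void

print("spike removed:", not smoothed[16, 16, 28])
print("void filled:  ", bool(smoothed[12:14, 12:14, 12:14].all()))
```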