  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
  Our metadata is collected from universities around the world.
21

A Deterministic Approach to Partitioning Neural Network Training Data for the Classification Problem

Smith, Gregory Edward 28 September 2006 (has links)
The classification problem in discriminant analysis involves identifying a function that accurately classifies observations as originating from one of two or more mutually exclusive groups. Because no single classification technique works best for all problems, many different techniques have been developed. For business applications, neural networks have become the most commonly used classification technique and though they often outperform traditional statistical classification methods, their performance may be hindered because of failings in the use of training data. This problem can be exacerbated because of small data set size. In this dissertation, we identify and discuss a number of potential problems with typical random partitioning of neural network training data for the classification problem and introduce deterministic methods to partitioning that overcome these obstacles and improve classification accuracy on new validation data. A traditional statistical distance measure enables this deterministic partitioning. Heuristics for both the two-group classification problem and k-group classification problem are presented. We show that these heuristics result in generalizable neural network models that produce more accurate classification results, on average, than several commonly used classification techniques. In addition, we compare several two-group simulated and real-world data sets with respect to the interior and boundary positions of observations within their groups' convex polyhedrons. We show by example that projecting the interior points of simulated data to the boundary of their group polyhedrons generates convex shapes similar to real-world data group convex polyhedrons. Our two-group deterministic partitioning heuristic is then applied to the repositioned simulated data, producing results superior to several commonly used classification techniques. / Ph. D.
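The statistical distance behind the deterministic partitioning can be illustrated with a short sketch. The following is not the author's exact heuristic: it simply ranks each group's observations by Mahalanobis distance from the group mean and assigns the boundary-most observations to the training set, so that training data span each group's region. The function name and the 70/30 split fraction are assumptions for illustration.

```python
import numpy as np

def mahalanobis_partition(X, y, train_frac=0.7):
    """Deterministically split each group: rank observations by
    Mahalanobis distance from the group mean and put the boundary
    points (largest distances) into the training set."""
    train_idx, val_idx = [], []
    for g in np.unique(y):
        idx = np.where(y == g)[0]
        Xg = X[idx]
        mu = Xg.mean(axis=0)
        cov = np.atleast_2d(np.cov(Xg, rowvar=False))
        inv = np.linalg.pinv(cov)
        diff = Xg - mu
        d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)  # squared distances
        order = idx[np.argsort(-d2)]          # boundary points first
        n_train = int(round(train_frac * len(idx)))
        train_idx.extend(order[:n_train])     # boundary-heavy training set
        val_idx.extend(order[n_train:])
    return np.array(train_idx), np.array(val_idx)
```

Unlike a random split, calling this twice on the same data yields the same partition, which is the point of a deterministic approach.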
22

New results in detection, estimation, and model selection

Ni, Xuelei 08 December 2005 (has links)
This thesis contains two parts: the detectability of convex sets, and a study of regression models. In the first part, we investigate the problem of detecting an inhomogeneous convex region in a Gaussian random field. The first proposed detection method relies on checking a constructed statistic on each convex set within an n × n image, which is shown to be inapplicable in practice. We then consider using h(v)-parallelograms as a surrogate, which leads to a multiscale strategy. We prove that 2/9 is the minimum proportion of the maximally embedded h(v)-parallelogram in a convex set; this constant indicates the effectiveness of the multiscale detection method. In the second part, we study robustness, optimality, and computation for regression models. First, for robustness, M-estimators are analyzed in a regression model where the residuals have an unknown but stochastically bounded distribution; an asymptotic minimax M-estimator (RSBN) is derived, and simulations demonstrate its robustness and advantages. Second, for optimality, the analysis of least angle regression inspired us to consider the conditions under which a vector solves two optimization problems simultaneously: one that can be solved by certain stepwise algorithms, and one that is the objective function in many existing subset selection criteria (including Cp, AIC, BIC, MDL, RIC, etc.); the latter is proven to be NP-hard. Several conditions are derived that tell us when a vector is the common optimizer. Finally, extending the above idea of finding conditions to exhaustive subset selection in regression, we improve the widely used leaps-and-bounds algorithm (Furnival and Wilson). The proposed method further reduces the number of subsets that need to be considered in the exhaustive subset search by considering not only the residuals, but also the model matrix and the current coefficients.
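The subset-selection objective mentioned above (a penalized residual sum of squares, as in Cp-style criteria) can be made concrete with a brute-force sketch. This is illustrative only: real implementations such as leaps-and-bounds prune the search tree rather than enumerating all 2^p subsets, and the function name and penalty form are assumptions, not the thesis's algorithm.

```python
import itertools
import numpy as np

def best_subset(X, y, penalty=2.0):
    """Exhaustive subset search minimizing RSS + penalty * sigma2 * |S|,
    a Cp-style criterion. Enumerates every subset, which is exactly the
    NP-hard problem that branch-and-bound methods try to avoid."""
    n, p = X.shape
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.sum((y - X @ beta_full) ** 2) / max(n - p, 1)
    best, best_score = (), np.inf
    for k in range(p + 1):
        for S in itertools.combinations(range(p), k):
            if S:
                beta, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
                rss = np.sum((y - X[:, S] @ beta) ** 2)
            else:
                rss = np.sum(y ** 2)
            score = rss + penalty * sigma2 * k
            if score < best_score:
                best, best_score = S, score
    return best
```

With p predictors the loop visits 2^p subsets, so even modest p makes pruning (as in leaps-and-bounds) essential.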
23

Exploring Polynomial Convexity Of Certain Classes Of Sets

Gorai, Sushil 07 1900 (has links) (PDF)
Let K be a compact subset of Cn. The polynomially convex hull of K is defined as K̂ := {z ∈ Cn : |p(z)| ≤ sup_K |p| for every polynomial p in n complex variables}. The compact set K is said to be polynomially convex if K̂ = K. A closed subset E of Cn is said to be locally polynomially convex at z ∈ E if there exists a closed ball B centred at z such that E ∩ B is polynomially convex. The aim of this thesis is to derive easily checkable conditions to detect polynomial convexity in certain classes of sets in Cn. This thesis begins with the basic question: let S1 and S2 be two smooth, totally real surfaces in C2 that contain the origin. If the union of their tangent planes is locally polynomially convex at the origin, then is S1 ∪ S2 locally polynomially convex at the origin? If T0S1 ∩ T0S2 = {0}, it is a folk result that the answer is "Yes." We discuss an obstruction to the presumed proof, and use a different approach to provide a proof. When dimR(T0S1 ∩ T0S2) = 1, it turns out that the positioning of the complexification of T0S1 ∩ T0S2 controls the outcome in many situations. In general, however, local polynomial convexity of S1 ∪ S2 also depends on the degeneracy of the contact of Sj with T0Sj. We establish a result showing this. Next, we consider a generalization of Weinstock's theorem to more than two totally real planes in C2. Using a characterization, recently found by Florentino, for simultaneous triangularizability over R of real matrices, we present a sufficient condition for local polynomial convexity at the origin of a union of finitely many totally real planes in C2. The next result is motivated by an approximation theorem of Axler and Shields, which says that the uniform algebra on the closed unit disc D generated by z and h — where h is a nowhere-holomorphic harmonic function on D that is continuous up to ∂D — equals C(D̄). The abstract tools used by Axler and Shields make harmonicity of h an essential condition for their result. We use the concepts of plurisubharmonicity and polynomial convexity to show that, in fact, the same conclusion is reached if h is replaced by h + R, where R is a non-harmonic perturbation whose Laplacian is "small" in a certain sense. Ideas developed for the latter result, especially the role of plurisubharmonicity, lead us to our final result: a characterization of the compact patches of smooth, totally real graphs in C2 that are polynomially convex.
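The hull definition above can be probed numerically in one complex variable. For K the unit circle, the maximum modulus principle gives |p(z)| ≤ max_K |p| for every z in the closed disc, so the polynomially convex hull of the circle is the whole disc. The sketch below is a Monte-Carlo necessary-condition check of membership, not a proof, and the function name, degree, and trial count are assumptions.

```python
import numpy as np

def in_poly_hull_1d(z, K, degree=8, trials=200, seed=0):
    """Reject z if some random polynomial p satisfies
    |p(z)| > max over the sampled set K of |p|; otherwise report z as a
    (probable) member of the polynomially convex hull of K."""
    rng = np.random.default_rng(seed)
    polyval = np.polynomial.polynomial.polyval
    for _ in range(trials):
        # random complex coefficients c[0] + c[1] z + ... + c[degree] z^degree
        c = rng.normal(size=degree + 1) + 1j * rng.normal(size=degree + 1)
        if abs(polyval(z, c)) > np.abs(polyval(K, c)).max() + 1e-6:
            return False
    return True

# K: a dense sample of the unit circle in C
K = np.exp(2j * np.pi * np.linspace(0.0, 1.0, 400))
```

Points inside the disc pass every trial (maximum modulus), while a point like z = 1.5 is quickly rejected because high-degree terms dominate outside the circle.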
24

Programação linear e suas aplicações: definição e métodos de soluções / Linear programming and its applications: definition and methods of solutions

Araújo, Pedro Felippe da Silva 18 March 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Problems involving the idea of optimization are found in various fields of study: in Economics, one searches for cost minimization and profit maximization in a firm or country, given the available budget; in Nutrition, one seeks to supply the essential daily nutrients at the lowest possible cost, considering the financial capacity of the individual; in Chemistry, one studies the minimum pressure and temperature necessary to accomplish a specific chemical reaction in the shortest possible time; in Engineering, one seeks the lowest cost for producing an aluminium alloy by mixing various raw materials subject to minimum and maximum restrictions on the respective elements in the alloy. All the examples cited, plus a multitude of other situations, find their remedy in Linear Programming: problems of minimizing or maximizing a linear function subject to linear inequalities or equalities, in order to find the best solution. This work presents methods for solving Linear Programming problems, with emphasis on geometric solutions and on the Simplex Method, the algebraic form of solution. Various situations that fit such problems are shown, from general cases to more specific ones. Before eventually arriving at the solution of linear programming problems, the working ground of this type of optimization is built up: Convex Sets. Definitions and theorems essential to the understanding and development of these problems are presented, along with discussions of the efficiency of the methods applied. It is shown that there are cases to which the presented solutions do not apply, but most problems fit efficiently, or at least admit a good approximation.
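The kind of problem the dissertation treats can be shown in a few lines with SciPy's LP solver, which implements simplex-type methods. The numbers below are a textbook-style example chosen for illustration, not taken from the dissertation; note that `linprog` minimizes, so a maximization objective is negated.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize 3x + 5y subject to
#   x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0.
c = [-3, -5]                       # negate to turn max into min
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
```

The optimum sits at a vertex of the convex feasible polyhedron (x = 2, y = 6, objective 36), which is exactly why the geometry of convex sets is developed before the Simplex Method.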
25

Introdução à análise convexa: conjuntos e funções convexas / Introduction to convex analysis: convex sets and functions

Amorim, Ronan Gomes de 18 March 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This work presents the main ideas concerning convex sets and convex functions. Our aim is to treat, didactically, the main topics involved in convexity, together with the exploration of the mathematical concepts involved. To this end, we carried out a bibliographic review covering important theorems, lemmas, corollaries and propositions, addressed both to first-time readers and to those who want to work with applications arising from convexity. We hope this study may serve as a useful research source for students, teachers and researchers who wish to learn more about convex sets.
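The defining inequality of a convex function — f(tx + (1−t)y) ≤ t f(x) + (1−t) f(y) for all x, y in the domain and t in [0, 1] — can be checked numerically on a finite sample. The sketch below is a necessary-condition test on sampled points, not a proof of convexity, and the function name is an assumption.

```python
import numpy as np

def is_convex_function(f, xs, ts=np.linspace(0, 1, 11), tol=1e-9):
    """Test f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) over all sampled
    pairs (x, y) and weights t; returns False on any violation."""
    for x in xs:
        for y in xs:
            for t in ts:
                lhs = f(t * x + (1 - t) * y)
                rhs = t * f(x) + (1 - t) * f(y)
                if lhs > rhs + tol:
                    return False
    return True
```

For example, x ↦ x² passes on any sample, while sin fails on [0, 2π] because its chord from 0 to π lies below the graph.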
26

A novel Intra prediction for H.264/AVC using projections onto convex sets and direction-distance oriented prediction.

Jian, Zhi-zhong 25 August 2009 (has links)
H.264/AVC intra prediction is an efficient tool for reducing spatial redundancy by using multidirectional spatial prediction modes. In this work, a novel intra prediction method is designed to improve coding efficiency. First, we propose a direction-distance oriented prediction, which accounts for the distance between the predicted value and the reference samples according to the direction of the prediction mode. Second, we apply the concept of image restoration via projections onto convex sets (POCS) to intra prediction, adaptively filtering the surrounding reconstructed pixels to predict blocks. Experimental results show that an average bit-rate reduction of 0.75% and a PSNR gain of 0.119 dB are achieved.
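The POCS idea used above can be sketched in its generic form: alternately project onto two convex constraint sets until the iterate is consistent with both. The example below restores a 1-D signal that is (1) band-limited and (2) known at observed samples; it illustrates POCS itself, not the paper's H.264 intra predictor, and all names are assumptions.

```python
import numpy as np

def pocs_restore(obs, mask, bw, iters=100):
    """Projections Onto Convex Sets: alternately enforce band-limitation
    to the bw lowest DFT bins (a linear, hence convex, constraint) and
    agreement with the observed samples (also convex)."""
    n = len(obs)
    x = np.where(mask, obs, 0.0)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[bw:n - bw + 1] = 0.0          # projection 1: band-limit
        x = np.fft.ifft(X).real
        x[mask] = obs[mask]             # projection 2: data consistency
    return x
```

Because each projection onto a convex set is non-expansive, the distance to any signal satisfying both constraints never increases, which is what guarantees the method's stability.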
27

Tightening and blending subject to set-theoretic constraints

Williams, Jason Daniel 17 May 2012 (has links)
Our work applies techniques for blending and tightening solid shapes represented by sets. We require that the output contain one set and exclude a second set, and then we optimize the boundary separating the two sets. Working within that framework, we present Mason, tightening, tight hulls, tight blends, and the medial cover, with details for implementation. Mason uses opening and closing techniques from mathematical morphology to smooth small features. By contrast, tightening uses mean curvature flow to minimize the measure of the boundary separating the opening of the interior of the closed input set from the opening of its complement, guaranteeing a mean-curvature bound. The tight hull offers a significant generalization of the convex hull subject to volumetric constraints, introducing developable boundary patches connecting the constraints. Tight blends then use opening to replicate some of the behaviors of tightenings by applying tight hulls. The medial cover provides a means for adjusting the topology of a tight hull or tight blend, and it provides an implementation technique for two-dimensional polygonal inputs. Collectively, we offer applications for boundary estimation, three-dimensional solid design, blending, normal-field simplification, and polygonal repair. We consequently establish the value of blending and tightening as tools for solid modeling.
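The opening and closing operations that Mason builds on are standard mathematical morphology and are easy to demonstrate on a binary image: closing fills small indentations, opening removes small protrusions. The sketch below uses generic SciPy morphology on a toy shape, not the dissertation's operator; the shape and structuring element are assumptions.

```python
import numpy as np
from scipy import ndimage

shape = np.zeros((11, 11), dtype=bool)
shape[3:8, 3:8] = True              # a solid 5x5 square
noisy = shape.copy()
noisy[0, 0] = True                  # isolated speck (small protrusion)
noisy[5, 5] = False                 # one-pixel hole (small indentation)

structure = np.ones((3, 3), dtype=bool)
closed = ndimage.binary_closing(noisy, structure=structure)   # fill the hole
smoothed = ndimage.binary_opening(closed, structure=structure)  # drop the speck
```

Features smaller than the structuring element are smoothed away while the large square survives, which is the sense in which morphology removes "small features".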
28

Measurement of three-dimensional coherent fluid structure in high Reynolds number turbulent boundary layers

Clark, Thomas Henry January 2012 (has links)
The turbulent boundary layer is an aspect of fluid flow which dominates the performance of many engineering systems, yet the analytic solution of such flows is intractable for most applications. Our understanding of boundary layers is therefore limited by our ability to simulate and measure them. Tomographic Particle Image Velocimetry (TPIV) is a recently developed technique for direct measurement of fluid velocity within a 3D region, allowing new insight into the topological structure of turbulent boundary layers. Increasing the Reynolds number increases the range of scales at which turbulence exists, so a measurement technique must have a larger 'dynamic range' to fully resolve the flow. Tomographic PIV is currently limited in spatial dynamic range (which is also linked to the spatial and temporal resolution) by a high degree of noise, and results also contain significant bias error. This work proposes a modification of the technique to use more than two exposures in the PIV process, which (for four exposures) is shown to reduce random error by a factor of 2 to 7 depending on experimental setup parameters. The dynamic range increases correspondingly and can be doubled again in highly turbulent flows. Bias error is reduced by up to 40%. An alternative reconstruction approach is also presented, based on applying a reduction strategy (elimination of coefficients based on a first guess) to the tomographic weighting matrix W_ij; this facilitates a potentially significant increase in computational efficiency. Despite the achieved reduction in error, measurements contain non-zero divergence due to noise and sampling errors, and the same problem affects visualisation of topology and coherent fluid structures. Using Projections Onto Convex Sets (POCS), a framework of post-processing operators is implemented which includes a divergence-minimisation procedure and a scale-limited denoising strategy resilient to 'false' vectors contained in the data.
Finally, developed techniques are showcased by visualisation of topological information in the inner region of a high Reynolds Number boundary layer (δ+ = 1890, Reθ = 3650). Comments are made on the visible flow structures and tentative conclusions are drawn.
29

Estudo comparativo de passos espectrais e buscas lineares não monótonas / Comparative study of spectral steplengths and nonmonotone linear searches

Camargo, Fernando Taietti 07 March 2008 (has links)
The Spectral Gradient method, introduced by Barzilai and Borwein and analyzed by Raydan for unconstrained minimization, is a simple method whose performance is comparable to that of traditional methods such as conjugate gradients. Since the introduction of the method, and of its extension to minimization over convex sets, various combinations of different spectral steplengths and different nonmonotone line searches have been introduced. From the numerical results presented in many studies it is not possible to infer whether there are significant differences in the performance of the various methods. Nor is it clear whether nonmonotone line searches are relevant as a tool in themselves or whether, in fact, they are useful only to keep the method as similar as possible to the original method of Barzilai and Borwein. The objective of this study is to compare the methods recently introduced as combinations of different nonmonotone line searches and different spectral steplengths, to find the best combination and, from there, to evaluate the numerical performance of the method.
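One representative combination from the family compared here is the Barzilai–Borwein steplength paired with the nonmonotone line search of Grippo, Lampariello and Lucidi (accept a step if the new value beats the maximum of the last M function values, minus a sufficient-decrease term). The sketch below is a minimal illustration under assumed parameter names, not the thesis's code.

```python
import numpy as np

def spectral_gradient(f, grad, x0, M=10, max_iter=500, tol=1e-8,
                      gamma=1e-4, lam_min=1e-10, lam_max=1e10):
    """Barzilai-Borwein spectral gradient with GLL nonmonotone line search:
    accept step a if f(x - a g) <= max(last M f-values) - gamma*a*||g||^2."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    lam = 1.0                              # initial spectral steplength
    f_hist = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        a = lam
        fmax = max(f_hist[-M:])            # nonmonotone reference value
        while f(x - a * g) > fmax - gamma * a * (g @ g):
            a *= 0.5                       # backtracking
        x_new = x - a * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        lam = np.clip(s @ s / sy, lam_min, lam_max) if sy > 0 else 1.0
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x
```

Swapping the `s@s / s@y` steplength for its `s@y / y@y` variant, or the max-based reference for Zhang–Hager's averaged one, produces the other combinations the study compares.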
30

Reduced-data magnetic resonance imaging reconstruction methods: constraints and solutions.

Hamilton, Lei Hou 11 August 2011 (has links)
Imaging speed is very important in magnetic resonance imaging (MRI), especially in dynamic cardiac applications, which involve respiratory and heart motion. With the introduction of reduced-data MR imaging methods, increased acquisition speed has become possible without requiring a stronger gradient system. But these reduced-data imaging methods pay a price for higher imaging speed: a signal-to-noise ratio (SNR) penalty, reduced resolution, or a combination of both. Many methods sacrifice edge information in favor of SNR gain, which is undesirable for applications requiring accurate detection of myocardial boundaries. The central goal of this thesis is to develop novel reduced-data imaging methods that improve reconstructed image performance. This thesis presents a novel reduced-data imaging method, PINOT (Parallel Imaging and NOquist in Tandem), to accelerate MR imaging. As illustrated by a variety of simulated and real cardiac MRI data experiments, PINOT preserves edge details, with the flexibility of improving SNR by regularization. Another contribution is to exploit the data redundancy from parallel imaging, reduced field-of-view (rFOV), and partial Fourier methods. A Gerchberg Reduced Iterative System (GRIS), implemented with the Gerchberg-Papoulis (GP) iterative algorithm, is introduced. Under the GRIS, which utilizes a temporal band-limitation constraint in the image reconstruction, a variant of Noquist called iNoquist (iterative Noquist) is proposed. Utilizing a different source of prior information, iNoquist is first combined with the partial-Fourier technique (phase-constrained iNoquist) and then further integrated with parallel imaging methods (PINOT-GRIS) to achieve additional acceleration gains.
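The parallel-imaging half of such methods can be illustrated with the classic SENSE-style unfolding: with R-fold undersampling, each aliased pixel is a coil-sensitivity-weighted sum of R true pixels, recovered by solving a small least-squares system per pixel. This 1-D sketch shows generic parallel imaging, not PINOT itself; the function name and coil-sensitivity model are assumptions.

```python
import numpy as np

def sense_unfold(aliased, sens, R=2):
    """1-D SENSE unfolding: aliased has shape (n_coils, n/R), sens has
    shape (n_coils, n). For each aliased pixel p, solve the n_coils x R
    system whose columns are the sensitivities at the R aliasing pixels."""
    n_coils, n_alias = aliased.shape
    n = n_alias * R
    img = np.zeros(n, dtype=complex)
    for p in range(n_alias):
        locs = [p + j * n_alias for j in range(R)]   # pixels folding onto p
        E = sens[:, locs]                            # encoding matrix
        sol, *_ = np.linalg.lstsq(E, aliased[:, p], rcond=None)
        img[locs] = sol
    return img
```

The unfolding only works where the per-pixel encoding matrix is well conditioned, which is why coil-sensitivity geometry limits achievable acceleration.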
