  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Deconvolution in Random Effects Models via Normal Mixtures

Litton, Nathaniel A. August 2009 (has links)
This dissertation describes a minimum distance method for density estimation when the variable of interest is not directly observed. It is assumed that the underlying target density can be well approximated by a mixture of normals. The method compares a density estimate of observable data with a density of the observable data induced from assuming the target density can be written as a mixture of normals. The goal is to choose the parameters in the normal mixture that minimize the distance between the density estimate of the observable data and the induced density from the model. The method is applied to the deconvolution problem to estimate the density of $X_{i}$ when the variable $Y_{i}=X_{i}+Z_{i}$, $i=1,\ldots ,n$, is observed, and the density of $Z_{i}$ is known. Additionally, it is applied to a location random effects model to estimate the density of $Z_{ij}$ when the observable quantities are $p$ data sets of size $n$ given by $X_{ij}=\alpha_{i}+\gamma Z_{ij},~i=1,\ldots ,p,~j=1,\ldots ,n$, where the densities of $\alpha_{i}$ and $Z_{ij}$ are both unknown. The performance of the minimum distance approach in the measurement error model is compared with the deconvoluting kernel density estimator of Stefanski and Carroll (1990). In the location random effects model, the minimum distance estimator is compared with the explicit characteristic function inversion method from Hall and Yao (2003). In both models, the methods are compared using simulated and real data sets. In the simulations, performance is evaluated using an integrated squared error criterion. Results indicate that the minimum distance methodology is comparable to the deconvoluting kernel density estimator and outperforms the explicit characteristic function inversion method.
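The minimum distance idea for the measurement error model can be sketched in a few lines: fit the parameters of a normal mixture for $X$ by minimizing the integrated squared error between a kernel density estimate of the observed $Y$ and the mixture density induced for $Y = X + Z$. The sketch below uses simulated data, a two-component mixture, and a generic optimizer; it illustrates the approach under these assumptions and is not the dissertation's implementation.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
sigma_z = 0.5                           # density of Z is assumed known
x = rng.normal(1.0, 1.0, 500)           # unobserved X (simulated here)
y = x + rng.normal(0.0, sigma_z, 500)   # observable Y = X + Z

grid = np.linspace(y.min(), y.max(), 200)
kde_vals = gaussian_kde(y)(grid)        # density estimate of the observable data

def induced_density(w, mu, sig):
    # If X is a normal mixture, Y = X + Z is the same mixture with
    # each component variance inflated by sigma_z**2.
    return sum(wk * norm.pdf(grid, mk, np.sqrt(sk**2 + sigma_z**2))
               for wk, mk, sk in zip(w, mu, sig))

def objective(theta):
    # theta = (logit of weight 1, mu1, mu2, log sig1, log sig2)
    w1 = 1.0 / (1.0 + np.exp(-theta[0]))
    diff = kde_vals - induced_density([w1, 1.0 - w1],
                                      theta[1:3], np.exp(theta[3:5]))
    return trapezoid(diff**2, grid)     # integrated squared error criterion

theta0 = np.array([0.0, 0.0, 2.0, 0.0, 0.0])
res = minimize(objective, theta0, method="Nelder-Mead",
               options={"maxiter": 2000})
```

The optimizer returns mixture parameters whose induced density best matches the kernel estimate in integrated squared error, the same criterion the simulations above use for evaluation.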
12

Minimum disparity inference for discrete ranked set sampling data

Alexandridis, Roxana Antoanela 12 September 2005 (has links)
No description available.
13

Evaluating Long-Term Land Cover Changes for Malheur Lake, Oregon Using ENVI and ArcGIS

Woods, Ryan Joseph 01 December 2015 (has links)
Land cover change over time can be a useful indicator of variations in a watershed, such as the patterns of drought in an area. I present a case study using remotely sensed images from Landsat satellites over a 30-year period to generate classifications representing land cover categories, which I use to quantify land cover change in the watershed areas that contribute to Malheur, Mud, and Harney Lakes. I selected images about every 4 to 6 years, from late June to late July, to capture peak vegetation growth and to avoid cloud cover. Complete coverage of the watershed required selecting, for each chosen year, an image that included the lakes, an image to the north, and an image to the west of the lakes. I used the watershed areas defined by the HUC-8 shapefiles; the relevant watersheds are the Harney-Malheur Lakes, Donner und Blitzen, Silver, and Silvies watersheds. To summarize the land cover classes that could be discriminated from the Landsat images in the area, I used an unsupervised classification algorithm, the Iterative Self-Organizing Data Analysis Technique (ISODATA), to identify distinct classes among the pixels. I then used the ISODATA results, together with visual inspection of calibrated Landsat images and Google Earth imagery, to create Regions of Interest (ROIs) with the following land cover classes: Water, Shallow Water, Vegetation, Dark Vegetation, Salty Area, and Bare Earth. The ROIs were used in three supervised classification algorithms (maximum likelihood, minimum distance, and Mahalanobis distance) to classify land cover for the area. Using ArcGIS, I removed most of the misclassified area from the classified images with the help of the Landsat CDR, combined the main, north, and west images, and then extracted the watersheds from the combined image.
The area in acres for each land cover class and watershed was computed and summarized in graphs and tables. After comparing the three supervised classifications using the amount of area classified into each category, the normalized area in each category, and the raster datasets, I determined that the minimum distance classification algorithm produced the most accurate land cover classification. I investigated the correlation of the land cover classes with average precipitation, average discharge, average summer high temperature, and drought indicators. For the most part, the land cover changes correlate with the weather; however, land use changes, groundwater, and error in the land cover classes may account for the instances of discrepancy. The correlations of the land cover classes with the weather data, except for Dark Vegetation and Bare Earth, are statistically significant. This study shows that Landsat imagery contains the information needed to create land cover classifications and track their changes over time. These results can be useful in hydrological studies and can be applied to models.
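The minimum distance rule that performed best here is simple: assign each pixel to the class whose mean spectral signature is nearest in Euclidean distance. A minimal sketch follows; the two-band signatures are made-up numbers for illustration, not values from this study.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class whose mean spectral signature
    is nearest in Euclidean distance (the minimum distance rule)."""
    # pixels: (n, bands); class_means: (k, bands)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Hypothetical 2-band mean signatures for Water (class 0) and Vegetation (class 1)
means = np.array([[10.0, 5.0],
                  [80.0, 120.0]])
px = np.array([[12.0, 6.0],      # close to the Water signature
               [75.0, 110.0]])   # close to the Vegetation signature
labels = minimum_distance_classify(px, means)  # → array([0, 1])
```

Maximum likelihood and Mahalanobis distance refine this rule by weighting the distance with each class's covariance; minimum distance ignores covariance entirely, which evidently sufficed for these scenes.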
14

Code design based on metric-spectrum and applications

Papadimitriou, Panayiotis D. 17 February 2005 (has links)
We introduced nested search methods to design (n, k) block codes for arbitrary channels by optimizing an appropriate metric spectrum in each iteration. For a given k, the methods start with a good high-rate code, say k/(k + 1), and successively design lower-rate codes down to rate k/2^k, corresponding to a Hadamard code. Using a full search for small binary codes, we found that optimal or near-optimal codes of increasing length can be obtained in a nested manner by utilizing Hadamard matrix columns. The codes are linear if the Hadamard matrix is linear and non-linear otherwise. The design methodology was extended to generic complex codes by utilizing columns of newly derived or existing unitary codes. The inherent nested nature of the codes makes them ideal for progressive transmission. Extensive comparisons to metric bounds and to previously designed codes show the optimality or near-optimality of the new codes, designed for the fading and the additive white Gaussian noise (AWGN) channels. It was also shown that linear codes can be optimal, or at least meet the metric bounds; one example is the systematic pilot-based code of rate k/(k + 1), which was proved to meet the lower bound on the maximum cross-correlation. Further, the method was generalized so that good codes for arbitrary channels can be designed given the corresponding metric or the pairwise error probability. In synchronous multiple-access schemes it is common to use unitary block codes to transmit the multiple users' information, especially in the downlink. In this work we suggest the use of newly designed non-unitary block codes, which increase throughput efficiency while performance is shown not to be substantially sacrificed. The non-unitary codes are again developed through suitable nested searches. In addition, new multiple-access codes are introduced that optimize certain criteria, such as the sum-rate capacity.
Finally, the introduction of asymptotically optimum convolutional codes for a given constraint length dramatically reduces the search space for good convolutional codes of a given asymptotic performance, and the consequences for coded code-division multiple access (CDMA) system design are highlighted.
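The Hadamard building block behind these nested designs can be illustrated directly: the rows of a Sylvester-type Hadamard matrix, mapped from ±1 entries to bits, form a binary code in which every pair of codewords differs in exactly half the positions. This is only the standard construction the abstract draws on, not the nested search procedure itself.

```python
import numpy as np
from itertools import combinations

def sylvester_hadamard(m):
    """Sylvester construction: H_{2n} = [[H, H], [H, -H]], starting from [1]."""
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(3)        # 8 x 8 matrix with entries ±1
code = (H < 0).astype(int)       # rows become binary codewords of length 8

# Rows of a Hadamard matrix are pairwise orthogonal, so any two codewords
# agree in n/2 positions and differ in the other n/2.
dists = [int(np.sum(a != b)) for a, b in combinations(code, 2)]
d_min = min(dists)               # = n/2 = 4 for n = 8
```

Taking columns of such a matrix as the nesting progresses is what lets a rate-k/(k+1) design shrink, step by step, to the rate k/2^k Hadamard code.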
15

Sobre codigos hermitianos generalizados / On generalized hermitian codes

Sepúlveda Castellanos, Alonso 21 February 2008 (has links)
Advisor: Fernando Eduardo Torres Orihuela / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: We study Goppa codes (GH codes) based on certain algebraic function fields with many rational places. These codes generalize the well-known Hermitian codes, so we may expect them to have good parameters. Bulygin (IEEE Trans. Inform. Theory 52 (10), 4664-4669 (2006)) initiated the study of GH codes; while he considered only even characteristic, our work applies in any characteristic. In any case, our work was strongly influenced by Bulygin's. Among our results on GH codes: we compute true minimum distances, in particular improving Bulygin's results; we find bounds on the generalized Hamming weights and give an algorithm applying these computations to cryptography; we compute a subgroup of automorphisms; and we consider codes over certain subfields of the fields used to construct GH codes. / Doctorate / Algebra (Algebraic Geometry) / Doctor in Mathematics
16

Algebraic Soft- and Hard-Decision Decoding of Generalized Reed--Solomon and Cyclic Codes

Zeh, Alexander 02 September 2013 (has links) (PDF)
Two challenges in algebraic coding theory are addressed in this thesis. The first is the efficient (hard- and soft-decision) decoding of generalized Reed--Solomon codes over finite fields in the Hamming metric. The motivation for solving this more than 50-year-old problem was renewed by Guruswami and Sudan's discovery, at the end of the 20th century, of a polynomial-time interpolation-based algorithm that decodes up to the Johnson radius. The first algebraic decoding methods for generalized Reed--Solomon codes relied on a key equation, that is, a polynomial description of the decoding problem. Reformulating the interpolation-based approach in terms of key equations is a central theme of this thesis. This contribution covers several aspects of key equations for hard-decision decoding as well as for the soft-decision variant of the Guruswami--Sudan algorithm for generalized Reed--Solomon codes. For all of these variants an efficient decoding algorithm is proposed. The second topic of this thesis is the formulation of, and decoding up to, certain lower bounds on the minimum distance of cyclic linear block codes. The main technique is the embedding of a given cyclic code into a (generalized) cyclic product code. We therefore give a detailed description of cyclic product codes and generalized cyclic product codes. We prove several lower bounds on the minimum distance of linear cyclic codes that improve or generalize known bounds, and we give quadratic-time error/erasure decoding algorithms up to these bounds.
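As background for the decoding radius discussion, generalized Reed--Solomon codes are maximum distance separable: encoding a message as evaluations of its polynomial gives minimum distance d = n - k + 1. Below is a toy exhaustive check over GF(7) (illustrative only; the thesis concerns efficient decoding algorithms, not brute-force enumeration).

```python
from itertools import product

p, n, k = 7, 6, 2                 # tiny RS code over the prime field GF(7)
points = list(range(1, n + 1))    # six distinct nonzero evaluation points

def encode(msg):
    # Evaluate the message polynomial m(x) = msg[0] + msg[1]*x at each point.
    return [sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p
            for a in points]

codewords = [encode(m) for m in product(range(p), repeat=k)]

# The code is linear, so the minimum distance equals the minimum weight,
# i.e. the minimum distance to the all-zero codeword (encode of (0, 0)).
d_min = min(sum(x != y for x, y in zip(cw, codewords[0]))
            for cw in codewords[1:])
```

A nonzero polynomial of degree below k has at most k - 1 roots among the n points, so every nonzero codeword has weight at least n - k + 1; here d_min comes out to 6 - 2 + 1 = 5, matching the MDS bound that the Johnson decoding radius is measured against.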
17

Códigos parametrizados afins / Parameterized affine codes

Oliveira, Fabrício Alves 27 February 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this work, we present a special class of linear codes: parameterized affine codes. We show that these codes are easy to construct and that, given a parameterized affine code, one can easily obtain a projective parameterized code equivalent to it. We also studied the theoretical foundations of the work, such as the theory of Groebner bases, the footprint of an ideal, and some topics of algebraic geometry and commutative algebra. The main goal of this work is to obtain the basic parameters (length, dimension, and minimum distance) of parameterized affine codes and to relate them to projective parameterized codes, as done in [7]. We finish by applying the theory of Groebner bases to the footprint of a certain ideal in order to obtain the basic parameters of the parameterized code over an affine torus. / Master in Mathematics
18

Reticulados q-ários e algébricos / Q-ary and algebraic lattices

Jorge, Grasiele Cristiane, 1983- 19 August 2018 (has links)
Advisor: Sueli Irene Rodrigues Costa / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: The use of codes and lattices in information theory and in so-called "post-quantum cryptography" has been increasingly explored. In this work we studied topics related to both aspects. Lattices were analyzed via the Euclidean and sum metrics. For the Euclidean metric, we studied an algorithm that searches for a minimum trellis of a lattice with an orthogonal sublattice. In the two-dimensional case it was possible to characterize all orthogonal sublattices of any rational lattice. In the study of lattices via the sum metric, we worked with two relations between codes and lattices known as "Construction A" and "Construction B".
We generalized Construction B to a class of q-ary codes... Note: The complete abstract is available with the full electronic document / Doctorate / Mathematics / Doctor in Mathematics
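Construction A, one of the two code-to-lattice relations mentioned above, lifts a q-ary code to a lattice: an integer vector is a lattice point iff its coordinatewise residues mod q form a codeword. A small sketch with the length-4 single parity-check binary code (an example code chosen for illustration; the thesis treats a more general q-ary setting):

```python
from itertools import product

q = 2
# Single parity-check (even-weight) code of length 4 over GF(2); d_min = 2.
codewords = {c for c in product(range(q), repeat=4) if sum(c) % q == 0}

def in_lattice(v):
    """Construction A: v in Z^4 is a lattice point iff (v mod q) is a codeword."""
    return tuple(x % q for x in v) in codewords

# Enumerate short vectors; the minimum squared norm of a Construction A
# lattice is min(d_min, q**2), which is 2 for this code.
norms = [sum(x * x for x in v)
         for v in product(range(-q, q + 1), repeat=4)
         if any(v) and in_lattice(v)]
min_norm = min(norms)
```

The two ways to be short, a ±1 pattern supported on a minimum-weight codeword or a single coordinate equal to q, are exactly why the code's minimum distance controls the lattice geometry.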
19

Reticulados algébricos : abordagem matricial e simulações / Algebraic lattices : matrix approach and simulations

Ferrari, Agnaldo José, 1969- 20 August 2018 (has links)
Advisor: Sueli Irene Rodrigues Costa / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: In this work we approach lattice constructions using properties of algebraic number theory. One focus is on the construction of some well-known lattices via ideal lattices, through a matrix and algorithmic approach... Note: The complete abstract is available with the full electronic document / Doctorate / Applied Mathematics / Doctor in Applied Mathematics
20

Non-Invasive Skin Cancer Classification from Surface Scanned Lesion Images

Dhinagar, Nikhil J. 12 June 2013 (has links)
No description available.
