About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Nonnegative matrix factorization for clustering

Kuang, Da 27 August 2014 (has links)
This dissertation shows that nonnegative matrix factorization (NMF) can be extended to a general and efficient clustering method. Clustering is one of the fundamental tasks in machine learning. It is useful for unsupervised knowledge discovery in a variety of applications such as text mining and genomic analysis. NMF is a dimension reduction method that approximates a nonnegative matrix by the product of two lower-rank nonnegative matrices, and has shown great promise as a clustering method when a data set is represented as a nonnegative data matrix. However, challenges to the widespread use of NMF as a clustering method lie in its correctness and efficiency: first, we need to know why and when NMF can detect the true clusters and be guaranteed to deliver good clustering quality; second, existing algorithms for computing NMF are expensive and often take longer than other clustering methods. We show that the original NMF can be improved in both respects in the context of clustering. Our new NMF-based clustering methods can achieve better clustering quality and run orders of magnitude faster than the original NMF and other clustering methods. Like other clustering methods, NMF places an implicit assumption on the cluster structure. Thus, the success of NMF as a clustering method depends on whether the representation of data in a vector space satisfies that assumption. Our approach to extending the original NMF to a general clustering method is to switch from the vector space representation of data points to a graph representation. The new formulation, called Symmetric NMF, takes a pairwise similarity matrix as input and can be viewed as a graph clustering method. We evaluate this method on document clustering and image segmentation problems and find that it achieves better clustering accuracy. In addition, for the original NMF, it is difficult but important to choose the right number of clusters. 
We show that the widely-used consensus NMF in genomic analysis for choosing the number of clusters has critical flaws and can produce misleading results. We propose a variation of the prediction strength measure, arising from statistical inference, to evaluate the stability of clusters and select the right number of clusters. Our measure shows promising performance in simulation experiments. Large-scale applications bring substantial efficiency challenges to existing algorithms for computing NMF. An important example is topic modeling, where users want to uncover the major themes in a large text collection. Our strategy for accelerating NMF-based clustering is to design algorithms that better suit the computer architecture and exploit the computing power of parallel platforms such as graphics processing units (GPUs). A key observation is that applying rank-2 NMF, which partitions a data set into two clusters, in a recursive manner is much faster than applying the original NMF to obtain a flat clustering. We take advantage of a special property of rank-2 NMF and design an algorithm that runs faster than existing algorithms due to continuous memory access. Combined with a criterion to stop the recursion, our hierarchical clustering algorithm runs significantly faster and achieves even better clustering quality than existing methods. Another bottleneck of NMF algorithms, which is also a common bottleneck in many other machine learning applications, is multiplying a large sparse data matrix with a tall-and-skinny dense matrix. We use GPUs to accelerate this routine for sparse matrices with an irregular sparsity structure. Overall, our algorithm shows significant improvement over popular topic modeling methods such as latent Dirichlet allocation, and runs more than 100 times faster on data sets with millions of documents.
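The basic clustering-by-NMF idea can be illustrated with standard Lee-Seung multiplicative updates, assigning each data point to the row of H with the largest coefficient. This is a minimal sketch of the general approach, not the faster rank-2 algorithms or the Symmetric NMF formulation developed in the dissertation:

```python
import numpy as np

def nmf_cluster(X, k, n_iter=200, seed=0):
    """Factor X ~= W @ H (all entries nonnegative) with multiplicative
    updates, then label each column of X by the dominant row of H."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(n_iter):
        # Lee-Seung updates for the Frobenius-norm objective
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return H.argmax(axis=0)  # one cluster label per data point (column)

# toy nonnegative data: two obvious column clusters
X = np.zeros((4, 6))
X[:2, :3] = 1.0
X[2:, 3:] = 1.0
labels = nmf_cluster(X, k=2)
```

The argmax readout is what makes the low-rank factor H act as a soft cluster-membership matrix.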
152

Propriété de Bogomolov pour les modules de Drinfeld à multiplications complexes

Bauchère, Hugues 16 September 2013 (has links) (PDF)
Write A := Fq[T] and k := Fq(T). Let φ be a Drinfeld A-module defined over the algebraic closure of k, and let h be its canonical height. Let K/k be a finite extension and L/K an infinite Galois extension. By analogy with the terminology used by E. Bombieri and U. Zannier, we say that L has property (B,φ) if there exists a strictly positive constant bounding h from below on L outside the torsion points of φ. S. David and A. Pacheco showed that for every Drinfeld module φ, the abelian closure of K has property (B,φ). In this thesis we generalize this result to the setting of Drinfeld modules with complex multiplication.
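Restated in symbols (a paraphrase of the abstract's definition, not a formula quoted from the thesis):

```latex
% L has property (B, \varphi) when the canonical height \hat{h}
% is uniformly bounded below on L away from the torsion of \varphi:
\exists\, c > 0 \ \text{ such that } \ \hat{h}(\alpha) \ge c
\quad \text{for all } \alpha \in L \setminus \varphi_{\mathrm{tors}}.
```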
153

Operators defined by conditional expectations and random measures

Rambane, Daniel Thanyani January 2004 (has links)
This study revolves around operators defined by conditional expectations and operators generated by random measures. Studies of operators in function spaces defined by conditional expectations first appeared in the mid-1950s with S.-T.C. Moy [22] and S. Sidak [26]. N. Kalton studied them in the setting of Lp-spaces for 0 < p < 1 in [15, 13] and in L1-spaces [14], while W. Arveson [5] studied them in L2-spaces. Their averaging properties were studied by P.G. Dodds, C.B. Huijsmans and B. de Pagter in [7] and by C.B. Huijsmans and B. de Pagter in [10]. A. Lambert [17] studied their relationship with multiplication operators in C*-modules. It was shown by J.J. Grobler and B. de Pagter [8] that the partial integral operators studied by A.S. Kalitvin et al. in [2, 4, 3, 11, 12], and the special cases of kernel operators studied, inter alia, by A.R. Schep in [25], are special cases of conditional expectation operators. On the other hand, operators generated by random measures, or pseudo-integral operators, were studied by A. Sourour [28, 27] and L.W. Weis [29, 30] in the late 1970s and early 1980s, building on the studies of W. Arveson [5] and N. Kalton [14, 15]. In this thesis we extend the work of J.J. Grobler and B. de Pagter [8] on Multiplication Conditional Expectation-representable (MCE-representable) operators. We also generalize the result of A. Sourour [27] and show that order continuous linear maps between ideals of almost everywhere finite measurable functions on σ-finite measure spaces are MCE-representable. This fact enables us to easily deduce that sums and compositions of MCE-representable operators are again MCE-representable. We also show that operators generated by random measures are MCE-representable. The first chapter gathers the definitions and introduces the notions and concepts that are used throughout. 
In particular, we introduce Riesz spaces and operators therein, Riesz and Boolean homomorphisms, conditional expectation operators, and kernel and absolute T-kernel operators. In Chapter 2 we look at MCE-operators, for which we give a definition different from that given by J.J. Grobler and B. de Pagter in [8], but which we show to be equivalent. Chapter 3 involves random measures and the operators they generate. We solve (positively) the problem posed by A. Sourour in [28] about the relationship between the lattice properties of operators generated by random measures and the lattice properties of their generating random measures. We show that the total variation of a random signed measure representing an order bounded operator T, being the difference of two random measures, is again a random measure and represents |T|. We also show that the set of all operators generated by a random measure is a band in the Riesz space of all order bounded operators. In Chapter 4 we investigate the relationship between operators generated by random measures and MCE-representable operators. It was shown by A. Sourour in [28, 27] that every order bounded order continuous linear operator acting between ideals of almost everywhere measurable functions is generated by a random measure, provided that the measure spaces involved are standard measure spaces. We prove an analogue of this theorem for the general case where the underlying measure spaces are σ-finite. We also, in this general setting, prove that every order continuous linear operator is MCE-representable. This rather surprising result enables us to easily show that sums, products and compositions of MCE-representable operators are again MCE-representable. Key words: Riesz spaces, conditional expectations, multiplication conditional expectation-representable operators, random measures. / Thesis (Ph.D. (Mathematics))--North-West University, Potchefstroom Campus, 2004.
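On a finite measure space the conditional expectation with respect to a finite partition is just weighted block-averaging, which makes its defining averaging property easy to check numerically. A toy sketch (the data and partition are invented for illustration):

```python
import numpy as np

def cond_exp(f, blocks, w):
    """Conditional expectation on a finite measure space: replace f on
    each block of the generating partition by its w-weighted average."""
    g = np.empty_like(f, dtype=float)
    for b in blocks:
        g[b] = np.average(f[b], weights=w[b])
    return g

w = np.array([0.1, 0.2, 0.3, 0.4])   # measure of each atom
blocks = [[0, 1], [2, 3]]            # generating partition
f = np.array([1.0, 5.0, 2.0, 10.0])
g = np.array([3.0, 1.0, 4.0, 2.0])

Ef = cond_exp(f, blocks, w)
# idempotence: E(E(f)) = E(f)
assert np.allclose(cond_exp(Ef, blocks, w), Ef)
# averaging property: E(f * E(g)) = E(f) * E(g)
assert np.allclose(cond_exp(f * cond_exp(g, blocks, w), blocks, w),
                   Ef * cond_exp(g, blocks, w))
```

The averaging identity holds exactly here because E(g) is constant on each block, so it factors out of the block average.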
154

Low Power and Low complexity Constant Multiplication using Serial Arithmetic

Johansson, Kenny January 2006 (has links)
The main issue in this thesis is to minimize the energy consumption per operation for the arithmetic parts of DSP circuits, such as digital filters. More specifically, the focus is on single- and multiple-constant multiplication using serial arithmetic. The possibility to reduce the complexity and energy consumption is investigated. The main difference between serial and parallel arithmetic, which is of interest here, is that a shift operation in serial arithmetic requires a flip-flop, while it can be hardwired in parallel arithmetic. The possible ways to connect a certain number of adders are limited, i.e., for single-constant multiplication, the number of possible structures is limited for a given number of adders. Furthermore, for each structure there is a limited number of ways to place the shift operations. Hence, it is possible to find the best solution for each constant, in terms of complexity, by an exhaustive search. Methods to bound the search space are discussed. We show that it is possible to save both adders and shifts compared to CSD serial/parallel multipliers. Besides complexity, throughput is also considered by defining structures where the critical path, for bit-serial arithmetic, is no longer than one full adder. Two algorithms for the design of multiple-constant multiplication using serial arithmetic are proposed. The difference between the proposed design algorithms is the trade-off between adders and shifts. For both algorithms, the total complexity is decreased compared to an algorithm for parallel arithmetic. The impact of the digit-size, i.e., the number of bits to be processed in parallel, in FIR filters is studied. Two proposed multiple-constant multiplication algorithms are compared to an algorithm for parallel arithmetic and separate realization of the multipliers. The results provide some guidelines for designing low power multiple-constant multiplication algorithms for FIR filters implemented using digit-serial arithmetic. 
A method for computing the number of logic switchings in bit-serial constant multipliers is proposed. The average switching activity in all possible multiplier structures with up to four adders is determined. Hence, it is possible to reduce the switching activity by selecting the best structure for any given constant. In addition, a simplified method for computing the switching activity in constant serial/parallel multipliers is presented. Here it is possible to reduce the energy consumption by selecting the best signed-digit representation of the constant. Finally, a data dependent switching activity model is proposed for ripple-carry adders. For most applications, the input data is correlated, while previous estimations assumed uncorrelated data. Hence, the proposed method may be included in high-level power estimation to obtain more accurate estimates. In addition, the model can be used as a cost function in multiple-constant multiplication algorithms. A modified model based on word-level statistics, which is accurate in estimating the switching activity when real world signals are applied, is also presented. / Report code: LiU-Tek-Lic-2006:30.
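For context, the CSD baseline mentioned above can be sketched in a few lines: canonical signed-digit recoding rewrites a constant with digits in {-1, 0, +1} and no two adjacent nonzeros, so a plain CSD constant multiplier needs one adder/subtractor per nonzero digit beyond the first. This is only the baseline recoding, not the exhaustive structure search or the serial-arithmetic cost models of the thesis:

```python
def csd_digits(c):
    """Canonical signed-digit recoding of a positive integer,
    least-significant digit first; digits are -1, 0 or +1, and
    no two adjacent digits are both nonzero."""
    digits = []
    while c:
        if c & 1:
            d = 2 - (c & 3)   # +1 when c % 4 == 1, -1 when c % 4 == 3
            c -= d            # c - d is divisible by 4, forcing a 0 next
        else:
            d = 0
        digits.append(d)
        c >>= 1
    return digits

def adder_cost(c):
    """Adders/subtractors needed by a plain CSD constant multiplier."""
    return sum(1 for d in csd_digits(c) if d) - 1

# 93 = 1011101b has five ones (4 adders in binary), but CSD recodes it
# as 128 - 32 - 4 + 1: four nonzero digits, hence only 3 adders
assert adder_cost(93) == 3
```

Each digit position still corresponds to one shift, which is exactly where serial arithmetic pays a flip-flop per shift while parallel arithmetic gets it for free.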
155

Development of a Flat Panel Detector with Avalanche Gain for Interventional Radiology

Wronski, Maciej 03 March 2010 (has links)
A number of interventional procedures such as cardiac catheterization, angiography and the deployment of endovascular devices are routinely performed using x-ray fluoroscopy. To minimize the patient’s exposure to ionizing radiation, each fluoroscopic image is acquired using a very low x-ray exposure (~1 μR at the detector). At such an exposure, most semiconductor-based digital flat panel detectors (FPD) are not x-ray quantum noise limited (QNL) due to the presence of electronic noise, which substantially degrades their imaging performance. The goal of this thesis was to investigate how an FPD based on amorphous selenium (a-Se) with internal avalanche multiplication gain could be used for QNL fluoroscopic imaging at the lowest clinical exposures while satisfying all of the requirements of an FPD for interventional radiology. Towards this end, it was first determined whether a-Se can reliably provide avalanche multiplication gain in the solid state. An experimental method was developed which enabled the application of sufficiently large electric field strengths across the a-Se. This method resulted in avalanche gains as high as 10000 at an applied field of 105 V/μm using optical excitation. This was the first time such high avalanche gains had been reported in a solid-state detector based on an amorphous material. Secondly, it was investigated how the solid-state a-Se avalanche detector could be used to image x-rays at diagnostic radiographic energies (~75 kVp). A dual-layered direct-conversion FPD architecture was proposed. It consisted of an x-ray drift region and a charge avalanche multiplication region, and was found to eliminate depth-dependent gain fluctuation noise. It was shown that electric field strength non-uniformities in the a-Se do not degrade the detective quantum efficiency (DQE). Lastly, it was determined whether the solid-state a-Se avalanche detector satisfies all of the requirements of interventional radiology. 
Experimental results have shown that the total noise produced by the detector is negligible and that QNL operation at the lowest fluoroscopic exposures is indeed possible without any adverse effects occurring at much larger radiographic exposures. In conclusion, no fundamental obstacles were found preventing the use of avalanche a-Se in next-generation solid-state QNL FPDs for use in interventional radiology.
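The benefit of avalanche gain can be seen in a back-of-the-envelope SNR model for a single pixel: gain multiplies both the signal and the quantum noise, so it leaves the quantum-limited SNR untouched while swamping the fixed electronic readout noise. A toy model with invented numbers, ignoring avalanche excess noise and other secondary noise sources:

```python
import math

def pixel_snr(n_quanta, gain, readout_noise_e, w=1000.0):
    """SNR of one pixel: n_quanta absorbed x-ray quanta (Poisson),
    each yielding ~w collected electrons, avalanche-multiplied by
    `gain`, with additive readout noise in electrons RMS."""
    signal = gain * w * n_quanta
    quantum_noise = gain * w * math.sqrt(n_quanta)  # Poisson fluctuation
    return signal / math.hypot(quantum_noise, readout_noise_e)

# at a very low fluoroscopic exposure (say 25 quanta per pixel) with a
# noisy readout, unity gain sits far below the quantum limit sqrt(25) = 5
low = pixel_snr(25, gain=1, readout_noise_e=50_000)
high = pixel_snr(25, gain=20, readout_noise_e=50_000)
```

As the gain grows, the SNR approaches sqrt(n_quanta) from below, which is the quantum-noise-limited operation the thesis targets.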
156

Efficient Computation with Sparse and Dense Polynomials

Roche, Daniel Steven January 2011 (has links)
Computations with polynomials are at the heart of any computer algebra system and also have many applications in engineering, coding theory, and cryptography. Generally speaking, the low-level polynomial computations of interest can be classified as arithmetic operations, algebraic computations, and inverse symbolic problems. New algorithms are presented in all these areas which improve on the state of the art in both theoretical and practical performance. Traditionally, polynomials may be represented in a computer in one of two ways: as a "dense" array of all possible coefficients up to the polynomial's degree, or as a "sparse" list of coefficient-exponent tuples. In the latter case, zero terms are not explicitly written, giving a potentially more compact representation. In the area of arithmetic operations, new algorithms are presented for the multiplication of dense polynomials. These have the same asymptotic time cost as the fastest existing approaches, but reduce the intermediate storage required from linear in the size of the input to a constant amount. Two different algorithms for so-called "adaptive" multiplication are also presented which effectively provide a gradient between existing sparse and dense algorithms, giving a large improvement in many cases while never performing significantly worse than the best existing approaches. Algebraic computations on sparse polynomials are considered as well. The first known polynomial-time algorithm to detect when a sparse polynomial is a perfect power is presented, along with two different approaches to computing the perfect power factorization. Inverse symbolic problems are those for which the challenge is to compute a symbolic mathematical representation of a program or "black box". First, new algorithms are presented which improve the complexity of interpolation for sparse polynomials with coefficients in finite fields or approximate complex numbers. 
Second, the first polynomial-time algorithm for the more general problem of sparsest-shift interpolation is presented. The practical performance of all these algorithms is demonstrated with implementations in a high-performance library and compared to existing software and previous techniques.
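The two representations discussed above differ in what the product algorithm iterates over. A minimal sketch of each (plain schoolbook versions, not the space-efficient or adaptive algorithms of the thesis):

```python
def dense_mul(a, b):
    """Dense product: a[i] is the coefficient of x**i."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def sparse_mul(a, b):
    """Sparse product: {exponent: coefficient} dicts of nonzero terms."""
    out = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            out[ea + eb] = out.get(ea + eb, 0) + ca * cb
    return {e: c for e, c in out.items() if c}  # drop cancelled terms

# (1 + x)(1 - x) = 1 - x^2
assert dense_mul([1, 1], [1, -1]) == [1, 0, -1]
# (x^1000 + 1)(x^1000 - 1): dense arrays would carry ~2000 zero
# coefficients; the sparse dicts hold just two terms each
assert sparse_mul({0: 1, 1000: 1}, {0: -1, 1000: 1}) == {0: -1, 2000: 1}
```

The sparse cost scales with the number of nonzero terms rather than with the degree, which is exactly the trade-off the adaptive algorithms interpolate between.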
157

A hardware algorithm for modular multiplication/division

高木, 直史, Takagi, Naofumi 01 1900 (has links)
No description available.
158

Conceptual Understanding of Multiplicative Properties Through Endogenous Digital Game Play

January 2012 (has links)
abstract: This study aimed to determine the effect of an endogenously designed instructional game on conceptual understanding of the associative and distributive properties of multiplication. Additionally, this study sought to investigate whether performance on measures of conceptual understanding taken prior to and after game play could serve as a predictor of game performance. Three versions of an instructional game, Shipping Express, were designed for the purposes of this study. The endogenous version of Shipping Express integrated the associative and distributive properties of multiplication within the game mechanics, while the exogenous version kept the instructional content separate from game play. A total of 111 fourth and fifth graders were randomly assigned to one of three conditions (endogenous, exogenous, and control) and completed pre- and posttest measures of conceptual understanding of the associative and distributive properties of multiplication, along with a questionnaire. The results revealed several significant findings: 1) there was a significant difference in participants' change in scores on the measure of conceptual understanding of the associative property of multiplication, based on the version of Shipping Express they played; participants who played the endogenous version had on average higher gains than those who played the other versions; 2) performance on the measures of conceptual understanding of the distributive property collected prior to game play was related to performance within the endogenous game environment; and 3) participants who played the control version of Shipping Express were on average more likely to have a negative attitude towards continuing game play on their own compared to those who played the other versions of the game. 
No significant differences were found with regard to changes in scores on the measure of conceptual understanding of the distributive property based on the version of Shipping Express played, in post hoc pairwise comparisons, or in changes in scores on question types within the measures of conceptual understanding of the associative and distributive properties of multiplication. The findings from this study provide some support for a move towards the design and development of endogenous instructional games. Additional implications for learning through digital game play and future research directions are discussed. / Dissertation/Thesis / Ph.D. Educational Technology 2012
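For reference, the two target properties are the identities (a x b) x c = a x (b x c) and a x (b + c) = a x b + a x c. A quick executable statement of them (an illustration only, unrelated to the game's actual implementation):

```python
import random

random.seed(0)
for _ in range(100):
    a, b, c = (random.randint(1, 12) for _ in range(3))
    assert (a * b) * c == a * (b * c)    # associative property
    assert a * (b + c) == a * b + a * c  # distributive property

# the distributive property justifies the familiar decomposition
# 7 x 14 = 7 x 10 + 7 x 4 = 70 + 28 = 98
assert 7 * 14 == 7 * 10 + 7 * 4 == 98
```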
159

Fatores de Dancoff de celulas unitarias em geometria Cluster com absorção parcial de nêutrons

Rodrigues, Letícia Jenisch January 2011 (has links)
In its classical formulation, the Dancoff factor for a perfectly absorbing fuel rod is defined as the relative reduction in the incurrent of resonance neutrons into the rod in the presence of neighboring rods, as compared to the incurrent into a single fuel rod immersed in an infinite moderator. Alternatively, this factor can be viewed as the probability that a neutron emerging from the surface of a fuel rod will enter another fuel rod without any collision in the moderator or cladding. For perfectly absorbing fuel these definitions are equivalent; however, when partial absorption in the fuel is assumed, the equivalence no longer holds, and the Dancoff factors must be determined in terms of collision probabilities. In recent years, several works have appeared in the literature reporting improvements in the calculation of Dancoff factors, using both the classical and the collision probability definitions. In this work, we go further, reporting Dancoff factors for perfectly absorbing (Black) and partially absorbing (Grey) fuel rods calculated by the collision probability method in cluster cells with square outer boundaries. To validate the results, comparisons are made with the equivalent cylindricalized cell in hypothetical test cases. The calculation assumes specularly reflecting boundary conditions for the square lattice and diffusive (white) boundary conditions for the cylindrical geometry. The results show the expected asymptotic behavior of the solution with increasing cell size. In addition, Dancoff factors are computed for the Canadian CANDU-37 and CANFLEX cells by both the direct and Monte Carlo methods. Finally, the effective multiplication factors, keff, for the cluster cell with square outer boundaries and the equivalent cylindricalized cell are also computed, and the differences between the perfect- and partial-absorption assumptions are reported.
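The probabilistic definition can be illustrated with a deliberately simplified case: for just two parallel rods in 2D, the black (perfectly absorbing fuel) Dancoff factor can be estimated by sampling neutrons from one rod's surface with a cosine-distributed direction and attenuating each ray by the moderator's total cross-section along the flight path. This is a hypothetical two-rod toy, not the cluster-geometry collision probability method of the thesis:

```python
import math
import random

def dancoff_two_rods(r, pitch, sigma_mod, n=20000, seed=1):
    """Monte Carlo estimate of the black Dancoff factor between two
    parallel rods of radius r (2D cross-section), centres `pitch`
    apart, with moderator total cross-section sigma_mod."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        phi = rng.uniform(0.0, 2.0 * math.pi)   # emission point on rod 1
        x0, y0 = r * math.cos(phi), r * math.sin(phi)
        mu = math.asin(rng.uniform(-1.0, 1.0))  # cosine law about normal
        dx, dy = math.cos(phi + mu), math.sin(phi + mu)
        # ray-circle intersection with rod 2 centred at (pitch, 0)
        cx, cy = x0 - pitch, y0
        b = cx * dx + cy * dy
        disc = b * b - (cx * cx + cy * cy - r * r)
        if disc > 0.0:
            t = -b - math.sqrt(disc)            # distance to entry point
            if t > 0.0:
                total += math.exp(-sigma_mod * t)  # survive the flight
    return total / n
```

With sigma_mod = 0 the estimate reduces to a purely geometric view factor; a denser moderator attenuates it, consistent with the definition above.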
160

Análise combinatória na educação de jovens e adultos : uma proposta de ensino a partir da resolução de problemas

Fonseca, Jussara Aparecida da January 2012 (has links)
The present research aimed at analyzing to what extent a teaching strategy based on problem situations contributes to the learning of Combinatorial Analysis by students in Education for Young Adults and Adults (Educação de Jovens e Adultos, EJA). The teaching sequence developed and implemented comprised activities that evoked the students' everyday lives and did not depend on previously studied formulas. The order in which the activities were proposed aimed at the formalization of the multiplicative principle as a resource to be used in solving counting problems. 
The research was developed as a case study with a class from the PROEJA Agroindústria and PROEJA Informática courses of the Instituto Federal Farroupilha, Campus Alegrete (PROEJA is the Brazilian national program integrating professional education with basic education for young adults and adults). Its theoretical groundings were Piaget's theory of cognitive development and Vergnaud's theory of conceptual fields, which provided the basis for understanding the development of combinatorial reasoning and the difficulties presented by the students. The study showed that PROEJA students can learn Combinatorial Analysis content through a teaching sequence based on problem solving, in the course of which the students constructed different resolution strategies that favored the development of their combinatorial reasoning.
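The multiplicative principle that the teaching sequence works toward says that a choice made in independent stages with n1, n2, ..., nk options yields n1 x n2 x ... x nk outcomes. A small executable illustration (the menu numbers are invented):

```python
from itertools import product

# multiplicative principle: independent stages multiply
entrees, drinks, desserts = 4, 3, 2
by_principle = entrees * drinks * desserts

# exhaustive enumeration of all (entree, drink, dessert) choices agrees
by_enumeration = len(list(product(range(4), range(3), range(2))))
assert by_principle == by_enumeration == 24
```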
