  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Products of diagonalizable matrices

Khoury, Maroun Clive 09 1900
Chapter 1 reviews the better-known factorization theorems for a square matrix. For example, a square matrix over a field can be expressed as a product of two symmetric matrices; since real symmetric matrices are diagonalizable, a square matrix over the real numbers can therefore be factorized into two diagonalizable matrices. The factorization of matrices over the complex numbers into Hermitian matrices is also discussed. The chapter concludes with theorems that enable one to prescribe, with some degree of freedom, the eigenvalues of the factors of a square matrix. Chapter 2 proves that a square matrix over an arbitrary field (with one exception) can be expressed as a product of two diagonalizable matrices. The next two chapters consider the decomposition of singular matrices into idempotent matrices, and of nonsingular matrices into involutions. Chapter 5 studies the factorization of a complex matrix into positive (semi)definite matrices, emphasizing the least number of such factors required. / Mathematical Sciences / M. Sc. (Mathematics)
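The symmetric factorization mentioned in the abstract has a classical explicit form for companion matrices (going back to Frobenius): both the Hankel matrix H built from the polynomial's coefficients and the product C H are symmetric, so C = (C H) H^{-1} is a product of two symmetric matrices. A minimal numerical sketch with illustrative coefficients (not taken from the dissertation):

```python
import numpy as np

# Companion matrix of p(x) = x^3 + a2*x^2 + a1*x + a0
a0, a1, a2 = 2.0, -5.0, 3.0
C = np.array([[0.0, 0.0, -a0],
              [1.0, 0.0, -a1],
              [0.0, 1.0, -a2]])

# Hankel matrix built from the coefficients; H and C @ H are both symmetric.
H = np.array([[a1, a2, 1.0],
              [a2, 1.0, 0.0],
              [1.0, 0.0, 0.0]])

S1 = C @ H             # symmetric
S2 = np.linalg.inv(H)  # the inverse of a symmetric matrix is symmetric

assert np.allclose(S1, S1.T)
assert np.allclose(S2, S2.T)
assert np.allclose(C, S1 @ S2)  # C is a product of two symmetric matrices
```

Since every square matrix is similar to a direct sum of companion matrices, this construction is the usual route to the general real-symmetric factorization.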
72

Type theoretic weak factorization systems

North, Paige Randall January 2017
This thesis presents a characterization of those categories with weak factorization systems that can interpret the theory of intensional dependent type theory with Σ, Π, and identity types. We use display map categories to serve as models of intensional dependent type theory. If a display map category (C, D) models Σ and identity types, then this structure generates a weak factorization system (L, R). Moreover, we show that if the underlying category C is Cauchy complete, then (C, R) is also a display map category modeling Σ and identity types (as well as Π types if (C, D) models Π types). Thus, our main result is to characterize display map categories (C, R) which model Σ and identity types and where R is part of a weak factorization system (L, R) on the category C. We offer three such characterizations and show that they are all equivalent when C has all finite limits. The first is that the weak factorization system (L, R) has the properties that L is stable under pullback along R and all maps to a terminal object are in R. We call such weak factorization systems type theoretic. The second is that the weak factorization system has what we call an Id-presentation: it can be built from certain categorical structure in the same way that a model of Σ and identity types generates a weak factorization system. The third is that the weak factorization system (L, R) is generated by a Moore relation system. This is a technical tool used to establish the equivalence between the first and second characterizations described. To conclude the thesis, we describe a certain class of convenient categories of topological spaces (a generalization of compactly generated weak Hausdorff spaces). We then construct a Moore relation system within these categories (and also within the topological topos) and thus show that these form display map categories with Σ and identity types (as well as Π types in the topological topos).
73

Improving the Execution Time of Large System Simulations

January 2012
Today, the electric power system faces new challenges from rapidly developing technology and growing concern about environmental problems. The future of the power system under these new challenges needs to be planned and studied. However, due to the high computational complexity of the optimization problem, conducting a system planning study that takes into account the market structure and environmental constraints on a large-scale power system is computationally taxing. To improve the execution time of large system simulations, such as the system planning study, two possible strategies are proposed in this thesis. The first is to implement a relatively new factorization method, known as the multifrontal method, to speed up the solution of the sparse linear matrix equations within large system simulations. The performance of the multifrontal method as implemented in UMFPACK is compared with traditional LU factorization on a wide range of power-system matrices. The results show that the multifrontal method is superior to traditional LU factorization on the relatively denser matrices found in other specialty areas, but performs poorly on the sparser matrices that occur in power-system applications. This result suggests that multifrontal methods may not be an effective way to improve execution time for large system simulations, and power system engineers should evaluate the performance of the multifrontal method before applying it to their applications. The second strategy is to develop a small dc equivalent of the large-scale network with satisfactory accuracy for large-scale system simulations. In this thesis, a modified Ward equivalent is generated for a large-scale power system, the full Electric Reliability Council of Texas (ERCOT) system. In this equivalent, all generators in the full model are retained.
The accuracy of the modified Ward equivalent is validated and the equivalent is used to conduct the optimal generation investment planning study. By using the dc equivalent, the execution time for optimal generation investment planning is greatly reduced. Different scenarios are modeled to study the impact of fuel prices, environmental constraints and incentives for renewable energy on future investment and retirement in generation. / Dissertation/Thesis / M.S. Electrical Engineering 2012
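The sparse-versus-dense trade-off described above can be reproduced in miniature with SciPy, whose sparse LU is SuperLU by default (SuiteSparse UMFPACK is used only when installed and enabled). The matrix below is a synthetic, diagonally dominant stand-in for a power-system matrix, not the thesis's ERCOT data:

```python
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
import scipy.linalg as la

# Sparse test matrix with very few nonzeros per row, made symmetric
# and diagonally dominant so both solvers are well behaved.
rng = np.random.default_rng(0)
n = 2000
A = sp.random(n, n, density=0.002, random_state=0, format="csr")
A = A + A.T + sp.eye(n) * 10.0
b = rng.standard_normal(n)

# Sparse LU exploits the sparsity pattern during factorization.
t0 = time.perf_counter()
lu = spla.splu(A.tocsc())
x_sparse = lu.solve(b)
t_sparse = time.perf_counter() - t0

# Dense LU factors all n*n entries, ignoring sparsity entirely.
t0 = time.perf_counter()
lu_d, piv = la.lu_factor(A.toarray())
x_dense = la.lu_solve((lu_d, piv), b)
t_dense = time.perf_counter() - t0

assert np.allclose(x_sparse, x_dense)
print(f"sparse LU: {t_sparse:.4f}s, dense LU: {t_dense:.4f}s")
```

On matrices this sparse the structure-exploiting factorization wins easily; the thesis's finding is the subtler comparison between two structure-exploiting methods (multifrontal versus conventional sparse LU).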
74

Recommendation Approaches Using Context-Aware Coupled Matrix Factorization

Agagu, Tosin January 2017
In general, recommender systems attempt to estimate user preference based on historical data. A context-aware recommender system attempts to generate better recommendations using contextual information. However, generating recommendations for specific contexts has been challenging because of the difficulties in using contextual information to enhance the capabilities of recommender systems. Several methods have been used to incorporate contextual information into traditional recommendation algorithms. These methods focus on incorporating contextual information to improve general recommendations for users rather than identifying the different contexts applicable to the user and providing recommendations geared towards those specific contexts. In this thesis, we explore different context-aware recommendation techniques and present our context-aware coupled matrix factorization methods, which use matrix factorization to estimate user preference and features in a specific contextual condition. We develop two methods: the first ties user preferences together across multiple contextual conditions, assuming that user preference remains the same but that item suitability differs across contextual conditions; i.e., an item might not be suitable for certain conditions. The second assumes that item suitability remains the same across contextual conditions but that user preference changes. We perform a number of experiments on the last.fm dataset to evaluate our methods and compare them with other context-aware recommendation approaches. Our results show that grouping ratings by context and jointly factorizing with common factors improves prediction accuracy.
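A minimal sketch of the first method's assumption (a shared user-factor matrix, context-specific item factors), using synthetic ratings and plain SGD rather than the thesis's actual model or the last.fm data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 8, 10, 3

# Toy rating matrices for two contextual conditions (0 = unobserved).
R = {c: rng.integers(0, 6, size=(n_users, n_items))
        * (rng.random((n_users, n_items)) < 0.3)
     for c in (0, 1)}

# One shared user-factor matrix U, one item-factor matrix V[c] per context.
U = rng.normal(scale=0.1, size=(n_users, k))
V = {c: rng.normal(scale=0.1, size=(n_items, k)) for c in (0, 1)}

lr, reg = 0.01, 0.05
for epoch in range(200):
    for c in (0, 1):
        users, items = np.nonzero(R[c])
        for u, i in zip(users, items):
            err = R[c][u, i] - U[u] @ V[c][i]
            du = err * V[c][i] - reg * U[u]          # U is updated by both
            dv = err * U[u] - reg * V[c][i]          # contexts jointly
            U[u] += lr * du
            V[c][i] += lr * dv

# Predicted preference of user 0 for every item, in each context.
for c in (0, 1):
    print(f"context {c}:", np.round(U[0] @ V[c].T, 2))
```

Because U appears in the loss of every context, the contexts are coupled through the shared user factors, which is the core of the joint factorization idea.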
75

Algoritmos de detección de estrellas variables en imágenes astronómicas basados en factorización no negativa de matrices / Algorithms for detecting variable stars in astronomical images based on nonnegative matrix factorization

Berrocal Zapata, Emanuel Antonio January 2015
Ingeniero Civil Matemático / This work presents a methodology for detecting variable stars in 21 x 21 pixel stamps taken from astronomical images produced by an image-processing pipeline for the Dark Energy Camera (DECam). A model is trained on a sample of images labeled as stars and non-stars, with the goal of detecting new variable stars in unseen images. The astronomical objects observed in the images can be grouped into the categories: variable stars, fixed stars, cosmic rays, false point sources, bad subtractions, supernovae, and unknown objects. For the detection task, two algorithms based on NMF (nonnegative matrix factorization) and a third based on Principal Component Analysis (PCA) were used. The first NMF algorithm emphasizes speed, while the second emphasizes sparseness of the representation. A methodology was developed for applying these algorithms, which rely on nonlinear optimization and matrix decomposition techniques; finally, the best of them for detecting stars is determined by means of the ROC curve. Once the best method is chosen, a parameter sensitivity study is carried out to improve star detection and to obtain a more generic representation through the matrix decomposition. The final result is that one of the NMF-based algorithms gives the best classification performance, as concluded from the ROC curve analysis.
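A rough sketch of the stamp-classification idea using scikit-learn's NMF on synthetic 21 x 21 stamps (Gaussian blobs standing in for stars); the component count, classifier, and data here are illustrative assumptions, not the thesis's DECam pipeline:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for 21x21 postage stamps: "stars" get a bright
# central Gaussian blob, "non-stars" are background noise only.
yy, xx = np.mgrid[0:21, 0:21]
psf = np.exp(-((xx - 10) ** 2 + (yy - 10) ** 2) / (2 * 2.0 ** 2))

def stamp(is_star):
    img = rng.random((21, 21)) * 0.2
    if is_star:
        img += psf * (0.5 + rng.random())
    return img.ravel()

X = np.array([stamp(i % 2 == 0) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)

# Nonnegative factorization X ~ W @ H: each stamp becomes a small
# nonnegative feature vector (its row of W).
model = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)

# Classify stamps in the learned feature space.
clf = LogisticRegression(max_iter=1000).fit(W[:150], y[:150])
print("held-out accuracy:", clf.score(W[150:], y[150:]))
```

The nonnegativity constraint tends to make the learned components look like parts of the input (here, blob-like templates), which is why NMF features are a natural representation for point-source detection.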
76

Learning to Rank with Contextual Information

Han, Peng 15 November 2021
Learning to rank is used in many scenarios, such as disease-gene association, information retrieval, and recommender systems. Improving the prediction accuracy of the ranking model is the main target of existing work. Contextual information has a significant influence on the ranking problem and has been proven effective in increasing the prediction performance of ranking models. We first construct similarities for different types of entities that can incorporate contextual information uniformly and in an extensible way. Once these similarities are constructed, the task becomes how to exploit them in different types of ranking models. In this thesis, we propose four algorithms for learning to rank with contextual information. First, to refine the matrix factorization framework, we propose an area under the ROC curve (AUC) loss to overcome the sparsity problem; clustering and sampling methods exploit the contextual information from a global perspective, and an objective function with an optimal solution exploits it from a local perspective. Second, within a deep learning framework, we apply a graph convolutional network (GCN) to the ranking problem in combination with matrix factorization, with contextual information used to generate the input embeddings and graph kernels for the GCN. The third method exploits the contextual information directly for ranking: a Laplacian loss, which optimizes the ranking matrix directly, ensures that entities with similar contextual information obtain similar ranking results. Finally, we propose a two-step method for the ranking problem on sequential data. The first step generates embeddings for all entities with a new sampling strategy; a graph neural network (GNN) and long short-term memory (LSTM) network are then combined to produce the representation of the sequential data, whose ranking problem we solve with a pair-wise loss and sampling strategy.
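The AUC-style pairwise objective described for the matrix factorization framework can be sketched with BPR-style stochastic updates: for each user, push the score of an observed item above that of an unobserved one. The data, dimensions, and learning rates below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 20, 30, 5

# Implicit interactions: 1 where the user consumed the item.
R = (rng.random((n_users, n_items)) < 0.15).astype(float)

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Pairwise updates maximizing a smooth AUC surrogate (logistic loss on
# score differences), as in Bayesian Personalized Ranking.
lr, reg = 0.05, 0.01
for step in range(20000):
    u = rng.integers(n_users)
    pos = np.flatnonzero(R[u])
    neg = np.flatnonzero(R[u] == 0)
    if len(pos) == 0 or len(neg) == 0:
        continue
    i, j = rng.choice(pos), rng.choice(neg)
    x_uij = U[u] @ (V[i] - V[j])
    g = sigmoid(-x_uij)                    # gradient weight
    du = g * (V[i] - V[j]) - reg * U[u]
    dvi = g * U[u] - reg * V[i]
    dvj = -g * U[u] - reg * V[j]
    U[u] += lr * du
    V[i] += lr * dvi
    V[j] += lr * dvj

# AUC on the training interactions: fraction of (observed, unobserved)
# item pairs ranked correctly per user.
scores = U @ V.T
aucs = []
for u in range(n_users):
    p, n = scores[u][R[u] == 1], scores[u][R[u] == 0]
    if len(p) and len(n):
        aucs.append(np.mean(p[:, None] > n[None, :]))
print("mean training AUC:", round(float(np.mean(aucs)), 3))
```

Optimizing pairwise order rather than pointwise ratings is what makes this loss a direct surrogate for AUC, which is useful on sparse implicit data.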
77

Factorisation des régions cubiques et application à la concurrence / Factorization of cubical area and application to concurrency

Ninin, Nicolas 11 December 2017
Cette thèse se propose d'étudier des problèmes de factorisations des régions cubiques. Dans le cadre de l'analyse de programme concurrent via des méthodes issues de la topologie algébrique, les régions cubiques sont un modèle géométrique simple mais expressif de la concurrence. Tout programme concurrent (sans boucle ni branchement) est ainsi représenté comme sous partie de R^n auquel on enlève des cubes interdits représentant les états du programme interdit par les contraintes de la concurrence (mutex par exemple) où n est le nombre de processus. La première partie de cette thèse s’intéresse à la question d'indépendance des processus. Cette question est cruciale dans l'analyse de programme non concurrent car elle permet de simplifier l'analyse en séparant le programme en groupe de processus indépendants. Dans le modèle géométrique d'un programme, l'indépendance se traduit comme une factorisation modulo permutation des processus. Ainsi le but de cette section est de donner un algorithme effectif de factorisation des régions cubiques et de le démontrer. L'algorithme donné est relativement simple et généralise l'algorithme très intuitif suivant (dit algorithme syntaxique). A partir du programme, on met dans un même groupe les processus qui partagent une ressource, puis l’on prend la clôture transitive de cette relation. Le nouvel algorithme s'effectue de la même manière, cependant il supprime certaines de ces relations. En effet par des jeux d'inclusion entre cubes interdits, il est possible d'avoir deux processus qui partagent une ressource mais qui sont toutefois indépendant. Ainsi la nouvelle relation est obtenue en regardant l'ensemble des cubes maximaux de la région interdite. Lorsque deux coordonnées sont différentes de R dans un cube maximal on dira qu’elles sont reliées. Il suffit alors de faire la clôture transitive de cette relation pour obtenir la factorisation optimale. 
La seconde partie de ce manuscrit s'intéresse à un invariant catégorique que l'on peut définir sur une région cubique. Celui-ci découpe la région cubique en cubes appelés "dés" auxquels on associe une catégorie appelée catégorie émincée de la région cubique. On peut voir cette catégorie comme un intermédiaire fini entre la catégorie des composantes et la catégorie fondamentale. On peut ainsi montrer que lorsque la région cubique factorise alors la catégorie émincée associée va elle-même se factoriser. Cependant la réciproque est plus compliquée et de nombreux contre exemples empêchent une réciproque totale. La troisième et dernière partie de cette thèse s'intéresse à la structure de produit tensoriel que l'on peut mettre sur les régions cubiques. En remarquant comment les opérations booléennes sur une région cubique peuvent être obtenues à partir des opérations sur les régions cubiques de dimension inférieure, on tente de voir ces régions cubiques comme un produit tensoriel des régions de dimension inférieure. La structure de produit tensoriel est hautement dépendante de la catégorie dans laquelle on la considère. Dans ce cas, si l'on considère le produit dans les algèbres de Boole, le résultat n'est pas celui souhaité. Au final il se trouve que le produit tensoriel dans la catégorie des demi-treillis avec zéro donne le résultat voulu. / This thesis studies some problems of the factorization of cubical areas. In the setting of program analysis through methods coming from algebraic topology, cubical areas are simple but expressive geometric models of concurrency. Any concurrent program (without loops or branching) can be seen as a subset of R^n, where n is the number of processes in the program, from which we remove the cubes containing the states forbidden by the concurrency constraints (think of a mutex). The first part of this thesis is interested in the question of the independence of processes.
This question is particularly important for analysing a program: being able to separate the processes into independent groups greatly reduces the complexity of the analysis. In the geometric model, independence appears as a factorization up to permutation of processes. Hence the goal is to give a new effective algorithm that factorizes cubical areas, and to prove that it does. The given algorithm is quite straightforward and generalizes the following procedure (which we call the syntactic algorithm): from the written program, group together processes that share a resource, then take the transitive closure of this relation. The syntactic algorithm is not always optimal, in that it can group together processes that could actually be separated. We therefore define a new, more relaxed relation between processes: from the maximal cubes of the forbidden area of the program, whenever two coordinates are both not equal to R, group the corresponding processes together. Taking the transitive closure of this relation yields the optimal factorization. The second part of this thesis looks at a categorical invariant that we define over cubical areas. This category (called the minced category) slices the space into cubes: each cube is an object of the category, and between two adjacent cubes there is an arrow. The minced category sits between the fundamental category and the category of components of the cubical area. We then show that if the cubical area factorizes, then so does the minced category. The converse is harder to obtain; indeed, a few counterexamples prevent a full converse. The third and last part of this thesis is interested in seeing cubical areas as a kind of product of lower-dimensional cubical areas. By looking at how the Boolean operations on a cubical area arise from the same operations on lower-dimensional cubical areas, we understand that a cubical area can be expressed as a tensor product. A tensor product depends heavily on the category over which it is built. We show that the category of Boolean algebras is too restrictive and gives trivial results, while the category of semi-lattices with zero works well.
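The refined syntactic algorithm (relate two processes whenever some maximal forbidden cube constrains both of their coordinates, then take the transitive closure) reduces to a union-find computation. A minimal sketch with a hypothetical forbidden region; the cubes and process count are invented for illustration:

```python
# Maximal forbidden cubes as tuples: None means the coordinate is all
# of R (unconstrained), an interval means the coordinate is constrained.
n = 4  # processes P0..P3
maximal_cubes = [
    ((1, 3), (2, 4), None, None),   # a mutex shared by P0 and P1
    (None, None, (0, 2), (1, 3)),   # a mutex shared by P2 and P3
]

# Union-find gives the transitive closure of the "constrained in the
# same maximal cube" relation.
parent = list(range(n))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

for cube in maximal_cubes:
    constrained = [i for i, c in enumerate(cube) if c is not None]
    for i in constrained[1:]:
        union(constrained[0], i)

groups = {}
for i in range(n):
    groups.setdefault(find(i), []).append(i)
print(sorted(groups.values()))   # -> [[0, 1], [2, 3]]
```

The connected components are the independent groups of processes, i.e. the factors of the cubical area up to permutation.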
78

Comparison of recommender systems for stock inspiration

Broman, Nils January 2021
Recommender systems are apparent in our lives in multiple ways, such as recommending what items to purchase when shopping online, recommending movies to watch, and recommending restaurants in your area. This thesis aims to apply the same recommender-system techniques to a new area, namely stock recommendations based on your current portfolio. The data used was collected from a social media platform for investments, Shareville, and contained multiple users' portfolios. This implicit data was then used to train matrix factorization models and the state-of-the-art LightGCN model. Experiments with different data splits were also conducted. Results indicate that recommender-system techniques can be applied successfully to generate stock recommendations, and that the relative performance of the models on this dataset is in line with previous research: LightGCN greatly outperforms the matrix factorization models on the proposed dataset. The results also show that different data splits greatly impact the results, which is discussed in further detail in this thesis.
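The impact of the data split can be made concrete: a random split over all interaction pairs can remove most of one user's history while leaving another's untouched, whereas a per-user (leave-one-out) split keeps every user in both sets. The Shareville data is not public, so the (user, stock) pairs below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit portfolio data: (user, stock) pairs, 10 holdings per user.
interactions = [(u, s) for u in range(50)
                for s in rng.choice(200, size=10, replace=False)]

# Random split: hold out 20% of all pairs, regardless of user.
pairs = np.array(interactions)
idx = rng.permutation(len(pairs))
test_random = pairs[idx[: len(pairs) // 5]]

# Leave-one-out split: hold out exactly one pair per user, so every
# user appears in both train and test.
by_user = {}
for u, s in interactions:
    by_user.setdefault(u, []).append(s)
test_loo = [(u, stocks[rng.integers(len(stocks))])
            for u, stocks in by_user.items()]

print(len(test_random), "random test pairs;", len(test_loo), "leave-one-out pairs")
```

Because ranking metrics are usually averaged per user, the two splits evaluate quite different things, which is one plausible reason split choice shifts the reported results.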
79

Source Apportionment Analysis of Measured Volatile Organic Compounds in Corpus Christi, Texas

Abood, Ahmed T. 05 1900
Corpus Christi is among the largest industrialized coastal urban areas in Texas. The strategic location of the city along the Gulf of Mexico makes it home to many important industries and international businesses. The cluster of industries and businesses in the region contributes to air pollution through emissions that are harmful to the environment and to the people living in and visiting the area. Volatile organic compounds (VOC) constitute an important class of pollutants measured in the area. Automated gas chromatography (Auto GC) data was collected from the Texas Commission on Environmental Quality (TCEQ), and source apportionment analysis was conducted on this data to identify the key sources of VOC affecting the study region. EPA PMF 3.0 was employed in this source apportionment study of VOC concentrations measured during 2005 - 2012 in Corpus Christi, Texas. The study identified nine optimal factors (sources) that could explain the VOC concentrations at two urban monitoring sites in the study region. Natural gas was found to be the largest contributor of VOC in the area, followed by gasoline and vehicular exhaust. Diesel was the third-highest contributor, along with emissions from manufacturing and combustion processes. Refinery gases and evaporative fugitive emissions were other major contributors in the area; flaring operations, solvents, and petrochemicals also impacted the measured VOC in the urban area. It was noted that the measured VOC concentrations were significantly influenced by the economic downturn in the region, which was highlighted in the annual trends of the apportioned VOC.
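EPA PMF factorizes the concentration matrix as X ~ G F with nonnegativity constraints, weighting residuals by measurement uncertainty; plain NMF is the unweighted special case and gives a feel for the computation. The sample counts, species, and source profiles below are synthetic, not the TCEQ data:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic stand-in: VOC concentration samples (rows) over measured
# species (columns), mixed from 3 hidden source profiles.
n_samples, n_species, n_sources = 300, 12, 3
profiles = rng.random((n_sources, n_species))        # source fingerprints
contributions = rng.random((n_samples, n_sources))   # source strengths
X = contributions @ profiles + 0.01 * rng.random((n_samples, n_species))

# Unweighted nonnegative factorization X ~ G @ F (PMF additionally
# divides each residual by its measurement uncertainty).
model = NMF(n_components=n_sources, init="nndsvda",
            max_iter=1000, random_state=0)
G = model.fit_transform(X)   # source contributions per sample
F = model.components_        # recovered source profiles

recon_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print("relative reconstruction error:", round(float(recon_err), 4))
```

In a real study the number of factors (nine, in the thesis) is chosen by examining residuals and the interpretability of the recovered profiles across candidate ranks.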
80

A Hardware Interpreter for Sparse Matrix LU Factorization

Syed, Akber 16 September 2002
No description available.
