761 |
El problema de la caracterización y de la unicidad / The characterization and uniqueness problems. Simón Pinero, Juan Jacobo, 03 July 1992 (has links)
This work addresses two classical problems in the study of endomorphism rings: the characterization problem, which relates properties of a ring to the endomorphism ring of certain notable objects, and the uniqueness problem, which establishes to what extent the base ring is determined by the endomorphism ring of the chosen object. The relationships are established in the language of category theory, and to that end a version of Morita theory is first developed for rings without identity.
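A standard instance of the characterization problem (a textbook fact quoted for orientation, not a result from the thesis): for a unital ring R, the endomorphism ring of R regarded as a right module over itself is recovered by evaluation at the identity,

\[
\operatorname{End}_R(R_R) \cong R, \qquad f \mapsto f(1),
\]

since every right-module endomorphism f satisfies f(r) = f(1)r and is therefore left multiplication by f(1). The uniqueness question then asks how far such an isomorphism determines the base ring when, as in this work, the ring need not have an identity.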
|
762 |
El Método de Neville: un enfoque basado en computación de altas prestaciones / Neville's method: a high-performance computing approach. Cortina Parajón, Raquel, 10 September 2008 (has links)
This dissertation presents original work on the performance of several block-organized algorithms for applying Neville elimination to a system of linear equations (Ax=b) on a parallel computer, using the message-passing paradigm and several metrics that allow us to analyze the performance of the algorithms studied. Neville elimination is an alternative to Gaussian elimination for transforming a square matrix A into an upper triangular matrix. Strictly speaking, Neville elimination creates zeros in a column of A by adding to each row a multiple of the previous row. This strategy has proved especially useful when working with certain classes of matrices, such as totally positive or sign-regular matrices. A matrix is said to be totally positive if all its minors are nonnegative. Such matrices appear in many branches of science, for example in Mathematics, Statistics, Economics, and Computer-Aided Geometric Design. Along these lines, the work of a large number of authors has shown in recent years that Neville elimination is an interesting alternative to Gaussian elimination for certain kinds of studies.

In the development of parallel algorithms for Numerical Linear Algebra, block organization has proved the most efficient way to get the most out of current machines, both in making good use of the memory hierarchy on shared-memory machines and in exploiting explicit parallelism on distributed-memory machines. This organization usually yields efficient and scalable algorithms; two well-known libraries, LAPACK and ScaLAPACK, use block organization as the main design strategy for their parallel algorithms. To reach optimal codes it is necessary to define the problem parameters and to analyze in depth the behavior of the developed algorithms as a function of their properties. This analysis must consider execution time, speedup/efficiency, and scalability. When algorithms are organized by blocks, the relationship between block size and performance in each of these metrics is especially important: the block size can influence performance significantly, and it is important to know how it affects each metric if one wants an algorithm that optimizes one metric, another, or all of them together.

In our work we propose an organization of the Neville elimination algorithm for computers following the message-passing model, and we carry out a general analysis based on three metrics: time, speedup/efficiency, and scalability. This analysis covers the most common block distributions of the data, one-dimensional (by rows and by columns) and two-dimensional, and is compared with experimental results on two types of machines representative of the message-passing model: a network of workstations and a multicomputer, for which the behavior of both environments was modeled beforehand.
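As a rough illustration of the elimination step described above (a sequential sketch following the stated definition, not the block-organized parallel codes developed in the thesis), the following Python snippet reduces a small matrix to upper triangular form by zeroing each column with multiples of the immediately preceding row; pivoting and the message-passing distribution are omitted.

    import numpy as np

    def neville_elimination(A):
        """Reduce A to upper triangular form by Neville elimination.

        Zeros below the diagonal in column j are created by subtracting
        from each row (bottom up) a multiple of the row immediately above
        it, which is the defining difference from Gaussian elimination.
        No row exchanges are performed, so the matrix (e.g. a totally
        positive one) is assumed to admit the elimination without pivoting.
        """
        U = A.astype(float).copy()
        n = U.shape[0]
        for j in range(n - 1):
            for i in range(n - 1, j, -1):      # bottom-up within column j
                if U[i - 1, j] != 0.0:
                    U[i, :] -= (U[i, j] / U[i - 1, j]) * U[i - 1, :]
        return U

    # Totally positive example: a small Vandermonde matrix with increasing nodes.
    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 2.0, 4.0],
                  [1.0, 3.0, 9.0]])
    print(neville_elimination(A))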
|
763 |
Reductions and Triangularizations of Sets of Matrices. Davidson, Colin, January 2006 (has links)
Families of operators that are triangularizable must necessarily satisfy a number of spectral mapping properties. These necessary conditions are often sufficient as well. This thesis investigates such properties in finite-dimensional and infinite-dimensional Banach spaces. In addition, we investigate whether approximate spectral mapping conditions (being "close" in some sense) are similarly sufficient.
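A classical example of the spectral mapping properties in question (a standard fact, not a result specific to this thesis): if the n-by-n matrices A and B are simultaneously upper triangularizable, their eigenvalues admit orderings \lambda_1,\dots,\lambda_n and \mu_1,\dots,\mu_n (the diagonal entries in a common triangularizing basis) such that

\[
\sigma\bigl(p(A,B)\bigr) \subseteq \{\, p(\lambda_i,\mu_i) : 1 \le i \le n \,\}
\]

for every polynomial p in two non-commuting variables; in particular \sigma(A+B) \subseteq \{\lambda_i+\mu_i\} and \sigma(AB) \subseteq \{\lambda_i\mu_i\}. Whether conditions of this kind are also sufficient for triangularizability is the sufficiency question the thesis pursues.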
|
764 |
Inversion of Vandermonde Matrices in FPGAs / Invertering av Vandermondematriser i FPGA. Hu, ShiQiang; Yan, Qingxin, January 2004 (has links)
In this thesis, we explore different algorithms for the inversion of Vandermonde matrices and the corresponding architectures suitable for implementation in an FPGA. The inversion of the Vandermonde matrix is one of three master's projects under the topic "Implementation of a digital error correction algorithm for time-interleaved analog-to-digital converters." The project is divided into two major parts: algorithm comparison and optimization for the inversion of the Vandermonde matrix, and architecture selection for the implementation. A CORDIC algorithm for sine and cosine and a Newton-Raphson based division are implemented as functional blocks.
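As a quick sketch of the Newton-Raphson division used as a functional block (a generic software model, not the FPGA implementation from the thesis; the initial-guess constants are the standard linear approximation on [0.5, 1)), the reciprocal of the divisor is refined by an iteration that needs only multiplies and subtractions and roughly doubles the number of correct bits per step:

    def nr_reciprocal(d, iterations=4):
        """Approximate 1/d with the Newton-Raphson iteration x <- x*(2 - d*x).

        d is assumed normalized into [0.5, 1), as is typical in hardware
        dividers; each step uses only multiply and subtract, which is why
        the scheme maps well onto FPGA multiply-accumulate resources.
        """
        x = 48.0 / 17.0 - (32.0 / 17.0) * d   # standard initial estimate on [0.5, 1)
        for _ in range(iterations):
            x = x * (2.0 - d * x)
        return x

    # Division a/d computed as a * (1/d).
    a, d = 3.0, 0.75
    print(a * nr_reciprocal(d), a / d)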
|
766 |
Hierarchical Matrix Techniques on Massively Parallel Computers. Izadi, Mohammad, 11 December 2012 (has links) (PDF)
Hierarchical matrix (H-matrix) techniques can be used to treat dense matrices efficiently. With an H-matrix, the storage requirements and all fundamental operations, namely matrix-vector multiplication, matrix-matrix multiplication, and matrix inversion, can be handled in almost linear complexity. In this work, we tried to gain even further speedup for the H-matrix arithmetic by utilizing multiple processors. Our approach towards an H-matrix distribution relies on splitting the index set. The main results achieved in this work, based on the index-wise H-distribution, are: a highly scalable algorithm for H-matrix truncation and matrix-vector multiplication, a scalable algorithm for H-matrix matrix multiplication, and an algorithm of limited scalability for H-matrix inversion on a large number of processors.
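The almost-linear complexity rests on replacing admissible (well-separated) blocks of the dense matrix by low-rank factors; the following numpy sketch illustrates that single building block with an SVD truncation (a generic illustration of the idea, not the distributed H-matrix arithmetic developed in the thesis):

    import numpy as np

    def truncate_block(M, rank):
        """Low-rank factorization M ~ U @ V of an admissible H-matrix block.

        Storing U (m x rank) and V (rank x n) instead of the dense m x n
        block is what gives hierarchical matrices their almost-linear
        storage and arithmetic costs.
        """
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U[:, :rank] * s[:rank], Vt[:rank, :]

    # A smooth kernel evaluated on two well-separated point sets gives a
    # block that is numerically low rank.
    x = np.linspace(0.0, 1.0, 200)
    y = np.linspace(5.0, 6.0, 200)
    block = 1.0 / np.abs(x[:, None] - y[None, :])
    U, V = truncate_block(block, rank=5)
    print(np.linalg.norm(block - U @ V) / np.linalg.norm(block))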
|
767 |
Multi-scale texture analysis of remote sensing images using Gabor filter banks and wavelet transforms. Ravikumar, Rahul, 15 May 2009 (has links)
Traditional remote sensing image classification has relied primarily on spectral information, while texture information was ignored or not fully utilized. Existing remote sensing software packages have very limited functionality for texture information extraction and use. This research focuses on multi-scale image texture analysis techniques using Gabor filter banks and wavelet transformations. Gabor filter banks model texture as irradiance patterns in an image over a limited range of spatial frequencies and orientations. Using Gabor filters, each image texture can be differentiated with respect to its dominant spatial frequency and orientation. Wavelet transformations are useful for decomposing an image into a set of images based on an orthonormal basis. Dyadic transformations are applied to generate a multi-scale image pyramid which can be used for texture analysis. The analysis of texture is carried out using both artificial textures and remotely sensed images corresponding to natural scenes. This research has shown that texture can be extracted and incorporated into conventional classification algorithms to improve the accuracy of the classified results. The applicability of Gabor filter banks and wavelets is explored for classifying and segmenting remote sensing imagery for geographical applications. A qualitative and quantitative comparison between statistical texture indicators and multi-scale texture indicators has been performed. Multi-scale texture indicators derived from Gabor filter banks have been found to be very effective owing to their configurability to target specific textural frequencies and orientations in an image. Wavelet transformations have been found to be effective tools in image texture analysis, as they help identify the ideal scale at which texture indicators need to be measured and reduce the computation time taken to derive statistical texture indicators. A robust set of software tools for texture analysis has been developed using the popular .NET framework and ArcObjects. ArcObjects was chosen as the API of choice, as these tools can be seamlessly integrated into ArcGIS. This will aid further exploration of image texture analysis by the remote sensing community.
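As a sketch of the Gabor filtering described above (a generic construction, not the ArcObjects/.NET tools developed in this research), the snippet below builds one real-valued Gabor kernel tuned to a spatial wavelength and orientation and measures the texture energy of its response; a filter bank simply repeats this over several wavelengths and orientations:

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
        """Real Gabor kernel: Gaussian envelope times a cosine carrier at the
        given wavelength, with the coordinate frame rotated by theta."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        x_t = x * np.cos(theta) + y * np.sin(theta)
        y_t = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
        return envelope * np.cos(2.0 * np.pi * x_t / wavelength)

    # Texture energy for one frequency/orientation: convolve and square.
    image = np.random.rand(128, 128)   # stand-in for one remote sensing band
    kernel = gabor_kernel(size=31, wavelength=8.0, theta=np.pi / 4, sigma=4.0)
    response = fftconvolve(image, kernel, mode="same")
    print((response**2).mean())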
|
768 |
Blind and Semi-blind Channel Order Estimation in SIMO Systems. Karakutuk, Serkan, 01 September 2009 (has links) (PDF)
Channel order estimation is an important problem in many fields, including signal processing, communications, and acoustics. In this thesis, the blind channel order estimation problem is considered for single-input, multi-output (SIMO) FIR systems. The problem is to estimate the effective channel order for the SIMO system given only the output samples corrupted by noise. Two new methods for channel order estimation are presented. These methods have several useful features compared to the currently known techniques: they are guaranteed to find the true channel order in the noise-free case, and they perform significantly better for noisy observations. The algorithms show consistent performance when the number of observations, the number of channels, and the channel order are changed. The proposed algorithms are integrated with the least squares smoothing (LSS) algorithm for blind identification of the channel coefficients. The LSS algorithm is selected since it is deterministic and has additional features suitable for order estimation. The proposed algorithms are compared with a variety of different algorithms, including linear prediction (LP) based methods. LP approaches are known to be robust to channel order overestimation; in this thesis, it is shown that significant gain can be obtained over LP based approaches when the proposed techniques are used. The proposed algorithms are also compared with the oversampled single-input, single-output (SISO) system with a generic decision feedback equalizer, and better mean-square error performance is observed in the blind setting.

The channel order estimation problem is also investigated for semi-blind systems, where a pilot signal known at the receiver is used. In this case, two new methods are proposed which exploit the pilot signal in different ways. When both unknown and pilot symbols are used, better estimation performance can be achieved compared to the proposed blind methods. The semi-blind approach is especially effective in terms of bit error rate (BER) evaluation thanks to the use of pilot symbols for better estimation of the channel coefficients, and it is also more robust to ill-conditioned channels. The constraints of these approaches, such as synchronization, and the decrease in throughput still make the blind approaches a good alternative for channel order estimation. True and effective channel order estimation are discussed in detail, and several simulations are presented to show the significant performance gain achieved by the proposed methods.
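To make the blind problem concrete, the sketch below shows the textbook noise-free principle that order-estimation methods build on: stacking windowed outputs of the subchannels gives a data matrix whose rank exceeds the window length by exactly the channel order. This is a generic rank-based illustration, not either of the two new methods proposed in the thesis, and it degrades quickly once noise is added, which is precisely the regime those methods target.

    import numpy as np

    def windowed(y, rows):
        """Stack length-`rows` sliding windows of y as the columns of a matrix."""
        cols = len(y) - rows + 1
        return np.array([y[i:i + cols] for i in range(rows)])

    rng = np.random.default_rng(0)
    M = 3                                    # true channel order (memory)
    h = rng.standard_normal((2, M + 1))      # two subchannels: SIMO with L = 2
    s = rng.standard_normal(500)             # unknown input sequence
    y = [np.convolve(s, hi, mode="valid") for hi in h]

    W = 10                                   # smoothing window per subchannel
    Y = np.vstack([windowed(yi, W) for yi in y])
    sv = np.linalg.svd(Y, compute_uv=False)
    rank = int(np.sum(sv > sv[0] * 1e-10))   # numerical rank, noise-free case
    print("estimated order:", rank - W)      # rank = W + M, so this recovers 3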
|
769 |
Robust D-optimal designs for mixture experiments in Scheffé models. Hsu, Hsiang-Ling, 10 July 2003 (has links)
A mixture experiment is an experiment in which the q ingredients x_1, ..., x_q are nonnegative and subject to the simplex restriction x_1 + ... + x_q = 1, i.e. they lie on the (q-1)-dimensional probability simplex S^{q-1}. In this work, we investigate robust D-optimal designs for mixture experiments under model uncertainty among Scheffé's linear, quadratic, and cubic models without three-way effects. The D-optimal designs for each of the Scheffé models are used to find the robust D-optimal designs. Under uncertainty between Scheffé's linear and quadratic models, the optimal convex combination of the two models' D-optimal designs can be proved to be a robust D-optimal design. For the case of Scheffé's linear and cubic models without three-way effects, we present numerical results on the robust D-optimal designs, as well as for Scheffé's linear, quadratic, and cubic models without three-way effects. Finally, we discuss the efficiency of a maximin-type criterion D_r given the robust D-optimal designs for Scheffé's linear and quadratic models.
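For reference, the Scheffé mixture models referred to above and the D-optimality criterion can be written as follows (standard definitions, not results from the thesis):

\[
\eta_1(x) = \sum_{i=1}^{q} \beta_i x_i,
\qquad
\eta_2(x) = \sum_{i=1}^{q} \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j,
\qquad x \in S^{q-1},
\]

and a design \xi (a probability measure on S^{q-1}) is D-optimal for a model with regression vector f if it maximizes \det M(\xi), where M(\xi) = \int f(x) f(x)^{\mathsf T}\, \xi(dx) is the information matrix. The robust designs studied here are judged against the D-optimal designs of the competing models.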
|
770 |
Rigorous joining of advanced reduced-dimensional beam models to 3D finite element models. Song, Huimin, 07 April 2010 (has links)
This dissertation developed a method that can accurately and efficiently capture the response of a structure by rigorously combining a reduced-dimensional beam finite element model with a model based on full two-dimensional (2D) or three-dimensional (3D) finite elements.
As a proof of concept, a joint 2D-beam approach is studied for planar, in-plane deformation of strip beams. This approach is developed to gain the understanding needed for the joint 3D-beam model, and a Matlab code is developed to implement it. For the joint 2D-beam approach, the static response of a basic 2D-beam model is studied. The whole beam structure is divided into two parts: the root part, where the boundary condition is applied, is constructed as a 2D model, and the free-end part is constructed as a beam model. To assemble the two models of different dimensionality, a transformation matrix is used to enforce deflection continuity or load continuity at the interface. Once the transformation matrix from deflection continuity or from load continuity is obtained, the 2D part and the beam part can be assembled together and solved as one linear system.
For the joint 3D-beam approach, the static and dynamic response of a basic 3D-beam model is studied, and a Fortran program is developed to implement it. For a uniform beam constrained at the root end, similar to the joint 2D-beam analysis, the whole beam structure is divided into two parts: the root part, where the boundary condition is applied, is constructed as a 3D model, and the free-end part is constructed as a beam model. To assemble the two models of different dimensionality, load continuity at the interface is used to combine the 3D model with the beam model. The load continuity at the interface is achieved by stress recovery using the variational-asymptotic method; the beam properties and warping functions required for stress recovery are obtained from a VABS constitutive analysis. Once the transformation matrix from load continuity is obtained, the 3D part and the beam part can be assembled together and solved as one linear system. For a non-uniform beam example, the whole structure is divided into several parts, where the root end and the non-uniform parts are constructed as 3D models and the uniform parts are constructed as beams. At all interfaces, load continuity is used to connect the 3D models with the beam models, again achieved by stress recovery with the variational-asymptotic method, and each interface contributes a transformation matrix. With all the transformation matrices in hand, the 3D parts and the beam parts are assembled together and solved as one linear system.
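As a schematic of the interface coupling described above (an illustrative numpy sketch with made-up matrices; the dissertation itself uses Matlab/Fortran codes and VABS-based stress recovery to build the transformation matrix), the detailed model's interface degrees of freedom can be slaved to the beam's interface unknowns through a transformation matrix T and the two stiffness matrices condensed into one linear system:

    import numpy as np

    def assemble_joint(K_detail, K_beam, iface, T):
        """Join a detailed (2D/3D) stiffness K_detail with a beam stiffness K_beam.

        The detailed interface DOFs listed in `iface` are expressed through
        the beam's first T.shape[1] DOFs via u_iface = T @ q_beam (continuity
        at the interface).  Joint unknowns: [detailed interior DOFs, beam DOFs].
        """
        n = K_detail.shape[0]
        interior = np.setdiff1d(np.arange(n), iface)
        nb = K_beam.shape[0]
        G = np.zeros((n, interior.size + nb))          # joint unknowns -> detailed DOFs
        G[interior, np.arange(interior.size)] = 1.0
        G[np.ix_(iface, interior.size + np.arange(T.shape[1]))] = T
        K = G.T @ K_detail @ G                         # condensed detailed contribution
        K[interior.size:, interior.size:] += K_beam    # add the beam block
        return K

    # Tiny made-up example: 6 detailed DOFs (last two on the interface), 3 beam DOFs.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 6)); K_detail = A @ A.T
    B = rng.standard_normal((3, 3)); K_beam = B @ B.T
    T = rng.standard_normal((2, 3))
    K = assemble_joint(K_detail, K_beam, iface=np.array([4, 5]), T=T)
    print(K.shape)   # (7, 7): 4 interior + 3 beam unknowns solved as one system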
|