871

Geometria de distâncias euclidianas e aplicações / Euclidean distance geometry and applications

Lima, Jorge Ferreira Alencar, 1986- 26 August 2018 (has links)
Advisors: Carlile Campos Lavor, Tibérius de Oliveira e Bonates / Doctoral thesis (Tese de doutorado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Previous issue date: 2015 / Abstract: Euclidean distance geometry (EDG) is the study of Euclidean geometry based on the concept of distance.
This is useful in several applications, where the input data consist of an incomplete set of distances and the output is a set of points in some Euclidean space realizing the given distances. The key problem in EDG is known as the Distance Geometry Problem (DGP), where an integer K>0 is given, as well as a simple undirected weighted graph G=(V,E,d), whose edges are weighted by a non-negative function d. The problem consists in determining whether or not there is a (realization) function that associates the vertices of V with coordinates of the K-dimensional Euclidean space, in such a way that those coordinates satisfy all the distances given by d. We considered both theoretical issues and applications of EDG. In theoretical terms, we proved the exact number of solutions of a subclass of DGPs that is very important in molecular conformation problems. Moreover, we described necessary and sufficient conditions for determining whether a complete graph associated with a DGP is realizable, and the minimum dimension of such a realization. In practical terms, we developed an algorithm that computes such a realization in minimum dimension, outperforming a classical algorithm from the literature. Finally, we showed a direct application of the DGP to multidimensional scaling / Doutorado / Matemática Aplicada / Doutor em Matemática Aplicada
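For the complete-graph case, the realizability test and the minimal embedding dimension mentioned in the abstract can be illustrated with the classical double-centering construction (Schoenberg / Young–Householder): the Gram matrix obtained from the squared distances is positive semidefinite iff the distances are realizable, and its rank is the minimal dimension. A minimal sketch, not the thesis's own algorithm; the function name and the unit-square example are mine:

```python
import numpy as np

def realize_edm(D, tol=1e-9):
    """Try to realize a complete distance matrix D in minimal dimension.

    Double-center the squared distances to get a Gram matrix G; the
    distances are realizable iff G is positive semidefinite, and
    rank(G) is the minimal embedding dimension.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    G = -0.5 * J @ (D ** 2) @ J              # Gram matrix of centered points
    w, V = np.linalg.eigh(G)                 # eigenvalues in ascending order
    scale = max(w.max(), 1.0)
    if w.min() < -tol * scale:
        return None                          # not realizable in any dimension
    w = np.clip(w, 0.0, None)
    k = int((w > tol * scale).sum())         # minimal dimension = rank(G)
    X = V[:, -k:] * np.sqrt(w[-k:])          # n points in R^k realizing D
    return X, k

# Distances of a unit square: realizable in the plane, so k should be 2.
s2 = np.sqrt(2.0)
D = np.array([[0, 1, s2, 1],
              [1, 0, 1, s2],
              [s2, 1, 0, 1],
              [1, s2, 1, 0]], dtype=float)
X, k = realize_edm(D)
```

The recovered points are unique only up to rigid motions, which is why the DGP counts congruence classes of solutions rather than individual coordinate assignments.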
872

Constructible circles on the unit sphere

Pauley, Blaga Slavcheva 01 January 2000 (has links)
In this paper we show how to give an intrinsic definition of a constructible circle on the sphere. The classical definition of a constructible circle in the plane, using straightedge and compass, is thereby translated in terms of the so-called Lénárt tools. The process by which we achieve our goal involves concepts from the algebra of Hermitian matrices, complex variables, and stereographic projection. However, the discussion is entirely elementary throughout and can hopefully serve as a guide for teachers in advanced geometry.
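The stereographic projection used in the paper can be sketched directly: identify the plane with the complex numbers and project from the north pole (0, 0, 1) of the unit sphere. A minimal illustration; the function names and the sample point are mine:

```python
import numpy as np

def to_sphere(z):
    """Inverse stereographic projection from the north pole (0, 0, 1):
    map a point z = x + iy of the plane onto the unit sphere."""
    x, y = z.real, z.imag
    d = 1.0 + x * x + y * y
    return np.array([2 * x / d, 2 * y / d, (x * x + y * y - 1) / d])

def to_plane(p):
    """Stereographic projection back to the plane; p must not be the pole."""
    x, y, w = p
    return complex(x / (1 - w), y / (1 - w))

# Round trip: a plane point lands exactly on the sphere and projects back.
z = 0.3 + 0.4j
p = to_sphere(z)
```

Because this map sends circles and lines in the plane to circles on the sphere, a planar construction can be transported to the sphere point by point, which is the mechanism behind the intrinsic definition above.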
873

The Euler Line in non-Euclidean geometry

Strzheletska, Elena 01 January 2003 (has links)
The main purpose of this thesis is to explore the conditions of the existence and properties of the Euler line of a triangle in the hyperbolic plane. Poincaré's conformal disk model and Hermitian matrices were used in the analysis.
874

Cosmology with HI intensity mapping: effect of higher order corrections

Randrianjanahary, Liantsoa Finaritra January 2020 (has links)
Masters of Science / One of the main challenges of cosmology is to unveil the nature of dark energy and dark matter. They can be constrained with baryonic acoustic oscillations (BAO) and redshift space distortions, amongst others. Both have characteristic signatures in the dark matter power spectrum. Biased tracers of dark matter, such as neutral hydrogen, are used to quantify the underlying dark matter density field. It is generally assumed that on large scales the bias of the tracer is linear. However, there is a coupling between small and large scales of the biased tracer, which gives rise to a significant non-linear contribution on linear scales in the power spectrum of the biased tracer. The Hydrogen Intensity and Real-time eXperiment (HIRAX) will map the brightness temperature of neutral hydrogen (HI) over BAO scales thanks to the intensity mapping technique. We forecasted cosmological parameters for HIRAX taking into account non-linear corrections to the HI power spectrum and compared them to the linear case, using methods based on Fisher matrices. We found values of the bias-to-error ratio of the cosmological parameters as high as 1 or 7, depending on the noise level. We also investigated the shift in the locations of the peaks of the baryonic acoustic oscillation signal. The shift reaches Δk = 10⁻² h/Mpc, with a reduction of the amplitude of the BAO features from 16.33% to 0.33%, depending on the scales.
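The Fisher-matrix forecasting mentioned above can be sketched in miniature. A hedged toy example, not the HIRAX pipeline: a two-parameter power spectrum P(k) = A·kⁿ with independent Gaussian errors per bin, where F_ij = Σ_k (∂P/∂θ_i)(∂P/∂θ_j)/σ_k², and the marginalized 1σ forecasts come from the inverse Fisher matrix. All numbers are illustrative assumptions:

```python
import numpy as np

def fisher_matrix(k, sigma, A=1.0, n=-1.0):
    """Fisher matrix for the toy model P(k) = A * k**n, measured in bins k
    with independent Gaussian errors sigma."""
    dP_dA = k ** n                    # derivative w.r.t. the amplitude A
    dP_dn = A * k ** n * np.log(k)    # derivative w.r.t. the spectral index n
    grads = np.stack([dP_dA, dP_dn])  # shape (2, number of bins)
    return (grads / sigma ** 2) @ grads.T  # 2x2 Fisher matrix

k = np.linspace(0.02, 0.2, 50)        # toy k-binning in h/Mpc
sigma = 0.05 * np.ones_like(k)        # toy per-bin error
F = fisher_matrix(k, sigma)
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma forecasts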
875

Théorie des matrices aléatoires pour l'apprentissage automatique en grande dimension et les réseaux de neurones / A random matrix framework for large dimensional machine learning and neural networks

Liao, Zhenyu 30 September 2019 (has links)
Large dimensional data and learning systems are ubiquitous in modern machine learning. As opposed to small dimensional learning, large dimensional machine learning algorithms are prone to various counterintuitive phenomena and behave strikingly differently from the low dimensional intuitions upon which they are built. Nonetheless, by assuming the data dimension and their number to be both large and comparable, random matrix theory (RMT) provides a systematic approach to assess the (statistical) behavior of these large learning systems when applied to large dimensional data.
The major objective of this thesis is to propose a full-fledged RMT-based framework for various machine learning systems: to assess their performance, to properly understand, and to carefully refine them, so as to better handle the large dimensional problems increasingly encountered in artificial intelligence applications. Precisely, we exploit the close connection between kernel matrices, random feature maps, and single-hidden-layer random neural networks. Under a simple Gaussian mixture model for the input data, we provide a precise characterization of the performance of these large dimensional learning systems as a function of the data statistics, the dimensionality, and, most importantly, the hyperparameters of the problem (e.g., the choice of the kernel or activation function). Addressing more involved learning algorithms, we extend the present RMT analysis framework to large learning systems that are implicitly defined by convex optimization problems (e.g., logistic regression), when optimal points are assumed reachable. To find these optimal points, optimization methods such as gradient descent are regularly used. Aiming at a better theoretical grasp of the inner mechanism of such optimization methods and of their impact on the resulting learning model, we further evaluate the gradient descent dynamics in the training of convex and non-convex objectives. These preliminary studies provide a first quantitative understanding of the aforementioned learning algorithms when large dimensional data are processed, which in turn helps propose better design criteria for large learning systems, resulting in remarkable performance gains when applied to real-world datasets.
Deeply rooted in the idea of mining large dimensional data with repeated patterns at a global rather than a local level, the proposed RMT analysis framework allows for a renewed understanding of, and the possibility to control and improve, a much larger range of machine learning approaches, thereby opening the door to a renewed machine learning framework for artificial intelligence.
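An elementary instance of the RMT viewpoint described above: for i.i.d. data with dimension p and sample size n both large and comparable, the eigenvalues of the sample covariance do not concentrate at the true value 1 but spread over the Marchenko–Pastur support [(1−√c)², (1+√c)²], with c = p/n. A minimal numerical sketch (not from the thesis; the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4000, 1000                    # sample size and dimension, both large
c = p / n                            # aspect ratio, here 0.25
X = rng.standard_normal((n, p))      # i.i.d. data with identity covariance
S = X.T @ X / n                      # sample covariance matrix
eig = np.linalg.eigvalsh(S)

# Marchenko-Pastur support edges for an identity population covariance.
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
```

Here the empirical eigenvalues fill the interval [0.25, 2.25] rather than clustering near 1, a simple example of the counterintuitive large dimensional behavior that the thesis's analysis framework quantifies.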
876

Caractérisation et étalonnage de la caméra de l'expérience ballon PILOT (Polarized Instrument for Long wavelength Observation of the Tenuous interstellar medium) / Characterization and calibration of the camera of the PILOT balloon-borne experiment (Polarized Instrument for Long wavelength Observation of the Tenuous interstellar medium)

Buttice, Vincent 30 September 2013 (has links)
The Polarized Instrument for Long wavelength Observation of the Tenuous interstellar medium (PILOT) is a balloon-borne experiment designed to measure the polarized emission from dust grains in the galaxy in the submillimeter range. The payload is composed of a telescope at the optical focus of which is placed a camera using 2048 bolometers cooled to 300 mK. The camera performs polarized optical measurements in two spectral bands (240 µm and 550 µm). The polarization measurement is based on a cryogenic rotating half-wave plate and a fixed mesh-grid polarizer placed at 45° in the beam, separating it into two orthogonal polarized components, each detected by a detector array. The Institut d'Astrophysique Spatiale (Orsay, France) is responsible for the design, integration, tests, and spectral calibration of the camera. Two optical benches have been designed for its imaging and polarization characterization and its spectral calibration. These setups make it possible to validate the alignment of the camera's cryogenic optics, to check the optical quality of the images, to characterize the time and intensity responses of the detectors, and to measure the overall spectral response. A numerical photometric model of the instrument was developed for the optical configurations used during spectral calibration tests, functional (imaging) tests on the ground, and flight at the telescope focus, giving an estimate of the optical power received by the detectors in each configuration. This power comes from the thermal emission of the instrument, the atmosphere, and either the sources observed in flight or the laboratory environment. The test campaign validated the characterization and calibration of the PILOT camera: it delivered the first submillimeter images and the first measured spectral responses.
Next, the camera will be aligned and integrated with the primary mirror of the telescope on the CNES gondola, for characterization and optical polarization calibration of the complete instrument. The first flight is planned for mid-2014.
877

Cell-Derived Extracellular Matrix Scaffolds Developed using Macromolecular Crowding

Shendi, Dalia M. 07 August 2019 (has links)
Cell-derived matrix (CDM) scaffolds provide a 3-dimensional (3D) matrix material that recapitulates a native, human extracellular matrix (ECM) microenvironment. CDMs are a heterogeneous source of ECM proteins with a composition dependent on the cell source and its phenotype. CDMs have several applications, such as the development of cell culture substrates to study stromal cell propagation and differentiation, cell or drug delivery vehicles, and regenerative biomaterials. Although CDMs are versatile and exhibit advantageous structure and activity, their use has been hindered by the prolonged culture time required for ECM deposition and maturation in vitro. Macromolecular crowding (MMC) has been shown to increase ECM deposition and organization by limiting the diffusion of ECM precursor proteins and allowing the accumulation of matrix at the cell layer. Ficoll, a commonly used crowder shown to increase ECM deposition in vitro, was used in this study as a positive control to assess matrix deposition. Hyaluronic acid (HA), a natural crowding macromolecule expressed at high levels during fetal development, has been shown to play a role in ECM production, organization, and assembly in vivo, but it has not been investigated as a crowding molecule for matrix deposition or for the development of CDMs in vitro. This dissertation focused on two aims supporting the development of a functional, human dermal fibroblast-derived ECM material for the delivery of an antimicrobial peptide, cCBD-LL37, and for potentially promoting a pro-angiogenic environment. The goal of this thesis was to evaluate the effects of high molecular weight (HMW) HA as a macromolecular crowding agent on the in vitro deposition of ECM proteins important for tissue regeneration and angiogenesis.
A pilot proteomics study supported the use of HA as a crowder: it preliminarily showed increases in ECM proteins and increased retention of ECM precursor proteins at the cell layer. In the presence of HA, human dermal fibroblasts demonstrated an increase in ECM deposition comparable to the effects of Ficoll 70/400 at day 3, as measured by Raman microspectroscopy. It was hypothesized that HA promotes matrix deposition through changes in ECM gene expression. However, qRT-PCR results indicated that HA and Ficoll 70/400 did not have a direct effect on collagen gene expression, although differences in matrix-crosslinking and proteinase genes were observed. Decellularized CDMs were then used to assess CDM stiffness and endothelial sprouting, which indicated differences in the structural organization of collagen and preliminarily suggested differences in endothelial cell migration depending on the crowding agent used in culture. Finally, the collagen retained in the decellularized CDM matrix prepared under MMC supported the binding of cCBD-LL37 with retention of antimicrobial activity when tested against E. coli. Overall, the differences in matrix deposition profiles in HA- versus Ficoll-crowded cultures may be attributed to crowder-mediated differences in matrix crosslinking, turnover, and organization, as indicated by differences in collagen deposition, matrix metalloproteinase expression, and matrix stiffness. MMC is a valuable tool for increasing matrix deposition and can be combined with other techniques, such as low-oxygen and bioreactor cultures, to promote the development of a biomanufactured CDM-ECM biomaterial. Successful development of scalable CDM materials that stimulate angiogenesis and support antimicrobial peptide delivery would fill an important unmet need in the treatment of non-healing, chronic, infected wounds.
878

Hierarchical Matrix Operations on GPUs

Boukaram, Wagih Halim 26 April 2020 (has links)
Large dense matrices are ubiquitous in scientific computing, arising from the discretization of integral operators associated with elliptic PDEs, Schur complement methods, covariances in spatial statistics, kernel-based machine learning, and numerical optimization problems. Hierarchical matrices are an efficient way of storing the very large dense matrices that appear in these and related settings. They exploit the fact that the underlying matrices, while formally dense, are data sparse: they have a structure consisting of blocks, many of which can be well approximated by low-rank factorizations. A hierarchical organization of the blocks avoids superlinear growth in the memory required to store n × n dense matrices, needing only O(n) units of storage with a constant depending on a representative rank k of the low-rank blocks. The asymptotically optimal storage requirement of the resulting hierarchical matrices is a critical advantage, particularly in extreme computing environments characterized by low memory per processing core. The challenge then becomes to develop the parallel linear algebra operations that can be performed directly on this compressed representation. In this dissertation, I implement a set of hierarchical basic linear algebra subroutines (HBLAS) optimized for GPUs, including hierarchical matrix-vector multiplication, orthogonalization, compression, low-rank updates, and matrix-matrix multiplication. I develop a library of open-source batched kernel operations previously missing on GPUs for the high-performance implementation of the H2 operations, while relying wherever possible on existing open-source and vendor kernels to ride future improvements in the technology. Fast marshaling routines extract the batch operation data from an efficient representation of the trees that compose the hierarchical matrices.
The methods developed for GPUs extend to CPUs using the same code base, with simple abstractions around the batched routine execution. To demonstrate the scalability of the hierarchical operations, I implement a distributed-memory multi-GPU hierarchical matrix-vector product that focuses on reducing communication volume and on hiding communication overhead and areas of low GPU utilization using low-priority streams. Two demonstrations, involving Hessians of inverse problems governed by PDEs and space-fractional diffusion equations, show the effectiveness of the hierarchical operations in realistic applications.
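The low-rank block approximation underlying the hierarchical format can be sketched with a truncated SVD of a single admissible (well-separated) block of a kernel matrix. A toy illustration, not the dissertation's GPU kernels; the kernel, point sets, and tolerance are assumptions:

```python
import numpy as np

def compress_block(B, tol=1e-6):
    """Truncated-SVD compression of a matrix block: B ~ U @ V.T, with the
    rank k chosen so that the dropped singular values are below tol * s[0]."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = max(1, int((s > tol * s[0]).sum()))
    return U[:, :k] * s[:k], Vt[:k].T   # factors of shapes (m, k) and (n, k)

# An off-diagonal block of the smooth kernel 1/|x - y| between two
# well-separated point clusters is numerically low rank.
x = np.linspace(0.0, 1.0, 200)          # source points
y = np.linspace(2.0, 3.0, 200)          # well-separated target points
B = 1.0 / np.abs(x[:, None] - y[None, :])
Uk, Vk = compress_block(B)
```

Storing the two factors costs 2·200·k numbers instead of 200² for the dense block; applying this recursively over a tree of such blocks is what brings the overall storage down to O(n).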
879

Posouzení ekotoxicity kontaminovaných matric vnášených do ekosystému / Ecotoxicological assessment of contaminated matrices discharged into the ecosystem

Urbanová, Veronika January 2015 (has links)
This diploma thesis evaluates, in terms of ecotoxicity, the influence of contaminated matrices introduced into the ecosystem. It focuses mainly on matrices generated by anthropogenic activities, especially waste of various origins: industrial, energy-related, biodegradable, and more. For experimental purposes, sewage sludge was selected as a bulk waste with ever-increasing production. Sewage sludge was tested in its most common use, application to agricultural land, which is limited by legislation through concentration limits on hazardous elements. For this reason, the potential ecotoxicity of the sludge was evaluated. The ecotoxicological evaluation was performed using contact bioassays, with Eisenia foetida, Folsomia candida, Heterocypris incongruens, and the plant Lactuca sativa as test organisms. Sludge from the wastewater treatment plants Brno-Modřice, Valtice, Mikulov, and Lednice was tested. The sewage sludge samples showed no ecotoxicity when the application rates established by the regulation were respected. On the contrary, it can be concluded that soils enriched with sewage sludge show a positive effect on soil biota.
880

Matemática básica para administradores - Segunda edición [Capítulo 1]

Curo, Agustín, Martínez, Mihály January 1900 (has links)
This book is a theoretical and practical guide that enables students of business administration and related fields to understand the concepts underlying each topic and to apply them to their administrative analyses. To that end, besides a brief theoretical explanation, each topic presents solved examples followed by exercises for the student to solve in order to consolidate learning. Each unit closes with a series of applied exercises. The work is the product of experience gained over several years coordinating and teaching courses at the Universidad Peruana de Ciencias Aplicadas, such as Nivelación de Matemáticas, Lógica Matemática and, principally, Matemática Básica para Administradores. It is also complemented by contributions and proposed problems from most of the instructors of these courses. It is thus a useful and practical publication for administrators, students, and teachers.
