861

Analysis of the human corneal shape with machine learning

Bouazizi, Hala 01 1900 (has links)
This thesis aims to investigate the best conditions in which the anterior corneal surface of normal corneas can be preprocessed, classified and predicted using geometric modeling (GM) and machine learning (ML) techniques. The focus is on the anterior corneal surface, which is chiefly responsible for the refractive power of the cornea. Dealing with preprocessing, the first study (Chapter 2) examines the conditions in which GM can best be applied to reduce the dimensionality of a dataset of corneal surfaces to be used in ML projects. Four types of geometric models of corneal shape were tested regarding their accuracy and processing time: two polynomial (P) models – Zernike polynomial (ZP) and spherical harmonic polynomial (SHP) models – and two corresponding rational function (R) models – Zernike rational function (ZR) and spherical harmonic rational function (SHR) models. SHP and ZR are both known to be more accurate than ZP as corneal shape models for the same number of coefficients, but which type of model is the more accurate between SHP and ZR? And is an SHR model, which is both an SH model and an R model, even more accurate? Also, does modeling accuracy come at the cost of processing time, an important issue when testing large datasets as required in ML projects? Focusing on low values of J (the number of model coefficients) to address these issues under the dimensionality constraints that apply in ML tasks, it was found, based on a number of evaluation tools, that SH models were both more accurate than their Z counterparts, that R models were both more accurate than their P counterparts, and that the SH advantage was more important than the R advantage. Processing-time curves as a function of J showed that P models were processed in quasilinear time, R models in polynomial time, and that Z models were faster than SH models. Therefore, while SHR was the most accurate geometric model, it was also the slowest (a problem that can partly be remedied by applying a preoptimization procedure). ZP was the fastest model, and with normal corneas it remains an interesting option for testing and development, especially for clustering tasks due to its transparent interpretability.
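To make the preprocessing step concrete, here is a minimal Python sketch of geometric modeling by least squares: a synthetic elevation grid is collapsed to a handful of low-order Zernike coefficients. The basis terms, the toy surface, and all values are illustrative stand-ins, not the thesis's models or data.

```python
import numpy as np

def zernike_basis(rho, theta):
    """First six Zernike terms (piston, tilts, defocus, astigmatism),
    evaluated at polar coordinates on the unit disk."""
    return np.column_stack([
        np.ones_like(rho),           # Z(0,0)  piston
        rho * np.cos(theta),         # Z(1,1)  tilt
        rho * np.sin(theta),         # Z(1,-1) tilt
        2 * rho**2 - 1,              # Z(2,0)  defocus
        rho**2 * np.cos(2 * theta),  # Z(2,2)  astigmatism
        rho**2 * np.sin(2 * theta),  # Z(2,-2) astigmatism
    ])

# Synthetic 101x101 "elevation matrix", restricted to the unit disk.
n = 101
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
rho, theta = np.hypot(x, y), np.arctan2(y, x)
inside = rho <= 1.0
elevation = 0.8 * (2 * rho**2 - 1) + 0.05 * rho**2 * np.cos(2 * theta)

# Least-squares fit: the full grid collapses to J = 6 coefficients.
A = zernike_basis(rho[inside], theta[inside])
coeffs, *_ = np.linalg.lstsq(A, elevation[inside], rcond=None)
print(coeffs.round(3))  # ~ [0, 0, 0, 0.8, 0.05, 0]
```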
The best compromise between accuracy and speed for ML preprocessing is SHP. The classification of corneal shapes with clinical parameters has a long tradition, but the visualization of their effects on the corneal shape with group maps (average elevation maps, standard deviation maps, average difference maps, etc.) is relatively recent. In the second study (Chapter 3), we constructed an atlas of average elevation maps for different clinical variables (including geometric, refraction and demographic variables) that can be instrumental in the evaluation of ML task inputs (datasets) and outputs (predictions, clusters, etc.). A large dataset of normal adult anterior corneal surface topographies recorded in the form of 101×101 elevation matrices was first preprocessed by geometric modeling to reduce the dimensionality of the dataset to a small number of Zernike coefficients found to be optimal for ML tasks. The modeled corneal surfaces of the dataset were then grouped according to the clinical variables available in the dataset, transformed into categorical variables. An average elevation map was constructed for each group of corneal surfaces of each clinical variable, in their natural (non-normalized) state and in their normalized state, by averaging their modeling coefficients to get an average surface and representing this average surface in reference to the best-fit sphere in a topographic elevation map. To validate the atlas thus constructed in both its natural and normalized modalities, ANOVA tests were conducted for each clinical variable of the dataset to verify its statistical consistency with the literature before verifying whether the corneal shape transformations displayed in the maps were themselves visually consistent. This was the case. The possible uses of such an atlas are discussed. The third study (Chapter 4) is concerned with the use of a dataset of geometrically modeled corneal surfaces in an ML clustering task. The unsupervised classification of corneal surfaces is recent in ophthalmology. Most of the few existing studies on corneal clustering resort to feature extraction (as opposed to geometric modeling) to achieve the dimensionality reduction of the dataset. The goal is usually to automate the process of corneal diagnosis, for instance by distinguishing irregular corneal surfaces (keratoconus, Fuchs' dystrophy, etc.) from normal surfaces and, in some cases, by classifying irregular surfaces into subtypes. Complementary to these corneal clustering studies, the proposed study resorts mainly to geometric modeling to achieve dimensionality reduction and focuses on normal adult corneas in an attempt to identify their natural groupings, possibly in combination with feature extraction methods. Geometric modeling was based on Zernike polynomials, known for their interpretative transparency and sufficiently accurate for normal corneas. Different types of clustering methods were evaluated in pretests to identify the most effective at producing neatly delimited clusters that are clearly interpretable. Their evaluation was based on clustering scores (to identify the best number of clusters), polar charts and scatter plots (to visualize the modeling coefficients involved in each cluster), average elevation maps and average profile cuts (to visualize the average corneal surface of each cluster), and statistical cluster comparisons on different clinical parameters (to validate the findings in reference to the clinical literature).
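As a sketch of how such a clustering pretest might look with scikit-learn (the library used in the prediction study), the snippet below runs k-means for several candidate cluster counts on a placeholder coefficient matrix and scores each partition with the silhouette index; the data and the use of the silhouette criterion are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# `X` stands in for the dataset of modeling coefficients (one short Zernike
# vector per cornea); here it is random, in the thesis it comes from the
# geometric-modeling stage.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))  # hypothetical 500 corneas x 12 coefficients

# Score k = 2..8 and keep the partition with the best silhouette, one common
# way to pick the number of clusters (the study reports four as optimal).
best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 3))
```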
K-means, applied to geometrically modeled surfaces without feature extraction, produced the best clusters, both for natural and normalized surfaces. While the clusters produced with natural corneal surfaces were based on the corneal curvature, those produced with normalized surfaces were based on the corneal axis. In each case, the best number of clusters was four. The importance of curvature and axis as grouping criteria in corneal data distribution is discussed. The fourth study presented in this thesis (Chapter 5) explores the ML paradigm to verify whether accurate predictions of normal corneal shapes can be made from clinical data, and how. The database of normal adult corneal surfaces was first preprocessed by geometric modeling to reduce its dimensionality to short vectors of 12 to 20 Zernike coefficients, found to be in the range of appropriate numbers to achieve optimal predictions. The nonlinear regression methods examined from the scikit-learn library were gradient boosting, Gaussian process, kernel ridge, random forest, k-nearest neighbors, bagging, and multilayer perceptron. The predictors were based on the clinical variables available in the database, including geometric variables (best-fit sphere radius, white-to-white diameter, anterior chamber depth, corneal side), refraction variables (sphere, cylinder, axis) and demographic variables (age, gender). Each possible combination of regression method, set of clinical variables (used as predictors) and number of Zernike coefficients (used as targets) defined a regression model in a prediction test. All the regression models were evaluated based on their mean RMSE score (measuring the distance between the predicted corneal surfaces and the true raw topographic surfaces). The best model identified was further qualitatively assessed based on an atlas of predicted and true average elevation maps, by which the predicted surfaces could be visually compared to the true surfaces on each of the clinical variables used as predictors. It was found that the best regression model was gradient boosting using all available clinical variables as predictors and 16 Zernike coefficients as targets. The most explanatory predictor was the best-fit sphere radius, followed by the side and refractive variables. The average elevation maps of the true anterior corneal surfaces and the predicted surfaces based on this model were remarkably similar for each clinical variable.
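A hedged sketch of one such regression model follows: gradient boosting wrapped for multi-output targets, predicting a 16-coefficient Zernike vector from clinical predictors. The placeholder arrays and the coefficient-space RMSE are illustrative; the thesis scores RMSE against raw topographies after rebuilding surfaces from the predicted coefficients.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

# Random placeholder data: columns mimic the kinds of predictors named above
# (best-fit sphere radius, diameter, chamber depth, side, sphere, cylinder,
# axis, age), not the actual database.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))    # hypothetical clinical variables
Y = rng.normal(size=(1000, 16))   # hypothetical 16-coefficient targets

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# GradientBoostingRegressor is single-output, so wrap it for 16 targets.
model = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
model.fit(X_tr, Y_tr)

# RMSE in coefficient space on held-out data.
rmse = np.sqrt(np.mean((model.predict(X_te) - Y_te) ** 2))
print(round(rmse, 3))
```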
862

On the distribution of polynomials having a given number of irreducible factors over finite fields

Datta, Arghya 08 1900 (has links)
Let q ⩾ 2 be a fixed prime power. The main objective of this thesis is to study the asymptotic behaviour of the arithmetic function Π_q(n,k) counting the number of monic polynomials of degree n with exactly k irreducible factors (with multiplicity) over the finite field F_q. Warlimont and Car showed that Π_q(n,k) is approximately Poisson distributed when 1 ⩽ k ⩽ A log n for some constant A > 0. Later, Hwang studied the function Π_q(n,k) for the full range 1 ⩽ k ⩽ n. We first prove an asymptotic formula for Π_q(n,k) using a classical analytic technique developed by Sathe and Selberg. We then reproduce a simplified version of Hwang's result using the Sathe-Selberg formula in the function field setting. We also compare our results with the analogous existing ones in the integer case, where one studies all the natural numbers up to x with exactly k prime factors. In particular, we show that the number of monic polynomials grows at a surprisingly higher rate when k is a little larger than log n than what one would expect from the integer case. To present this work, we first cover basic analytic number theory in the context of polynomials. We then introduce the key arithmetic functions that play a major role in the thesis and briefly discuss well-known results concerning their distribution from a probabilistic point of view. Finally, to understand the key results, we give a fairly detailed discussion of the function field analogue of the Sathe-Selberg formula, a tool recently developed by Porritt, and subsequently use this tool to prove the claimed results.
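For small parameters, Π_q(n,k) can be cross-checked by exact enumeration. The sketch below (an illustration, not the analytic method of the thesis) counts factorization types with a multiset dynamic program, using Gauss's formula for the number of monic irreducibles of each degree, and verifies that the counts sum to q^n.

```python
from math import comb

def mobius(d):
    """Möbius function by trial division."""
    result, p = 1, 2
    while p * p <= d:
        if d % p == 0:
            d //= p
            if d % p == 0:
                return 0
            result = -result
        p += 1
    return -result if d > 1 else result

def irreducibles(q, n):
    """Gauss's count of monic irreducibles of degree n over F_q:
    (1/n) * sum over d | n of mu(d) * q^(n/d)."""
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def Pi(q, n, k):
    """Pi_q(n, k): monic degree-n polynomials over F_q with exactly k
    irreducible factors counted with multiplicity."""
    dp = {(0, 0): 1}  # (total degree, factor count) -> number of multisets
    for d in range(1, n + 1):
        Nd = irreducibles(q, d)
        new = {}
        for (deg, cnt), ways in dp.items():
            m = 0  # multiplicity of degree-d irreducibles chosen
            while deg + d * m <= n:
                key = (deg + d * m, cnt + m)
                # C(Nd + m - 1, m) multisets of size m from Nd irreducibles
                new[key] = new.get(key, 0) + ways * comb(Nd + m - 1, m)
                m += 1
        dp = new
    return dp.get((n, k), 0)

q, n = 2, 6
counts = [Pi(q, n, k) for k in range(1, n + 1)]
assert sum(counts) == q ** n  # every monic polynomial factors uniquely
print(counts)
```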
863

CUDA-based Scientific Computing / Tools and Selected Applications

Kramer, Stephan Christoph 22 November 2012 (has links)
No description available.
864

High Dimensional Fast Fourier Transform Based on Rank-1 Lattice Sampling / Hochdimensionale schnelle Fourier-Transformation basierend auf Rang-1 Gittern als Ortsdiskretisierungen

Kämmerer, Lutz 24 February 2015 (has links) (PDF)
We consider multivariate trigonometric polynomials with frequencies supported on a fixed but arbitrary frequency index set I, which is a finite set of integer vectors of length d. Naturally, one is interested in spatial discretizations in the d-dimensional torus such that (i) the sampling values of the trigonometric polynomial at the nodes of this spatial discretization uniquely determine the trigonometric polynomial, (ii) the corresponding discrete Fourier transform is fast realizable, and (iii) the corresponding fast Fourier transform is stable. An algorithm that computes the discrete Fourier transform with a computational complexity bounded from above by terms that are linear in the maximum of the number of input and output data, up to some logarithmic factors, is called a fast Fourier transform. We call the fast Fourier transform stable if the Fourier matrix of the discrete Fourier transform has a condition number near one and the fast algorithm does not corrupt this theoretical stability. We suggest using rank-1 lattices and a generalization thereof as spatial discretizations in order to sample multivariate trigonometric polynomials, and we develop construction methods in order to determine reconstructing sampling sets, i.e., sets of sampling nodes that allow for the unique, fast, and stable reconstruction of trigonometric polynomials. The methods for determining reconstructing rank-1 lattices are component-by-component constructions, similar to the seminal methods developed in the field of numerical integration. In this thesis we identify a component-by-component construction of reconstructing rank-1 lattices that allows for an estimate of the number of sampling nodes $M$, $|I| \le M \le \max\left(\frac{2}{3}|I|^2, \max\{3\|\mathbf{k}\|_\infty \colon \mathbf{k} \in I\}\right)$, that is sufficient in order to uniquely reconstruct each multivariate trigonometric polynomial with frequencies supported on the frequency index set I. We observe that the bounds on the number M depend only on the number of frequency indices contained in I and the expansion of I, but not on the spatial dimension d. Hence, rank-1 lattices are suitable spatial discretizations for arbitrarily high-dimensional problems. Furthermore, we consider a generalization of the concept of rank-1 lattices, which we call generated sets. We use a quite different approach in order to determine suitable reconstructing generated sets: the corresponding construction method is based on a continuous optimization method. Besides the theoretical considerations, we focus on the practicability of the presented algorithms and illustrate the theoretical findings by means of several examples. In addition, we investigate the approximation properties of the considered sampling schemes. We apply the results to the most important structures of frequency indices in higher dimensions, so-called hyperbolic crosses, and demonstrate the approximation properties by means of several examples that include the solution of Poisson's equation as one representative of partial differential equations.
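The key mechanism behind the fast transform can be illustrated in a few lines: on a rank-1 lattice, every multivariate frequency k acts only through the residue k·z mod M, so evaluation collapses to a single length-M 1D FFT. In the sketch below the frequency set, generating vector and lattice size are arbitrary examples, not a reconstructing lattice certified by the bound above.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 3, 127
z = np.array([1, 33, 79])                 # generating vector (illustrative)
I = rng.integers(-8, 9, size=(20, d))     # frequency index set I
fhat = rng.normal(size=20) + 1j * rng.normal(size=20)  # Fourier coefficients

# Along the lattice, k.x_j = j*(k.z mod M)/M (mod 1): aggregate the
# coefficients by the residue t = k.z mod M, then one length-M 1D FFT
# evaluates the d-variate polynomial at all M nodes at once.
g = np.zeros(M, dtype=complex)
np.add.at(g, (I @ z) % M, fhat)
values_fast = np.fft.ifft(g) * M          # sum_t g_t * exp(+2*pi*i*j*t/M)

# Direct evaluation at the rank-1 lattice nodes x_j = (j*z mod M)/M.
j = np.arange(M)
x = (j[:, None] * z[None, :] % M) / M
values_direct = np.exp(2j * np.pi * (x @ I.T)) @ fhat
print(np.allclose(values_fast, values_direct))  # True
```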
865

High Dimensional Fast Fourier Transform Based on Rank-1 Lattice Sampling

Kämmerer, Lutz 21 November 2014 (has links)
We consider multivariate trigonometric polynomials with frequencies supported on a fixed but arbitrary frequency index set I, which is a finite set of integer vectors of length d. Naturally, one is interested in spatial discretizations in the d-dimensional torus such that (i) the sampling values of the trigonometric polynomial at the nodes of this spatial discretization uniquely determine the trigonometric polynomial, (ii) the corresponding discrete Fourier transform is fast realizable, and (iii) the corresponding fast Fourier transform is stable. An algorithm that computes the discrete Fourier transform with a computational complexity bounded from above by terms that are linear in the maximum of the number of input and output data, up to some logarithmic factors, is called a fast Fourier transform. We call the fast Fourier transform stable if the Fourier matrix of the discrete Fourier transform has a condition number near one and the fast algorithm does not corrupt this theoretical stability. We suggest using rank-1 lattices and a generalization thereof as spatial discretizations in order to sample multivariate trigonometric polynomials, and we develop construction methods in order to determine reconstructing sampling sets, i.e., sets of sampling nodes that allow for the unique, fast, and stable reconstruction of trigonometric polynomials. The methods for determining reconstructing rank-1 lattices are component-by-component constructions, similar to the seminal methods developed in the field of numerical integration. In this thesis we identify a component-by-component construction of reconstructing rank-1 lattices that allows for an estimate of the number of sampling nodes $M$, $|I| \le M \le \max\left(\frac{2}{3}|I|^2, \max\{3\|\mathbf{k}\|_\infty \colon \mathbf{k} \in I\}\right)$, that is sufficient in order to uniquely reconstruct each multivariate trigonometric polynomial with frequencies supported on the frequency index set I. We observe that the bounds on the number M depend only on the number of frequency indices contained in I and the expansion of I, but not on the spatial dimension d. Hence, rank-1 lattices are suitable spatial discretizations for arbitrarily high-dimensional problems. Furthermore, we consider a generalization of the concept of rank-1 lattices, which we call generated sets. We use a quite different approach in order to determine suitable reconstructing generated sets: the corresponding construction method is based on a continuous optimization method. Besides the theoretical considerations, we focus on the practicability of the presented algorithms and illustrate the theoretical findings by means of several examples. In addition, we investigate the approximation properties of the considered sampling schemes. We apply the results to the most important structures of frequency indices in higher dimensions, so-called hyperbolic crosses, and demonstrate the approximation properties by means of several examples that include the solution of Poisson's equation as one representative of partial differential equations.
866

Study of Response Surface Models for the characterization of the performance in Refrigeration Equipments and Heat Pumps

Marchante Avellaneda, Javier 24 February 2024 (has links)
[EN] In a context of global warming concerns and global energy policies, in which heating and cooling systems in buildings account for a significant share of global energy consumption, heat pump systems are widely considered a very interesting option for enabling high efficiency and for being renewable energy sources. In this sense, an accurate characterization of these units is of vital importance to improve their design and to implement efficient control strategies when the unit is integrated in more complex systems. Against this background, this PhD thesis focuses on heat pump modelling in order to create map-based models able to accurately characterize the global performance of these units over the entire working range. In the first part of this work, many experimental tests were performed on a new Dual Source Heat Pump prototype tested in the framework of the European project GEOTeCH. Due to the dual typology, the experimental results include performance data for the two main heat pump technologies: Air Source Heat Pumps and Ground Source Heat Pumps. Using all this experimental information, this first part focuses on obtaining empirical polynomial models capable of accurately predicting energy consumption and heating and cooling capacities as a function of external variables. Such variables are easy to measure and are usually recorded in real installations. Therefore, these models characterize the heat pump as a single component, simplifying its implementation in global models of the more complex systems where these units are installed. Furthermore, having selected the empirical modelling approach, this part also addresses some critical aspects, such as how to obtain the best polynomial expression, or how to design the required experimental test matrices, i.e., how many tests should be conducted and where in the operating range. Finally, the second part of this PhD thesis is dedicated to modelling one of the main components of these units, the compressor. In this case, the development of an extensive database including numerous calorimetric tests on the two main compressor technologies, reciprocating and scroll compressors, has allowed a detailed analysis of the response surfaces of their performance parameters, i.e., energy consumption and mass flow rate as a function of the evaporation and condensation temperatures. Using this information, and following an approach similar to that used in the first part, this second part reviews the models included in the current compressor characterization standard, AHRI 540 (2020), in order to check whether they are appropriate or whether other types of polynomial expressions should be used. Critical issues such as the number of points needed to characterize each compressor technology, where to place them in the experimental domain, how to prevent possible overfitting in the model adjustment to minimize extrapolation or interpolation problems, or how to extrapolate results for predicting other refrigerants or suction conditions, are discussed in depth. / I would like to acknowledge the financial support that has made this PhD thesis possible. The doctoral fellowship FPU15/03476 was funded by “Ministerio de Educación, Cultura y Deporte” under the “Formación de Profesorado Universitario” program, and the GEOTeCH project (No 656889) was funded by the European Union under the “Horizon 2020 Framework Programme for European Research and Technological Development”. / Marchante Avellaneda, J. (2023). 
Study of Response Surface Models for the characterization of the performance in Refrigeration Equipments and Heat Pumps [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/192653
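As an illustration of the response-surface form under discussion, the sketch below fits the 10-coefficient cubic polynomial of AHRI 540 in evaporating and condensing temperatures by ordinary least squares; the test matrix and power values are invented placeholders, not data from the thesis.

```python
import numpy as np

def ahri_terms(Te, Tc):
    """The ten cubic terms of the AHRI 540 polynomial in evaporating (Te)
    and condensing (Tc) temperatures."""
    return np.column_stack([
        np.ones_like(Te), Te, Tc, Te**2, Te * Tc, Tc**2,
        Te**3, Tc * Te**2, Te * Tc**2, Tc**3,
    ])

# Hypothetical 4x4 calorimeter test matrix (degC) and a toy power response
# (W); a real fit would use the measured compressor data discussed above.
te_vals = np.array([-25., -15., -5., 5.])
tc_vals = np.array([35., 45., 55., 65.])
Te, Tc = (m.ravel() for m in np.meshgrid(te_vals, tc_vals))
power = 1500 + 12 * Tc - 8 * Te + 0.3 * Te * Tc

# Ordinary least squares for the ten coefficients C1..C10.
C, *_ = np.linalg.lstsq(ahri_terms(Te, Tc), power, rcond=None)

# Evaluate the fitted map at an unseen operating point (expected ~1940 W).
print((ahri_terms(np.array([-10.]), np.array([40.])) @ C)[0])
```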
867

Multiple Constant Multiplication Optimization Using Common Subexpression Elimination and Redundant Numbers

Al-Hasani, Firas Ali Jawad January 2014 (has links)
The multiple constant multiplication (MCM) operation is a fundamental operation in digital signal processing (DSP) and digital image processing (DIP). Examples of the MCM appear in finite impulse response (FIR) and infinite impulse response (IIR) filters, matrix multiplication, and transforms. The aim of this work is to minimize the complexity of the MCM operation using the common subexpression elimination (CSE) technique and redundant number representations. The CSE technique searches for and eliminates common digit patterns (subexpressions) among MCM coefficients. More common subexpressions can be found by representing the MCM coefficients using redundant number representations. A CSE algorithm is proposed that works on a type of redundant numbers called the zero-dominant set (ZDS). The ZDS is an extension of the representations with the minimum number of non-zero digits, called minimum Hamming weight (MHW) representations. Using the ZDS improves CSE algorithms' performance compared with using the MHW representations. The disadvantage of using the ZDS is that it increases the possibility of overlapping patterns (digit collisions), in which one or more digits are shared between a number of patterns; eliminating one pattern then destroys the others because the shared digits are removed. A pattern preservation algorithm (PPA) is developed to resolve the overlapping patterns in the representations. Tree and graph encoders are proposed to generate a larger space of number representations. The algorithms generate redundant representations of a value for a given digit set, radix, and wordlength. The tree encoder is modified to search for common subexpressions simultaneously with the generation of the representation tree. A complexity measure is proposed to compare the subexpressions at each node. The algorithm stops generating the rest of the representation tree when it finds subexpressions with maximum sharing, which reduces the search space while minimizing the hardware complexity. A combinatoric model of the MCM problem is also proposed, obtained by enumerating all the possible solutions of the MCM, which together form a graph called the demand graph. Arc routing on this graph gives the solutions of the MCM problem. A similar arc routing is found in capacitated arc routing problems such as the winter salting problem. An ant colony optimization (ACO) meta-heuristic is proposed to traverse the demand graph. The ACO is simulated on a PC in the Python programming language to verify the correctness of the model and the behaviour of the ACO. A parallel simulation of the ACO is carried out on a multi-core supercomputer using the C++ Boost Graph Library.
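To illustrate why redundant representations matter here, the following sketch converts an integer constant to its canonical signed-digit (CSD) form, a standard minimum-Hamming-weight representation; the ZDS of the thesis generalizes beyond this, so the snippet is background, not the proposed algorithm.

```python
def to_csd(value, bits=16):
    """Canonical signed-digit form of a positive integer: digits in
    {-1, 0, +1}, least significant first, with no two adjacent nonzeros.
    CSD attains the minimum Hamming weight (MHW), which is why signed-digit
    representations expose more shareable subexpressions than plain binary."""
    digits = []
    while value != 0:
        if value % 2 == 0:
            digits.append(0)
        else:
            # Emit +1 when value % 4 == 1 and -1 when value % 4 == 3,
            # which breaks up runs of ones.
            d = 2 - (value % 4)
            digits.append(d)
            value -= d
        value //= 2
    digits += [0] * (bits - len(digits))  # pad to the target wordlength
    return digits

# 23 = 0b10111 has Hamming weight 4 in binary, but only 3 nonzero CSD
# digits: 23 = 32 - 8 - 1 -> digits (-1, 0, 0, -1, 0, +1).
print(to_csd(23, 6), sum(d != 0 for d in to_csd(23, 6)))
```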
868

Modelling of Heat Pumps Working with Variable-Speed Compressors

Ossorio Santiago, Rubén Josep 06 August 2024 (has links)
Tesis por compendio (thesis by compendium) / [EN] Heat pump technology has become strategic in Europe; it is spreading rapidly and is planned to replace gas boilers in the near future. However, heat pumps still face challenges, such as finding new viable and highly efficient refrigerants and further increasing system performance. For this latter issue, variable-speed heat pumps arise, which claim to decrease annual consumption and increase comfort by adapting the delivered capacity to the changing loads. This technology is being implemented but lacks a standardized methodology for designing and selecting its components. This thesis aims to establish comprehensive design guidelines for selecting and designing variable-speed heat pump components, and to give insights that can translate into valuable information and tools to assist engineers in heat pump simulation, design, selection and fault detection. The content of the study can be divided into three thematic areas: In the first part, variable-speed compressors are studied. The compressor is the first heat pump component to be selected; it modulates the capacity and is the primary energy consumer. However, there are no well-established methodologies to model its behavior. In this part, extensive testing of variable-speed compressors and their inverters was carried out to understand their behavior and to provide compact correlations to model their performance. The second part proposes a methodology to size heat exchangers for variable-speed heat pumps. Typically, they are designed for a fixed capacity and constant working temperatures; however, in variable-speed heat pumps the capacity and working temperatures fluctuate significantly over time. In this part, the evolution of heat exchanger performance with capacity (compressor speed) is studied, and a methodological selection/sizing technique is proposed that considers the evolution of external climatic conditions and loads over the year. Lastly, the oil circulation in variable-speed heat pumps is assessed. Managing lubrication in variable-speed compressors is a typical issue, as a design with sufficient lubrication at low compressor speeds will end up pumping excess oil at high speeds. In this final part, the evolution of oil circulation rates with speed is studied, and its effect on heat pump performance is theoretically analyzed. / I am indebted to the Spanish and European governments for their financial support with the grant PRE2018-083535, which made this research possible. Their commitment to academic excellence and research advancement has been crucial in successfully completing this thesis. / Ossorio Santiago, RJ. (2024). Modelling of Heat Pumps Working with Variable-Speed Compressors [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/203104 / Compendio
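As a rough illustration of what a compact correlation for a variable-speed compressor could look like (the specific form is an assumption, not the one derived in the thesis), the sketch below fits power as a low-order polynomial in evaporating temperature, condensing temperature and compressor speed.

```python
import numpy as np

def terms(Te, Tc, N):
    """Low-order polynomial regressors in evaporating temperature Te,
    condensing temperature Tc and compressor speed N (illustrative form)."""
    return np.column_stack([
        np.ones_like(Te), Te, Tc, N,
        Te * N, Tc * N, Te * Tc, Te**2, Tc**2, N**2,
    ])

# Placeholder operating points and a toy power response; a real correlation
# would be fitted to the calorimeter measurements described above.
rng = np.random.default_rng(0)
Te = rng.uniform(-20, 10, 40)     # degC
Tc = rng.uniform(30, 65, 40)      # degC
N = rng.uniform(30, 120, 40)      # compressor speed, rps
power = 20 * N + 10 * Tc - 5 * Te + 0.1 * Tc * N  # toy "measurements" (W)

# Ordinary least squares fit of the ten regression coefficients.
coef, *_ = np.linalg.lstsq(terms(Te, Tc, N), power, rcond=None)
pred = terms(Te, Tc, N) @ coef
print(float(np.sqrt(np.mean((pred - power) ** 2))))  # in-sample RMSE (~0 here)
```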
