111 |
EFFICACY OF SPARSE REGRESSION FOR LINEAR STRUCTURAL SYSTEM IDENTIFICATION
Katwal, Sadiksha 01 August 2024 (has links) (PDF)
Sparse regression with the Least Absolute Shrinkage and Selection Operator (LASSO) is remarkably capable of identifying the modes of a simple system and predicting its response. However, it has limitations when applied to more complex structures, particularly in equation discovery and response prediction. Despite these challenges, sparse regression outperforms the Natural Excitation Technique (NExT) coupled with the Eigensystem Realization Algorithm (ERA) in linear system identification, especially in identifying higher modes and estimating damping ratios with reduced error. Findings indicate that while sparse regression is highly effective for simple systems, its application to real-world structures requires further exploration. The thesis concludes with recommendations for practical validation of sparse regression on actual structures and for comparison with alternative methods to assess its real-world efficacy in structural health monitoring.
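As a concrete illustration of the technique the abstract describes (not code from the thesis; the system, coefficients and penalty are invented for the example), a LASSO fit over a library of candidate terms can recover the equation of motion of a simulated single-degree-of-freedom oscillator:

```python
# Illustrative sketch: rediscovering the equation of motion of a simulated
# single-degree-of-freedom oscillator with LASSO. All values are assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.linear_model import Lasso

m, c, k = 1.0, 0.4, 9.0            # assumed mass, damping, stiffness

def oscillator(t, s):              # free vibration: m*x'' + c*x' + k*x = 0
    x, v = s
    return [v, -(c / m) * v - (k / m) * x]

t = np.linspace(0.0, 10.0, 2000)
sol = solve_ivp(oscillator, (0, 10), [1.0, 0.0], t_eval=t)
x, v = sol.y
a = -(c / m) * v - (k / m) * x     # acceleration (measured in practice)

# Library of candidate terms; only x and v should survive the L1 penalty.
theta = np.column_stack([x, v, x**2, v**2, x * v, x**3])
model = Lasso(alpha=1e-3, fit_intercept=False).fit(theta, a)
print(model.coef_)                 # ~[-k/m, -c/m, 0, 0, 0, 0]
```

With clean data the quadratic, cubic and cross terms are driven to zero and only the stiffness and damping terms survive, which is the "equation discovery" behavior the abstract refers to.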
|
112 |
Modèles de prédiction pour l'évaluation génomique des bovins laitiers français : application aux races Holstein et Montbéliarde / Prediction models for the genomic evaluation of French dairy cattle : application to the Holstein and Montbéliarde breeds
Colombani, Carine 16 October 2012 (has links)
The rapid evolution of sequencing and genotyping techniques raises new challenges in the development of selection methods for livestock. By sequence comparison, it is now possible to identify polymorphic sites in each species and to mark the genome with molecular markers called SNPs (Single Nucleotide Polymorphisms). Selection methods based on this molecular information require a complete representation of the genetic effects. Meuwissen et al. (2001) introduced the concept of genomic selection, proposing to predict the effects of all marked regions simultaneously and then to build a genomic index by summing the effects of each region. The challenge in genomic evaluation is to find the best prediction method, so as to obtain accurate genetic values for an efficient selection of candidate animals. The overall objective of this thesis is to explore and evaluate new genomic approaches able to predict tens of thousands of genetic effects from the phenotypes of hundreds of individuals. It is part of the ANR project AMASGEN, whose aim is to extend the marker-assisted selection used until now in French dairy cattle and to develop an accurate prediction method. A varied panel of methods is explored by estimating their predictive abilities: PLS (Partial Least Squares) and sparse PLS regressions, as well as Bayesian approaches (Bayesian LASSO and BayesCπ), are compared with two methods commonly used in genetic improvement, BLUP based on pedigree information and genomic BLUP based on SNP information. These methodologies provide effective prediction models even when the number of observations is much smaller than the number of variables. They rely on the theory of Gaussian linear mixed models or on variable selection methods, summarizing the massive SNP information through the construction of new variables. The data studied in this work come from two French dairy cattle breeds (1,172 Montbéliarde bulls and 3,940 Holstein bulls) genotyped on about 40,000 polymorphic SNP markers. All the genomic methods tested here produce more accurate evaluations than the method based on pedigree information alone. A slight predictive advantage of the Bayesian methods is observed on some traits, but they are too demanding in computing time to be applied routinely in a genomic selection scheme. The advantage of variable selection methods is their ability to cope with the ever-increasing amount of SNP data. Moreover, they can highlight reduced sets of markers, identified on the basis of their estimated effects, that is, markers with a large impact on the traits studied. It would therefore be possible to develop a method for predicting genomic values on the basis of QTLs detected by these approaches.
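The methodological contrast at the heart of this comparison can be sketched in a few lines (synthetic genotypes and illustrative sizes and penalties; ridge regression stands in here for genomic BLUP as its penalized-likelihood counterpart, and LASSO for the sparse family):

```python
# Sketch of the shrinkage-vs-selection contrast on simulated SNP data.
# Sizes, penalties and the 0/1/2 genotype model are illustrative only.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
n_bulls, n_snps, n_qtl = 500, 5000, 50
X = rng.binomial(2, 0.3, size=(n_bulls, n_snps)).astype(float)   # genotypes
beta = np.zeros(n_snps)
beta[rng.choice(n_snps, n_qtl, replace=False)] = rng.normal(0, 1, n_qtl)
y = X @ beta + rng.normal(0, 2.0, n_bulls)     # phenotype = genetics + noise

train, test = slice(0, 400), slice(400, 500)
for model in (Ridge(alpha=100.0), Lasso(alpha=0.05, max_iter=10000)):
    model.fit(X[train], y[train])
    r = np.corrcoef(model.predict(X[test]), y[test])[0, 1]
    print(type(model).__name__, "predictive correlation: %.3f" % r)
```

Ridge shrinks all SNP effects toward zero, while the LASSO sets most of them exactly to zero, which is what allows the variable-selection methods discussed above to highlight reduced sets of markers.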
|
113 |
Deterministic Sparse FFT Algorithms
Wannenwetsch, Katrin Ulrike 09 August 2016 (links)
No description available.
|
114 |
Métodos de programação quadrática convexa esparsa e suas aplicações em projeções em poliedros / Sparse convex quadratic programming methods and their applications in projections onto polyhedra
Polo, Jeinny Maria Peralta 07 March 2013 (has links)
The linearly constrained minimization problem is important not only in itself, as it arises in several areas, but also because it is used as a subproblem in solving more general nonlinear programming problems. GENLIN is an efficient method for solving small- and medium-scale linearly constrained minimization problems. To implement a similar method for large-scale problems, an efficient large-scale method for projecting points onto the set of linear constraints is needed. The problem of projecting a point onto a set of linear constraints can be written as a convex quadratic programming problem. In this work, we study and implement sparse methods for solving box-constrained convex quadratic programming problems, in particular the classical Moré-Toraldo method and the NQC "method". The Moré-Toraldo method uses the Conjugate Gradient method to explore the face of the feasible region defined by the current iterate, and the Projected Gradient method to move to a different face. The NQC "method" uses the Spectral Projected Gradient method to define the face on which to work, and Newton's method to compute the minimizer of the quadratic reduced to this face. We used the sparse Moré-Toraldo and NQC methods to solve the projection problem of GENLIN and compared their performances.
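The projection step both methods build on can be illustrated with a minimal projected-gradient iteration for a box-constrained convex quadratic (an illustrative sketch, not the thesis implementation; the actual Moré-Toraldo and NQC methods accelerate this with conjugate-gradient and Newton steps on the current face):

```python
# Minimal projected-gradient sketch for min 0.5*x'Ax - b'x  s.t.  l <= x <= u.
import numpy as np

def projected_gradient(A, b, l, u, x0, tol=1e-8, max_iter=5000):
    x = np.clip(x0, l, u)
    step = 1.0 / np.linalg.norm(A, 2)           # safe step: 1 / Lipschitz const.
    for _ in range(max_iter):
        grad = A @ x - b
        x_new = np.clip(x - step * grad, l, u)  # gradient step + box projection
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(30, 30))
A = M @ M.T + 30 * np.eye(30)                   # symmetric positive definite
b = rng.normal(size=30)
x = projected_gradient(A, b, l=np.zeros(30), u=np.ones(30), x0=np.zeros(30))
# At the solution, x is a fixed point of the projected gradient map:
print("KKT residual:", np.linalg.norm(np.clip(x - (A @ x - b), 0, 1) - x))
```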
|
115 |
Contributions to generic visual object categorization / Catégorisation automatique d'images
Fu, Huanzhang 14 December 2010 (has links)
This thesis is dedicated to the active research topic of generic Visual Object Categorization (VOC), which can be widely used in many applications such as video indexation and retrieval, video monitoring, security access control, automobile driving support, etc. Due to many practical difficulties, it is still considered one of the most challenging problems in computer vision and pattern recognition. In this context, we propose several contributions, especially concerning the two main components of the methods addressing VOC problems, namely feature selection and image representation. Firstly, an Embedded Sequential Forward feature Selection algorithm (ESFS) is proposed for VOC. Its aim is to select the most discriminant features in order to obtain good categorization performance. It is mainly based on the commonly used sub-optimal search method Sequential Forward Selection (SFS), which relies on the simple principle of incrementally adding the most relevant features. However, ESFS not only adds the most relevant features incrementally at each step but also merges them in an embedded way, thanks to the concept of combined mass functions from evidence theory, which also offers the benefit of a computational cost much lower than that of the original SFS. Secondly, we propose novel image representations to model the visual content of an image, namely Polynomial Modeling based and Statistical Measures based Image Representations, called PMIR and SMIR respectively. They overcome the main drawback of the popular "bag of features" method, namely the difficulty of fixing the optimal size of the visual vocabulary. They have been tested along with our proposed region-based features and SIFT. Two different fusion strategies, early and late, have also been considered to merge information from different "channels" represented by the different types of features. Thirdly, we propose two approaches for VOC relying on sparse representation, a reconstructive method (R_SROC) as well as a reconstructive and discriminative one (RD_SROC). Indeed, the sparse representation model was originally used in signal processing as a powerful tool for acquiring, representing and compressing high-dimensional signals, and we propose to adapt these principles to the VOC problem. R_SROC relies on the intuitive assumption that an image can be represented by a linear combination of training images from the same category. The sparse representations of images are therefore first computed by solving an ℓ1-norm minimization problem and then used as new feature vectors for the images, to be classified by traditional classifiers such as SVM. To improve the discrimination ability of the sparse representation and better fit the classification problem, we also propose RD_SROC, which adds a discrimination term, such as the Fisher discrimination measure or the output of an SVM classifier, to the standard sparse representation objective function in order to learn a reconstructive and discriminative dictionary. Moreover, we propose to combine the reconstructive and discriminative dictionary with the adapted purely reconstructive dictionary for a given category, so that the discrimination power can be further increased. The efficiency of all the methods proposed in this thesis has been evaluated on popular image datasets including SIMPLIcity, Caltech101 and Pascal2007.
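The reconstructive idea behind R_SROC can be sketched as follows (synthetic data and an off-the-shelf ℓ1 solver; the thesis additionally learns reconstructive and discriminative dictionaries): a test sample is coded over the training matrix with an ℓ1 penalty and assigned to the class whose training atoms best reconstruct it.

```python
# Sketch of reconstructive sparse-representation classification on toy data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
d, per_class, classes = 64, 20, 3
centers = rng.normal(0, 1, (classes, d))
train = np.vstack([c + 0.3 * rng.normal(size=(per_class, d)) for c in centers])
labels = np.repeat(np.arange(classes), per_class)

x_test = centers[1] + 0.3 * rng.normal(size=d)      # sample from class 1

coder = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
coder.fit(train.T, x_test)                          # columns = training atoms
codes = coder.coef_                                 # sparse code of x_test

# Class-wise reconstruction residuals; smallest residual wins.
residuals = [np.linalg.norm(x_test - train[labels == c].T @ codes[labels == c])
             for c in range(classes)]
print("predicted class:", int(np.argmin(residuals)))   # expected: 1
```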
|
117 |
Computational Complexity and Delay Reduction for RLNC Single and Multi-hop Communications
Tasdemir, Elif 20 March 2023
Today’s communication networks are changing rapidly and radically. Demand for low latency, high reliability and low energy consumption is increasing, and so is the variety of characteristics of the connected devices. The number of connected devices is also expected to be massive in the coming years. Some devices will be connected to the new-generation base stations directly, while others will be connected through other devices via multiple hops. Reliable communication between these massive numbers of devices can be achieved via re-transmission, via repetition of packets several times, or via Forward Error Correction (FEC). In the re-transmission method, packets are re-transmitted when they are negatively acknowledged or when the sender’s acknowledgment timer expires. In the repetition method, every packet can be sent several times. Both of these methods can cause a huge delay, particularly in multi-hop networks. In contrast, FEC methods are preferred for low-latency applications: source information is transmitted together with redundant information, so the number of transmissions is reduced compared with the methods mentioned above.
Random Linear Network Coding (RLNC) is a packet-level erasure-correcting code which aims to reduce latency. Specifically, source packets are combined, and these combinations, or coded packets, are sent to the destination. A lost packet does not need to be re-sent, since another coded packet can be substituted for it. Hence, the feedback mechanism and the re-sending process become unnecessary. There are many variations of RLNC. One variation is called sliding window RLNC, which applies the FEC mechanism: this coding scheme achieves low latency by interleaving coded packets between source packets. Another variation of RLNC is Fulcrum, which is a versatile code. Fulcrum provides three different decoding options: received coded packets can be decoded with low, middle or high complexity. This is a very important feature, since connected devices will have different computation capabilities, and providing a versatile code allows them flexibility.
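The core mechanism common to all these variants can be sketched with a bare-bones RLNC demo over GF(2) (an illustrative toy, not the thesis code, which works with richer field choices via the Kodo library): coded packets are random XOR combinations of a generation of source packets, and the receiver decodes by Gaussian elimination once it holds enough linearly independent combinations.

```python
# Toy RLNC over GF(2): encode random XOR combinations, decode by elimination.
import numpy as np

rng = np.random.default_rng(3)
g, packet_len = 4, 8                                  # generation size, bytes
source = rng.integers(0, 256, (g, packet_len), dtype=np.uint8)

def encode():
    """One coded packet: random GF(2) combination (XOR) of source packets."""
    c = rng.integers(0, 2, g, dtype=np.uint8)
    payload = np.zeros(packet_len, dtype=np.uint8)
    for i in np.nonzero(c)[0]:
        payload ^= source[i]
    return np.concatenate([c, payload])

def try_decode(rows):
    """Gaussian elimination over GF(2) on [coefficients | payload] rows."""
    M = np.array(rows, dtype=np.uint8)
    row = 0
    for col in range(g):
        piv = np.nonzero(M[row:, col])[0]
        if piv.size == 0:
            continue
        M[[row, row + piv[0]]] = M[[row + piv[0], row]]   # swap pivot up
        for r in range(len(M)):
            if r != row and M[r, col]:
                M[r] ^= M[row]                            # eliminate column
        row += 1
    return M[:g, g:] if row == g else None                # decoded iff full rank

received, decoded = [], None
while decoded is None:                # collect coded packets until full rank
    received.append(encode())
    decoded = try_decode(received)

print("packets used:", len(received),
      "recovered:", np.array_equal(decoded, source))
```

Note how no feedback is needed: any fresh independent combination is as good as any other, which is exactly why lost packets never have to be re-sent.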
Although the aforementioned coding schemes are well suited to error-prone networks, some challenges remain to be studied. For instance, Fulcrum RLNC has high encoding and decoding complexity, which increases computation time and energy consumption. Moreover, although the original Fulcrum RLNC strengthens reliability, it needs to be improved for low-latency applications. Another remaining challenge is that the recoding strategy of RLNC is not optimal for low latency. Allowing intermediate nodes to combine received packets is referred to as recoding. As described earlier, data packets will pass through many hops until they reach the destination; therefore, a compute-and-forward paradigm will be preferred over store-and-forward. Although the recoding capability of RLNC distinguishes it from other coding schemes (Raptor, LT), the conventional way of recoding is not efficient for low latency. Hence, the aim of this thesis is to address these remaining challenges.
One way to address them is to employ sparsity; in other words, a few source packets, rather than a large set, can be combined to generate coded packets. In particular, a dynamic sparse mechanism is proposed that varies the number of combined source packets during encoding, without signaling between sender and receiver, to speed up the encoding and decoding process of Fulcrum RLNC without increasing the overhead. Then, two different sliding window schemes are integrated into Fulcrum RLNC to give it the low-latency property. Sending source packets systematically and then spreading sparse coded packets between the systematic source packets can be referred to as systematic sparsity. Moreover, different sparse and systematic recoding strategies are proposed in this thesis to lower the delay and computation time at the intermediate nodes and the destination. Finally, one of the proposed recoding strategies is applied to a vehicle platooning scenario to increase reliability. All proposed coding schemes were analyzed and implemented on Kodo, a well-known network coding library.
|
118 |
Bayesian Sparse Regression with Application to Data-driven Understanding of Climate
Das, Debasish January 2015 (has links)
Sparse regressions based on constraining the L1-norm of the coefficients became popular due to their ability to handle high-dimensional data, unlike regular regressions, which suffer from overfitting and model-identifiability issues, especially when the sample size is small. They are often the method of choice in many fields of science and engineering for simultaneously selecting covariates and fitting parsimonious linear models that are better generalizable and easily interpretable. However, significant challenges may be posed by the need to accommodate extremes and other domain constraints such as dynamical relations among variables, spatial and temporal constraints, the need to provide uncertainty estimates, and feature correlations, among others. We adopted a hierarchical Bayesian version of the sparse regression framework and exploited its inherent flexibility to accommodate these constraints. We applied sparse regression to the feature selection problem of statistical downscaling of climate variables, with particular focus on their extremes. This is important for many impact studies where climate change information is required at a spatial scale much finer than that provided by global or regional climate models. Characterizing the dependence of extremes on covariates can help in the identification of plausible causal drivers and inform the downscaling of extremes. We propose a general-purpose sparse Bayesian framework for covariate discovery that accommodates the non-Gaussian distribution of extremes within a hierarchical Bayesian sparse regression model. We obtain posteriors over regression coefficients, which indicate the dependence of extremes on the corresponding covariates and provide uncertainty estimates, using a variational Bayes approximation. The method is applied to selecting informative atmospheric covariates at multiple spatial scales, as well as indices of large-scale circulation and global warming, related to the frequency of precipitation extremes over the continental United States. Our results confirm the dependence relations that may be expected from known precipitation physics and generate novel insights that can inform physical understanding. We plan to extend our model to discover covariates for extreme intensity in the future. We further extend our framework to handle dynamic relationships among climate variables using a nonparametric Bayesian mixture of sparse regression models based on the Dirichlet Process (DP). The extended model achieves simultaneous clustering and discovery of covariates within each cluster. Moreover, a priori knowledge about the association between pairs of data points is incorporated in the model through must-link constraints on a Markov Random Field (MRF) prior. A scalable and efficient variational Bayes approach is developed to infer posteriors on regression coefficients and cluster variables. / Computer and Information Science
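The common core of such hierarchical sparse models can be illustrated with a minimal Gibbs sampler for the Bayesian LASSO of Park and Casella (2008) (synthetic data and fixed hyperparameters; the thesis itself uses extreme-value likelihoods, DP mixtures, MRF constraints and variational Bayes rather than this sampler):

```python
# Minimal Gibbs sampler for the Bayesian LASSO: Laplace prior on coefficients
# expressed as a scale mixture of normals, yielding full posteriors (and hence
# uncertainty estimates) over the regression coefficients.
import numpy as np

rng = np.random.default_rng(4)
n, p = 120, 10
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(0, 1.0, n)

lam, n_iter, burn = 1.0, 3000, 1000
beta, sigma2, inv_tau2 = np.zeros(p), 1.0, np.ones(p)
XtX, Xty = X.T @ X, X.T @ y
samples = []
for it in range(n_iter):
    # beta | rest ~ N(A^-1 X'y, sigma2 * A^-1) with A = X'X + diag(1/tau^2)
    A = XtX + np.diag(inv_tau2)
    L = np.linalg.cholesky(A)
    mean = np.linalg.solve(A, Xty)
    beta = mean + np.sqrt(sigma2) * np.linalg.solve(L.T, rng.normal(size=p))
    # 1/tau_j^2 | rest ~ InverseGaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
    inv_tau2 = rng.wald(np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12)),
                        lam**2)
    # sigma2 | rest ~ InvGamma((n-1+p)/2, (RSS + beta' D^-1 beta)/2)
    resid = y - X @ beta
    rate = 0.5 * (resid @ resid + beta @ (inv_tau2 * beta))
    sigma2 = 1.0 / rng.gamma((n - 1 + p) / 2.0, 1.0 / rate)
    if it >= burn:
        samples.append(beta.copy())

post = np.array(samples)
print("posterior means:", post.mean(axis=0).round(2))
print("posterior stds: ", post.std(axis=0).round(2))
```

The posterior standard deviations are exactly the kind of coefficient-level uncertainty estimates that the point-estimate LASSO does not provide.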
|
119 |
Técnicas de esparsidade em sistemas estáticos de energia elétrica / Sparsity techniques in static electric power systems
Simeão, Sandra Fiorelli de Almeida Penteado 27 September 2001 (has links)
In this work, a broad survey of sparsity techniques related to static electric power systems was carried out. From a computational point of view, such techniques seek to increase the efficiency of the electric network solution, aiming not only at the solution itself but also at reducing memory, storage and processing-time requirements. To that end, an extensive bibliographic review was compiled, providing a historical positioning and a broad view of the theoretical development. Comparative tests on systems of 14, 30, 57 and 118 buses, implementing three of the most widely used techniques, pointed to bi-factorization as having the best average performance. For small systems, sparse synthetic Gaussian elimination showed the best results. This work provides conceptual and methodological support to technicians and researchers in the area.
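The computational payoff such techniques target can be sketched with an off-the-shelf sparse factorization (illustrative only: scipy's SuperLU with its built-in fill-reducing ordering stands in for the surveyed methods such as bi-factorization, and the matrix is a randomly generated, diagonally dominant stand-in for a nodal admittance matrix):

```python
# Sketch of why sparsity pays off in network equations: factor a sparse
# admittance-like matrix once and reuse the factors for many injection vectors.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(5)
n = 118                                    # bus count, as in the largest test
A = sp.random(n, n, density=0.03, random_state=5)
A = (A + A.T).tolil()                      # symmetric branch pattern
A.setdiag(np.abs(A).sum(axis=1).A.ravel() + 1.0)   # make diagonally dominant

lu = splu(A.tocsc())                       # factor once (fill-reducing order)
for _ in range(3):                         # ... then solve cheaply, repeatedly
    b = rng.normal(size=n)
    x = lu.solve(b)

print("nonzeros in factors (L+U) vs A:", lu.L.nnz + lu.U.nnz, "vs", A.nnz)
```

Keeping the factor fill-in close to the original number of nonzeros is precisely the goal of the orderings and factorization schemes surveyed in the thesis.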
|
120 |
Study and optimization of 2D matrix arrays for 3D ultrasound imaging / Etude et optimisation de sondes matricielles 2D pour l'imagerie ultrasonore 3D
Diarra, Bakary 11 October 2013 (links)
3D ultrasound imaging is a fast-growing medical imaging modality. In addition to its numerous advantages (low cost, non-ionizing beam, portability), it represents anatomical structures in their natural form, which is always three-dimensional. The relatively slow mechanically scanned probes tend to be replaced by two-dimensional matrix arrays, which extend the conventional 1D probe in both the lateral and elevation directions. This 2D arrangement of the elements allows ultrasonic beam steering in the whole space and hence 3D scanning. Usually, the piezoelectric elements of a 2D array probe are aligned on a regular grid and spaced by a distance (the pitch) subject to the spatial sampling law (the inter-element distance must be shorter than half a wavelength) to limit the impact of grating lobes. This physical constraint leads to a multitude of small elements: the 2D equivalent of a 1D probe of 128 elements contains 128 x 128 = 16,384 elements. Connecting such a high number of elements is a real technical challenge, as the number of channels in current ultrasound scanners rarely exceeds 256. The solutions proposed to control this type of probe implement multiplexing or element-number reduction techniques, generally based on random selection approaches ("sparse array"). These methods suffer from a low signal-to-noise ratio due to the energy loss linked to the small number of active elements. To limit this loss of performance, optimization remains the best solution. The first contribution of this thesis is an extension of the sparse array technique combined with an optimization method based on the simulated annealing algorithm. The proposed optimization reduces the number of active elements required, according to the expected characteristics of the ultrasound beam, and limits the energy loss compared with the initial dense array. The second contribution is a completely new approach adopting an off-grid positioning of the elements to remove the grating lobes and overcome the spatial sampling constraint. This new strategy allows the use of larger elements, leading to a much smaller number of elements for the same probe surface. The active surface of the array is maximized, which results in greater output energy and thus higher sensitivity. It also allows a larger scan sector, as the grating lobes are very small relative to the main lobe. The random choice of element positions and of their apodization (or weighting coefficients) is again optimized by simulated annealing. The proposed methods are systematically compared with the dense array through simulations under realistic conditions. These simulations show the real potential of the developed techniques for 3D imaging. A 2D probe of 8 x 24 = 192 elements was manufactured by Vermon (Vermon SA, Tours, France) to test the proposed methods in an experimental setting. The comparison between simulation and experimental results validates the proposed methods and proves their feasibility.
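The element-selection optimization described above can be illustrated with a toy one-dimensional version of the problem (invented aperture, element budget and cooling schedule; the thesis optimizes 2D apertures and also apodization weights and off-grid positions): simulated annealing picks which elements to keep active so as to minimize the peak sidelobe of the far-field pattern.

```python
# Toy sparse-array optimization: choose k of N elements of a 1-D aperture
# with simulated annealing to minimize the peak sidelobe level.
import numpy as np

rng = np.random.default_rng(6)
N, k = 64, 24                                  # available elements, active budget
pos = np.arange(N) * 0.5                       # element positions in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
steer = np.outer(np.sin(theta), pos)           # phase terms sin(theta) * x/lambda

def peak_sidelobe(active):
    af = np.abs(np.exp(2j * np.pi * steer[:, active]).sum(axis=1))
    af /= af.max()
    main = np.abs(theta) < 0.06                # crude main-lobe exclusion zone
    return 20 * np.log10(af[~main].max())      # peak sidelobe level in dB

active = rng.choice(N, k, replace=False)
cost, T = peak_sidelobe(active), 3.0
for it in range(4000):
    trial = active.copy()
    out = rng.integers(k)                      # swap one active element ...
    trial[out] = rng.choice(np.setdiff1d(np.arange(N), trial))  # ... for an idle one
    c = peak_sidelobe(trial)
    if c < cost or rng.random() < np.exp((cost - c) / T):  # Metropolis acceptance
        active, cost = trial, c
    T *= 0.999                                 # geometric cooling schedule
print("peak sidelobe level: %.1f dB with %d of %d elements" % (cost, k, N))
```

Occasionally accepting a worse configuration (the Metropolis rule) is what lets the annealing escape the local minima that plague a purely greedy element selection.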
|