401
Traitement électrocinétique des sédiments de dragage multi-contaminés et évolution de leur toxicité / Electro-remediation of dredged multi-contaminated sediments and the evolution of their toxicity
Tian, Yue, 15 December 2017 (has links)
Les travaux de cette thèse sont consacrés principalement à l'optimisation d'une méthode de remédiation électrocinétique (EK) comme une technologie appropriée pour le traitement de sédiments de dragage de faible perméabilité hydraulique et multi-contaminés (en éléments traces (ET), hydrocarbures aromatiques polycycliques (HAP) et polychlorobiphényles (PCB)). Cette étude porte également sur l'effet du traitement EK sur l'évolution de la toxicité des sédiments. Après une revue bibliographique, une seconde partie a été dédiée aux méthodes d'analyse des contaminants, avec un focus sur leur extraction de la matrice sédimentaire ; ainsi, une nouvelle méthode d'extraction par dispersion de la matrice solide (MSPD) a été développée, pour une extraction rapide et simultanée des HAP et de PCB et une purification de l'échantillon, qui s'est avérée plus efficace que la méthode d'extraction assistée par micro-ondes (MAE). Plusieurs études expérimentales (à différentes échelles) de remédiation électrocinétique ont été décrites dans une troisième partie ; ces études ont été menées sur un sédiment reconstitué ou des sédiments de dragage portuaire. De nombreuses combinaisons de tensioactifs et d'agents chélatants ont été testées comme agents d'amélioration pour abaisser simultanément la concentration en métaux (Cd, Cr, Cu, Pb, Zn) et des HAP/PCB. Le choix a été effectué en raison notamment de leur faible toxicité potentielle, en vue de pouvoir les appliquer ultérieurement pour une restauration sur site : (bio)surfactants (Rhamnolipides, Saponine et Tween 20) combinés avec des agents chélatants (acide citrique (CA) et EDDS). Les résultats obtenus montrent que les métaux (à l'exception de Cr) sont difficiles à extraire de ces sédiments de dragage portuaire à caractère réducteur, qui présentent une capacité tampon élevée, une perméabilité hydraulique très faible et une teneur en matière organique élevée. En revanche, les HAP et les PCB fournissent de meilleurs taux d'abattement (29,2% et 50,2%, respectivement). Dans une quatrième partie, l'efficacité du procédé EK a également été évaluée à travers l'évolution de la toxicité aiguë des sédiments traités sur les copépodes E. affinis exposés aux élutriats de sédiments. Les résultats ont montré que l'utilisation de CA, des biosurfactants et du Tween 20 n'a pas eu d'impact significatif sur la toxicité des sédiments traités. Cependant, les copépodes E. affinis étaient sensibles aux faibles valeurs de pH et aux conditions très oxydantes, ainsi qu'à la présence de Cu et, dans une moindre mesure, de Pb, à condition toutefois qu'ils soient rendus plus mobiles et biodisponibles. En revanche, la toxicité a été peu et même négativement corrélée aux concentrations des HAP et des PCB après le traitement EK, probablement en raison de la production de métabolites oxydés des HAP et des PCB, plus toxiques que les composés natifs. / This thesis research is mainly devoted to the optimization of an electrokinetic (EK) remediation process as a promising technology for treating multi-contaminated (trace metals, polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs)) dredged harbor sediments of low permeability. This study also investigates the effect of the EK treatment on the evolution of sediment toxicity.
After a bibliographic review, a second part of this study was dedicated to the analytical methods used to characterize the sediment and its contaminants, particularly their extraction from the sediment matrix; thus a new extraction method, matrix solid-phase dispersion (MSPD), was developed for a fast, simultaneous extraction of both PAHs and PCBs together with sample purification. MSPD appeared more efficient than the microwave-assisted extraction (MAE) method. Thereafter, many EK experiments (at different scales) were described in a third part. EK remediation tests were performed using a spiked model sediment or natural harbor dredged sediments. Many combinations of surfactants and chelators were tested as EK enhancing agents for simultaneously decreasing metal (Cd, Cr, Cu, Pb, Zn) and PAH/PCB levels. They were chosen notably for their low potential toxicity, with a view to using them for future site restoration: (bio)surfactants (rhamnolipids, saponin and Tween 20) combined with chelators (citric acid (CA) and EDDS). The results showed that metals (except Cr) were difficult to remove from this kind of dredged sediment owing to its reductive character, its high buffering capacity, its very low hydraulic permeability and its high organic matter content. However, PAHs and PCBs showed better removal levels (29.2% and 50.2%, respectively). In a fourth part, the efficiency of the EK process was also assessed by measuring the evolution of the acute toxicity of the treated sediment on E. affinis copepods exposed to sediment elutriates. The results showed that using CA, biosurfactants or Tween 20 as enhancing agents did not significantly impact the toxicity of the treated sediment. However, E. affinis copepods were significantly sensitive to low pH values and oxidative conditions, to Cu and, to a lesser extent, to Pb, provided these metals were transformed into more mobile and bioavailable forms. In contrast, acute toxicity was only slightly, and even negatively, correlated to PAH and PCB amounts after EK treatment, probably due to the production of oxidized metabolites of PAHs and PCBs, more toxic than the parent compounds.
402
Análise de componentes principais aplicada a avaliação de atributos de agregados na separação sólido líquido
Almeida, Thaís de. January 2020 (has links)
Orientador: Rodrigo Braga Moruzzi / Resumo: Água de qualidade, livre de poluentes e patógenos é um recurso humano necessário e valioso. As contaminações por fontes naturais e antrópicas podem ameaçar a qualidade desses cursos d'água, fazendo-se necessário um tratamento prévio antes de ser disponibilizada para abastecimento público. Com objetivo de eliminação de contaminantes e impurezas diversos processos e operações de tratamento físico/químico são utilizados, como a coagulação, a floculação, e processos de separação sólido/líquido. Para a avaliação do padrão de qualidade final da água pós tratamento são necessários índices de monitoramento, que podem ser obtidos através métodos diretos e/ou indiretos. Os métodos diretos de características físicas e morfológicas tem ganhado cada vez mais atenção entre os estudos da área. Seus parâmetros, como tamanho das partículas e estrutura de fractal têm sido um novo recurso para a temática floculação. Buscando maior entendimento sobre os principais fatores que contribuem para a separação dos agregados de fractal, e consequentemente melhor eficiência de remoção, o presente estudo teve como objetivo investigar o desempenho da Sedimentação Gravitacional e da Flotação por Ar Dissolvido, e suas relações com as características físicas das partículas floculentas a partir da análise das principais variáveis que interferiram nos processos. Para tal, foram investigadas em escala de laboratório quatro diferentes águas preparadas com ácido húmico, caulinita e coaguladas com Sulfato de Alumíni... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: Quality water, free of pollutants and pathogens, is a necessary and valuable resource. Contamination from natural and anthropogenic sources can threaten the quality of watercourses, requiring treatment before the water is made available for public supply. In order to eliminate contaminants and impurities, several physical/chemical treatment processes and operations are used, such as coagulation, flocculation and solid/liquid separation. For the evaluation of the final quality of the water, monitoring indices are necessary, which can be obtained through direct and/or indirect methods. Direct methods based on physical and morphological characteristics have gained increasing attention in studies in this area. Parameters such as particle size and fractal structure have become a new resource for flocculation studies. Seeking a better understanding of the main factors that contribute to the separation of fractal aggregates, and consequently better removal efficiency, the present study aimed to investigate the performance of gravitational sedimentation and dissolved air flotation, and their relationship to the physical characteristics of the floc particles, based on an analysis of the main variables that interfered in the processes. For this purpose, four different types of water, prepared with humic acid and kaolin and coagulated with aluminum sulphate or ferric chloride, were investigated at laboratory scale. The flocculation process was monitored by digital image analysis in order to obtain variables that help to determine the particles' physical characteristics, such as the particle size distribution (DTP) and its representative β parameter, as wel... (Complete abstract click electronic access below) / Mestre
403
Mobile systems for monitoring Parkinson's disease
Memedi, Mevludin, January 2011
This thesis presents the development and evaluation of IT-based methods and systems for supporting assessment of symptoms and enabling remote monitoring of Parkinson's disease (PD) patients. PD is a common neurological disorder associated with impaired body movements. Its clinical management regarding treatment outcomes and follow-up of patients is complex. In order to reveal the full extent of a patient's condition, there is a need for repeated and time-stamped assessments related to both the patient's perception of common symptoms and motor function. In this thesis, data from a mobile device test battery, collected during a three-year clinical study, was used for the development and evaluation of methods. The data was gathered from a series of tests, consisting of self-assessments and motor tests (tapping and spiral drawing). These tests were carried out repeatedly in a telemedicine setting during week-long test periods. One objective was to develop a computer method that would process traced spiral drawings and generate a score representing PD-related drawing impairments. The data processing part consisted of using the discrete wavelet transform and principal component analysis. When this computer method was evaluated against human clinical ratings, the results showed that it could perform quantitative assessments of drawing impairment in spirals comparatively well. As a part of this objective, a review of systems and methods for detecting handwriting and drawing impairment using touch screens was performed. The review showed that measures concerning forces, accelerations, and radial displacements were the most important ones in detecting fine motor movement anomalies. Another objective of this thesis work was to design and evaluate an information system for delivering assessment support information to the treating clinical staff for monitoring PD symptoms in their patients. The system consisted of a patient node for data collection based on the mobile device test battery, a service node for data storage and processing, and a web application for data presentation. A system module was designed for compiling the test battery time series into summary scores on a test period level. The web application allowed adequate graphic feedback of the summary scores to the treating clinical staff. The evaluation results for this integrated system indicate that it can be used as a tool for frequent PD symptom assessments in home environments.
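To make the scoring idea concrete, here is a minimal sketch of how wavelet features from a traced spiral could be compressed by PCA into a single impairment score. It is an illustration under stated assumptions, not the thesis's implementation: the synthetic spirals, the moving-average detrending, the db4 wavelet and the log scaling are all hypothetical choices.

```python
# Hypothetical sketch of such a scoring pipeline; parameters and feature
# choices are illustrative, not taken from the thesis.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(0, 6 * np.pi, 1024)
# stand-in for touch-screen data: 10 noisy Archimedean spirals
drawings = [(t * np.cos(t), t * np.sin(t) + rng.normal(0, 0.2, t.size))
            for _ in range(10)]

def radial_residual(x, y, window=25):
    """Deviation of the traced spiral's radius from its smooth trend."""
    r = np.hypot(x, y)
    trend = np.convolve(r, np.ones(window) / window, mode="same")
    return r - trend

def wavelet_features(residual, wavelet="db4", level=4):
    """Energy per DWT sub-band, a crude proxy for tremor content."""
    coeffs = pywt.wavedec(residual, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

X = np.vstack([wavelet_features(radial_residual(x, y)) for x, y in drawings])
score = PCA(n_components=1).fit_transform(np.log1p(X))  # one score per drawing
```

A real system would then calibrate the sign and scale of this score against clinical ratings, as the thesis does.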
404
Reduced-order Combustion Models for Innovative Energy Conversion Technologies
Malik, Mohammad Rafi, 01 February 2021 (has links) (PDF)
The present research seeks to advance the understanding and application of Principal Component Analysis (PCA)-based combustion modelling for practical systems application. This work is a consistent extension of the standard PC-transport model, and integrates the use of Gaussian Process Regression (GPR) in order to increase the accuracy and the potential for size reduction offered by PCA. This new model, labelled PC-GPR, is successively applied and validated in a priori and a posteriori studies. In the first part of this dissertation, the PC-GPR model is validated in an a priori study based on steady and unsteady perfectly stirred reactor (PSR) calculations. The model showed great accuracy in the predictions for methane and propane, using large kinetic mechanisms. In particular, for methane, the use of GPR made it possible to model the system accurately with only 2 principal components (PCs) instead of the 34 variables in the original GRI-3.0 kinetic mechanism. For propane, the model was applied to two different mechanisms consisting of 50 species and 162 species, respectively. The PC-GPR model was able to achieve a very significant reduction, and the thermo-chemical state-space was accurately predicted using only 2 PCs for both mechanisms. The second part of this work is dedicated to the application of the PC-GPR model in the framework of non-premixed turbulent combustion in fully three-dimensional Large Eddy Simulation (LES). To this end, an a posteriori validation is performed on the Sandia flames D, E and F. The PC-GPR model showed very good accuracy in the predictions of the three flames when compared with experimental data, using only 2 PCs instead of the 35 species originally present in the GRI-3.0 mechanism. Moreover, the PC-GPR model was also able to handle the extinction and re-ignition phenomena in flames E and F, thanks to the unsteady data in the training manifold. A comparison with the FPV model showed that the combination of the unsteady data set and the best controlling variables for the system defined by PCA provides an alternative to the use of steady flamelets parameterized by user-defined variables and combined with a PDF approach. The last part of this research focuses on the application of the PC-GPR model in a more challenging case, a lifted methane/air flame. Several key features of the model are investigated: the sensitivity to the training data set, the influence of the scaling methods, the issue of data sampling and the potential of a subgrid-scale (SGS) closure. In particular, it is shown that the training data set must contain the effects of diffusion in order to accurately predict the different properties of the lifted flame. Moreover, the kernel density weighting method, used to address the issue of the non-homogeneous data density usually found in numerical data sets, improved the predictions of the PC-GPR model. Finally, integrating a subgrid-scale closure into the PC-GPR model significantly improved the simulation results using a presumed PDF closure. A qualitative comparison with the FPV model showed that the results provided by the PC-GPR model are overall very comparable to the FPV results, at a reduced numerical cost, as PC-GPR requires a 4D lookup table instead of the 5D table needed by FPV.
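As a rough illustration of the PC-GPR idea (a sketch under stated assumptions, not the dissertation's implementation), the snippet below projects a table of thermo-chemical states onto two principal components and trains a GPR to map the components back to the full state. The state table is a random placeholder, and the RBF kernel is an assumed choice.

```python
# Minimal PC-GPR-style sketch on stand-in data; a real case would use
# thermo-chemical states (temperature + species mass fractions) from
# PSR or flamelet calculations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(0).random((500, 35))  # placeholder state table

scaler = StandardScaler()
Xs = scaler.fit_transform(X)

pca = PCA(n_components=2)      # 2 PCs, as in the methane case above
Z = pca.fit_transform(Xs)      # controlling variables defined by PCA

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gpr.fit(Z, Xs)                 # nonlinear regression: PCs -> full state

state = scaler.inverse_transform(gpr.predict(Z[:5]))  # query the manifold
```

In an actual solver, the two PCs would be transported on the grid and the trained GPR, or a lookup table built from it, queried at run time.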
/ The dual challenge of energy and climate change highlights the need to develop new combustion technologies, since the most realistic projections show that the largest increase in energy supply for the coming decades will come from fossil fuels. This is a strong motivation for research on energy efficiency and clean technologies. Among these, flameless combustion is a newly developed concept that achieves high thermal efficiency and fuel savings while keeping pollutant emissions very low. The growing interest in this technology is also driven by its great fuel flexibility, which represents a valuable opportunity for low-calorific-value fuels, high-calorific-value industrial waste and hydrogen-based fuels. Since this technology is rather recent, it is still poorly understood, and the solutions developed for one industrial application are very difficult to transpose to others. To improve knowledge in the field of flameless combustion, fundamental studies of this new combustion process are needed in order to foster its development. In particular, there are two major differences with respect to conventional flames: on the one hand, the turbulence levels encountered in flameless combustion are raised, owing to the recirculating gases, thus reducing the mixing scales; on the other hand, the chemical scales are increased, owing to the dilution of the reactants. Consequently, the turbulent and chemical scales are of the same order of magnitude, which leads to very strong coupling. After a thorough review of the state of the art in flameless combustion modelling, the core of the project is the development of a new approach for the treatment of turbulence/chemistry interaction in flameless systems in the context of Large Eddy Simulations (LES). This approach is based on Principal Component Analysis (PCA) in order to identify the leading chemical scales of the oxidation process. This procedure makes it possible to follow on the LES grid only a reduced number of non-conserved scalars, those controlling the evolution of the system. Non-linear regression techniques are coupled with PCA in order to increase the accuracy and the reduction potential of the model. After being validated against experimental data from simplified problems, the model is scaled up to handle larger applications relevant to flameless combustion. The experimental and numerical data are validated using appropriate validation metrics to assess experimental and numerical uncertainties. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
405
Advances in the analysis of event-related potential data with factor analytic methods
Scharf, Florian, 04 April 2019
Researchers are often interested in comparing brain activity between experimental contexts. Event-related potentials (ERPs) are a common electrophysiological measure of brain activity that is time-locked to an event (e.g., a stimulus presented to the participant). A variety of decomposition methods has been used for ERP data, among them temporal exploratory factor analysis (EFA). Essentially, temporal EFA decomposes the ERP waveform into a set of latent factors, where the factor loadings reflect the time courses of the latent factors and the amplitudes are represented by the factor scores.
An important methodological concern is to ensure that the estimates of the condition effects are unbiased; the term variance misallocation has been introduced to refer to the case of biased estimates. The aim of the present thesis was to explore how exploratory factor analytic methods can be made less prone to variance misallocation. These efforts resulted in a series of three publications in which variance misallocation in EFA was described as a consequence of the properties of ERP data, ESEM was proposed as an extension of EFA that acknowledges the structure of ERP data sets, and regularized estimation was suggested as an alternative to simple structure rotation with desirable properties.
The presence of multiple sources of (co-)variance, the factor scoring step, and high temporal overlap of the factors were identified as major causes of variance misallocation in EFA for ERP data. It was shown that ESEM is capable of separating the (co-)variance sources and that it avoids biases due to factor scoring. Further, regularized estimation was shown to be a suitable alternative to factor rotation that is able to recover factor loading patterns in which only a subset of the variables follows a simple structure. Based on these results, regSEMs and ESEMs with ERP-specific rotation have been proposed as promising extensions of the EFA approach that might be less prone to variance misallocation. Future research should provide a direct comparison of regSEM and ESEM, and conduct simulation studies with more physiologically motivated data generation algorithms.
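For readers unfamiliar with the decomposition itself, the toy sketch below runs a temporal EFA on a hypothetical matrix of ERP waveforms; the factor count, the varimax rotation and the data layout (rows are trials or condition averages, columns are time points) are illustrative assumptions.

```python
# Illustrative temporal EFA on simulated data; rows are waveforms
# (trials or condition averages), columns are time points.
import numpy as np
from sklearn.decomposition import FactorAnalysis

X = np.random.default_rng(0).normal(size=(120, 256))  # placeholder ERPs

fa = FactorAnalysis(n_components=4, rotation="varimax")
scores = fa.fit_transform(X)    # factor scores ~ component amplitudes
loadings = fa.components_.T     # (n_timepoints, n_factors) time courses
```

The thesis's point is that the factor-scoring step in exactly this kind of pipeline can misallocate condition-effect variance, which is what motivates the proposed ESEM and regularized-estimation alternatives.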
406
Fault Detection and Diagnosis for Brine-to-Water Heat Pump Systems
Abuasbeh, Mohammad, January 2016
The overall objective of this thesis is to develop fault detection and diagnosis methods for ground source heat pumps that servicemen can use to accurately detect and diagnose faults during heat pump operation. The thesis focuses on two fault detection and diagnosis methods: a sensitivity ratio method and a data-driven method using principal component analysis. For the sensitivity ratio method, two semi-empirical models of the heat pump unit were built to simulate fault-free and faulty conditions in the heat pump. Both models were cross-validated against fault-free experimental data. The fault-free model is used as a reference. Then, fault trend analysis is performed in order to select a pair of uniquely sensitive and insensitive parameters to calculate the sensitivity ratio for each fault. When the sensitivity ratio value for a certain fault drops below a predefined value, that fault is diagnosed and an alarm message for that fault appears. Simulated fault data were used to test the model, and the model successfully detected and diagnosed the tested fault types under different operating conditions. In the second method, principal component analysis is used to derive linear combinations of the original variables and calculate the principal components, reducing the dimensionality of the system. A simple clustering technique is then used to classify operating conditions and to perform the fault detection and diagnosis. Each fault is represented by four clusters connected by three lines, where each cluster represents a different fault intensity level. Fault detection is performed by measuring the shortest orthogonal distance between the test point and the lines connecting the fault clusters. Simulated fault-free and faulty data were used to train the model. A new set of simulated fault data was then used to test the model, which successfully detected and diagnosed the fault type and intensity level of all tested faults under different operating conditions. Both models use only seven temperature measurements, two pressure measurements (from which the condensation and evaporation temperatures are calculated) and the electrical power as inputs to the fault detection and diagnosis model, in order to reduce cost and make implementation more convenient. Finally, a user-friendly graphical user interface was built for each model to facilitate its operation by the serviceman.
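The geometry of the second method can be sketched compactly: measurements are projected into PC space, each fault is a polyline through its four intensity-level cluster centres, and diagnosis picks the nearest line. Everything below is a hedged illustration; the data, the fault name and the threshold are placeholders, not the thesis's measurements or tuning.

```python
# Hedged sketch of the PCA + clustering diagnosis; data, fault names and
# threshold are placeholders, not the thesis's measurements or tuning.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))   # 7 temperatures, 2 pressures, power

scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=2).fit(scaler.transform(X_train))

# each fault: 4 cluster centres (increasing intensity) in PC coordinates
centroids = {"example fault": pca.transform(scaler.transform(
    rng.normal(loc=3.0, size=(4, 10))))}

def seg_dist(p, a, b):
    """Shortest (orthogonal) distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def diagnose(x, threshold=1.0):
    p = pca.transform(scaler.transform(x[None, :]))[0]
    dist, fault = min(
        (min(seg_dist(p, c[i], c[i + 1]) for i in range(len(c) - 1)), name)
        for name, c in centroids.items())
    return fault if dist < threshold else "no fault detected"
```

A sample is assigned the fault whose polyline it lies orthogonally closest to, or declared fault-free when no line is within the threshold, mirroring the description above.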
407
Identification of Suspicious Semiconductor Devices Using Independent Component Analysis with Dimensionality Reduction
Bartholomäus, Jenny, Wunderlich, Sven, Sasvári, Zoltán, 22 August 2019
In the semiconductor industry the reliability of devices is of paramount importance. Therefore, after removing the defective ones, one wants to detect irregularities in measurement data, because the corresponding devices have a higher risk of failure early in the product lifetime. The paper presents a method to improve the detection of such suspicious devices, in which the screening is performed on transformed measurement data. In this way, dependencies between tests can, for example, be taken into account. Additionally, a new dimensionality reduction is performed within the transformation, so that the reduced and transformed data comprise only the informative content of the raw data. This reduces the complexity of the subsequent screening steps. The new approach will be applied to semiconductor measurement data, and it will be shown, by means of examples, how the screening can be improved.
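One plausible reading of such a pipeline, sketched below under stated assumptions, is an ICA with a reduced component count followed by robust outlier scoring of each device on the independent components; the component count, the cutoff and the stand-in data are all illustrative, not the paper's actual procedure.

```python
# Illustrative ICA screening on stand-in data; the component count and
# the cutoff are arbitrary choices.
import numpy as np
from sklearn.decomposition import FastICA

X = np.random.default_rng(0).normal(size=(1000, 40))  # devices x tests

ica = FastICA(n_components=5, whiten="unit-variance", random_state=0)
S = ica.fit_transform(X)                    # reduced, transformed data

med = np.median(S, axis=0)
mad = np.median(np.abs(S - med), axis=0)
robust_z = 0.6745 * (S - med) / mad         # MAD-based z-scores per component
suspicious = np.any(np.abs(robust_z) > 4.0, axis=1)   # devices to flag
```

Scoring on independent components rather than on the raw tests is what lets dependencies between tests be taken into account during screening.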
408
Principal Component Modelling of Fuel Consumption of Seagoing Vessels and Optimising Fuel Consumption as a Mixed-Integer Problem
Ivan, Jean-Paul, January 2020
The fuel consumption of a seagoing vessel is, through a combination of Box-Cox transforms and principal component analysis, reduced to a univariate function of the primary principal component, with mean model error −3.2% and error standard deviation 10.3%. In the process, a Latin-hypercube-inspired space-partitioning sampling technique is developed and successfully used to produce a representative sample used in determining the regression coefficients. Finally, a formal optimisation problem for minimising the fuel use is described. The problem is derived from a parametrised expression for the fuel consumption, and has only 3 (or 2, if simplified) free variables at each timestep. Some information has been redacted in order to comply with NDA restrictions. Most redactions are either names (of vessels or otherwise), units, and in some cases (especially on figures) quantities. / Presentation was performed remotely using Zoom.
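The modelling chain lends itself to a short sketch. The version below Box-Cox-transforms each feature, projects onto the first principal component and fits a univariate model; the cubic form and the synthetic data are assumptions, since the thesis's parametrisation is partly redacted.

```python
# Sketch of the chain on synthetic data: Box-Cox per feature, PCA to one
# component, univariate fit; the cubic form is an illustrative assumption.
import numpy as np
from scipy.stats import boxcox
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((300, 6)) + 0.1              # Box-Cox needs positive data
y = 2.0 + 3.0 * X[:, 0] + rng.normal(0, 0.1, 300)   # stand-in fuel use

Xt = np.column_stack([boxcox(X[:, j])[0] for j in range(X.shape[1])])
pc1 = PCA(n_components=1).fit_transform(Xt).ravel()

coef = np.polyfit(pc1, y, deg=3)            # univariate model f(pc1)
rel_err = (np.polyval(coef, pc1) - y) / y   # cf. the quoted -3.2% / 10.3%
```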
409
L'analyse probabiliste en composantes latentes et ses adaptations aux signaux musicaux : application à la transcription automatique de musique et à la séparation de sources / Probabilistic latent component analysis and its adaptation to musical signals: application to automatic music transcription and source separation
Fuentes, Benoît, 14 March 2013 (has links)
La transcription automatique de musique polyphonique consiste à estimer automatiquement les notes présentes dans un enregistrement via trois de leurs attributs : temps d'attaque, durée et hauteur. Pour traiter ce problème, il existe une classe de méthodes dont le principe est de modéliser un signal comme une somme d'éléments de base, porteurs d'informations symboliques. Parmi ces techniques d'analyse, on trouve l'analyse probabiliste en composantes latentes (PLCA). L'objet de cette thèse est de proposer des variantes et des améliorations de la PLCA afin qu'elle puisse mieux s'adapter aux signaux musicaux et ainsi mieux traiter le problème de la transcription. Pour cela, un premier angle d'approche est de proposer de nouveaux modèles de signaux, en lieu et place du modèle inhérent à la PLCA, suffisamment expressifs pour pouvoir s'adapter aux notes de musique possédant simultanément des variations temporelles de fréquence fondamentale et d'enveloppe spectrale. Un deuxième aspect du travail effectué est de proposer des outils permettant d'aider l'algorithme d'estimation des paramètres à converger vers des solutions significatives via l'incorporation de connaissances a priori sur les signaux à analyser, ainsi que d'un nouveau modèle dynamique. Tous les algorithmes ainsi imaginés sont appliqués à la tâche de transcription automatique. Nous voyons également qu'ils peuvent être directement utilisés pour la séparation de sources, qui consiste à séparer plusieurs sources d'un mélange, et nous proposons deux applications dans ce sens. / Automatic music transcription consists in automatically estimating the notes in a recording, through three attributes: onset time, duration and pitch. To address this problem, there is a class of methods which is based on the modeling of a signal as a sum of basic elements, carrying symbolic information. Among these analysis techniques, one can find the probabilistic latent component analysis (PLCA). The purpose of this thesis is to propose variants and improvements of the PLCA, so that it can better adapt to musical signals and thus better address the problem of transcription. To this aim, a first approach is to put forward new models of signals, instead of the model inherent to PLCA, expressive enough so they can adapt to musical notes having variations of both pitch and spectral envelope over time. A second aspect of this work is to provide tools to help the parameter estimation algorithm converge towards meaningful solutions through the incorporation of prior knowledge about the signals to be analyzed, as well as a new dynamic model. All the devised algorithms are applied to the task of automatic transcription. They can also be directly used for source separation, which consists in separating several sources from a mixture, and two applications are put forward in this direction.
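For orientation, the generic PLCA model underlying this work factors a normalized magnitude spectrogram as P(f,t) = Σ_z P(z)P(f|z)P(t|z) and is fitted by expectation-maximization. The sketch below implements only this textbook baseline, not the thesis's extended models, priors or dynamic variants, and runs on a random stand-in spectrogram.

```python
# Bare-bones EM for the generic 2-D PLCA model on a nonnegative
# spectrogram V (frequencies x time frames); a didactic sketch only.
import numpy as np

def plca(V, n_z=8, n_iter=200, eps=1e-12, seed=0):
    P = V / V.sum()                                # spectrogram as P(f, t)
    F, T = P.shape
    rng = np.random.default_rng(seed)
    Pf = rng.random((F, n_z)); Pf /= Pf.sum(0)     # spectral bases P(f|z)
    Pt = rng.random((T, n_z)); Pt /= Pt.sum(0)     # activations P(t|z)
    Pz = np.full(n_z, 1.0 / n_z)                   # component weights P(z)
    for _ in range(n_iter):
        model = (Pf * Pz) @ Pt.T + eps             # current estimate of P(f, t)
        R = P / model                              # E-step ratio
        Pf_num = Pf * (R @ Pt) * Pz                # sum_t P(f,t) P(z|f,t)
        Pt = Pt * (R.T @ Pf) * Pz                  # sum_f P(f,t) P(z|f,t)
        Pt /= Pt.sum(0) + eps
        norm = Pf_num.sum(0)
        Pf = Pf_num / (norm + eps)
        Pz = norm / (norm.sum() + eps)
    return Pf, Pz, Pt

V = np.abs(np.random.default_rng(1).normal(size=(257, 400))) ** 2
Pf, Pz, Pt = plca(V, n_z=8)
```

Each latent component z pairs a spectral basis P(f|z) with a temporal activation P(t|z), which is what carries the symbolic information (e.g., note pitch and onset) mentioned above.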
410
Comprehensive Molecular and Clinical Characterization of Retinoblastoma / Caractérisation moléculaire et clinique complète du rétinoblastome
Sefta, Meriem, 02 November 2015
Le rétinoblastome est un cancer pédiatrique rare de la rétine en cours de développement. Si dans les pays développés, le taux de survie avoisine 100%, une énucléation de l'oeil atteint est cependant nécessaire dans plus de 70% des cas. En 1971, Knudson émit l'hypothèse des deux "hits", qui permit de comprendre que le rétinoblastome s'initie généralement après une perte bi-allélique du gène RB1. Cependant, les autres mécanismes moléculaires qui régissent ce cancer restent peu connus. Par exemple, peu d'études génomiques ont été conduites. Ainsi, la nature de la cellule d'origine, ainsi que la présence ou non d'une hétérogénéité intertumorale, font encore débat. Dans cette étude, nous avons dressé un portrait génomique et clinique complet du rétinoblastome; plusieurs observations ont montré qu'il s'agit bien d'une maladie hétérogène, avec deux sous-types distincts. Nous avons d'abord identifié les deux sous-types avec à une approche couplant une analyse en composantes indépendantes (ACI) de transcriptomes tumoraux avec des marquages immunohistochimiques. Les rétinoblastomes du premier sous-type, dits "cone-like", expriment uniformément des marqueurs de cônes, tandis que ceux du second sous-type, dits "bivalent-type", ont une forte hétérogénéité intratumorale, avec un enchevêtrement de zones de différenciation ganglionnaire ou cône. Grâce à une étude plus approfondie des transcriptomes et de données d'altérations génomiques, nous avons ensuite montré que les sous-types dépendent de voies de signalisation et d'oncogènes différents. Les bivalent-type ont notamment une présence quasi-systématique de gains de MDM4 ou d'amplifications de MYCN. Nous nous sommes ensuite tournés vers les méthylomes des rétinoblastomes, et constaté une forte hétérogénéité entre les sous-types. Nous avons décomposé cette hétérogénéité grâce à une ACI, et constaté qu'elle n'était pas liée uniquement à la différenciation cône ou ganglion. Nous avons ensuite étudié les données cliniques de la cohorte, et constaté que les sous-types avaient des âges au diagnostic et des formes de croissance différents, les tumeurs cone-like se développant généralement chez des patients jeunes avec des tumeurs exophytiques, et les bivalent-type chez des patients plus âgés avec des tumeurs endophytiques. De plus, les patients avec des inactivations constitutionnelles du gène RB1 développent majoritairement des tumeurs cone-like; les cone-like s'initieraient donc plus tôt durant le développement de la rétine. Nous avons finalement séquencé les exomes de 74 paires tumeur-normal. Les rétinoblastomes avaient un taux de mutations extrêmement faible (0.1 mutations par mégabase), comme beaucoup de cancers pédiatriques. Nous avons identifié des mutations somatiques récurrentes dans RB1, BCOR et ARID1A. Ces gènes se trouvaient de plus dans des régions minimales de pertes chromosomiques. Surtout, les inactivations des deux gènes avaient souvent de fortes fréquences alléliques. Ceci indique que ces inactivations ont lieu précocement dans la tumorigénèse. En conclusion, notre étude a permis de dresser un premier portrait génomique complet du rétinoblastome, a révélé l'existence de deux sous-types distincts, ainsi que fourni des indices quant à la cellule d'origine de chaque sous-type, et les mécanismes moléculaires les régissant. / Retinoblastoma is a rare pediatric cancer of the developing retina. In high-income countries, survival rates approach 100%; however, enucleation of the affected eye has to be performed in over 70% of patients.
Knudson’s 1971 two-hit hypothesis led to the discovery that this cancer usually initiates after a bi-allelic loss of the RB1 gene. Despite this early finding, little is known about the other molecular underpinnings of retinoblastoma. For instance, few genome-wide studies have described the genetic and epigenetic characteristics of these tumors. Furthermore, there is still no clear consensus regarding this cancer’s cell of origin, or whether or not it is a homogeneous disease. In this study, we built a comprehensive molecular and clinical portrait of retinoblastoma. Several lines of evidence led us to conclude that retinoblastoma is in fact a heterogeneous disease, with two distinct subtypes. We first uncovered the subtypes through a strategy that coupled an independent component analysis (ICA) of tumor transcriptomes to tumor immunohistochemical stainings. Retinoblastomas of the first subtype, called “cone-like”, homogeneously display cone-like differentiation, while those of the second subtype, called “bivalent-type”, exhibit strong intratumoral heterogeneity, with areas of cone-like differentiation intertwined with areas of ganglion-like differentiation. Further analysis of the transcriptomic data, as well as of copy number alteration data, revealed that the two subtypes may rely on different pathways and oncogenes. We notably observed a quasi-systematic presence of MDM4 gains or MYCN amplifications in bivalent-type tumors. We next turned to retinoblastomas’ methylomes; these varied considerably between the subtypes. ICA allowed us to decompose this inter-subtype methylomic heterogeneity, which was found to go beyond methylation due to cone-like or ganglion-like differentiation. We next studied the tumors’ clinical data, and found that cone-like tumors are most often diagnosed in very young patients with exophytic tumor growth, while bivalent-type tumors are found in older patients with endophytic tumor growth. Furthermore, patients with germline inactivations of RB1 mostly developed cone-like retinoblastomas, indicating that these tumors may initiate earlier during retinal development. In the final part of our study, we performed whole-exome sequencing of 74 tumor-normal pairs. Like many pediatric cancers, the tumors had very low background mutation rates (0.1 mutations per megabase). Recurrent somatic mutations were found in RB1, BCOR and ARID1A, and these genes were also found to be in minimal regions of chromosomal losses. Importantly, both inactivations often had very high allelic frequencies, indicating that these events occur very early on in retinoblastoma tumorigenesis. Taken together, our study outlines a first comprehensive genomic portrait of retinoblastomas, points to the existence of two distinct subtypes, and provides insights into the cells of origin and the molecular mechanisms underlying these subtypes.
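As a schematic of the ICA step alone (the study's cohort handling and gene curation are not reproduced), the snippet below factors an expression matrix into metagene weights and per-tumor component activities; the matrix orientation, centring and component count are assumptions for illustration.

```python
# Illustrative ICA decomposition on a stand-in expression matrix
# (genes x tumors); component count and orientation are assumptions.
import numpy as np
from sklearn.decomposition import FastICA

E = np.random.default_rng(0).normal(size=(5000, 60))  # centred log-expression

ica = FastICA(n_components=10, random_state=0)
gene_weights = ica.fit_transform(E)   # (n_genes, k) metagene weights
activities = ica.mixing_              # (n_tumors, k) activity of each component

comp = 0   # e.g. an axis separating cone-like from bivalent-type tumors
top_genes = np.argsort(np.abs(gene_weights[:, comp]))[::-1][:50]
```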