  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Semantic content analysis for effective video segmentation, summarisation and retrieval.

Ren, Jinchang January 2009 (has links)
This thesis focuses on four main research themes, namely shot boundary detection, fast frame alignment, activity-driven video summarisation, and highlights-based video annotation and retrieval. A number of novel algorithms have been proposed to address these issues, which can be highlighted as follows. Firstly, accurate and robust shot boundary detection is achieved through modelling of cuts into sub-categories and appearance-based modelling of several gradual transitions, along with some novel features extracted from compressed video. Secondly, fast and robust frame alignment is achieved via the proposed subspace phase correlation (SPC) and an improved sub-pixel strategy. The SPC is proved to be insensitive to zero-mean noise, and its gradient-based extension is robust even to non-zero-mean noise and can deal with non-overlapped regions for robust image registration. Thirdly, hierarchical modelling of rush videos using formal language techniques is proposed, which can guide the modelling and removal of several kinds of junk frames as well as adaptive clustering of retakes. With an extracted activity-level measurement, shots and sub-shots are detected for content-adaptive video summarisation. Fourthly, highlights-based video annotation and retrieval is achieved, in which statistical modelling of skin pixel colours, knowledge-based shot detection, and improved determination of camera motion patterns are employed. Within these proposed techniques, one important principle is to integrate various kinds of feature evidence and to incorporate prior knowledge in modelling the given problems. A high-level hierarchical representation is extracted from the original linear structure for effective management and content-based retrieval of video data. As most of the work is implemented in the compressed domain, one additional benefit is high efficiency, which will be useful for many online applications. / EU IST FP6 Project
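The subspace phase correlation mentioned above builds on classical phase correlation, which recovers the translation between two frames from the phase of their cross-power spectrum. A minimal sketch of the classical full-image, integer-pixel version follows — not the thesis's SPC or its sub-pixel refinement:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation of frame `a` relative to frame `b`
    from the peak of the phase-correlation surface."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    surface = np.fft.ifft2(cross).real      # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    # wrap peaks in the upper half of the spectrum back to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (5, -3), axis=(0, 1))
print(phase_correlation(shifted, frame))    # → (5, -3)
```

For a pure circular shift the correlation surface is an exact delta function, so the integer peak is unambiguous; real frames need windowing and sub-pixel interpolation of the peak, which is where refinements such as the SPC come in.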
13

An improved description of the ratcheting (progressive deformation) phenomenon in metals through the inclusion of yield-surface distortion

Vincent, Ludovic 18 October 2002 (has links) (PDF)
If a metal is subjected to cyclic loading about a non-zero mean stress, plastic strain can accumulate over the cycles, generally in the direction of the mean stress: this is progressive deformation, also known as ratcheting. The context of this thesis was the improvement of the description of this phenomenon in metallic structures. To that end, we chose to focus our efforts on phenomenological macroscopic models, which describe material behaviour in structural computations. As suggested in several recent articles, one way to improve these models was to take into account the distortion of the elastic domain, a phenomenon observed experimentally many times but until now neglected for the sake of simplicity. To introduce this new ingredient properly into the modelling, we relied on a dialogue between a micro-macro model on the one hand (more relevant, but also more expensive in computation time) and the macroscopic model to be constructed on the other. We exploited the information provided by the micro-macro model to build a macroscopic model capable of describing both the multiaxial ratcheting and the distortion of the elastic domain predicted by the micro-macro model, over a large base of "virtual experiments". The developed model was then successfully identified and validated against several complex test results obtained on a volume element of a real material. A final validation of the model on a non-homogeneous (structural) test is under way.
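Ratcheting of the kind described above is commonly reproduced with a nonlinear kinematic hardening rule. A minimal 1D sketch using an Armstrong-Frederick backstress — an illustrative textbook model, not the distorted-yield-surface model developed in the thesis; all parameter values are invented:

```python
import numpy as np

def cycle_stress(s_min, s_max, n_cycles, n_steps=200,
                 sy=100.0, C=20e3, gamma=200.0):
    """Drive a 1D Armstrong-Frederick elastoplastic point through stress
    cycles and return the plastic strain at the end of each cycle.
    Yield: |sigma - X| <= sy; backstress: dX = C*dep - gamma*X*|dep|.
    Parameter values are illustrative, not taken from the thesis."""
    sig, X, ep = 0.0, 0.0, 0.0
    history = []
    for _ in range(n_cycles):
        path = np.concatenate([np.linspace(sig, s_max, n_steps),
                               np.linspace(s_max, s_min, n_steps)])
        for s_new in path:
            dsig = s_new - sig
            sig = s_new
            s = np.sign(sig - X)
            # plastic flow only while beyond the yield surface and loading
            if abs(sig - X) > sy and s * dsig > 0:
                lam = s * dsig / (C - gamma * s * X)  # from the consistency condition
                ep += lam * s
                X += (C * s - gamma * X) * lam
        history.append(ep)
    return history

print(cycle_stress(-60.0, 180.0, 3))
```

Cycling the stress between -60 and 180 MPa (a non-zero mean) makes the end-of-cycle plastic strain grow cycle after cycle: this is the ratcheting effect the thesis sets out to describe more accurately.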
14

Modelagem hierárquica para a equação do calor em uma placa heterogênea / Hierarchical modelling for the heat equation in a heterogeneous plate

Oliveira, Ana Carolina Carius de 15 March 2006 (has links)
In this dissertation, we study the stationary heat equation in a heterogeneous three-dimensional plate using a dimension-reduction technique known as hierarchical modelling, which yields a model of the original problem posed on a two-dimensional domain. To estimate the modelling error, we develop an asymptotic expansion of the solution of the original problem and of the approximate solution. Comparing both solutions with their respective asymptotic expansions, we obtain an estimate of the modelling error. We carry out computational experiments using the Residual Free Bubbles (RFB) method and the Multiscale Finite Element Method for the diffusion problem and for the diffusion-reaction problem on a two-dimensional domain with small parameters. Based on these experiments, we obtain numerical solutions for the original three-dimensional plate problem. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
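The dimension-reduction step described above can be sketched generically: the through-thickness dependence is expanded in a fixed basis, turning the 3D problem into a coupled family of 2D problems. The basis and truncation order below are illustrative, not the dissertation's specific choice:

```latex
% Hierarchical (dimension-reduction) ansatz for a plate of thickness 2\epsilon:
% expand the through-thickness dependence in a basis \varphi_i.
u^\epsilon(x, y, z) \;\approx\; \sum_{i=0}^{N} u_i(x, y)\, \varphi_i\!\left(\tfrac{z}{\epsilon}\right),
\qquad (x, y) \in \omega \subset \mathbb{R}^2,\quad |z| \le \epsilon .
% Substituting this into the weak form of the stationary heat equation and
% testing against the same subspace yields a coupled system of 2D problems
% for the coefficients u_i on \omega; the modelling error is then controlled
% by comparing asymptotic expansions of u^\epsilon and of the truncated model.
```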
15

Aide à la décision pour la conservation des populations de saumon atlantique (Salmo salar L.) / Decision making for the conservation of Atlantic salmon populations (Salmo salar L.)

Brun, Mélanie 16 December 2011 (has links)
The sustainable management of natural living resources is a major issue in a context of increasing scarcity due to human impact and of pervasive uncertainty. Improving existing tools and developing new ones to advise decision makers on the potential evolution of natural living resources, according to various management and environmental scenarios, is needed. This PhD aims at contributing to the development of a methodology for decision making in natural living resource management, while taking into account major sources of uncertainty. This is achieved through the case study of the Atlantic salmon (Salmo salar L.) population of the Nivelle River (France). This population is subject to a long-term monitoring programme and the species has been extensively studied. Atlantic salmon is a threatened species but is still targeted by commercial and recreational fisheries; it illustrates the duality between conservation and exploitation which is at the heart of natural living resource management. To manage a population, it is necessary to understand its dynamics and to predict its evolution under various management and environmental scenarios. The Bayesian approach provides a coherent framework to quantify uncertainty in its different forms. Hierarchical models allow the assimilation of multiple sources of data and support spatio-temporal inferences and predictions. A Bayesian state-space model, i.e. a Bayesian dynamic hierarchical model, is constructed to study the dynamics of the population of interest and to predict its evolution. Decision theory under uncertainty provides a framework to help an individual in their choices, but its application still raises difficulties. In theory, a utility function depending on the consequences of alternative actions reflects the preferences of a single individual involved in a decision problem. In practice, its construction is challenging. Firstly, it is difficult to assign a value to each consequence. Secondly, there is usually more than one individual involved in the decision problem. Consequently, we obtain a set of utility functions. Because of the various and often conflicting interests the decision maker has to take into account, the utility function is multivariate. In this PhD, a set of bivariate utility functions is constructed. It accounts for the uncertainty about the function, the variation of preferences among stakeholders, and the dual interests of exploitation versus conservation. Next, a robustness analysis is performed to study whether the optimal decision, i.e. the one associated with the maximum expected utility, varies when the utility function varies. The methodology developed in this PhD proved practicable and fruitful. It provides a coherent framework for organizing the interactions between scientists, stakeholders and decision makers so as to reach a common understanding of decision problems in the management of natural living resources. By explicitly acknowledging the diversity among stakeholders, it makes it possible to identify potential conflicts and helps guide decision makers towards acceptable trade-off actions. However, it requires a high level of training and expertise in modelling and computation, and it involves thoughtful, time-consuming analyses. How to render these requirements compatible with the current level of expertise and the short-term agendas of management bodies is a major challenge for the near future.
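The maximum-expected-utility criterion discussed above can be sketched in a few lines: draw from a posterior predictive distribution, score each management action with a bivariate utility, and check whether the optimum is robust to the preference weight. Everything here (the actions, the posterior, the utility form, the weights) is invented for illustration and is not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior-predictive draws of salmon returns under three
# candidate exploitation rates (all numbers invented for illustration).
harvest_rates = [0.0, 0.1, 0.3]
posterior_returns = {h: rng.normal(1000 * (1 - h), 150, size=5000)
                     for h in harvest_rates}

def utility(catch, spawners, w):
    """Bivariate utility trading exploitation (catch) against conservation
    (spawners); the log form and weight w are illustrative assumptions."""
    return (w * np.log1p(np.maximum(catch, 0))
            + (1 - w) * np.log1p(np.maximum(spawners, 0)))

def expected_utility(h, w):
    """Monte Carlo estimate of expected utility of harvest rate h."""
    returns = posterior_returns[h]
    return utility(h * returns, (1 - h) * returns, w).mean()

# Robustness analysis: does the optimal action change with the preference
# weight across stakeholders?
for w in (0.2, 0.5, 0.8):
    best = max(harvest_rates, key=lambda h: expected_utility(h, w))
    print(f"w={w}: optimal harvest rate {best}")
```

Running this shows the optimum shifting towards higher exploitation as the weight on catch grows, which is precisely the kind of sensitivity the robustness analysis in the abstract is designed to expose.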
16

Dynamic Bayesian models for modelling environmental space-time fields

Dou, Yiping 05 1900 (has links)
This thesis addresses spatial interpolation and temporal prediction for air pollution data through several space-time modelling approaches. Firstly, we implement the dynamic linear modelling (DLM) approach for spatial interpolation, identify various potential problems with that approach, and develop software implementing it. Secondly, we implement a Bayesian spatial prediction (BSP) approach to model spatio-temporal ground-level ozone fields and compare its accuracy with that of the DLM. Thirdly, we develop a Bayesian version of the empirical orthogonal function (EOF) method to incorporate the uncertainties due to the temporally varying spatial process and the spatial variation at broad and fine scales. Finally, we extend the BSP into the DLM framework to develop a unified Bayesian spatio-temporal model for univariate and multivariate responses, generalizing a number of current approaches in this field.
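The simplest dynamic linear model is the local-level model, filtered with the standard Kalman recursions. A minimal sketch follows, with the state and observation variances fixed by hand, whereas a real spatio-temporal analysis such as the one above would estimate them and couple many sites:

```python
import numpy as np

def kalman_local_level(y, q=1.0, r=4.0):
    """Local-level DLM: state x_t = x_{t-1} + w_t (var q),
    observation y_t = x_t + v_t (var r). Returns the filtered means.
    q and r are illustrative fixed values."""
    m, P = 0.0, 1e6                      # diffuse prior on the initial level
    means = []
    for obs in y:
        P_pred = P + q                   # predict: variance inflates by q
        K = P_pred / (P_pred + r)        # Kalman gain
        m = m + K * (obs - m)            # update mean towards the observation
        P = (1 - K) * P_pred             # update variance
        means.append(m)
    return np.array(means)

y = np.array([10.0, 10.5, 9.8, 30.0, 10.2])
print(kalman_local_level(y))
```

Note how the filtered mean only partially follows the outlying observation (30.0): the gain balances the prediction variance against the observation noise, which is the mechanism the DLM uses to smooth noisy pollution series.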
19

Machine Learning methods in shotgun proteomics

Truong, Patrick January 2023 (has links)
As high-throughput biology experiments generate increasing amounts of data, the field is naturally turning to data-driven methods for the analysis and extraction of novel insights. These insights into biological systems are crucial for understanding disease progression, drug targets, treatment development, and diagnostic methods, ultimately leading to improved human health and well-being, as well as deeper insight into cellular biology. Biological data sources such as the genome, transcriptome, proteome, metabolome, and metagenome provide critical information about the structure, function, and dynamics of biological systems. The focus of this licentiate thesis is on proteomics, the study of proteins, which is a natural starting point for understanding biological function since proteins are crucial functional components of cells: they play central roles in enzymatic reactions, structural support, transport, storage, cell signaling, and immune system function. In addition, proteomics has vast data repositories, and continual technical and methodological improvements yield ever more data. However, generating proteomic data involves multiple error-prone steps, making sophisticated models essential for handling technical and biological artifacts and accounting for uncertainty in the data. In this licentiate thesis, the use of machine learning and probabilistic methods to extract information from mass-spectrometry-based proteomic data is investigated. The thesis starts with an introduction to proteomics, including a basic biological background, followed by a description of how mass-spectrometry-based proteomics experiments are performed and of the challenges in proteomic data analysis. The statistics of proteomic data analysis are also explored, and state-of-the-art software and tools for each step of the proteomics data-analysis pipeline are presented. 
The thesis concludes with a discussion of future work and the presentation of two original research works. The first focuses on adapting Triqler, a probabilistic graphical model for protein quantification developed for data-dependent acquisition (DDA) data, to data-independent acquisition (DIA) data. Challenges in this study included verifying that DIA data conformed to the model used in Triqler, addressing benchmarking issues, and modifying Triqler's missing-value model for DIA data. The study showed that DIA data has the properties Triqler requires, implemented a protein-inference harmonization strategy, modified the missing-value model accordingly, and concluded by showing that Triqler outperformed current protein quantification techniques. The second research work focused on developing a novel deep-learning-based MS2-intensity predictor by incorporating the transformer self-attention mechanism into Prosit, an established recurrent neural network (RNN)-based deep-learning framework for MS2 spectrum intensity prediction. RNNs process sequential data efficiently by carrying information forward from previous steps; the transformer's self-attention mechanism instead lets a model attend to different parts of its input sequence independently during processing, capturing dependencies and relationships between elements more effectively. Since transformers remedy some drawbacks of RNNs, we hypothesized that implementing the MS2-intensity predictor with transformers rather than an RNN would improve its performance. Prosit-transformer was therefore developed, and the study showed that both the model training time and the similarity between the predicted and observed MS2 spectra improved. These original research works address various challenges in computational proteomics and contribute to the development of data-driven life science. / QC 2023-05-22
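The transformer building block mentioned above, scaled dot-product self-attention, is compact enough to sketch directly. This is a single head with no masking, positional encoding, or multi-head projection; the shapes and random weights are illustrative, not Prosit-transformer's actual configuration:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X has shape (seq_len, d_model); each weight matrix (d_model, d_k)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # context-mixed values

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 6, 8, 4          # e.g. a 6-residue peptide fragment
X = rng.normal(size=(seq_len, d_model))
W = [rng.normal(size=(d_model, d_k)) for _ in range(3)]
out = self_attention(X, *W)
print(out.shape)                         # (6, 4)
```

Every output position is a weighted mixture over all input positions at once, which is what lets attention capture long-range dependencies without the step-by-step recurrence of an RNN.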
