61

Computer Vision Approaches for Mapping Gene Expression onto Lineage Trees

Lalit, Manan 06 December 2022 (has links)
This project concerns the early development of living organisms, a period accompanied by dynamic morphogenetic events: the number of cells increases, cells change shape, and cell fates are specified. Typically, to capture these dynamic morphological changes, one can employ a form of microscopy imaging such as Selective Plane Illumination Microscopy (SPIM), which offers single-cell resolution across time and hence allows observing the positions, velocities and trajectories of most cells in a developing embryo. Unfortunately, the dynamic genetic activity which underlies these morphological changes and influences cellular fate decisions is captured only as static snapshots, and often requires processing (sequencing or imaging) multiple distinct individuals. To set the stage for characterizing the factors that influence cellular fate, the data arising from these static snapshots of multiple individuals must be brought into the same frame of reference as the data arising from SPIM imaging of other distinct individuals, which characterizes the changes in morphology. In this project, a computational pipeline is established which achieves this goal of mapping data from various imaging modalities and specimens to a canonical frame of reference. The pipeline relies on three core building blocks: instance segmentation, tracking and registration. In this dissertation, I introduce EmbedSeg, my solution for instance segmentation of 2D and 3D (volume) image data; LineageTracer, my solution for tracking time-lapse (2D+t, 3D+t) recordings; and PlatyMatch, my solution for registration of volumes. Errors from the application of these building blocks accumulate, producing a noisy estimate of gene expression for the digitized cells in the canonical frame of reference. These noisy estimates are processed to infer the underlying hidden state using a Hidden Markov Model (HMM) formulation. Lastly, wider dissemination of these methods requires an effective visualization strategy; the employed approach is also discussed in the dissertation. The pipeline was designed with imaging volume data in mind, but can easily be extended to incorporate other data modalities, if available, such as single-cell RNA sequencing (scRNA-Seq) (more details are provided in the Discussion chapter). The methods elucidated in this dissertation provide a fertile playground for several future experiments and analyses; some of these potential experiments, together with current weaknesses of the computational pipeline, are also discussed in the Discussion chapter.
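As a concrete illustration of the HMM denoising step described in the abstract, the following is a minimal sketch of Viterbi decoding over a per-cell sequence of noisy on/off expression calls. The two hidden states and all transition/emission probabilities are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Return the most likely hidden-state path for a sequence of
    discrete observations (log-space to avoid underflow)."""
    n_states = A.shape[0]
    T = len(obs)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, n_states))            # best log-prob ending in each state
    psi = np.zeros((T, n_states), dtype=int)   # back-pointers
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA  # scores[i, j]: prev state i -> state j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(n_states)] + logB[:, obs[t]]
    # Backtrack from the best final state.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Hidden states: 0 = gene off, 1 = gene on (assumed binary model).
A = np.array([[0.9, 0.1],    # expression states tend to persist over time
              [0.1, 0.9]])
B = np.array([[0.8, 0.2],    # P(observe "on" | truly off) = 0.2 (pipeline noise)
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])

noisy_calls = np.array([0, 0, 1, 0, 1, 1, 1, 0, 1, 1])  # toy per-timepoint calls
print(viterbi(noisy_calls, A, B, pi))
```

Running the script prints the most likely hidden on/off path, which smooths out isolated flips in the noisy observation sequence.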
62

Multiple-Instance Learning from Distributions

Doran, Gary Brian, Jr. 06 February 2015 (has links)
No description available.
63

Multiple-Instance Feature Ranking

Latham, Andrew C. 26 January 2016 (has links)
No description available.
64

A generic and extensible asset model for a semantic collaboration framework

Amir, Mohammad, Hu, Yim Fun, Pillai, Prashant 25 February 2014 (has links)
Analysis of the existing literature reveals a growing need to tackle the issue of unified data dissemination. Where this issue has received attention, the reach has been more or less limited to similar systems (i.e. cross-instance collaboration), and little focus has been applied to the problem of exposing this data or knowledge to third parties (i.e. cross-vendor collaboration). This paper proposes an integration of semantic technologies within the Web of Things, based on the concepts and principles of Service-Oriented Architecture, to realize a distributed and semi-autonomous collaboration framework capable of offering cross-vendor information exchange and collaboration facilities. Powered by a semantic engine and exposed as a web application with a RESTful API, the generic framework realizes an extensible knowledge management and exchange system that accounts for the dynamic landscape of business-centric Web of Things applications. Disaster management is taken as a potential application scenario to critically analyse and evaluate the system prototype, and the evaluation shows that the asset model of the proposed framework is sufficiently capable of meeting modern and next-generation collaboration needs in a world of ever-increasing cross-vendor information sharing.
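To make the abstract's description more tangible, here is a minimal, purely illustrative sketch of what exposing an asset over a RESTful API could look like. Flask, the endpoint path and the JSON-LD field names are assumptions for illustration; they are not the asset model or implementation described in the paper:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory asset store; the @context and field names are
# illustrative only, not the paper's asset model.
ASSETS = {
    "sensor-42": {
        "@context": "https://schema.org",
        "@type": "Thing",
        "identifier": "sensor-42",
        "name": "Flood-level sensor",
        "provider": "vendor-a",
    }
}

@app.route("/assets/<asset_id>", methods=["GET"])
def get_asset(asset_id):
    """Expose a single asset as JSON-LD so third-party (cross-vendor)
    consumers can interpret it against a shared vocabulary."""
    asset = ASSETS.get(asset_id)
    if asset is None:
        return jsonify({"error": "unknown asset"}), 404
    return jsonify(asset)

if __name__ == "__main__":
    app.run(port=8080)
```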
65

Increasing big data front end processing efficiency via locally sensitive Bloom filter for elderly healthcare

Cheng, Yongqiang, Jiang, Ping, Peng, Yonghong January 2015 (has links)
To support the increasing elderly population, wearable sensors and portable mobile devices capable of monitoring, recording, reporting and alerting are envisaged to enable an independent lifestyle without reliance on intrusive care programmes. However, the big-data readings generated by the sensors are multidimensional, dynamic and non-linear, with weak correlation to observable human behaviours and health conditions, which challenges information transmission, storage and processing. This paper proposes using a Locality-Sensitive Bloom Filter to increase the efficiency of instance-based learning during front-end sensor-data pre-processing, so that only relevant and meaningful information is sent on for further processing, relieving the burden of the above big-data challenges. The approach is shown to optimize and enhance a popular instance-based learning method, benefiting from faster speed and lower space requirements, and is adequate for the application.
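The following is a minimal sketch of the general locality-sensitive Bloom filter idea the abstract relies on: replacing a Bloom filter's hash functions with random-hyperplane LSH, so that similar sensor readings probe the same bits. All parameters and the class interface are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class LocalitySensitiveBloomFilter:
    """Sketch of a locality-sensitive Bloom filter: each 'hash function'
    is a group of random hyperplanes whose sign pattern maps a feature
    vector to a bit position, so nearby vectors tend to collide."""

    def __init__(self, dim, n_bits=1024, n_hashes=4, planes_per_hash=12, seed=0):
        rng = np.random.default_rng(seed)
        self.bits = np.zeros(n_bits, dtype=bool)
        self.n_bits = n_bits
        # One set of random hyperplanes per hash function.
        self.planes = [rng.standard_normal((planes_per_hash, dim))
                       for _ in range(n_hashes)]

    def _indices(self, x):
        for P in self.planes:
            sig = (P @ x > 0)                    # k-bit LSH signature
            key = int.from_bytes(np.packbits(sig).tobytes(), "big")
            yield key % self.n_bits

    def add(self, x):
        for i in self._indices(x):
            self.bits[i] = True

    def contains_similar(self, x):
        """True if a reading similar to x was (probably) seen before."""
        return all(self.bits[i] for i in self._indices(x))

# Only forward readings that look novel; skip near-duplicates.
lsbf = LocalitySensitiveBloomFilter(dim=8)
reading = np.random.default_rng(1).standard_normal(8)
if not lsbf.contains_similar(reading):
    lsbf.add(reading)   # hypothetical "send for further processing" step
```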
66

Enhancing Athletic Training Through AI: A Comparative Analysis Of YOLO Versions For Image Segmentation In Velocity-Based Training

Ågren, Oscar, Palm, Johan January 2024 (has links)
This work explores the application of Artificial Intelligence (AI) in sports, specifically comparing You Only Look Once (YOLO) version 8 and version 9 models in the context of Velocity-Based Training (VBT) and resistance training. It aims to evaluate the models' performance in instance segmentation and their effectiveness in estimating velocity metrics. Additionally, methods for pixel-to-meter conversion and centroid selection on barbells are developed and discussed. The field of AI is growing rapidly, with great practical possibilities in the sports industry. Traditional methods of collecting and analyzing data with sensors are often expensive and unavailable to many coaches and athletes; by leveraging AI techniques, this work aims to provide insights into more cost-effective solutions. An experiment was conducted in which YOLOv8 and YOLOv9 models of different sizes were trained on a custom dataset. Using the resulting model weights, key VBT metrics were extracted from videos of squat, bench press and deadlift exercises and compared with sensor data. To automatically track the barbell in the videos, the centroids of bounding boxes were used, and to obtain velocity in meters per second, pixel-to-meter conversion ratios were computed using the Circular Hough Transform (see the sketch below). Findings indicate that the YOLOv8x model generally excels on performance metrics, although it records a high mean inference time. The YOLOv8m model overestimated mean velocity, peak velocity and range of motion, highlighting potential challenges for real-time VBT applications. Otherwise, all models performed very similarly to sensor data, occasionally differing in scale owing to faulty pixel-to-meter conversions. In conclusion, this work underscores AI's potential in the sports industry while identifying areas for further enhancement to ensure accuracy and reliability in applications.
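As a sketch of the two measurement steps the abstract describes, the snippet below estimates a pixel-to-meter ratio from a weight plate found with OpenCV's Circular Hough Transform and computes mean bar velocity from per-frame bounding-box centroids. The Hough parameters, the 0.45 m plate diameter and the toy centroid data are assumptions, not values from the thesis:

```python
import cv2
import numpy as np

PLATE_DIAMETER_M = 0.45  # assumed standard plate diameter, not a thesis value

def pixels_per_meter(gray_frame):
    """Estimate a pixel-to-meter ratio by locating the weight plate with
    the Circular Hough Transform (parameters are illustrative)."""
    circles = cv2.HoughCircles(gray_frame, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=200, param1=100, param2=40,
                               minRadius=40, maxRadius=300)
    if circles is None:
        return None
    _, _, r = circles[0, 0]            # strongest detected circle (x, y, r)
    return (2.0 * r) / PLATE_DIAMETER_M

def mean_velocity(centroids_px, fps, px_per_m):
    """Mean vertical bar velocity (m/s) from per-frame bounding-box
    centroids, e.g. as produced by a YOLO detector (hypothetical input)."""
    ys_m = np.array([c[1] for c in centroids_px], dtype=float) / px_per_m
    return np.mean(np.abs(np.diff(ys_m))) * fps

# Toy centroids; in practice px_per_m comes from pixels_per_meter(frame).
centroids = [(320, 400), (321, 388), (322, 375), (322, 361)]
print(mean_velocity(centroids, fps=30, px_per_m=900.0))  # ~0.43 m/s
```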
67

Characterization of difficult instances for NP-hard problems

Weber, Valentin 08 July 2013 (has links)
The empirical study of algorithms is a crucial topic in the design of new algorithms, because the evaluation context inevitably influences how algorithm quality is measured. Within this topic, we focus in particular on the relevance of the instances that form test beds. We formalize this criterion with the notion of "instance hardness", which depends on the practical performance of resolution methods. The core of the thesis is a tool for empirically evaluating instance hardness. The proposed approach benchmarks instances against a testbed of algorithms. We illustrate this experimental methodology by evaluating instance classes through several applications to the traveling salesman problem. We then present an approach for generating hard instances. It relies on operations that modify instances while allowing an optimal solution of one instance to be easily recovered from the other. We study its impact on the performance of resolution methods both theoretically and experimentally.
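One plausible formalization of the benchmarking idea described above (not necessarily the thesis's exact measure) is to score each instance by its mean normalized solving time across a testbed of solvers, as in this sketch:

```python
import time
import numpy as np

def hardness_scores(instances, solvers, time_limit=10.0):
    """Benchmark instances against a testbed of solvers: an instance's
    hardness is its mean normalized solving time over the testbed.
    Solvers are assumed to be callables that respect the time limit."""
    times = np.zeros((len(instances), len(solvers)))
    for i, inst in enumerate(instances):
        for j, solve in enumerate(solvers):
            start = time.perf_counter()
            solve(inst)
            times[i, j] = min(time.perf_counter() - start, time_limit)
    # Normalize per solver so fast and slow solvers weigh equally,
    # then average across the testbed.
    norm = times / times.max(axis=0, keepdims=True)
    return norm.mean(axis=1)   # one hardness score per instance
```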
68

Determining the Amount of an Insurance Claim for Damage Caused by Fire in a Hall in the Village of Stálky

Kutnohorský, Jakub January 2017 (has links)
The thesis is devoted to determining the amount of indemnity for damage to a hall facility caused by fire. The aim is to determine the amount of indemnity payable if the insured event occurs on the property. Furthermore, it deals with determining the new insurance value of the property, on which the premium is based. The theoretical part presents the basic concepts, the commonly used valuation methods, and a description of valuation in accordance with applicable laws and regulations.
69

Mid-level representations for modeling objects

Tsogkas, Stavros 15 January 2016 (has links)
In this thesis we propose the use of mid-level representations, in particular i) medial axes, ii) object parts, and iii) convolutional features, for modelling objects. The first part of the thesis deals with detecting medial axes in natural RGB images. We adopt a learning approach, utilizing colour, texture and spectral clustering features, to build a classifier that produces a dense probability map for symmetry. Multiple Instance Learning (MIL) allows us to treat scale and orientation as latent variables during training, while a variation based on random forests offers significant gains in running time. In the second part of the thesis we focus on object part modeling using both hand-crafted and learned feature representations. We develop a coarse-to-fine, hierarchical approach that uses probabilistic bounds on part scores to decrease the computational cost of mixture models with a large number of HOG-based templates. These efficiently computed probabilistic bounds allow us to quickly discard large parts of the image and to evaluate the exact convolution scores only at promising locations. Our approach achieves a 4x-5x speedup over the naive approach with minimal loss in performance. We also employ convolutional features to improve object detection: we use a popular CNN architecture to extract responses from an intermediate convolutional layer, integrate these responses into the classic DPM pipeline in place of hand-crafted HOG features, and observe a significant boost in detection performance (~14.5% increase in mAP). In the last part of the thesis we experiment with fully convolutional neural networks for the segmentation of object parts. We re-purpose a state-of-the-art CNN to perform fine-grained semantic segmentation of object parts and use a fully-connected CRF as a post-processing step to obtain sharp boundaries. We also inject prior shape information into the model through a Restricted Boltzmann Machine (RBM) trained on ground-truth segmentations. Finally, we train a new fully convolutional architecture from random initialization to segment different parts of the human brain in magnetic resonance image data. Our methods achieve state-of-the-art results on both types of data.
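As an illustration of the Multiple Instance Learning idea used in the first part, the sketch below scores a "bag" of per-(scale, orientation) feature vectors by taking the latent max over instance scores, so neither scale nor orientation needs explicit labels. The linear scorer and feature layout are illustrative assumptions:

```python
import numpy as np

def bag_score(instance_features, w):
    """MIL scoring: a pixel's 'bag' holds one feature vector per
    (scale, orientation) hypothesis; the latent max picks the best
    instance, so these variables need no explicit supervision."""
    scores = instance_features @ w     # one score per (scale, orientation)
    k = int(np.argmax(scores))         # latent variable: winning instance
    return scores[k], k

# Toy bag: 6 (scale, orientation) hypotheses, 16-D features each.
rng = np.random.default_rng(0)
bag = rng.standard_normal((6, 16))
w = rng.standard_normal(16)
s, latent = bag_score(bag, w)
print(f"bag score {s:.3f} from latent instance {latent}")
```

During training, the gradient would flow only through the winning instance, which is what lets scale and orientation remain latent.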
70

Developing a Semantic Framework for Healthcare Information Interoperability

Aydar, Mehmet 30 November 2015 (has links)
No description available.
