331

Flexible Sparse Learning of Feature Subspaces

Ma, Yuting January 2017 (has links)
It is widely observed that the performance of many traditional statistical learning methods degenerates when confronted with high-dimensional data. One promising approach to preventing this downfall is to identify the intrinsic low-dimensional spaces in which the true signals are embedded and to pursue the learning process on these informative feature subspaces. This thesis focuses on the development of flexible sparse learning methods for feature subspaces in classification. Motivated by the success of several existing methods, we aim to learn informative feature subspaces for high-dimensional data of complex nature with better flexibility, sparsity and scalability.

The first part of this thesis is inspired by the success of distance metric learning in casting flexible feature transformations by utilizing local information. We propose a nonlinear sparse metric learning algorithm, named the sDist algorithm, which uses a boosting-based nonparametric solution to address the metric learning problem for high-dimensional data. Leveraging a rank-one decomposition of the symmetric positive semi-definite weight matrix of the Mahalanobis distance metric, we restructure a hard global optimization problem into a forward stage-wise learning of weak learners through a gradient boosting algorithm. In each step, the algorithm progressively learns a sparse rank-one update of the weight matrix by imposing an L1 regularization. Nonlinear feature mappings are adaptively learned by a hierarchical expansion of interactions integrated within the boosting framework. Meanwhile, an early stopping rule is imposed to control the overall complexity of the learned metric. As a result, without relying on computationally intensive tools, our approach automatically guarantees three desirable properties of the final metric: positive semi-definiteness, low rank and element-wise sparsity. Numerical experiments show that our learning model compares favorably with state-of-the-art methods in the current metric learning literature.

The second problem arises from the observation of high instability and feature selection bias when applying online methods to highly sparse, high-dimensional data in sparse learning problems. Due to the heterogeneity in feature sparsity, existing truncation-based methods incur slow convergence and high variance. To mitigate this problem, we introduce a stabilized truncated stochastic gradient descent algorithm. We employ a soft-thresholding scheme on the weight vector in which the imposed shrinkage is adaptive to the amount of information available in each feature. The variability in the resulting sparse weight vector is further controlled by stability selection integrated with the informative truncation. To facilitate better convergence, we adopt an annealing strategy on the truncation rate. We show that, when the true parameter space is of low dimension, the stabilization with the annealing strategy helps to achieve a lower regret bound in expectation.
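To make the second contribution more concrete, the following is a minimal, illustrative sketch of one stabilized-truncation SGD step with a hinge loss. The loss choice and the shrinkage rule (scaling the threshold by the inverse square root of the per-feature observation counts) are assumptions made for illustration, not the exact scheme from the thesis.

```python
import numpy as np

def truncated_sgd_step(w, x, y, counts, lr=0.1, base_shrink=0.01):
    """One SGD step followed by informative soft-thresholding (illustrative sketch).

    w      : current weight vector
    x, y   : one training example (feature vector, label in {-1, +1})
    counts : per-feature counts of non-zero observations seen so far
    """
    counts = counts + (x != 0)                            # accumulate per-feature information
    margin = y * np.dot(w, x)
    grad = -y * x if margin < 1 else np.zeros_like(w)     # hinge-loss subgradient (assumed loss)
    w = w - lr * grad                                     # plain SGD update

    # Soft-thresholding that shrinks less where more information has accumulated
    # (hypothetical 1/sqrt(count) scaling).
    shrink = base_shrink / np.sqrt(np.maximum(counts, 1))
    w = np.sign(w) * np.maximum(np.abs(w) - shrink, 0.0)
    return w, counts
```

The stability selection and the annealed truncation rate described in the abstract would sit on top of this basic step.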
332

Analysis of the migratory potential of cancerous cells by image preprocessing, segmentation and classification / Analyse du potentiel migratoire des cellules cancéreuses par prétraitement et segmentation d'image et classification des données

Syed, Tahir Qasim 13 December 2011 (has links)
This thesis is part of a broader research project whose objective is to analyze the migratory potential of cancer cells. Within this doctorate, we are interested in the use of image processing to count and classify the cells present in an image acquired with a microscope. The biologist partners in this project study the influence of the environment on the migratory behaviour of cancer cells, using cell cultures grown from different cancer cell lines. The processing of biological images has already given rise to a significant number of publications but, in the case addressed here, since the image-acquisition protocol was not fixed, the challenge was to propose a chain of adaptive processing steps that does not constrain the biologists in their research. Four steps are detailed in this thesis.

The first concerns the definition of the pre-processing steps used to homogenize the acquisition conditions. The choice to work with the image of local standard deviations rather than the brightness is one of the results of this first part. The second step consists in counting the number of cells present in the image. An original filter, the so-called “halo” filter, which reinforces the centre of the cells in order to facilitate their counting, is proposed. A statistical validation step on these centres makes the result more reliable.

The image segmentation stage, undoubtedly the most difficult, constitutes the third part of this work. The goal here is to extract image patches each containing a single cell. The watershed algorithm was chosen for segmentation, but it had to be adapted to the context of the images studied here. Using a probability map as input yielded a segmentation that follows the cell edges more closely. However, this method leads to an over-segmentation that must be reduced in order to move towards the goal of “one region = one cell”. To this end, an algorithm based on a cumulative hierarchy built with mathematical morphology was developed. It aggregates neighbouring regions by working on a tree representation of these regions and their associated levels. Comparing the results obtained by this method with those of other approaches for limiting over-segmentation demonstrated the effectiveness of the proposed approach.

The final step of this work is the classification of the cells. Three classes were defined: elongated cells (mesenchymal migration), “blebbing” round cells (amoeboid migration) and “smooth” round cells (an intermediate stage of the migration modes). For each patch obtained at the end of the segmentation step, intensity, morphological and textural features were computed. An initial analysis of these features led to a classification strategy: first separate the round cells from the elongated cells, then separate the “smooth” round cells from the “blebbing” ones. To do so, the features are divided into two sets that are used successively in these two classification stages. Several classification algorithms were tested; in the end, two neural networks were retained, giving more than 80% correct classification between elongated and round cells, and nearly 90% correct classification between “smooth” and “blebbing” round cells.
333

A STUDY OF REAL TIME SEARCH IN FLOOD SCENES FROM UAV VIDEOS USING DEEP LEARNING TECHNIQUES

Gagandeep Singh Khanuja (7486115) 17 October 2019 (has links)
Following a natural disaster, one of the most important factors influencing a person's chances of survival is the time within which they are found and rescued. Traditional means of search, involving dogs, ground robots and humanitarian intervention, are time-intensive and can be a major bottleneck in search operations. The main aim of these operations is to rescue victims without critical delay, in the shortest time possible, which can be realized in real time by using UAVs. With advancements in computational devices and the ability to learn from complex data, deep learning can be leveraged in a real-time environment for search and rescue operations. This research aims to improve on traditional search operations by using deep learning for real-time object detection, and photogrammetry for precise geo-location mapping of the detected objects (people, cars) in real time. To do so, pre-trained models such as Mask R-CNN, SSD300 and YOLOv3, together with a custom-trained YOLOv3 model, have been deployed and their results compared as means of addressing the search operation in real time.
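As a hedged sketch of the real-time detection component only (the photogrammetry-based geo-location is not shown), the snippet below runs a COCO-pre-trained Mask R-CNN from torchvision over UAV video frames and keeps person and car detections. The video file name, the confidence threshold and the choice of torchvision models rather than the thesis's exact YOLOv3/SSD300 pipelines are all assumptions.

```python
import cv2
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pre-trained COCO Mask R-CNN as a stand-in for the models compared in the thesis.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON, CAR = 1, 3                                   # COCO category ids used by torchvision

cap = cv2.VideoCapture("uav_flood_clip.mp4")         # hypothetical video file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model([to_tensor(rgb)])[0]
    keep = (pred["scores"] > 0.7) & (
        (pred["labels"] == PERSON) | (pred["labels"] == CAR)
    )
    for (x1, y1, x2, y2) in pred["boxes"][keep].round().int().tolist():
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:                         # Esc to quit
        break
cap.release()
```

In a full system, each kept detection would be passed to the photogrammetry stage to convert its image coordinates into a geo-location using the UAV's pose and camera parameters.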
334

Computer-aided model generation and validation for dynamic systems

Brisbine, Brian P. 11 August 1998 (has links)
The primary goal of any model is to emulate, as closely as possible, the desired behavioral phenomena of the real system while still maintaining a tangible relationship between the parameters of the model and the system response. In keeping with this directive, models by their very nature migrate towards increasing complexity and hence quickly become tedious to construct and evaluate. In addition, it is sometimes necessary to employ several different analysis techniques on a particular system, which often requires modification of the model. As a result, the concept of versatile, step-wise automated model generation was realized as a means of transferring some of the laborious tasks of model derivation from the analyst to a suitable program algorithm.

The focus of this research is on the construction and verification of an efficient modeling environment that captures the dynamic properties of the system and allows many different analysis techniques to be conveniently implemented. This is accomplished using Mathematica by Wolfram Research, Inc. The presented methodology utilizes rigid-body, lumped-parameter systems and Lagrange's energy formalism. The modeling environment facilitates versatility by allowing straightforward transformations of the model being developed into different forms and domains. The final results are symbolic expressions derived from the equations of motion. However, this approach is predicated upon the absence of significant low-frequency flexible vibration modes in the system. This requirement can be well satisfied in parallel-structure machine tools, the main subject of this research.

The modeling environment allows a number of validation techniques to be readily implemented. This includes intuitive checks at key points during model derivation as well as applications of more traditional experimental validation. In all presented cases the analysis can be performed in the same software package that was used for model development. Integration of the generation, validation, and troubleshooting methodology delineated in this research facilitates development of accurate models that can be applied in structure design and operation. Possible applications of these models include parameter identification, visualization of vibration, automated supervision and monitoring, and design of advanced control strategies for minimization of dynamic tool-path errors. The benefits are especially pronounced in parallel-structure machine tools, where there is still a lack of experience. The latest developments in measurement techniques and the emergence of new sensors facilitate reliable validation and optimization of the models. / Graduation date: 1999
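The thesis implements this workflow in Mathematica; as a rough Python analogue (an assumption for illustration, not the author's code), the sketch below derives the symbolic equation of motion of a single-degree-of-freedom mass-spring system from its Lagrangian using SymPy's mechanics module.

```python
import sympy as sp
from sympy.physics.mechanics import dynamicsymbols, LagrangesMethod

m, k = sp.symbols('m k', positive=True)   # lumped mass and spring stiffness
q = dynamicsymbols('q')                   # generalized coordinate
qd = dynamicsymbols('q', 1)               # its time derivative

T = m * qd**2 / 2                         # kinetic energy
V = k * q**2 / 2                          # potential energy
L = T - V                                 # Lagrangian

lm = LagrangesMethod(L, [q])
eom = lm.form_lagranges_equations()       # symbolic equations of motion
print(sp.simplify(eom))                   # Matrix([[k*q(t) + m*Derivative(q(t), (t, 2))]])
```

Larger lumped-parameter models follow the same pattern with more coordinates, and the resulting symbolic expressions can then be transformed, linearized or exported for the kinds of analyses the abstract describes.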
335

A Virtual Machine for a Type-omega Denotational Proof Language

Arvizo, Teodoro, III 01 June 2002 (has links)
In this thesis, I designed and implemented a virtual machine (VM) for a monomorphic variant of Athena, a type-omega denotational proof language (DPL). This machine attempts to maintain the minimum state required to evaluate Athena phrases. This thesis also includes the design and implementation of a compiler for monomorphic Athena that compiles to the VM. Finally, it includes details on my implementation of a read-eval-print loop that glues together the VM core and the compiler to provide a full, user-accessible interface to monomorphic Athena. The Athena VM provides the same basis for DPLs that the SECD machine does for pure functional programming and the Warren Abstract Machine does for Prolog.
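As a purely illustrative sketch of the compile-then-evaluate architecture described here (a toy postfix arithmetic language, not Athena's actual syntax, instruction set or proof semantics), the following keeps only a value stack as VM state and glues a tiny compiler and VM together with a read-eval-print loop.

```python
# Toy language: the compiler emits (opcode, arg) pairs; the VM keeps only a value stack.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def compile_phrase(tokens):
    """Compile a postfix phrase like '1 2 +' into VM instructions."""
    code = []
    for tok in tokens.split():
        if tok in OPS:
            code.append(("APPLY", tok))
        else:
            code.append(("PUSH", float(tok)))
    return code

def run(code):
    stack = []                                       # the only VM state kept across instructions
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:                                        # APPLY: pop two operands, push the result
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[arg](a, b))
    return stack.pop()

def repl():
    """Read-eval-print loop gluing the compiler and the VM together."""
    while True:
        try:
            line = input("toy> ")
        except EOFError:
            break
        if line.strip():
            print(run(compile_phrase(line)))

# Example: run(compile_phrase("2 3 + 4 *"))  ->  20.0
```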
336

“The technology is great when it works” : Maritime Technology and Human Integration on the Ship’s Bridge

Lützhöft, Margareta January 2004 (has links)
Several recent maritime accidents suggest that modern technology can sometimes make it difficult for mariners to navigate safely. A review of the literature also indicates that the technological remedies designed to prevent maritime accidents can at times be ineffective or counterproductive. To understand why, problem-oriented ethnography was used to collect and analyse data on how mariners understand their work and their tools. Over 4 years, 15 ships were visited; the ship types studied were small and large archipelago passenger ships and cargo ships. Mariners and others who work in the maritime industry were interviewed. What I found onboard were numerous examples of what I now call integration work. Integration is about co-ordination, co-operation and compromise. When humans and technology have to work together, the human (mostly) has to co-ordinate resources, co-operate with devices and compromise between means and ends. What mariners have to integrate to get work done includes representations of data and information; rules, regulations and practice; human and machine work; and learning and practice. Mariners largely have to perform integration work themselves because machines cannot communicate in ways mariners see as useful. What developers and manufacturers choose to integrate into screens or systems is not always what the mariners would choose. There are other kinds of ‘mistakes’ mariners have to adapt to. Basically, they arise from conflicts between global rationality (rules, regulations and legislation) and local rationality (what gets defined as good seamanship at a particular time and place). When technology is used to replace human work, this is not necessarily a straightforward or successful process. What it often means is that mariners have to work, sometimes very hard, to ‘construct’ a co-operational human-machine system. Even when technology works ‘as intended’, work of this kind is still required. Even in the most ostensibly integrated systems, human operators must still perform integration work. In short, technology alone cannot solve the problems that technology created. Further, trying to fix ‘human error’ through incremental ‘improvements’ in technology or procedure tends to be largely ineffective due to adaptive compensation by users. A systems view is necessary to make changes to a workplace. Finally, this research illustrates the value problem-oriented ethnography can have when it comes to collecting information on what users ‘mean’ and ‘really do’, and what designers ‘need’ to make technology easier and safer to use.
337

Lightweight M2M Solution on Android Platform

Gustafsson, Magnus January 2011 (has links)
Machine-to-machine communication (M2M) is a generic term for technologies dealing with autonomous communication between machines. For the last 10 years, a wide range of business areas have utilized a variety of M2M solutions for remote management of equipment. Common to almost all of these solutions is that they are expensive and require the infrastructure to be adapted to them. They are also usually built out of several different systems working together, and thus several systems require maintenance. This thesis investigates the possibility of developing a lightweight alternative to existing M2M solutions using only common devices and protocols. Lightweight here means that the system should be flexible, have a low cost for set-up and operation, and that both ends should be mobile. By developing a lightweight M2M architecture, the technology may become available in new business areas and new types of services may arise. In the thesis a prototype is implemented. The purpose of the prototype is to verify in practice whether a lightweight M2M solution can be developed in this manner. The solution uses the Android platform for the back-end and user interface, and a Cinterion TC65T as the slave device to which the sensors can be connected. The implemented system is limited in terms of security and performance but still acts as a proof of concept for this kind of M2M solution.
338

Accuracy Improvement for RNA Secondary Structure Prediction with SVM

Chang, Chia-Hung 30 July 2008 (has links)
Ribonucleic acid (RNA) sometimes occurs in complex structures called pseudoknots. Prediction of RNA secondary structures has drawn much attention from both biologists and computer scientists. Consequently, many useful tools have been developed for RNA secondary structure prediction, with or without pseudoknots. These tools have their individual strengths and weaknesses. We therefore propose a hybrid feature extraction method that integrates two prediction tools, pknotsRG and NUPACK, with a support vector machine (SVM). We first extract some useful features from the target RNA sequence, and then decide its prediction tool preference with SVM classification. Our test data set contains 723 RNA sequences, where 202 pseudoknotted RNA sequences are obtained from PseudoBase and 521 nested RNA sequences are obtained from RNA SSTRAND. Experimental results show that our method improves not only the overall accuracy but also the sensitivity and the selectivity for the target sequences. Our method serves as a preprocessing step for analyzing RNA sequences before employing the RNA secondary structure prediction tools. The ability to combine existing methods and make the prediction tools more accurate is our main contribution.
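The following is a minimal sketch of the tool-preference idea using scikit-learn. The features (sequence length and base fractions), the toy sequences and the preference labels are placeholders for illustration, not the feature set or benchmark data used in the thesis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def extract_features(seq):
    """Toy sequence features (length and base fractions); stand-ins for the thesis's feature set."""
    seq = seq.upper()
    n = len(seq)
    return [n] + [seq.count(b) / n for b in "ACGU"]

# Hypothetical training data: RNA sequences labelled with the tool that predicted them better
# (1 = pknotsRG preferred, 0 = NUPACK preferred). Real labels would come from benchmarking.
train_seqs = ["GCGCUUCGCC", "AUGGCUACGUAGCU", "GGGAAACCC", "UUUACGCGAAAGCG"]
train_pref = [1, 0, 1, 0]

X = np.array([extract_features(s) for s in train_seqs])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, train_pref)

# At prediction time, route a new sequence to whichever tool the SVM prefers.
new_seq = "GCGGAUUUAGCUCAGUUGGG"
tool = "pknotsRG" if clf.predict([extract_features(new_seq)])[0] == 1 else "NUPACK"
```

In practice the preference labels would come from benchmarking both tools against sequences with known structures, and the selected tool would then be run on the query sequence to obtain the final structure prediction.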
339

Graph based semi-supervised learning in computer vision

Huang, Ning, January 2009 (has links)
Thesis (Ph. D.)--Rutgers University, 2009. / "Graduate Program in Biomedical Engineering." Includes bibliographical references (p. 54-55).
340

Kernel methods in supervised and unsupervised learning

Tsang, Wai-Hung. January 2003 (has links)
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003. / Includes bibliographical references (leaves 46-49). Also available in electronic version. Access restricted to campus users.
