231

Acquiring 3D Full-body Motion from Noisy and Ambiguous Input

Lou, Hui, May 2012
Natural human motion is in high demand and widely used in applications such as video games and virtual reality. However, acquiring full-body motion remains challenging because a capture system must record a wide variety of human actions accurately without requiring considerable time and skill to assemble. For instance, commercial optical motion capture systems such as Vicon can capture human motion with high accuracy and resolution, but they often require post-processing by experts, which is time-consuming and costly. Microsoft Kinect, despite its popularity and wide range of applications, does not reconstruct complex movements accurately when significant occlusions occur. This dissertation explores two approaches that accurately reconstruct full-body human motion from the noisy and ambiguous input captured by commercial motion capture devices. The first approach automatically generates high-quality human motion from noisy data obtained from commercial optical motion capture systems, eliminating the need for post-processing. The second approach accurately captures a wide variety of human motion, even under significant occlusions, using color/depth data captured by a single Kinect camera. The common theme underlying both approaches is the use of prior knowledge embedded in a pre-recorded motion capture database to reduce the reconstruction ambiguity caused by noisy and ambiguous input and to constrain the solution to lie in the space of natural motion. More specifically, the first approach constructs a series of spatial-temporal filter bases from pre-captured human motion data and employs them, along with robust statistics techniques, to filter motion data corrupted by noise and outliers. The second approach formulates the problem in a Maximum a Posteriori (MAP) framework and generates the most likely pose that explains the observations while remaining consistent with the patterns embedded in the pre-recorded motion capture database. We demonstrate the effectiveness of our approaches through extensive numerical evaluations on synthetic data and comparisons against results produced by commercial motion capture systems. The first approach can effectively denoise a wide variety of noisy motion data, including walking, running, jumping and swimming, while the second approach is shown to reconstruct a wider range of motions accurately than Microsoft Kinect.
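
As a rough illustration of the MAP formulation described above (the notation is assumed for exposition, not taken from the dissertation: q is the full-body pose and o the observed color/depth data), the most likely pose maximizes a posterior that factors into an observation likelihood and a motion prior learned from the pre-recorded database:

```latex
q^{*} = \arg\max_{q}\; p(q \mid o)
      = \arg\max_{q}\; \underbrace{p(o \mid q)}_{\text{observation likelihood}}\,
                       \underbrace{p(q)}_{\text{natural-motion prior}}
```

Maximizing this posterior trades off explaining the sensor observations against staying within the space of natural motion encoded by the database prior.
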
232

A Fuzzy Software Prototype For Spatial Phenomena: Case Study Precipitation Distribution

Yanar, Tahsin Alp, 01 October 2010
As the complexity of a spatial phenomenon increases, traditional modeling becomes impractical. Alternatively, data-driven modeling, which is based on the analysis of data characterizing the phenomenon, can be used. This thesis addresses the generation of understandable and reliable spatial models from observational data, and proposes an interpretability-oriented, data-driven fuzzy modeling approach. The methodology consists of constructing fuzzy models from data, tuning them, and simplifying the resulting fuzzy models. Mamdani-type fuzzy models with triangular membership functions are considered. Fuzzy models are constructed using fuzzy clustering algorithms, and the simulated annealing metaheuristic is adapted for the tuning step. To obtain compact and interpretable fuzzy models, a simplification methodology is proposed that reduces the number of fuzzy sets for each variable and simplifies the rule base. Prototype software was developed, and mean annual precipitation data for Turkey is examined as a case study to assess the results of the approach in terms of both precision and interpretability. The first step of the approach, constructing fuzzy models from data, uses the "Fuzzy Clustering and Data Analysis Toolbox" developed for MATLAB; the remaining steps (optimizing the fuzzy models with the adapted simulated annealing algorithm, and generating compact, interpretable fuzzy models with the simplification algorithm) use the developed prototype software. If accuracy is the primary objective, the proposed approach can produce more accurate solutions on training data than the geographically weighted regression method: the minimum training error produced by the proposed approach is 74.82 mm, compared with 106.78 mm for geographically weighted regression. The minimum error on test data is 202.93 mm. An understandable fuzzy model for annual precipitation is generated with only 12 membership functions and 8 fuzzy rules. Furthermore, more interpretable fuzzy models are obtained when Gath-Geva fuzzy clustering algorithms are used during fuzzy model construction.
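
A minimal sketch of Mamdani inference with triangular membership functions, the model class the thesis works with; the variable names, universes, and the single rule below are invented for illustration and are not the thesis' precipitation model:

```python
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# One hypothetical Mamdani rule: IF elevation IS high THEN precipitation IS low.
# The rule fires with the degree to which the input belongs to "high",
# and the output fuzzy set is clipped at that firing strength.
x_out = np.linspace(0.0, 1500.0, 301)            # output universe (mm/year)
fire = tri_mf(1800.0, 1000.0, 2500.0, 4000.0)    # membership of a 1800 m site in "high"
clipped = np.minimum(fire, tri_mf(x_out, 0.0, 400.0, 800.0))

# Centroid defuzzification of the (single-rule) aggregated output set.
crisp = (x_out * clipped).sum() / clipped.sum()
print(f"predicted precipitation: {crisp:.1f} mm/year")
```

In this setting, tuning by simulated annealing amounts to perturbing the (a, b, c) breakpoints of such membership functions and keeping perturbations that reduce training error.
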
233

Processing Turkish Radiology Reports

Hadimli, Kerem, 01 May 2011
Radiology departments use various techniques to visualize patients' bodies, and medical doctors write narrative free-text reports describing the findings in these visualizations. The information within these narrative reports must be extracted for use in medical information systems. Turkish is a highly agglutinative language, which poses problems for information retrieval and extraction from Turkish free text. This thesis presents two alternative methods, one rule-based and one data-driven, for information retrieval and structured information extraction from Turkish radiology reports. Contrary to previous medical NLP systems, neither method uses a medical lexicon or ontology. Information extraction is performed at the level of extracting medically related phrases from the sentence. The aim is to measure the baseline performance the Turkish language can provide for medical information extraction and retrieval, in isolation from other factors.
234

Real-time estimation of arterial performance measures using a data-driven microscopic traffic simulation technique

Henclewood, Dwayne Anthony, 06 June 2012
Traffic congestion is a one-hundred-billion-dollar problem in the US. The cost of congestion has trended upward over the last few decades but has decreased slightly in recent years, partly due to the impact of congestion reduction strategies. The impact of these strategies, however, is experienced largely on freeways rather than arterials, a discrepancy partially linked to the lack of real-time arterial traffic information. This research effort seeks to address that lack by developing a methodology that provides accurate estimates of arterial performance measures to transportation facility managers and travelers in real time. The methodology employs transmitted point-sensor data to drive an online, microscopic traffic simulation model. Its feasibility was examined through a series of experiments, each built on the successes of the previous one while addressing its limitations. The results from each experiment were encouraging: they demonstrated the method's likely feasibility and the accuracy with which field estimates of performance measures may be obtained, and they support the viability of a "real-world" implementation of the method. An advanced calibration process was also developed to improve the method's accuracy; this process will in turn inform future calibration efforts as more robust and accurate traffic simulation models are needed. The success of this method provides a template for real-time traffic simulation modeling capable of addressing the lack of available arterial traffic information. By providing such information, it is hoped that transportation facility managers and travelers will make more informed decisions, leading to more efficient management and use of the nation's transportation network.
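
A minimal sketch of the real-time loop such a methodology implies: detector counts arrive each poll interval, are injected into a running simulation, and the simulation returns performance estimates. The one-link model and every parameter below are toy stand-ins, not the dissertation's microscopic simulator:

```python
from collections import deque

class ToyLinkSim:
    """Stand-in simulator: vehicles traverse one arterial link at a
    density-dependent speed. Illustrative only."""
    def __init__(self, length_m=800.0, free_speed_mps=13.9):
        self.length, self.v_free = length_m, free_speed_mps
        self.positions = deque()                  # metres from link entrance

    def inject(self, n_vehicles):
        self.positions.extend([0.0] * n_vehicles)

    def step(self, dt_s):
        # Crude congestion effect: speed drops as the link fills up.
        v = self.v_free / (1.0 + 0.05 * len(self.positions))
        moved = [x + v * dt_s for x in self.positions]
        self.positions = deque(x for x in moved if x < self.length)
        served = len(moved) - len(self.positions)
        return v, served

sim = ToyLinkSim()
for counts in [12, 18, 25, 9]:                    # detector counts per 5-min poll
    sim.inject(counts)
    speed, served = sim.step(dt_s=300.0)
    print(f"est. travel time {sim.length / speed:.0f} s, vehicles served {served}")
```
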
235

The Problems and Solutions of an Artificial Intelligence Engine for Games

Fiodorova, Jelena, 30 May 2006
Game artificial intelligence (AI) is the code in a game that makes computer-controlled opponents (agents) take smart decisions in the game world. Several AI problems are essential in games: pathfinding, decision making, generating player characteristics, and game logic management. However, these problems are gameplay dependent. The main goal of this study is to generalize AI solutions across games of many kinds, that is, to make the AI solutions gameplay independent. We achieved this goal by using data-driven design in our solutions, separating the game-logic and game-code levels. This separation gave us the opportunity to manipulate game logic and data freely. We examined our decision-making system and determined that it is flexible and easy to use.
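
A minimal sketch of the data-driven separation the study describes: decision rules live in data rather than code, so game logic can change without recompiling. Rule fields and thresholds are illustrative:

```python
# Rules as data (could equally be loaded from JSON shipped with the game).
RULES = [
    {"if": {"health_below": 25}, "then": "flee"},
    {"if": {"enemy_within": 10.0}, "then": "attack"},
    {"if": {}, "then": "patrol"},                 # unconditional default
]

def decide(agent):
    """Return the action of the first rule whose conditions all hold."""
    for rule in RULES:
        cond = rule["if"]
        if "health_below" in cond and agent["health"] >= cond["health_below"]:
            continue
        if "enemy_within" in cond and agent["enemy_dist"] > cond["enemy_within"]:
            continue
        return rule["then"]

print(decide({"health": 80, "enemy_dist": 6.0}))   # -> attack
print(decide({"health": 10, "enemy_dist": 50.0}))  # -> flee
```
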
236

DATA ASSIMILATION AND VISUALIZATION FOR ENSEMBLE WILDLAND FIRE MODELS

Chakraborty, Soham, 01 January 2008
This thesis describes an observation function for a dynamic data-driven application system designed to produce short-range forecasts of the behavior of a wildland fire. The thesis presents an overview of the atmosphere-fire model, which models the complex interactions between the fire and the surrounding weather, and of the data assimilation module, which is responsible for assimilating sensor information into the model. The observation function plays an important role in data assimilation, as it is used to estimate the model variables at the sensor locations. Also described is the implementation of a portable and user-friendly visualization tool which displays the locations of wildfires in the Google Earth virtual globe.
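
As a hedged sketch of what an observation function can look like, the model's gridded state is interpolated to each sensor location so it can be compared with the actual reading. The grid, the synthetic field, and the choice of interpolation are assumptions, not the thesis' implementation:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic model state: a temperature field on a 2-D grid (illustrative).
x = np.linspace(0.0, 1000.0, 101)                 # metres
y = np.linspace(0.0, 1000.0, 101)
temperature = np.random.default_rng(0).uniform(290.0, 600.0, (101, 101))

# Observation function h(state): predicted reading at each sensor location,
# here via interpolation of the gridded field.
h = RegularGridInterpolator((x, y), temperature)
sensor_xy = np.array([[120.5, 430.2], [610.0, 75.8]])
predicted = h(sensor_xy)      # data assimilation compares these with the sensors
print(predicted)
```
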
237

A robust & reliable data-driven prognostics approach based on extreme learning machine and fuzzy clustering

Javed, Kamran, 09 April 2014
Prognostics and Health Management (PHM) aims to extend the life cycle of a physical asset while reducing operating and maintenance costs. Prognostics is therefore considered a key process with prediction capabilities: accurate estimates of the Remaining Useful Life (RUL) of equipment make it possible to plan actions that increase safety, reduce downtime, and ensure mission completion and production efficiency. Recent studies show that data-driven approaches are increasingly applied to failure prognostics. They can be seen as "black box" models that learn system behavior directly from condition-monitoring data, characterize the current state of the system, and predict the future progression of faults. However, approximating the behavior of critical machinery is a difficult task that can lead to poor prognostics. Data-driven prognostics modeling raises the following questions. 1) How should raw monitoring data be processed to obtain suitable features that reflect the evolution of degradation? 2) How can degradation states be distinguished and failure criteria defined (which may vary from case to case)? 3) How can one ensure that the models are robust enough to perform stably under uncertain inputs that deviate from the acquired experience, and reliable enough to handle unseen data (i.e., operating conditions, engineering variations, etc.)? 4) How can the models be integrated easily under industrial constraints and requirements? These questions are addressed in this thesis, and they led to a new approach that goes beyond the limits of classical data-driven prognostics methods. The main contributions are as follows.
- The data-processing step is improved by a new feature-extraction approach using trigonometric and cumulative functions, based on three characteristics: monotonicity, trendability and predictability. The main idea is to transform raw data into indicators that improve the accuracy of long-term predictions.
- To address robustness, reliability and applicability, a new prediction algorithm is proposed: the Summation Wavelet-Extreme Learning Machine (SW-ELM). SW-ELM achieves good prediction performance while reducing learning time. An ensemble of SW-ELMs is also proposed to quantify uncertainty and improve the accuracy of the estimates.
- Prognostic performance is further enhanced by a new health-assessment algorithm: Subtractive-Maximum Entropy Fuzzy Clustering (S-MEFC). S-MEFC is an unsupervised classification approach that uses maximum-entropy inference to represent the uncertainty of multidimensional data, and it can automatically determine the number of states without human intervention.
- The final prognostics model is obtained by integrating SW-ELM and S-MEFC to track the evolution of machine degradation with simultaneous predictions and discrete state estimation. This scheme also makes it possible to set failure thresholds dynamically and to estimate the RUL of monitored machines. The developments are validated on real data from three experimental platforms: PRONOSTIA FEMTO-ST (bearing test bed), CNC SIMTech (machining cutters), C-MAPSS NASA (turbofan engines), and other benchmark data. Given the realistic nature of the proposed RUL estimation strategy, very promising results are achieved; the main direction for future work is to improve the reliability of the prognostics model.
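
For orientation, a plain Extreme Learning Machine is sketched below: the hidden-layer weights are drawn at random and only the output weights are solved in closed form, which is what keeps training fast. The thesis' SW-ELM additionally combines wavelet and sigmoid activations; that refinement is not shown here, and the toy signal is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden=30):
    """Fit a basic ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy degradation-like signal: predict the next value from the previous two.
t = np.linspace(0.0, 1.0, 200)
s = 1.0 - t**2 + 0.01 * rng.normal(size=t.size)
X, y = np.column_stack([s[:-2], s[1:-1]]), s[2:]
W, b, beta = elm_fit(X, y)
print("training MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```
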
238

Environmental prediction and risk analysis using fuzzy numbers and data-driven models

Khan, Usman Taqdees, 17 December 2015
Dissolved oxygen (DO) is an important water quality parameter used to assess the health of aquatic ecosystems. Typically, physically-based numerical models are used to predict DO; however, these models do not capture the complexity and uncertainty seen in highly urbanised riverine environments. To overcome these limitations, this dissertation proposes an alternative approach that uses a combination of data-driven methods and fuzzy numbers to improve DO prediction in urban riverine environments. A major issue in implementing fuzzy numbers is that there is no consistent, transparent and objective method to construct fuzzy numbers from observations. A new method to construct fuzzy numbers is proposed which uses the relationship between probability and possibility theory. Numerical experiments demonstrate that the linear membership functions typically used are inappropriate for environmental data. A new algorithm to estimate the membership function is developed, in which a bin-size optimisation algorithm is paired with a numerical technique using the fuzzy extension principle. The developed method requires neither assumptions about the underlying distribution nor the selection of an arbitrary bin size, and it has the flexibility to create fuzzy numbers of different shapes. The impact of input data resolution and error value on the membership function is analysed. Two new fuzzy data-driven methods, fuzzy linear regression and fuzzy neural networks, are proposed to predict DO using real-time data. These methods use fuzzy inputs, fuzzy outputs and fuzzy model coefficients to characterise the total uncertainty; existing methods cannot accommodate fuzzy numbers for each of these variables. The new fuzzy regression method was compared against two existing fuzzy regression methods, Bayesian linear regression, and error-in-variables regression, and was better able to predict DO due to its ability to incorporate different sources of uncertainty in each component. A number of model assessment metrics were proposed to quantify fuzzy model performance; the fuzzy linear regression methods outperformed the probability-based methods, and similar results were seen when the method was used for peak flow rate prediction. An existing fuzzy neural network model was refined through possibility-theory-based calibration of the network parameters and the use of fuzzy rather than crisp inputs. A method to find the optimum network architecture was proposed to select the number of hidden neurons and the amount of data used for training, validation and testing. The performance of the updated fuzzy neural network was compared to the crisp results, demonstrating an improved ability to predict low DO compared with non-fuzzy techniques. The fuzzy data-driven methods using non-linear membership functions correctly identified the occurrence of extreme events, and these predictions were used to quantify risk using a new possibility-probability transformation. All combinations of inputs that lead to a risk of low DO were identified to create a risk tool for water resource managers. Results from this research provide new tools to predict environmental factors in a highly complex and uncertain environment using fuzzy numbers.
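
A small sketch of one standard way to derive a membership function from observations, the Dubois-Prade probability-to-possibility transformation applied to a histogram. The thesis' construction is more elaborate (bin-size optimisation paired with the fuzzy extension principle), so treat this only as the underlying idea; the synthetic readings are invented:

```python
import numpy as np

def possibility_from_samples(samples, bins=20):
    """Histogram the data, then set pi_i = sum of all bin probabilities
    not exceeding p_i (Dubois-Prade transformation)."""
    counts, edges = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    order = np.argsort(p)[::-1]                   # bin indices, descending p
    pi = np.empty_like(p)
    pi[order] = np.cumsum(p[order][::-1])[::-1]   # tail sums of the sorted p
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pi

rng = np.random.default_rng(0)
do_mg_per_L = rng.lognormal(mean=1.9, sigma=0.25, size=500)  # synthetic DO data
x, mu = possibility_from_samples(do_mg_per_L)
print(mu.max())   # the modal bin gets membership 1.0, as a normal fuzzy set requires
```
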
239

Design of nonlinear controllers using a virtual reference

Neuhaus, Tassiano, January 2012
This work presents concepts related to linear and nonlinear system identification, as well as the concept of a virtual reference which, together with the theory of data-based controller design, provides a design framework for nonlinear controllers. The Virtual Reference Feedback Tuning (VRFT) method, which uses a virtual reference to obtain the signals needed to characterize a system's optimal controller, serves as the basis for this proposal: combined with nonlinear system identification, it yields the ideal controller, the one that makes the closed-loop system behave as specified. The controller is characterized with a rational model structure, chosen because of the wide variety of practical systems this model class can represent. For rational system identification, an iterative algorithm is used which, based on the plant's input and output signals, identifies the parameters of the predefined controller structure from the signals obtained via the virtual reference. To demonstrate the potential of the proposed controller design method, illustrative examples are presented both for situations where the ideal controller can be represented by the model class and for situations where it cannot.
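
A minimal linear VRFT sketch of the virtual-reference idea this work builds on: invert the desired closed-loop model on recorded plant data to obtain the virtual reference, form the virtual tracking error, and fit the controller by least squares. The plant, the reference model T(z) = 0.3/(z - 0.7), and the PI controller structure are invented for illustration; the thesis replaces the final step with rational (nonlinear) controller identification:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400
u = rng.normal(size=N)                    # input of an open-loop experiment
y = np.zeros(N)
for k in range(N - 1):                    # toy plant: y[k+1] = 0.9*y[k] + 0.2*u[k]
    y[k + 1] = 0.9 * y[k] + 0.2 * u[k]

# Virtual reference: the signal that would have produced y through T(z),
# i.e. solve y[k+1] = 0.7*y[k] + 0.3*r_v[k] for r_v.
r_v = (y[1:] - 0.7 * y[:-1]) / 0.3
e = r_v - y[:-1]                          # virtual tracking error
E = np.column_stack([e, np.cumsum(e)])    # PI regressors: proportional + integral
theta, *_ = np.linalg.lstsq(E, u[:-1], rcond=None)
print("fitted controller gains:", theta)  # C such that C*e reproduces u
```
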
240

The use of documentary sources in data-driven journalism

Gehrke, Marília, January 2018
This dissertation studies the news sources used in data-driven journalism (DDJ). It revisits the classifications of news sources discussed by theorists in the field and situates the current context, shaped by social and technological transformations, within the perspectives of the networked society and networked journalism. The focus of the study is to discover which sources are used in DDJ news, a practice that emerged in this scenario during the 2000s. It analyzes a corpus of 60 news stories published by O Globo, The New York Times and La Nación, as traditional outlets, and Nexo, FiveThirtyEight and Chequeado, as digital-native ones. Combining theory and the empirical study, it proposes a classification of the types of sources in DDJ news: documentary file, statistics and reproduction. Through this classification, it seeks to fill a gap in the theoretical framework on sources, discussed only superficially in journalism until now, bringing the use of documents to the fore in this scenario.
