  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Découverte d'évènements par contenu visuel dans les médias sociaux / Visual-based event mining in social media

Trad, Riadh 05 June 2013 (has links)
The ease of publishing content on social media sites brings to the Web an ever-increasing amount of user-generated content captured during, and associated with, real-life events. Social media documents shared by users often reflect their personal experience of an event. Hence, an event can be seen as a set of personal and local views recorded by different users. These event records are likely to exhibit similar facets of the event, but also specific aspects.
By linking different records of the same event occurrence, we can enable rich search and browsing of social media event content. Specifically, linking all the occurrences of the same event would provide a general overview of the event. In this dissertation we present a content-based approach for leveraging the wealth of social media documents available on the Web for event identification and characterization. To match event occurrences in social media, we develop a new visual-based method for retrieving events in huge photo collections, typically in the context of user-generated content. The main contributions of the thesis are the following: (1) a new visual-based method for retrieving events in photo collections, (2) a scalable and distributed framework for nearest-neighbor graph construction on high-dimensional data, and (3) a collaborative content-based filtering technique for selecting relevant social media documents for a given event.
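The second contribution above, nearest-neighbor graph construction, can be illustrated with a minimal serial sketch. The thesis's actual framework is distributed and approximate; this brute-force version only shows the structure being built, and all function and variable names are illustrative:

```python
from math import dist

def knn_graph(points, k):
    """Build a k-nearest-neighbors graph by brute force.

    Returns a dict mapping each point's index to the indices of its
    k nearest neighbors (excluding itself), by Euclidean distance.
    """
    graph = {}
    for i, p in enumerate(points):
        others = [(dist(p, q), j) for j, q in enumerate(points) if j != i]
        others.sort()
        graph[i] = [j for _, j in others[:k]]
    return graph

# Toy feature vectors standing in for image descriptors.
points = [(0, 0), (1, 0), (0, 1), (10, 10)]
g = knn_graph(points, 2)
# g[0] contains 1 and 2: the two points closest to (0, 0)
```

The brute-force version is quadratic in the number of points, which is exactly why a scalable, distributed construction matters for large photo collections.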
172

Soft Power And Hard Power Approaches In U.S. Foreign Policy: A Case Study Comparison In Latin America

Weinbrenner, John 01 January 2007 (has links)
The purpose of this study was to examine the effects of soft power versus hard power in U.S. policy towards Latin America. In recent years, America's unipolar moment has been challenged, from populist leaders in the region to its inability to stem the flow of illegal immigrants and illicit drugs reaching its shores. This thesis is a step towards understanding the difference between power and influence, as well as the effects of hard power and soft power in U.S. foreign policy. A historical comparative case study analysis was conducted using the cases of FDR's Good Neighbor policy and Reagan's Contra war policies. This qualitative approach examined specific short-term and long-term goals of each policy and analyzed each strategy's ability to achieve those stated goals. The results of the study reveal that both soft and hard power approaches can have positive as well as negative effects on American influence in Latin America.
173

Data mining inom tillverkningsindustrin : En fallstudie om möjligheten att förutspå kvalitetsutfall i produktionslinjer / Data mining in the manufacturing industry: A case study on the possibility of predicting quality outcomes in production lines

Janson, Lisa, Mathisson, Minna January 2021 (has links)
As the adaptation towards Industry 4.0 proceeds, the possibility of using machine learning as a tool for further development of industrial production becomes increasingly relevant. In this thesis, a case study was conducted at Volvo Group in Köping to investigate the possibility of predicting quality outcomes in the compression of hub and mainshaft. Three machine learning models were implemented and compared with one another, trained and evaluated on a dataset from Volvo's production site in Köping. The low evaluation scores obtained indicate that the quality outcome of the compression could not be predicted from the variables included in that dataset alone. Therefore, a fabricated dataset was also used, containing three additional variables with fabricated values and a known causal relation between two of the variables and the quality outcome. The purpose was to determine whether the poor evaluation metrics resulted from a non-existent pattern between the included variables and the quality outcome, or from the models being unable to find such a pattern. When trained and evaluated on the fabricated dataset, all models were in fact able to find the pattern known to exist, which supports the conclusion that the poor result was not due to model performance but to the absence of any pattern in the real assembly data. The support vector machine was the model that performed best, given the evaluation metrics chosen in this study.
Consequently, if the traceability of the components were enhanced in the future, and more machines in the production line transmitted data to a connected system, the study could be conducted again with additional variables and a larger dataset. The fact that the models succeeded in finding patterns in the data when such patterns were known to exist motivates the use of the same models in future studies. With enhanced traceability and an increasingly connected factory, machine learning models could be used as components in larger monitoring systems to achieve efficiency gains.
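The fabricated-dataset sanity check described above can be sketched as follows. This is not the thesis's actual pipeline: the synthetic causality rule, the variable names, and the nearest-centroid classifier (a deliberately simple stand-in for the SVM used in the study) are all assumptions made for illustration:

```python
import random

random.seed(0)

# Fabricated assembly records: three variables, of which only x1 and x2
# jointly determine the (synthetic) quality outcome; x3 is pure noise.
def make_record():
    x1, x2, x3 = (random.random() for _ in range(3))
    label = 1 if x1 + x2 > 1.0 else 0   # known, injected causality
    return (x1, x2, x3), label

train = [make_record() for _ in range(400)]
test = [make_record() for _ in range(100)]

# Nearest-centroid classifier: a minimal model standing in for the SVM.
def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(3))

c0 = centroid([x for x, y in train if y == 0])
c1 = centroid([x for x, y in train if y == 1])

def predict(x):
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 < d1 else 1

accuracy = sum(predict(x) == y for x, y in test) / len(test)
```

If a model scores well above chance on data like this, but poorly on the real assembly data, the study's inference follows: the failure lies in the data, not in the model.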
174

Community Schools: Catalyst for Comprehensive Neighborhood-Based Initiatives?

Griswold, Michael R. January 2014 (has links)
No description available.
175

Statistics of Quantum Energy Levels of Integrable Systems and a Stochastic Network Model with Applications to Natural and Social Sciences

Ma, Tao 18 October 2013 (has links)
No description available.
176

Investigating the performance of matrix factorization techniques applied on purchase data for recommendation purposes

Holländer, John January 2015 (has links)
Automated systems for producing product recommendations to users are a relatively new area within the field of machine learning. Matrix factorization techniques have been studied extensively on data consisting of explicit feedback such as ratings, but to a lesser extent on implicit feedback data consisting of, for example, purchases. The aim of this study is to investigate how well matrix factorization techniques perform compared to other techniques when used to produce recommendations based on purchase data. We conducted experiments on data from an online bookstore as well as an online fashion store, running algorithms on the data and using evaluation metrics to compare the results. We present results showing that for many types of implicit feedback data, matrix factorization techniques are inferior to various neighborhood and association-rules techniques for producing product recommendations. We also present a variant of a user-based neighborhood recommender system algorithm (UserNN), which in all our tests outperformed both the matrix factorization algorithms and the k-nearest neighbors algorithm in both accuracy and speed. Depending on the dataset used, UserNN achieved a precision approximately 2-22 percentage points higher than those of the matrix factorization algorithms, and 2 percentage points higher than the k-nearest neighbors algorithm. UserNN also outperformed the other algorithms in speed, with running times 3.5-5 times lower than those of the k-nearest neighbors algorithm, and several orders of magnitude lower than those of the matrix factorization algorithms.
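A user-based neighborhood recommender over implicit purchase data can be sketched as below. The thesis's UserNN variant is not specified in the abstract, so this is a generic baseline, not the author's algorithm; the user/item identifiers and Jaccard similarity choice are assumptions:

```python
# Implicit feedback: each user maps to the set of items they purchased.
purchases = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "b", "d"},
    "u3": {"x", "y"},
}

def jaccard(s, t):
    """Set-overlap similarity between two purchase histories."""
    return len(s & t) / len(s | t) if s | t else 0.0

def recommend(user, k=2, n=3):
    """Score unseen items by the summed similarity of the k most
    similar users who bought them; return the top n."""
    mine = purchases[user]
    neighbors = sorted(
        ((jaccard(mine, items), other)
         for other, items in purchases.items() if other != user),
        reverse=True)[:k]
    scores = {}
    for sim, other in neighbors:
        for item in purchases[other] - mine:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:n]

recs = recommend("u1")  # "d" ranks first: it comes from the similar user u2
```

Note that with binary purchase data there are no ratings to regress on, which is one reason neighborhood methods of this shape can compete with matrix factorization here.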
177

Edge Generation in Mobile Networks Using Graph Deep Learning

Nannesson Meli, Felix, Tell, Johan January 2024 (has links)
Mobile cellular networks are widely integrated in today's infrastructure. These networks are constantly evolving and continuously expanding, especially with the introduction of fifth-generation (5G) technology, and it is important to ensure the effectiveness of these expansions. Mobile networks consist of a set of radio nodes distributed over a geographical region to provide connectivity services, each serving a set of cells. The handover relations between cells are determined by software features such as Automatic Neighbor Relations (ANR). The handover relations, also referred to as edges, between radio nodes in the mobile network graph are created through historical interactions between User Equipment (UE) and radio nodes. This method has the limitation of not being able to set the edges before the physical hardware is integrated. In this work, we use graph-based deep learning methods to determine mobility relations (edges), trained on radio node configuration data and a set of reliable ANR relations from stable networks. The report focuses on measuring the accuracy and precision of four graph-based deep learning models applied to real-world mobile networks. Our experiments on datasets obtained from operational telecom networks demonstrate that a graph neural network model and a multi-layer perceptron trained with binary cross-entropy (BCE) loss outperform all other models, and the evaluation of the four models showed that taking the graph structure into account improves results. Additionally, we investigate heuristics based on the distance between radio nodes to eliminate irrelevant cases and reduce training time; these heuristics improved both precision and accuracy.
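The distance heuristic mentioned above, dropping node pairs too far apart to plausibly share a handover relation before any model sees them, can be sketched as follows. The node names, coordinates, and threshold are invented for illustration; the thesis's actual feature set and cutoff are not given in the abstract:

```python
from math import hypot

# Hypothetical radio-node positions (planar coordinates, arbitrary units).
nodes = {
    "rn1": (0.0, 0.0),
    "rn2": (1.0, 0.5),
    "rn3": (25.0, 30.0),
}

def candidate_pairs(nodes, max_dist):
    """Prune node pairs before edge prediction: pairs farther apart
    than max_dist are unlikely to have a handover relation, so they
    are excluded from training and inference."""
    names = sorted(nodes)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if hypot(nodes[a][0] - nodes[b][0],
                 nodes[a][1] - nodes[b][1]) <= max_dist
    ]

pairs = candidate_pairs(nodes, max_dist=10.0)  # only ("rn1", "rn2") survives
```

Pruning shrinks the otherwise quadratic set of candidate edges, which is consistent with the reported reduction in training time without hurting precision.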
178

Estimation de régularité locale / Local regularity estimation

Servien, Rémi 12 March 2010 (has links) (PDF)
The aim of this thesis is to study the local behavior of a probability measure, in particular through a local regularity index. In the first part, we establish the asymptotic normality of the k_n-nearest-neighbor density estimator and of the histogram. In the second, we define a mode estimator under weakened assumptions. We show that the regularity index plays a role in both of these problems. Finally, in the third part, we construct several estimators of the regularity index from estimators of the distribution function, for which we provide a literature review.
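The k-nearest-neighbor density estimator studied in the first part has a simple one-dimensional form: the estimate at a point is k divided by twice the sample size times the distance to the k-th nearest sample point. A minimal sketch (the thesis's asymptotic analysis is of course not reproduced; this only illustrates the estimator itself):

```python
def knn_density(x, sample, k):
    """1-D k-nearest-neighbor density estimate:
    f_hat(x) = k / (2 * n * R_k(x)), where R_k(x) is the distance
    from x to its k-th nearest sample point."""
    n = len(sample)
    r_k = sorted(abs(x - s) for s in sample)[k - 1]
    return k / (2 * n * r_k)

# On an evenly spaced "uniform" sample over [0, 1), the estimate near
# the center should recover the uniform density, i.e. about 1.0.
sample = [i / 100 for i in range(100)]
estimate = knn_density(0.5, sample, 10)
```

The estimator adapts its bandwidth to the data: R_k(x) shrinks where points are dense and grows where they are sparse, which is what ties it to the local regularity of the underlying measure.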
179

Géo localisation en environnement fermé des terminaux mobiles / Static and dynamic geo-location of mobile terminals in indoor environments

Dakkak, Mustapha 29 November 2012 (has links)
Recently, the static and dynamic geo-location of a device or a person has become one of the most important aspects of communication systems because of its many applications. In general, knowing the position of a mobile terminal (MT) in outdoor or indoor environments is of major importance for applications providing location-based services. The development of localization systems has been driven mainly by the affordable cost of indoor wireless local area network (WLAN) infrastructure. Techniques for localizing MTs differ mainly in the type of metrics extracted from the radio-frequency signals exchanged between base stations (BSs) and MTs. Ideal measurements are taken in environments free of obstacles, with direct rays between BS and MT. This is not the case indoors, where permanent obstacles in the workspace scatter the rays; measurements taken in Non-Line-Of-Sight (NLOS) conditions are unpredictable and differ from those taken in LOS conditions.
In order to reduce measurement errors, one can apply different techniques such as mitigation, approximation, prior correction, or filtering. Tracking systems (TSs) have many concrete applications in individual navigation, social networking, asset management, traffic management, mobile resource management, etc. Different techniques are applied to build TSs in indoor environments, where the signal is noisy, weak, or even non-existent. While Global Positioning System (GPS) devices work well outside buildings and in urban canyons, tracking an indoor user in a real-world environment is much more problematic. The prediction problem remains an essential obstacle to constructing reliable indoor TSs, and the lack of reliable wireless signals is the main issue for indoor geo-location systems. This calls for predictions and corrections to overcome signal unreliability, which opens the door to a multitude of challenges. A variety of approaches have been proposed in the literature, most based on prediction filters such as the Linear Filter (LF), the Kalman Filter (KF) and its derivatives, and Particle Filters (PF). Prediction filters are often used in estimation problems, and applying digital fractional differentiation can limit the impact of performance degradation. This work presents a novel approach to WLAN indoor geo-location using coordinate clustering, which overcomes the limitations of NLOS methods without applying mitigation, approximation, prior correction, or filtering. A comparative study of deterministic and learning techniques for indoor geo-location is then presented. Finally, we present a novel soft approach to indoor tracking that applies digital fractional integration (DFI) to classical prediction filters.
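Of the prediction filters named above, the Kalman filter is the simplest to sketch. Below is a minimal one-dimensional version with a random-walk state model; the noise variances are arbitrary illustration values, and this is a textbook filter, not the thesis's fractional-differentiation variant:

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Minimal 1-D Kalman filter with a random-walk state model.
    q: process noise variance, r: measurement noise variance."""
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                  # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update: blend prediction with measurement z
        p = (1 - k) * p            # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy position readings along one axis; the filter smooths them.
est = kalman_1d([1.0, 1.2, 0.9, 1.1, 1.0])
```

In an indoor tracking system the same predict/update cycle runs per coordinate (or jointly over a state vector), with the measurement noise r reflecting how unreliable the NLOS signal is.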
180

Algoritmo kNN para previsão de dados temporais: funções de previsão e critérios de seleção de vizinhos próximos aplicados a variáveis ambientais em limnologia / Time series prediction using a kNN-based algorithm: prediction functions and nearest-neighbor selection criteria applied to limnological data

Ferrero, Carlos Andres 04 March 2009 (has links)
Treating data that contains sequential information is an important problem that arises during the data mining process. Time series constitute a popular class of sequential data, where records are indexed by time. The k-Nearest Neighbor - Time Series Prediction (kNN-TSP) method is an approximator for time series prediction problems. Its main advantage is its simplicity, and it is often used in nonlinear time series analysis for the prediction of seasonal series. Although kNN-TSP often finds the best fit for nearly periodic time series forecasting, some problems related to determining its parameters remain open. In this work, we focus on two of these parameters: the selection of the nearest neighbors and the prediction function. To this end, we propose a simple approach to selecting the nearest neighbors, in which time is indirectly taken into account by the similarity measure, and a prediction function that is not disturbed by the presence of patterns at different levels of the time series. Both parameters were empirically evaluated on several artificial time series, including chaotic ones, as well as on real time series of environmental variables from the Itaipu reservoir, made available by Itaipu Binacional. Three of the most correlated limnological variables were considered in the experiments on the real time series: water temperature, air temperature, and dissolved oxygen. Correlation analyses were also carried out to verify whether the predicted values maintain a correlation similar to that of the original ones. The results show that both proposals, the nearest-neighbor selection criterion and the prediction function, are promising.
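The core kNN-TSP idea can be sketched in a few lines: find the k past windows most similar to the most recent one, and predict the next value from what followed them. This is the generic algorithm only; the thesis's specific neighbor-selection criterion and level-robust prediction function are not reproduced, and the averaging below is a plain-mean placeholder:

```python
def knn_tsp(series, window, k):
    """Predict the next value of a series: find the k past windows most
    similar (squared Euclidean distance) to the final window, then
    average the values that immediately followed each of them."""
    query = series[-window:]
    candidates = []
    for i in range(len(series) - window):          # windows ending before the last point
        w = series[i:i + window]
        d = sum((a - b) ** 2 for a, b in zip(w, query))
        candidates.append((d, series[i + window])) # value that followed w
    candidates.sort(key=lambda t: t[0])
    return sum(v for _, v in candidates[:k]) / k

# Periodic toy series 0, 1, 2, 0, 1, 2, ... standing in for a seasonal signal.
series = [0.0, 1.0, 2.0] * 5
pred = knn_tsp(series, window=3, k=2)  # 0.0 follows every (0, 1, 2) window
```

On a strictly periodic series the nearest windows are exact matches, so the prediction is exact; on real limnological data the quality hinges on precisely the two parameters the thesis studies.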
