  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Single-Channel Multiple Regression for In-Car Speech Enhancement

ITAKURA, Fumitada, TAKEDA, Kazuya, ITOU, Katsunobu, LI, Weifeng 01 March 2006 (has links)
No description available.
172

A Design of Karaoke Music Retrieval System by Acoustic Input

Tsai, Shiu-Iau 11 August 2003 (has links)
The objective of this thesis is to design a system that retrieves songs from acoustic input. The system listens to a melody or partial song sung by a karaoke user and then presents the matching song passages. Note segmentation is performed using both the magnitude of the signal and the k-Nearest Neighbor technique. To speed up the system, the pitch period estimation algorithm is reformulated using a result from communications theory. In addition, a large database of popular music was built to make the system more practical.
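The pitch-tracking step can be illustrated with a standard autocorrelation-based estimator. This is a generic sketch on a synthetic signal, not the thesis's reformulated algorithm, which the abstract does not detail:

```python
import math

def pitch_period(frame, min_lag=20, max_lag=400):
    """Return the lag (in samples) that maximizes the autocorrelation."""
    best_lag, best_corr = min_lag, float("-inf")
    for lag in range(min_lag, min(max_lag, len(frame) - 1)):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# A synthetic periodic frame with period 50 samples.
frame = [math.sin(2 * math.pi * i / 50) for i in range(500)]
print(pitch_period(frame))  # → 50
```

A real front end would apply this per windowed frame of the recorded singing and convert the lag to a note frequency via the sampling rate.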
173

Algorithmes et méthodes pour le diagnostic ex-situ et in-situ de systèmes piles à combustible haute température de type oxyde solide / Algorithms and methods for the ex-situ and in-situ diagnosis of high-temperature solid oxide fuel cell systems

Wang, Kun 21 December 2012 (has links) (PDF)
The European project "GENIUS" aims to develop generic methodologies for the diagnosis of high-temperature solid oxide fuel cell (SOFC) systems. This thesis is part of that project; its objective is to implement a diagnostic tool that uses the stack as a special sensor to detect and identify failures in the subsystems of the SOFC stack. Three diagnostic algorithms were developed, based respectively on the k-means classification method, wavelet signal decomposition, and Bayesian network modeling. The first algorithm serves for ex-situ diagnosis and is applied to data from polarization tests. It identifies the significant response variables that indicate the state of health of the stack. The Silhouette index was computed as a measure of clustering quality in order to find the optimal number of classes in the database. Real-time fault detection is handled by the second algorithm. Since the stack is used as a sensor, its state of health must be verified beforehand. The wavelet transform was used to decompose the voltage signals of the SOFC stack in order to find characteristic variables that both indicate the cell's state of health and are discriminative enough to distinguish normal from abnormal operating conditions. To identify the system fault once an abnormal operating condition has been detected, the actual operating parameters of the stack must be estimated; a Bayesian network was therefore developed for this task. Finally, all the algorithms were validated against experimental databases from a variety of SOFC systems in order to test their genericity.
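The Silhouette index used here to pick the number of classes can be sketched in a few lines. The toy 1-D data and the two-cluster labeling below are invented for illustration; a score near 1 indicates well-separated clusters:

```python
def silhouette(points, labels):
    """Mean silhouette coefficient for 1-D points with integer labels."""
    score = 0.0
    for i, (p, lab) in enumerate(zip(points, labels)):
        # a: mean distance to the other members of p's own cluster.
        own = [abs(p - q) for j, (q, l) in enumerate(zip(points, labels))
               if l == lab and j != i]
        a = sum(own) / len(own)
        # b: smallest mean distance to any other cluster.
        b = min(
            sum(abs(p - q) for q, l in zip(points, labels) if l == other)
            / labels.count(other)
            for other in set(labels) if other != lab)
        score += (b - a) / max(a, b)
    return score / len(points)

points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
labels = [0, 0, 0, 1, 1, 1]
print(round(silhouette(points, labels), 3))  # → 0.973
```

In practice the index is computed for several candidate values of k and the k with the highest mean silhouette is retained.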
174

Cell Formation: A Real Life Application

Uyanik, Basar 01 September 2005 (has links) (PDF)
In this study, the plant layout problem of a worldwide Printed Circuit Board (PCB) manufacturer is analyzed. Machines are grouped into cells using three grouping methodologies: the Tabular Algorithm, the K-means clustering algorithm, and hierarchical grouping with Levenshtein distances. The plant layouts formed with the different techniques are evaluated using technical and economic indicators.
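The Levenshtein distance named in the abstract follows the standard dynamic-programming recurrence. Treating each machine's part routing as a character string is an assumption made here for illustration:

```python
def levenshtein(a, b):
    """Minimum number of edits (insert/delete/substitute) turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical routings of two machines over parts A-E.
print(levenshtein("ABCDE", "ABDE"))  # → 1 (delete C)
```

Machines whose routing strings have small pairwise distances would then be merged first by the hierarchical grouping.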
175

應用資料採礦技術於數位相機產業消費者行為研究 / Applications of Data Mining Techniques on Consumer Behaviors in the Digital Camera Industry

陳雨農 Unknown Date (has links)
Digital technology has brought great convenience to daily life. With technology advancing rapidly, few people still record their lives in diaries as in the past; instead they take photographs to capture the moments they want to remember, and the digital camera is the modern tool for documenting everyday life. According to the Market Intelligence & Consulting Institute, global digital camera shipments in 2009 were forecast at roughly 120 million units, with a steadily rising growth rate expected after 2010. This study identifies the personal characteristics of consumers who buy digital cameras, in the belief that this will be of great help to marketing. Four modeling approaches were used: C5.0, CART, neural networks, and K-means. From the model results, common characteristics of digital camera consumers in the market were identified, and different marketing tactics were formulated according to these characteristics. Comparing the three classification models with a classification matrix, C5.0 performed best, so this study selected C5.0 as the final explanatory model. C5.0 selected the ten most important variables affecting whether a consumer purchases a digital camera: information search, LOHAS self-sufficiency, online shopping, first love, overseas travel, age, total monthly household income, listening to internet radio or music, personal monthly disposable income, and practicality. Based on these, the study provides a marketing-mix analysis covering the product, price, place, and promotion of digital cameras. In addition, after the K-means model partitioned respondents into three clusters, the purchase rates of digital cameras in the "pragmatic LOHAS" and "information expert" clusters were 48.17% and 49.11% respectively. Combining the characteristics of these two clusters with the C5.0 results, the study profiles likely digital camera buyers and formulates marketing strategies targeting these traits, as a reference for digital camera vendors and other researchers.
176

大陸社經因素與汽車持有之相關性研究 / The research on the correlation between socio-economic factors and car ownership in China

郭綺華 Unknown Date (has links)
The automotive industry is a high value-added industry whose products span not only component manufacturing, precision system design, steel, machinery, and plastics, but also extend into the service and information sectors. This means the automotive industry has considerable influence on a nation's economic development, and an equal influence on its social development, since the industry's growth can both drive regional prosperity and open up regional job markets. To understand how China's economic and social conditions affect car ownership, this study applies cluster analysis and related methods to examine the relationships between explanatory variables (including total wages, average wage, average annual disposable income per person, gross regional product, and population by sex) and dependent variables (including civil passenger car ownership, truck ownership, private passenger car ownership, and private truck ownership). The results can be summarized in three points: (1) regional factors have a significant effect on private and civil car ownership in China; (2) in economically prosperous regions, passenger car ownership is correspondingly higher; and (3) in industrially developed regions, private and civil truck ownership is correspondingly higher.
177

Analysis of online news media through visualisation and text clustering

Pasi, Niharika January 2018 (has links)
Online news has for several years grown in frequency and popularity as a convenient source of information. One result of this drastic surge is increased competition for viewership and for the prolonged relevance of online news websites. Higher demands from internet audiences have led to the use of sensationalism such as 'clickbait' articles or 'fake news' to attract more viewers. The subsequent shift in journalistic approach in new media opened new opportunities to study the behaviour and intent behind news content. Since news publications tailor their news to a specific target audience, conclusions about those outlets and their readers can be drawn from the content they choose to broadcast. To understand the nature of a publication's content choices, this thesis uses automated text categorisation to analyse the words and phrases used by most news outlets. The thesis acts as a case study of approximately 143,000 online news articles from 15 different publications focused on the United States between 2016 and 2017. Its focus is to create a framework that observes how news articles group themselves based on the most relevant terms in their corpora; other forms of analysis were also performed to surface insights about the structure of the news over a certain period of time. A preliminary quantitative analysis was conducted before data processing, followed by K-means clustering of the articles after cleansing. The overall categorisation approach and visual analysis provided sufficient data to reuse this framework with further adjustments. From the cluster groups it emerged that the most common news categories or genres for the selected publications were either politics, with special focus on the U.S. presidential elections, or crime-related news within the U.S. and around the world.
The visual formation of these clusters heavily implied that the above two categories were distributed even within groups containing other genres like finance or infotainment. Moreover, the added factor of churning out multiple articles and stories per day suggests that mainstream online news websites continue to use broadcast journalism as their main form of communication with their audiences.
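The term-based grouping described above rests on representing each article as a term-frequency vector and comparing vectors by cosine similarity, the representation a K-means text-clustering pipeline typically builds on. A minimal sketch, with invented toy "articles":

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

docs = ["election vote senate vote",
        "election senate campaign",
        "robbery police arrest"]
vecs = [Counter(d.split()) for d in docs]

# The two politics articles are more similar to each other than to the
# crime article, which shares no terms with them.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # → True
```

A full pipeline would add stop-word removal and TF-IDF weighting before clustering, so that the "most relevant terms" dominate the distances.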
178

Novas estratégias para conserto de soluções degeneradas no algoritmo k-means / New strategies for repairing degenerate solutions in the k-means algorithm

Dantas, Nielsen Castelo Damasceno 05 October 2016 (has links)
K-means is a benchmark algorithm widely used in the field of data mining. It belongs to the large category of heuristics based on location-allocation steps, which alternately locate cluster centers and assign data points to them until no further improvement is possible. Such heuristics are known to suffer from a phenomenon called degeneracy, in which some clusters become empty and therefore go unused. This thesis proposes several comparisons and a series of strategies for repairing degenerate solutions during the execution of k-means. Computational experiments show that these strategies are efficient and lead to better clustering solutions in the vast majority of the cases tested.
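One plausible repair strategy can be sketched as follows. The specific rule used here, reseeding an empty cluster with the point farthest from all current centers, is illustrative and not necessarily one of the strategies the thesis compares:

```python
def kmeans_with_repair(points, centers, iters=20):
    """1-D k-means that reseeds any empty (degenerate) cluster."""
    for _ in range(iters):
        # Allocation step: assign each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            clusters[min(range(len(centers)),
                         key=lambda c: abs(p - centers[c]))].append(p)
        # Repair step: reseed each empty cluster with the point
        # farthest from every current center.
        for c, members in enumerate(clusters):
            if not members:
                centers[c] = max(points,
                                 key=lambda p: min(abs(p - ck)
                                                   for ck in centers))
        # Location step: move each non-empty center to its cluster mean.
        centers = [sum(m) / len(m) if m else centers[c]
                   for c, m in enumerate(clusters)]
    return sorted(centers)

# Three natural groups; the third initial center is so far away that its
# cluster starts empty, and the repair rule rescues it.
points = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8, 9.0, 9.2, 8.8]
result = kmeans_with_repair(points, centers=[1.0, 5.0, 100.0])
print([round(c, 6) for c in result])  # → [1.0, 5.0, 9.0]
```

Without the repair step, the center initialized at 100.0 would never receive points and the run would terminate with only two usable clusters.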
179

Real-time Hand Gesture Detection and Recognition for Human Computer Interaction

Dardas, Nasser Hasan Abdel-Qader January 2012 (has links)
This thesis focuses on bare hand gesture recognition, proposing a new architecture for real-time vision-based hand detection, tracking, and gesture recognition for interacting with an application via hand gestures. The first stage of our system detects and tracks a bare hand in a cluttered background using face subtraction, skin detection and contour comparison. The second stage recognizes hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control. Our hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints of every training image using the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of every training image into a unified-dimension histogram vector (bag-of-words) after K-means clustering. This histogram is treated as the input vector for a multi-class SVM to build the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using our algorithm. The keypoints are then extracted from every small image containing the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which is fed into the multi-class SVM classifier to recognize the hand gesture. A second hand gesture recognition system was proposed using Principal Component Analysis (PCA). The most significant eigenvectors of the training images and their weights are determined. In the testing stage, the hand posture is detected in every frame using our algorithm. The small image containing the detected hand is then projected onto the most significant eigenvectors of the training images to form its test weights. Finally, the minimum Euclidean distance between the test weights and the training weights of each training image determines the recognized hand gesture.
Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame uses a stationary bicycle as one of the main inputs for game playing: the user controls left-right movement and shooting actions through a set of hand gesture commands. In the second game, the user steers a helicopter over the city through a set of hand gesture commands.
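The bag-of-words mapping in the training and testing stages can be sketched as follows. Toy 2-D descriptors and hand-picked centers stand in for real 128-D SIFT descriptors and learned K-means centers:

```python
def bag_of_words(descriptors, centers):
    """Histogram of nearest-center assignments, one bin per visual word."""
    hist = [0] * len(centers)
    for d in descriptors:
        nearest = min(range(len(centers)),
                      key=lambda c: sum((x - y) ** 2
                                        for x, y in zip(d, centers[c])))
        hist[nearest] += 1
    return hist

centers = [(0.0, 0.0), (10.0, 10.0), (0.0, 10.0)]   # K = 3 visual words
descriptors = [(0.1, 0.2), (9.8, 9.9), (0.3, 9.7), (0.2, 0.1)]
print(bag_of_words(descriptors, centers))  # → [2, 1, 1]
```

The fixed-length histogram is what makes images with different numbers of keypoints comparable, so it can serve as the SVM input vector.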
180

Advanced Algorithms for Classification and Anomaly Detection on Log File Data : Comparative study of different Machine Learning Approaches

Wessman, Filip January 2021 (has links)
Background: A problematic area in today's large-scale distributed systems is the exponentially growing volume of log data. Finding anomalies by observing and monitoring this data with manual human inspection becomes progressively more challenging, complex and time-consuming, yet it is vital for keeping these systems available around the clock. Aim: The main objective of this study is to determine which Machine Learning (ML) algorithms are most suitable and whether they can meet the needs and requirements regarding optimization and efficiency in the log data monitoring area, including which specific steps of the overall problem can be improved by using these algorithms for anomaly detection and classification on different real provided data logs. Approach: An initial pre-study is conducted; logs are collected and then preprocessed with the log parsing tool Drain and regular expressions. The approach consists of two combinations: K-Means + XGBoost, and Principal Component Analysis (PCA) + K-Means + XGBoost. These were trained, tested and individually evaluated with different metrics against two datasets, a server data log and an HTTP access log. Results: Both approaches performed very well on both datasets, able to classify, detect and make predictions on log data events with high accuracy, high precision and low calculation time. It was further shown that without dimensionality reduction (PCA) the results of the prediction model are slightly better, by a few percent, while the prediction times with and without PCA differed marginally or not at all. Conclusions: Overall there are very small differences between the results with and without PCA, but in essence it is better not to use PCA and instead apply the original data to the ML models. The models' performance is generally very dependent on the data being applied: its initial preprocessing steps, its size, and its structure, with the calculation time affected the most.
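The PCA stage of such a pipeline reduces to one core idea: projecting data onto the direction of greatest variance. A hedged pure-Python sketch using power iteration on the covariance matrix, with invented 2-D data (the thesis's actual Drain + K-Means + XGBoost pipeline is not reproduced here):

```python
def first_principal_component(data, iters=200):
    """Top eigenvector of the sample covariance matrix, via power iteration."""
    # Center the data.
    means = [sum(col) / len(col) for col in zip(*data)]
    x = [[v - m for v, m in zip(row, means)] for row in data]
    # Sample covariance matrix.
    n, d = len(x), len(x[0])
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    # Power iteration: repeatedly apply cov to a vector and normalize.
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# Points spread along the y = x diagonal, so the first component
# is approximately (0.707, 0.707).
data = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
pc = first_principal_component(data)
print(round(abs(pc[0]), 2), round(abs(pc[1]), 2))  # → 0.71 0.71
```

Projecting each row onto this component (and the next few) yields the lower-dimensional features that would then be clustered and classified; the study's finding is that for its log data this reduction bought little.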
