481

An IoT Solution for Urban Noise Identification in Smart Cities: Noise Measurement and Classification

Alsouda, Yasser January 2019 (has links)
Noise is defined as any undesired sound. Urban noise and its effect on citizens are a significant environmental problem, and the increasing level of noise has become a critical problem in some cities. Fortunately, noise pollution can be mitigated by better planning of urban areas or controlled by administrative regulations. However, the execution of such actions requires well-established systems for noise monitoring. In this thesis, we present a solution for noise measurement and classification using a low-power and inexpensive IoT unit. To measure the noise level, we implement an algorithm for calculating the sound pressure level in dB, achieving a measurement error of less than 1 dB. Our machine learning-based method for noise classification uses Mel-frequency cepstral coefficients for audio feature extraction and four supervised classification algorithms (support vector machine, k-nearest neighbors, bootstrap aggregating, and random forest). We evaluate our approach experimentally with a dataset of about 3000 sound samples grouped into eight sound classes (such as car horn, jackhammer, or street music). We explore the parameter space of the four algorithms to estimate the optimal parameter values for the classification of sound samples in the dataset under study, and achieve noise classification accuracy in the range of 88%–94%.
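As a rough illustration of the two stages described in this abstract, the sketch below estimates a sound pressure level in dB from an audio frame and builds an MFCC + SVM classification pipeline. The use of librosa and scikit-learn, the calibration offset, and all parameter values are assumptions made for the sketch, not details taken from the thesis.

```python
# Minimal sketch (not the thesis code): sound-pressure-level estimate and an
# MFCC + SVM classification pipeline. The calibration offset and library
# choices are assumptions.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def sound_pressure_level(samples, calibration_db=94.0):
    """Estimate SPL in dB from a normalized audio frame.

    calibration_db is a hypothetical microphone calibration offset
    (e.g. measured against a 94 dB reference tone)."""
    rms = np.sqrt(np.mean(samples ** 2)) + 1e-12      # avoid log(0)
    return 20.0 * np.log10(rms) + calibration_db

def mfcc_features(samples, sr=22050, n_mfcc=13):
    """Mean and std of MFCCs over the clip -> fixed-length feature vector."""
    mfcc = librosa.feature.mfcc(y=samples, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# X: (n_samples, n_features) MFCC vectors; y: one of the eight noise classes
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
# clf.fit(X_train, y_train); clf.score(X_test, y_test)
```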
482

Applications of Artificial Intelligence in Power Systems

Rastgoufard, Samin 18 May 2018 (has links)
Artificial intelligence tools, which are fast, robust, and adaptive, can overcome the drawbacks of traditional solutions to several power system problems. In this work, applications of AI techniques are studied for solving two important problems in power systems. The first problem is static security evaluation (SSE). The objective of SSE is to identify contingencies in the planning and operation of power systems. Conventional numerical solutions are time-consuming, computationally expensive, and not suitable for online applications. SSE may be treated as a binary-classification, multi-classification, or regression problem. In this work, a multi-class support vector machine (multi-SVM) is combined with several evolutionary computation algorithms, including particle swarm optimization (PSO), differential evolution, ant colony optimization for the continuous domain, and harmony search, to solve the SSE. Moreover, support vector regression is combined with a modified PSO, with a proposed modification of the inertia weight, to solve the SSE. The classification accuracy, the training speed, and the final cost of using power equipment depend heavily on the selected input features; in this dissertation, multi-objective PSO is used to solve this feature-selection problem. Furthermore, a multi-classifier voting scheme is proposed to obtain the final test output. The classifiers participating in the voting scheme include multi-SVMs with different types of kernels and random forests with an adaptive number of trees. In short, the development and performance of different machine learning tools combined with evolutionary computation techniques are studied to solve the online SSE. The performance of the proposed techniques is tested on several benchmark systems, namely the IEEE 9-bus, 14-bus, 39-bus, 57-bus, 118-bus, and 300-bus power systems. The second problem is the non-convex, nonlinear, and non-differentiable economic dispatch (ED) problem. The purpose of solving the ED is to improve the cost-effectiveness of power generation. To solve the ED with multi-fuel options, prohibited operating zones, the valve-point effect, and transmission line losses, genetic algorithm (GA) variant-based methods, such as the breeder GA, fast navigating GA, twin removal GA, kite GA, and united GA, are used. IEEE systems with 6 units, 10 units, and 15 units are used to study the efficiency of the algorithms.
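The following is a minimal sketch of one such SVM + evolutionary-computation pairing: a basic particle swarm optimization loop searching over the SVM hyperparameters (C, gamma), with cross-validated accuracy as the fitness. The swarm settings, search ranges, and constant inertia weight are illustrative assumptions rather than the modified PSO proposed in the dissertation.

```python
# Sketch: PSO over SVM hyperparameters (C, gamma) in log10 space.
# Swarm size, ranges, and the constant inertia weight are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_tune_svm(X, y, n_particles=10, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # log10(C), log10(gamma)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):                                          # mean CV accuracy
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return 10 ** gbest[0], 10 ** gbest[1]                    # best (C, gamma)
```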
483

Optimisation des techniques de compression d'images fixes et de vidéo en vue de la caractérisation des matériaux : applications à la mécanique / Optimization of compression techniques for still images and video for characterization of materials : mechanical applications

Eseholi, Tarek Saad Omar 17 December 2018 (has links)
This PhD thesis focuses on the optimization of still-image and video compression techniques for the characterization of materials in mechanical science applications, and it is part of the MEgABIt (MEchAnic Big Images Technology) research project supported by the Université Polytechnique Hauts-de-France (UPHF). The scientific objective of the MEgABIt project is to investigate the ability to compress large volumes of data flows produced by mechanical instrumentation of deformations, with large volumes in both the spatial and frequency domains. We propose to design original algorithms for processing in the compressed domain in order to make the evaluation of mechanical parameters computationally feasible, while preserving the maximum amount of information provided by the acquisition systems (high-speed imaging, 3D tomography). To be relevant, compression of high-definition, high-dynamic-range deformation measurements must allow the optimal computation of morpho-mechanical parameters without losing the essential characteristics of the mechanical surface images, which could lead to erroneous analysis or classification. In this thesis, we use the state-of-the-art HEVC (High Efficiency Video Coding) standard prior to the analysis, classification, or processing used to evaluate the mechanical parameters. We first quantify the impact of compression on video sequences from a high-speed camera. The experimental results show that compression ratios of up to 100:1 can be applied without significant degradation of the mechanical surface response of the material as measured by the VIC-2D analysis tool. We then develop an original classification method in the compressed domain for a database of surface-topography images. The topographic image descriptor is obtained from the prediction modes computed by intra-image prediction during lossless HEVC compression of the images. A support vector machine (SVM) is also introduced to strengthen the performance of the proposed system. Experimental results show that the compressed-domain classifier is robust for classifying our six categories of mechanical topographies, based on either single-scale or multi-scale analysis methodologies, with lossless compression ratios of up to 6:1 depending on image complexity. We also evaluate the effects of the surface filtering type (high-pass, low-pass, and band-pass filters) and of the analysis scale on the efficiency of the proposed classifier. The large scale of the high-frequency components of the surface profile is the most appropriate for classifying our topographic image database, with an accuracy reaching 96%.
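As a hedged illustration of the compressed-domain descriptor, the sketch below turns an array of HEVC intra-prediction mode indices (assumed to have been extracted from an instrumented encoder elsewhere) into a normalized histogram and feeds it to an SVM. The array layout and parameters are assumptions made for the sketch.

```python
# Sketch of the compressed-domain descriptor idea: a normalized histogram of
# HEVC intra-prediction modes (35 modes: planar, DC, 33 angular) used as an
# image descriptor for an SVM. Mode extraction is assumed to happen elsewhere.
import numpy as np
from sklearn.svm import SVC

N_INTRA_MODES = 35

def mode_histogram(intra_modes):
    """intra_modes: 1-D array of per-block mode indices for one image."""
    hist = np.bincount(intra_modes, minlength=N_INTRA_MODES).astype(float)
    return hist / max(hist.sum(), 1.0)            # normalized descriptor

# descriptors: one histogram per topography image; labels: six surface classes
# clf = SVC(kernel="rbf").fit(descriptors, labels)
```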
484

Statistical Performance of Learning Algorithms: "Kernel Projection Machine" and Kernel Principal Component Analysis

Zwald, Laurent 23 November 2005 (has links) (PDF)
This thesis is set in the framework of statistical learning. It contributes to the machine learning community by using modern statistical techniques based on advances in the study of empirical processes. In the first part, the statistical properties of kernel principal component analysis (KPCA) are explored. The behavior of the reconstruction error is studied from a non-asymptotic point of view, and concentration inequalities for the eigenvalues of the Gram matrix are given. All these results imply fast convergence rates. Non-asymptotic properties concerning the eigenspaces of KPCA themselves are also proposed. In the second part, a new classification algorithm is designed: the Kernel Projection Machine (KPM). While drawing inspiration from Support Vector Machines (SVM), it highlights that selecting a vector space through a dimensionality-reduction method such as KPCA provides suitable regularization. The choice of the vector space used by the KPM is guided by statistical studies of model selection through penalized minimization of the empirical loss. This regularization principle is closely related to the finite-dimensional projection studied in the statistical work of Birgé and Massart. The performance of the KPM and the SVM is then compared on different data sets. Each topic addressed in this thesis raises new theoretical and practical questions.
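A minimal sketch of the Kernel Projection Machine idea follows, assuming scikit-learn's KernelPCA as the dimensionality-reduction step, a linear classifier on the projected data as a stand-in for the KPM decision rule, and a simple linear penalty on the projection dimension in place of the Birgé–Massart-style criterion used in the thesis.

```python
# Sketch: project onto the first d kernel principal components and pick d by a
# penalized empirical risk. The linear penalty kappa*d/n and the logistic
# classifier are illustrative stand-ins, not the thesis's exact construction.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def fit_kpm(X, y, dims=(2, 5, 10, 20), gamma=0.1, kappa=2.0):
    # dims must not exceed the number of training samples
    n = len(y)
    best, best_score = None, np.inf
    for d in dims:
        model = make_pipeline(
            KernelPCA(n_components=d, kernel="rbf", gamma=gamma),
            LogisticRegression(max_iter=1000),
        )
        model.fit(X, y)
        emp_risk = 1.0 - model.score(X, y)        # empirical classification loss
        penalized = emp_risk + kappa * d / n      # dimension acts as regularizer
        if penalized < best_score:
            best, best_score = model, penalized
    return best
```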
485

Semantic Classification And Retrieval System For Environmental Sounds

Okuyucu, Cigdem 01 October 2012 (has links) (PDF)
The growth of multimedia content in recent years has motivated research in the area of audio classification and content retrieval. In this thesis, a general environmental audio classification and retrieval approach is proposed in which higher-level semantic classes (outdoor, nature, meeting, and violence) are obtained from lower-level acoustic classes (emergency alarm, car horn, gun-shot, explosion, automobile, motorcycle, helicopter, wind, water, rain, applause, crowd, and laughter). In order to classify an audio sample into acoustic classes, MPEG-7 audio features, the Mel-frequency cepstral coefficients (MFCC) feature, and the zero crossing rate (ZCR) feature are used with hidden Markov model (HMM) and support vector machine (SVM) classifiers. Additionally, a new classification method using a genetic algorithm (GA) is proposed for the classification of semantic classes. Query by example (QBE) and keyword-based query capabilities are implemented for content retrieval.
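The sketch below illustrates one plausible reading of the two-level labeling: MFCC and ZCR features with an SVM for the low-level acoustic classes, followed by a fixed lookup from acoustic to semantic class. The thesis learns the semantic level with a genetic algorithm; the lookup table here is only an illustrative stand-in, and the feature choices and mapping are assumptions.

```python
# Sketch of two-level labeling: classify a clip into a low-level acoustic class,
# then map it to a higher-level semantic class. The mapping is an assumption.
import numpy as np
import librosa
from sklearn.svm import SVC

SEMANTIC_OF = {
    "emergency alarm": "outdoor", "car horn": "outdoor", "automobile": "outdoor",
    "motorcycle": "outdoor", "helicopter": "outdoor",
    "gun-shot": "violence", "explosion": "violence",
    "wind": "nature", "water": "nature", "rain": "nature",
    "applause": "meeting", "crowd": "meeting", "laughter": "meeting",
}

def features(y_audio, sr=22050):
    """Mean MFCCs plus mean zero crossing rate -> one feature vector per clip."""
    mfcc = librosa.feature.mfcc(y=y_audio, sr=sr, n_mfcc=13).mean(axis=1)
    zcr = librosa.feature.zero_crossing_rate(y_audio).mean()
    return np.append(mfcc, zcr)

# acoustic_clf = SVC().fit(X_train, acoustic_labels)
# semantic_label = SEMANTIC_OF[acoustic_clf.predict([features(clip)])[0]]
```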
486

Semantic Assisted, Multiresolution Image Retrieval in 3D Brain MR Volumes

Quddus, Azhar January 2010 (has links)
Content Based Image Retrieval (CBIR) is an important research area in the field of multimedia information retrieval. The application of CBIR in the medical domain has been attempted before; however, the use of CBIR in medical diagnostics is a daunting task. The goal of diagnostic medical image retrieval is to provide diagnostic support by displaying relevant past cases, along with proven pathologies as ground truths. Moreover, medical image retrieval can be extremely useful as a training tool for medical students and residents, for follow-up studies, and for research purposes. Despite the presence of an impressive amount of research in the area of CBIR, its acceptance for mainstream and practical applications is quite limited. The research in CBIR has mostly been conducted as an academic pursuit, rather than to provide the solution to a need. For example, many researchers proposed CBIR systems in which the image database consists of a heterogeneous mixture of man-made objects and natural scenes, while ignoring the practical uses of such systems. Furthermore, the intended use of CBIR systems is important in addressing the problem of the "semantic gap". Indeed, the requirements for the semantics in an image retrieval system for pathological applications are quite different from those intended for training and education. Moreover, many researchers have underestimated the level of accuracy required for a useful and practical image retrieval system. The human eye is extremely dexterous and efficient in visual information processing; consequently, CBIR systems should be highly precise in image retrieval to be useful to human users. Unsurprisingly, due to these and other reasons, most of the proposed systems have not found useful real-world applications. In this dissertation, an attempt is made to address the challenging problem of developing a retrieval system for medical diagnostics applications. More specifically, a system for semantic retrieval of Magnetic Resonance (MR) images in 3D brain volumes is proposed. The proposed retrieval system has the potential to be useful for clinical experts where the human eye may fail. Previously proposed systems used imprecise segmentation and feature extraction techniques, which are not suitable for the precise matching requirements of image retrieval in this application domain. This dissertation uses a multiscale representation for image retrieval, which is robust against noise and MR inhomogeneity. In order to achieve a higher degree of accuracy in the presence of misalignments, an image registration-based retrieval framework is developed. Additionally, to speed up the retrieval system, a fast discrete wavelet-based feature space is proposed. Further improvement in speed is achieved by semantically classifying the human brain into various "semantic regions" using an SVM-based machine learning approach. A novel and fast identification system is proposed for identifying a 3D volume given a 2D image slice; to this end, we use SVM output probabilities for ranking and identification of patient volumes. The proposed retrieval systems are tested not only under noise conditions but also for healthy and abnormal cases, resulting in promising retrieval performance with respect to multi-modality, accuracy, speed, and robustness. This dissertation furnishes medical practitioners with a valuable set of tools for semantic retrieval of 2D images, where the human eye may fail.
Specifically, the proposed retrieval algorithms provide medical practitioners with the ability to retrieve 2D MR brain images accurately and to monitor disease progression in various lobes of the human brain, with the capability of tracking multiple patients simultaneously. Additionally, the proposed semantic classification scheme can be extremely useful for semantic-based categorization, clustering, and annotation of images in MR brain databases. This research framework may evolve in a natural progression towards developing more powerful and robust retrieval systems. It also provides researchers in semantic-based retrieval with a foundation for expanding existing toolsets to solve retrieval problems.
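As a hedged illustration of the slice-to-volume identification step, the sketch below extracts a discrete-wavelet feature vector from a query 2D slice and ranks candidate volumes by the class probabilities an SVM assigns to it. The pywt usage, the specific wavelet and level, and the ranking rule are assumptions, not the thesis configuration.

```python
# Sketch: identify/rank candidate 3D volumes for a query 2D slice using a
# wavelet feature vector and SVM output probabilities. Wavelet choice and
# ranking rule are assumptions.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(slice_2d, wavelet="db4", level=2):
    """2-D DWT; mean absolute energy of each subband -> fixed-length vector."""
    coeffs = pywt.wavedec2(slice_2d, wavelet=wavelet, level=level)
    feats = [np.abs(coeffs[0]).mean()]                 # approximation subband
    for detail in coeffs[1:]:                          # (cH, cV, cD) per level
        feats.extend(np.abs(band).mean() for band in detail)
    return np.array(feats)

# clf = SVC(probability=True).fit(training_features, volume_ids)
# probs = clf.predict_proba([wavelet_features(query_slice)])[0]
# ranked_volumes = clf.classes_[np.argsort(probs)[::-1]]   # best match first
```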
487

Applications of Soft Computing for Power-Quality Detection and Electric Machinery Fault Diagnosis

Wu, Chien-Hsien 20 November 2008 (has links)
With the deregulation of the power industry and market competition, a stable and reliable power supply is a major concern of the independent system operator (ISO), and power-quality (PQ) study has become an increasingly important subject. Harmonics, voltage swell, voltage sag, and power interruption can degrade service quality. In recent years, high-speed railway (HSR) and mass rapid transit (MRT) systems have developed rapidly, with widespread application of semiconductor technologies in their traction systems, and the harmonic distortion level worsens with this increased use of electronic equipment and non-linear loads. To ensure PQ, the detection of power-quality disturbances (PQD) becomes important, and a detection method with classification capability is helpful for identifying disturbance locations and types. Electric machinery fault diagnosis is another issue that receives considerable attention from utilities and customers, since ISOs need to provide high-quality service to retain their customers. Fault diagnosis of turbine-generators has a great effect on the profitability of power plants: a generator fault not only damages the generator itself but also causes outages and loss of profit. Under high temperature, high pressure, and factors such as thermal fatigue, many components may fail, which not only leads to great economic loss but can also threaten public safety. It is therefore necessary to detect generator faults and take immediate action to cut the loss. In addition, induction motors play a major role in a power system; to save cost, it is important to run periodic inspections to detect incipient faults inside the motor, since preventive techniques for early detection can find incipient faults and avoid outages. This dissertation develops various soft computing (SC) algorithms for power-quality disturbance detection, turbine-generator fault diagnosis, and induction motor fault diagnosis. The proposed SC algorithms include the support vector machine (SVM), grey clustering analysis (GCA), and the probabilistic neural network (PNN). By integrating the proposed diagnostic procedures with existing monitoring instruments, a well-monitored power system can be constructed without extra devices. All the methods in the dissertation give reasonable and practical estimates; compared with conventional methods, the test results show high accuracy, good robustness, and faster processing performance.
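As a concrete example of one of the soft-computing tools named above, here is a minimal Parzen-window formulation of a probabilistic neural network (PNN) for labeling disturbance feature vectors. The smoothing parameter and the example class names in the comments are assumptions, not values from the dissertation.

```python
# Minimal sketch of a probabilistic neural network (Parzen-window classifier)
# for power-quality disturbance feature vectors. sigma is an assumed setting.
import numpy as np

class PNN:
    def __init__(self, sigma=0.1):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(self.y)
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X, float):
            d2 = np.sum((self.X - x) ** 2, axis=1)
            k = np.exp(-d2 / (2.0 * self.sigma ** 2))          # Gaussian kernel
            scores = [k[self.y == c].mean() for c in self.classes_]
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)

# pnn = PNN(sigma=0.2).fit(train_features, disturbance_labels)
# pnn.predict(test_features)   # e.g. 'sag', 'swell', 'harmonics', 'interruption'
```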
488

使用AUC特徵選取方法在蛋白質質譜儀資料分類之應用 / An AUC criterion for feature selection on classifying proteomic spectra data

葉勝宗 Unknown Date (has links)
Surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF MS) produces high-dimensional proteomic data and is mainly used to detect the expression of protein molecules. Because of limitations of the SELDI technique, the acquired spectra often contain errors and noise, so low-level preprocessing of the raw data is usually performed before analysis; the steps include baseline subtraction, normalization, peak detection, and peak alignment. The prostate cancer data examined in this thesis can be divided into four categories: healthy men, benign prostate hyperplasia, early-stage prostate cancer, and late-stage prostate cancer. We analyze and compare two preprocessed versions of these proteomic spectra: one preprocessed by ourselves and one preprocessed by Adam et al. To address the mass-location shifts and isotope effects that often occur when SELDI measures molecular masses, we propose using m/z segments as new feature variables. Important segments are selected using the area under the ROC curve (AUC) as the criterion, and a support vector machine (SVM) is used for classification. For the four-class classification, our own preprocessed data yield accuracies of 89% on the training samples and 63% on the validation samples, while the data preprocessed by Adam et al. yield 94% and 86%, respectively. These results indicate that the preprocessing method does affect the classification result and demonstrate the feasibility of analyzing the data with segments of features.
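A minimal sketch of the AUC-based segment selection described above: average each spectrum over fixed-width m/z segments, score each segment by how far its ROC AUC (for a cancer-versus-normal contrast) lies from 0.5, keep the top-ranked segments, and train an SVM on them. The segment width, the number of segments kept, and the binary contrast are assumptions made for the sketch.

```python
# Sketch: per-segment mean intensities, AUC-based segment ranking, SVM on the
# selected segments. Segment width and top_k are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

def segment_spectra(spectra, seg_width=50):
    """spectra: (n_samples, n_mz) intensity matrix -> per-segment means."""
    n_seg = spectra.shape[1] // seg_width
    trimmed = spectra[:, : n_seg * seg_width]
    return trimmed.reshape(len(spectra), n_seg, seg_width).mean(axis=2)

def select_by_auc(segments, y_binary, top_k=20):
    # AUC of 0.5 is uninformative; rank by distance from 0.5 in either direction.
    aucs = np.array([roc_auc_score(y_binary, segments[:, j])
                     for j in range(segments.shape[1])])
    return np.argsort(np.abs(aucs - 0.5))[::-1][:top_k]

# segs = segment_spectra(X); keep = select_by_auc(segs, y == "cancer")
# clf = SVC().fit(segs[:, keep], y)
```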
489

兩階段特徵選取法在蛋白質質譜儀資料之應用 / A Two-Stage Approach of Feature Selection on Proteomic Spectra Data

王健源, Wang, Chien-yuan Unknown Date (has links)
Early detection and diagnosis can effectively reduce the mortality of cancer, so discovering biomarkers associated with cancer for early detection and treatment is an important task. This study analyzes a real proteomic spectra data set containing both normal subjects and prostate cancer patients, collected from a surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF MS) protein-chip experiment. The SELDI-TOF MS technology effectively captures the protein features of a biological sample. Without suitable preprocessing steps to remove experimental noise, a mass spectrum can contain hundreds or thousands of features. To narrow down the search for possible protein biomarkers, only those features that can distinguish cancer patients from normal subjects are selected. The genetic algorithm (GA) is a global optimization procedure modeled on the genetic evolution of biological organisms and is effective in searching complex high-dimensional spaces. In this study, we use a GA-like algorithm (GAL) for feature selection on the proteomic spectra data to classify prostate cancer patients versus normal subjects. In addition, we propose two types of two-stage GAL algorithms (TSGAL) that attempt to improve on the shortcomings of the GAL.
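The sketch below shows a generic GA-style feature selection of the kind described: a binary chromosome marks which spectral features are kept, and fitness is cross-validated SVM accuracy on the selected features. The population size, rates, and operators are illustrative assumptions, and the two-stage (TSGAL) variants are not reproduced.

```python
# Sketch of GA-style feature selection with a bit-mask chromosome and
# cross-validated SVM accuracy as fitness. All GA settings are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ga_select(X, y, pop_size=20, n_gen=30, p_mut=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.05          # start with sparse masks

    def fitness(mask):
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

    for _ in range(n_gen):
        scores = np.array([fitness(m) for m in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]            # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            child ^= rng.random(n_feat) < p_mut          # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(m) for m in pop])]
    return np.flatnonzero(best)                          # indices of kept features
```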
490

On spectrum sensing, resource allocation, and medium access control in cognitive radio networks

Karaputugala Gamacharige, Madushan Thilina 12 1900 (has links)
Cognitive radio-based wireless networks have been proposed as a promising technology to improve the utilization of the radio spectrum through opportunistic spectrum access. In this context, cognitive radios opportunistically access spectrum licensed to primary users when the primary users' transmissions are detected to be absent. For opportunistic spectrum access, the cognitive radios should sense the radio environment and allocate spectrum and power based on the sensing results. To this end, in this thesis, I first develop a novel cooperative spectrum sensing scheme for cognitive radio networks (CRNs) based on machine learning techniques used for pattern classification; both unsupervised and supervised learning-based classification techniques are implemented for cooperative spectrum sensing. Secondly, I propose a novel joint channel and power allocation scheme for downlink transmission in cellular CRNs. I formulate the downlink resource allocation problem as a generalized spectral-footprint minimization problem. The channel assignment problem for secondary users is solved by applying a modified Hungarian algorithm, while the power allocation subproblem is solved by using the Lagrangian technique. Specifically, I propose a low-complexity modified Hungarian algorithm for subchannel allocation that exploits the local information in the cost matrix. Finally, I propose a novel dynamic common control channel-based medium access control (MAC) protocol for CRNs. Unlike traditional dedicated control channel-based MAC protocols, the proposed MAC protocol eliminates the requirement of a dedicated channel for control information exchange. / October 2015
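As an illustration of the assignment step, the sketch below solves a small subchannel-assignment instance with the standard Hungarian method via SciPy. The random cost matrix is purely illustrative, and the low-complexity modified Hungarian algorithm proposed in the thesis is not reproduced.

```python
# Sketch: one-to-one subchannel assignment minimizing total spectral-footprint
# cost with the standard Hungarian method. The cost matrix is illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.uniform(0.0, 1.0, size=(4, 6))        # 4 secondary users, 6 subchannels

users, channels = linear_sum_assignment(cost)    # minimizes total assigned cost
for u, c in zip(users, channels):
    print(f"secondary user {u} -> subchannel {c} (cost {cost[u, c]:.2f})")
```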
