251

Aprendizado semi-supervisionado para o tratamento de incerteza na rotulação de dados de química medicinal / Semi supervised learning for uncertainty on medicinal chemistry labelling

João Carlos Silva de Souza 09 March 2017 (has links)
In the last 30 years, the field of machine learning has developed in a way comparable to Physics at the beginning of the twentieth century. This advance has made it possible to solve real-world problems that machines previously could not handle, owing to the difficulty of purely statistical models in fitting the training data satisfactorily. Among such advances is the use of machine learning techniques in Medicinal Chemistry, involving methods for analysing, representing and predicting molecular information through computational resources. Data used in the biological context have particular characteristics that can influence the outcome of their analysis, including the complexity of molecular information, the imbalance of the classes involved, and the existence of incomplete or uncertainly labeled data. If not properly treated, such adversities can harm the process of identifying candidate compounds for new drugs. In this work, a semi-supervised machine learning technique was employed to reduce the impact caused by uncertainty in data labeling, applying a method to estimate more reliable labels for the chemical compounds in the training set. To mitigate the effects of class imbalance, a cost-sensitive approach was incorporated into the label estimation process in order to avoid bias in favor of the majority class.
After addressing the labeling uncertainty problem, classifiers based on Extreme Learning Machines were built, aiming at good approximation capability with reduced processing time relative to other commonly applied classification approaches. Finally, the performance of the constructed classifiers was evaluated by analyzing the results obtained, comparing the scenario with the original data against scenarios with the new labels obtained from the semi-supervised estimation process.
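As a rough illustration of the classification stage named in this abstract, the sketch below shows a basic Extreme Learning Machine in Python: a single hidden layer with random, untrained weights and output weights obtained by a closed-form least-squares fit. It is a generic sketch on synthetic, hypothetical data, not the author's implementation, and the cost-sensitive semi-supervised label estimation step is not shown.

```python
import numpy as np

def elm_train(X, y, n_hidden=100, rng=np.random.default_rng(0)):
    """Fit a basic Extreme Learning Machine for binary classification.

    Hidden-layer weights are random and never trained; only the output
    weights are solved in closed form via least squares.
    """
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                  # random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage on synthetic, hypothetical data (labels in {-1, +1}).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = np.sign(X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))
W, b, beta = elm_train(X, y, n_hidden=50, rng=rng)
accuracy = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
print(f"training accuracy: {accuracy:.2f}")
```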
252

Big Networks: Analysis and Optimal Control

Nguyen, Hung The 01 January 2018 (has links)
The study of networks has seen tremendous growth in research, driven by the wide spectrum of practical problems in which networks are the point of access. These problems range widely, from detecting functionally correlated proteins in biology to choosing the people to receive discounts so as to maximize the popularity of a product in economics. Thus, understanding, and further being able to manipulate or control, the development and evolution of networks becomes a critical task for network scientists. Despite the vast research effort devoted to these studies, the present state of the art largely either lacks high-quality solutions or requires an excessive amount of time under real-world `Big Data' requirements. This research aims at boosting modern algorithmic efficiency to meet practical requirements, that is, developing a ground-breaking class of algorithms that simultaneously provide provably good solution quality and low time and space complexity. Specifically, I target important yet challenging problems in three main areas. Information Diffusion: analyzing and maximizing influence in networks and extending the results to different variations of the problems. Community Detection: finding communities from multiple sources of information. Security and Privacy: assessing organizational vulnerability under targeted cyber-attacks via social networks.
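To make the influence-maximization theme concrete, here is a minimal greedy hill-climbing sketch under the independent cascade model with Monte Carlo spread estimation. It is only the textbook baseline on a hypothetical toy graph, not the scalable algorithms developed in this dissertation.

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One independent-cascade simulation; returns the set of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_im(graph, k, p=0.1, n_sim=200):
    """Greedy seed selection: repeatedly add the node giving the largest
    estimated expected spread when joined to the current seed set."""
    seeds = set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = sum(len(simulate_ic(graph, seeds | {v}, p))
                       for _ in range(n_sim)) / n_sim
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

# Hypothetical toy graph given as adjacency lists.
toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: [0]}
print(greedy_im(toy, k=2))
```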
253

網路商店產品數量與消費者偏好之研究 / Consumer preference with product assortments in on-line store

葉晴晴 Unknown Date (has links)
Most past research has held that people prefer a wide variety of choices, so that offering more products in a store attracts more consumers to purchase. In recent years, however, the shopping environment has become saturated with all kinds of information; under these conditions, if sellers offer consumers even more product choices, this may create obstacles to shopping and instead delay purchase behavior. Scholars have found that the benefit consumers derive from product variety has limits and is not always a case of "the more the better." Many studies have tried to identify the factors that shape consumers' preferences for assortment size; some literature indicates that differences in store ratings affect consumers' preferences for stores with different assortment sizes, and whether consumers already hold a specific ideal preference before choosing may also affect their choice outcomes. Previous research on assortment-size preference has mostly examined physical retail stores, whereas online shopping has become prevalent in recent years and online stores often offer consumers a very large number of products to choose from. This study therefore conducts its experiments with online stores and, drawing on the variables examined in prior literature, adds product type to investigate its effect on consumers; it also examines whether the degree of maximizing tendency influences consumer preference. The results show that: (1) the presence of an ideal choice and product type interact in their effect on consumers' preferences for different assortment sizes; (2) when purchasing hedonic products, consumers with an ideal choice prefer online stores with smaller assortments; (3) when purchasing utilitarian products, consumers with an ideal choice prefer online stores with larger assortments.
254

Modélisation gaussienne de rang plein des mélanges audio convolutifs appliquée à la séparation de sources / Full-rank Gaussian modeling of convolutive audio mixtures applied to source separation

Duong, Ngoc 15 November 2011 (has links) (PDF)
We consider the problem of separating determined and underdetermined reverberant audio mixtures, that is, extracting the signal of each source from a multichannel mixture. We propose a general Gaussian modeling framework in which the contribution of each source to the mixture channels in the time-frequency domain is modeled as a zero-mean Gaussian random vector whose covariance encodes both the spatial and spectral characteristics of the source. To better model reverberation, we drop the classical narrowband assumption, which leads to a rank-1 spatial covariance, and compute the theoretical performance bound achievable with a full-rank spatial covariance. Experimental results indicate an improvement of 6 dB in the Signal-to-Distortion Ratio (SDR) in environments ranging from weakly to highly reverberant, which validates this generalization. We also consider the use of quadratic time-frequency representations and of the auditory ERB (equivalent rectangular bandwidth) frequency scale to increase the amount of exploitable information and reduce the overlap between sources in the time-frequency representation. After this theoretical validation of the proposed framework, we focus on estimating the model parameters from a given mixture signal in a practical blind source separation scenario. We propose a family of Expectation-Maximization (EM) algorithms to estimate the parameters in the maximum likelihood (ML) or maximum a posteriori (MAP) sense. We propose a family of spatial location priors inspired by room acoustics theory, as well as a spatial continuity prior. We also study the use of two spectral priors previously used in a single-channel or rank-1 multichannel context: a spectral continuity prior and a nonnegative matrix factorization (NMF) model. The source separation results obtained with the proposed approach are compared with several baseline and state-of-the-art algorithms on simulated mixtures and on real recordings in varied scenarios.
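As a worked illustration of the full-rank spatial model described above, the sketch below builds, for a single time-frequency bin, the multichannel Wiener filter that estimates each source's spatial image from its spectral variance v_j and full-rank spatial covariance R_j. The numbers are hypothetical and the code is a schematic of the model only, not the EM estimation algorithms of the thesis.

```python
import numpy as np

def wiener_source_images(x, v, R):
    """Estimate each source's spatial image in one time-frequency bin.

    x : (I,) complex mixture vector (I channels)
    v : (J,) nonnegative spectral variances, one per source
    R : (J, I, I) full-rank spatial covariance matrices
    """
    Sigma_x = sum(v[j] * R[j] for j in range(len(v)))   # mixture covariance
    Sigma_x_inv = np.linalg.inv(Sigma_x)
    return [v[j] * R[j] @ Sigma_x_inv @ x for j in range(len(v))]

# Hypothetical 2-channel, 2-source example for a single bin.
x = np.array([0.8 + 0.3j, -0.2 + 0.5j])
v = np.array([1.0, 0.4])
R = np.array([[[1.0, 0.6], [0.6, 1.0]],
              [[1.0, -0.4], [-0.4, 1.0]]])
images = wiener_source_images(x, v, R)
print(np.allclose(sum(images), x))   # the source images sum back to the mixture
```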
255

Multitemporal Spaceborne Polarimetric SAR Data for Urban Land Cover Mapping

Niu, Xin January 2012 (has links)
Urban land cover mapping represents one of the most important remote sensing applications in the context of rapid global urbanization. In recent years, high resolution spaceborne Polarimetric Synthetic Aperture Radar (PolSAR) has been increasingly used for urban land cover/land-use mapping, since more information can be obtained in multiple polarizations and the collection of such data is less influenced by solar illumination and weather conditions. The overall objective of this research is to develop effective methods to extract accurate and detailed urban land cover information from spaceborne PolSAR data. Six RADARSAT-2 fine-beam polarimetric SAR and three RADARSAT-2 ultra-fine-beam SAR images were used. These data were acquired from June to September 2008 over the north urban-rural fringe of the Greater Toronto Area, Canada. The major land-use/land-cover classes in this area include high-density residential areas, low-density residential areas, industrial and commercial areas, construction sites, roads, streets, parks, golf courses, forests, pasture, water and two types of agricultural crops. In this research, various polarimetric SAR parameters were evaluated for urban land cover mapping. They include the parameters from the Pauli, Freeman and Cloude-Pottier decompositions, the coherency matrix, the intensities of each polarization and their logarithms. Both object-based and pixel-based classification approaches were investigated. Through an object-based Support Vector Machine (SVM) and a rule-based approach, the efficiencies of various PolSAR features and multitemporal data combinations were evaluated. For the pixel-based approach, a contextual Stochastic Expectation-Maximization (SEM) algorithm was proposed. With an adaptive Markov Random Field (MRF) and a modified Multiscale Pappas Adaptive Clustering (MPAC), contextual information was exploited to improve the mapping results. To take full advantage of alternative PolSAR distribution models, a rule-based model selection approach was put forward and compared with a dictionary-based approach. Moreover, the capability of multitemporal fine-beam PolSAR data was compared with that of multitemporal ultra-fine-beam C-HH SAR data. Texture analysis and a rule-based approach exploring object features and spatial relationships were applied for further improvement. Using the proposed approaches, detailed urban land-cover classes and finer urban structures could be mapped with high accuracy, in contrast to most previous studies, which have focused only on the extraction of urban extent or the mapping of very few urban classes. This is also one of the first comparisons of various PolSAR parameters for detailed urban mapping using an object-based approach. Unlike other multitemporal studies, the significance of complementary information from both ascending and descending SAR data and of the temporal relationships in the data was the focus of the multitemporal analysis. Further, the proposed novel contextual analyses could effectively improve the pixel-based classification accuracy and produce homogeneous results with preserved shape details, avoiding over-averaging. The proposed contextual SEM algorithm, one of the first to combine the adaptive MRF and the modified MPAC, was able to mitigate the degeneracy problem of traditional EM algorithms with fast convergence when dealing with many classes. This contextual SEM outperformed the contextual SVM in certain situations with regard to both accuracy and computation time.
Using such a contextual algorithm, the common PolSAR data distribution models, namely Wishart, G0p, Kp and KummerU, were compared for detailed urban mapping in terms of both mapping accuracy and time efficiency. In these comparisons, G0p, Kp and KummerU demonstrated better performance, with higher overall accuracies than Wishart. Nevertheless, the advantages of Wishart and the other models could also be effectively integrated by the proposed rule-based adaptive model selection, while only limited improvement was observed with the dictionary-based selection applied in previous studies. The use of polarimetric SAR data for identifying various urban classes was then compared with the ultra-fine-beam C-HH SAR data. The grey-level co-occurrence matrix textures generated from the ultra-fine-beam C-HH SAR data were found to be more efficient than the corresponding PolSAR textures for separating urban from rural areas. An object-based and pixel-based fusion approach that combines ultra-fine-beam C-HH SAR texture data with PolSAR data was developed. In contrast to many other fusion approaches that have used pixel-based classification results to improve object-based classifications, the proposed rule-based fusion approach, using object features and contextual information, was able to extract several low-backscatter classes such as roads, streets and parks with reasonable accuracy.
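For readers unfamiliar with Stochastic EM, the following simplified sketch shows its core step on a plain one-dimensional Gaussian mixture (no Wishart model, no MRF/MPAC context): instead of the soft E-step of standard EM, each sample is assigned a class drawn at random from its posterior, and the M-step re-estimates class parameters from that hard assignment. It is only meant to convey the SEM mechanics referred to above, not the contextual algorithm of the thesis.

```python
import numpy as np

def sem_gaussian_mixture(x, K, n_iter=50, rng=np.random.default_rng(0)):
    """Stochastic EM for a 1-D K-component Gaussian mixture."""
    mu = rng.choice(x, K)                      # initial means
    sigma = np.full(K, x.std() + 1e-6)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: class posteriors for every sample.
        dens = np.array([pi[k] / sigma[k] *
                         np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                         for k in range(K)]).T
        post = dens / dens.sum(axis=1, keepdims=True)
        # S-step: draw a hard label for each sample from its posterior.
        labels = np.array([rng.choice(K, p=p) for p in post])
        # M-step: re-estimate parameters from the sampled labels.
        for k in range(K):
            members = x[labels == k]
            if members.size:
                pi[k] = members.size / x.size
                mu[k] = members.mean()
                sigma[k] = members.std() + 1e-6
    return pi, mu, sigma

# Hypothetical data: two well-separated clusters.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 200)])
print(sem_gaussian_mixture(x, K=2))
```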
256

Spectral Estimation by Geometric, Topological and Optimization Methods

Enqvist, Per January 2001 (has links)
257

Energy-efficient relay cooperation for lifetime maximization

Zuo, Fangzhi 01 August 2011 (has links)
We study energy-efficient power allocation among relays for lifetime maximization in a dual-hop relay network operated by amplify-and-forward relays with battery limitations. Power allocation algorithms are proposed for three different scenarios. First, we study the relay cooperation case where all the relays jointly support transmissions for a targeted data rate. By exploiting the correlation of time-varying relay channels, we develop a prediction-based relay cooperation method for an optimal power allocation strategy that improves the relay network lifetime over existing methods, which either do not predict the future channel state or assume the current channel state remains static in the future. Next, we consider energy-efficient relay selection for the single source-destination case. Assuming finite transmission power levels, we propose a stochastic shortest path approach which gives the optimal relay selection decision to maximize the network lifetime. Due to its high computational complexity, a suboptimal prediction-based relay selection algorithm, derived directly from the previous problem, is also developed. Finally, we extend our study to the multiple source-destination case, where relay selection needs to be determined for each source-destination pair simultaneously. The network lifetime in the presence of multiple source-destination pairs is defined as the longest time during which all source-destination pairs can maintain the target transmission rate. We design relay-to-destination mapping algorithms to prolong the network lifetime. They all aim at maximizing the perceived network lifetime at the current time slot. An optimal max-min approach and a suboptimal user-priority-based approach are proposed with different levels of computational complexity. / UOIT
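A minimal sketch of the lifetime idea for single-pair relay selection: in each slot, among relays whose channel can support the target rate, pick the one whose remaining battery lasts the most slots at the required transmit power. This greedy max-min heuristic, with hypothetical units and channel model, is for illustration only; it is not the stochastic shortest path or prediction-based algorithms proposed in the thesis.

```python
def select_relay(batteries, channel_gains, target_snr, noise=1.0):
    """Pick the relay maximizing remaining lifetime at the power needed to
    meet the target SNR; returns (relay index, power) or None."""
    best, best_life = None, -1.0
    for i, (b, g) in enumerate(zip(batteries, channel_gains)):
        if g <= 0:
            continue
        power = target_snr * noise / g          # power needed to hit the target
        if power > b:
            continue                            # relay cannot afford this slot
        life = b / power                        # slots this relay could sustain
        if life > best_life:
            best, best_life = i, life
    return None if best is None else (best, target_snr * noise / channel_gains[best])

# Hypothetical example: 3 relays with residual energies and current channel gains.
batteries = [5.0, 2.0, 8.0]
gains = [0.8, 1.5, 0.3]
print(select_relay(batteries, gains, target_snr=2.0))
```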
258

Precoding and Resource Allocation for Multi-user Multi-antenna Broadband Wireless Systems

Khanafer, Ali 06 January 2011 (has links)
This thesis is targeted at precoding methods and resource allocation for the downlink of fixed multi-user multi-antenna broadband wireless systems. We explore different utilizations of precoders in transmission over frequency-selective channels. We first consider the weighted sum-rate (WSR) maximization problem for multi-carrier systems using linear precoding and propose a low-complexity algorithm which exhibits near-optimal performance. Moreover, we offer a novel rate allocation method that utilizes the signal-to-noise-ratio (SNR) gap-to-capacity concept to choose the rates to allocate to each data stream. We then study a single-carrier transmission scheme that overcomes known impairments associated with multi-carrier systems. The proposed scheme utilizes time-reversal space-time block coding (TR-STBC) to orthogonalize the downlink receivers and performs the required pre-equalization using Tomlinson-Harashima precoding (THP). We finally discuss the strengths and weaknesses of the proposed method.
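The SNR-gap concept mentioned above can be summarized in a few lines: with a gap Γ reflecting the chosen coding/modulation and target error rate, the rate loaded onto a stream with signal-to-noise ratio SNR is log2(1 + SNR/Γ). The sketch below applies this per-stream rule with hypothetical numbers (the 8.8 dB gap is a commonly cited placeholder for uncoded QAM at low error rates); it is not the thesis's WSR algorithm.

```python
import math

def gap_rate(snr_linear, gap_db=8.8, max_bits=10):
    """Bits per symbol allocated to one stream under the SNR-gap rule."""
    gap = 10 ** (gap_db / 10.0)                   # gap Γ in linear scale
    bits = math.log2(1.0 + snr_linear / gap)
    return min(math.floor(bits), max_bits)        # integer constellation sizes

# Hypothetical per-stream SNRs (linear scale) after precoding.
snrs = [120.0, 45.0, 9.0, 2.0]
print([gap_rate(s) for s in snrs])
```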
260

Finite-horizon Online Energy-efficient Transmission Scheduling Schemes for Communication Links

Bacinoglu, Tan Baran 01 January 2013 (has links) (PDF)
The proliferation of embedded systems, mobile devices and wireless sensor applications, together with increasing global demand for energy, has directed research attention toward self-sustainable and environmentally friendly systems. In the field of communications, this new trend has pointed out the need for the study of energy-constrained communication and networking. In particular, energy-efficient transmission schemes have been well studied in the literature for various cases. However, fundamental results have been obtained mostly for offline problems, which are not applicable to practical implementations. In contrast, this thesis focuses on the online counterparts of offline transmission scheduling problems and provides a theoretical background for energy-efficient online transmission schemes. The proposed heuristics, the Expected Threshold and Expected Water Level policies, promise an adequate solution that can adapt to short-time-scale dynamics while being computationally efficient.
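As a rough illustration of threshold-style online scheduling, the sketch below implements a much simpler rule than those named above: in each slot, transmit the pending data only if the channel gain exceeds a fixed threshold, or if the deadline forces transmission, paying a Shannon-style energy cost that falls with the gain. The threshold, channel model and energy model are hypothetical simplifications, not the Expected Threshold or Expected Water Level policies of the thesis.

```python
import random

def threshold_schedule(bits, horizon, threshold, rng=random):
    """Online scheduling over a finite horizon: send everything when the
    channel is good enough, or at the deadline if data is still pending."""
    remaining, total_energy, log = bits, 0.0, []
    for slot in range(horizon):
        gain = rng.expovariate(1.0)               # hypothetical fading gain
        last_chance = (slot == horizon - 1)
        if remaining > 0 and (gain >= threshold or last_chance):
            energy = (2 ** remaining - 1) / gain  # Shannon-style energy cost
            total_energy += energy
            log.append((slot, gain, energy))
            remaining = 0
    return total_energy, log

random.seed(0)
print(threshold_schedule(bits=4, horizon=10, threshold=1.5))
```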
