261

Security versus Power Consumption in Wireless Sensor Networks

Fötschl, Christine, Rainer, Stefan January 2006 (has links)
X3 C is a Swedish company that develops a worldwide goods-tracking system using ARFID tags placed on every item to be delivered, with base stations acting as gateways in a wireless sensor network. The requirement of a long lifespan for the ARFID tags made it difficult to implement security. First, possible security mechanisms and their power consumption were evaluated by measuring the avalanche effect and character frequency of the symmetric algorithms Blowfish, RC2 and XTEA. Second, the CPU time needed by each algorithm to encrypt a demo plaintext was measured and analyzed. Summarizing both analyses, the XTEA algorithm, run in CBC mode, is the recommendation for the XC ARFID tags. The testing processes and the results are presented in detail in this thesis.
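A minimal sketch of the kind of avalanche-effect measurement described above, using a pure-Python XTEA round function: flip one plaintext bit and count how many ciphertext bits change. The key, block values and bit choice are illustrative assumptions, not values from the thesis.

```python
def xtea_encrypt(v0, v1, key, rounds=32):
    """Encrypt one 64-bit block (two 32-bit halves) with XTEA."""
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & mask
        s = (s + delta) & mask
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & mask
    return v0, v1

def avalanche(v0, v1, key):
    """Fraction of ciphertext bits that change when one plaintext bit flips."""
    c0, c1 = xtea_encrypt(v0, v1, key)
    d0, d1 = xtea_encrypt(v0 ^ 1, v1, key)   # flip the lowest plaintext bit
    diff = bin(((c0 ^ d0) << 32) | (c1 ^ d1)).count("1")
    return diff / 64

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)  # illustrative 128-bit key
print(avalanche(0xDEADBEEF, 0x00C0FFEE, key))            # ideally close to 0.5
```

A well-diffusing cipher should move this fraction close to 0.5; repeating the measurement over many random blocks and bit positions gives the averaged figure an evaluation like the one above would report.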
262

Metareasoning about propagators for constraint satisfaction

Thompson, Craig Daniel Stewart 11 July 2011 (has links)
Given the breadth of constraint satisfaction problems (CSPs) and the wide variety of CSP solvers, it is often very difficult to determine a priori which solving method is best suited to a problem. This work explores the use of machine learning to predict which solving method will be most effective for a given problem. We use four different problem sets to identify the CSP attributes that can be used to decide which solving method should be applied. After choosing an appropriate set of attributes, we determine how well J48 decision trees can predict which solving method to apply. Furthermore, we take a cost-sensitive approach that emphasizes problem instances where there is a great difference in runtime between algorithms. We also attempt to use information gained on one class of problems to inform decisions about a second class of problems. Finally, we show that the additional cost of deciding which method to apply is outweighed by the time savings compared with applying the same solving method to all problem instances.
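J48 is Weka's decision-tree learner; as a rough illustration of the same cost-sensitive algorithm-selection idea, the sketch below weights each training instance by the runtime gap between two hypothetical solvers and trains a scikit-learn decision tree. The feature names, runtime model and parameters are invented for illustration and are not the thesis's setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical CSP features: [variable count, constraint density, avg. domain size]
X = rng.uniform(0, 1, size=(200, 3))
# Hypothetical runtimes (seconds) of two solving methods on each instance.
runtime_a = np.exp(3 * X[:, 1]) + rng.exponential(0.5, 200)
runtime_b = np.exp(3 * X[:, 0]) + rng.exponential(0.5, 200)

y = (runtime_b < runtime_a).astype(int)   # label: which solver was faster
weights = np.abs(runtime_a - runtime_b)   # cost-sensitive: big runtime gaps matter more

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X, y, sample_weight=weights)      # misclassifying costly instances is penalized more

print("training accuracy:", clf.score(X, y))
```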
263

Joint Detection and Estimation in Cooperative Communication Systems with Correlated Channels Using EM Algorithm

Lin, Hung-Fu 19 July 2010 (has links)
In this thesis, we consider the distributed detection problem in cooperative communication networks when the channel state information (CSI) is unknown. The amplify-and-forward relay strategy is considered. Since the CSI is assumed to be unknown to the system, a joint detection and estimation approach is adopted in this work. The proposed scheme differs from existing joint detection and estimation schemes in that it utilizes a distributed approach, which exploits node cooperation and achieves better system performance in cooperative communication networks. Moreover, in contrast to existing channel estimation and symbol detection schemes, the proposed scheme is developed under the assumption that the data transmitted from the source to each relay node undergoes a correlated fading channel. We derive the joint detection and estimation rules for our problem using the expectation-maximization (EM) algorithm. Simulation results show that the proposed scheme performs well. Moreover, the results show that the proposed iterative algorithm converges quickly, which implies the proposed scheme can work well in real-time applications.
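The thesis's EM derivation covers correlated relay channels; the toy sketch below shows the same joint detection/estimation idea in its simplest form, with an unknown scalar channel gain h and unknown BPSK symbols. All model parameters are assumptions for illustration, not the thesis's system model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_i = h * s_i + noise, with the symbols s_i in {-1, +1} and h both unknown.
n, h_true, sigma = 200, 0.8, 0.5
s_true = rng.choice([-1.0, 1.0], size=n)
y = h_true * s_true + sigma * rng.normal(size=n)

h = 0.1                                    # crude initial channel estimate
for _ in range(30):
    # E-step: posterior mean of each symbol given the current channel estimate.
    e_s = np.tanh(h * y / sigma**2)
    # M-step: re-estimate the channel gain (E[s_i^2] = 1 for BPSK).
    h = np.mean(e_s * y)

s_hat = np.sign(np.tanh(h * y / sigma**2))
print("estimated h:", round(h, 3), " symbol error rate:", np.mean(s_hat != s_true))
```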
264

Algorithms for Scaled String Indexing and LCS Variants

Peng, Yung-Hsing 20 July 2010 (has links)
Problems related to string indexing and sequence analysis have been widely studied for a long time. Recently, researchers have turned to extended versions of these problems, which provide more realistic applications. In this dissertation, we focus on three problems of recent interest: (1) the indexing problem for scaled strings, (2) the merged longest common subsequence problem and its variant with blocks, and (3) the sequence alignment problem with weighted constraints. The indexing problem for scaled strings asks one to preprocess a text string T so that the matched positions of a pattern string P in T, with some scale α applied to P, can be reported efficiently. In this dissertation, we propose efficient algorithms for indexing real-scaled strings, discretely scaled strings, and proportionally scaled strings. Our indexing algorithms either significantly improve on previous results or match the best known results. The merged longest common subsequence (merged LCS) problem aims to detect the interleaving relationship between sequences, which has important applications in genomic and signal comparison. In this dissertation, we propose improved algorithms for finding the merged LCS; these algorithms are more efficient than previous results, especially for large alphabets. Finally, the sequence alignment problem with weighted constraints is a problem newly proposed in this dissertation. For this new problem, we first propose an efficient solution and then show that the concept of weighted constraints can be used to solve many constraint-related problems on sequences. Therefore, the results in this dissertation make significant contributions to the field of string indexing and sequence analysis.
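The merged-LCS algorithms in the dissertation are more involved; as background, here is the standard dynamic-programming recurrence for the plain LCS of two strings, a minimal sketch with example inputs chosen arbitrarily.

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(|a|*|b|) dynamic program for the longest common subsequence."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1          # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # -> 4 ("GTAB")
```

The merged variant asks how well one sequence can be explained as an interleaving of two others, which changes the state space of the recurrence but keeps the same table-filling flavor.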
265

Profit-Based Unit Commitment and Risk Analysis

Gow, Hong-Jey 27 July 2010 (has links)
Under deregulation, power-market participants face competition and more trading opportunities in the power industry. In the electricity market, a bidding model is adopted instead of the cost model, and GenCos try to maximize profit under the bidding model according to power demand. Electricity becomes a commodity, and its price varies with power demand, bidding strategy and the grid. GenCos perform unit commitment in a price-volatile environment to maximize profit. In a deregulated environment, the Independent System Operator (ISO) is usually responsible for the electricity auction and secure power scheduling. ISO operation may involve many kinds of risk, including price-volatility risk, bidding risk and congestion risk. For some markets, it is very important how GenCos determine the optimal unit commitment schedule while considering risk management; good risk analysis helps a GenCo maximize profit and pursue sustainable development. In this study, price forecasting is developed to provide information for power producers to develop bidding strategies that maximize profit. A Profit-Based Unit Commitment (PBUC) model is also derived, and an Enhanced Immune Algorithm (EIA) is developed to solve the PBUC problem. Finally, the Value-at-Risk (VaR) of GenCos is computed at a preset confidence level. Simulation results provide a risk-management rule for finding an optimal risk-control strategy that maximizes profit and raises a GenCo's competitiveness against other players.
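A minimal sketch of the Value-at-Risk idea mentioned above: given simulated profits for a candidate commitment schedule, VaR at a chosen confidence level is a quantile of the loss distribution. The profit distribution and the 95% level are assumptions for illustration, not figures from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical simulated profits (in $k) for one unit-commitment schedule
# under many sampled price scenarios.
profits = rng.normal(loc=120.0, scale=35.0, size=10_000)

confidence = 0.95
# VaR: the loss threshold exceeded with probability (1 - confidence),
# i.e. the negated lower quantile of the profit distribution.
var = -np.quantile(profits, 1.0 - confidence)

print(f"95% VaR: {var:.1f}  "
      f"(a negative value means even the worst 5% of scenarios stay profitable)")
```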
266

Berth Schedule Planning of the Kaohsiung Port by Genetic Algorithms

Tsai, An-Hsiou 09 September 2011 (has links)
For a commercial port, scheduling the public berths efficiently is an important issue. Since the berth schedule affects the utilization of the port, in this thesis we apply a genetic algorithm to schedule the public berths so as to minimize the total waiting time of vessels. In the initialization process, we encode the chromosome based on wharf characteristics in order to avoid assigning vessels to inappropriate wharves. After the mutation process, we also adjust the usage of wharves to improve the convergence speed. Simulation results show that the proposed algorithm can assign vessels to proper berths as soon as they arrive. Compared with other genetic algorithms, the proposed algorithm achieves better convergence speed and solution quality.
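A minimal genetic-algorithm sketch in the spirit described above: a chromosome assigns each vessel to a berth, and fitness is the total waiting time when each berth serves its vessels in arrival order. Vessel data, operators and parameters are illustrative assumptions, not the thesis's wharf-aware encoding.

```python
import random

random.seed(3)

# Hypothetical vessels: (arrival_time, handling_time) in hours.
vessels = [(random.uniform(0, 48), random.uniform(2, 10)) for _ in range(20)]
NUM_BERTHS = 4

def total_waiting_time(assignment):
    """Sum of waiting times if each berth serves its vessels in arrival order."""
    wait = 0.0
    for b in range(NUM_BERTHS):
        free_at = 0.0
        queue = sorted(vessels[i] for i, a in enumerate(assignment) if a == b)
        for arrival, handling in queue:
            start = max(arrival, free_at)
            wait += start - arrival
            free_at = start + handling
    return wait

def evolve(pop_size=60, generations=200, mutation_rate=0.1):
    pop = [[random.randrange(NUM_BERTHS) for _ in vessels] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_waiting_time)
        survivors = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, len(vessels))      # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(len(child)):                  # random-reset mutation
                if random.random() < mutation_rate:
                    child[i] = random.randrange(NUM_BERTHS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=total_waiting_time)

best = evolve()
print("best total waiting time (h):", round(total_waiting_time(best), 1))
```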
267

A time integration scheme for stress-temperature dependent viscoelastic behaviors of isotropic materials

Khan, Kamran-Ahmed 15 May 2009 (has links)
A recursive-iterative algorithm is developed for predicting the nonlinear viscoelastic behavior of isotropic materials that belong to the class of thermorheologically complex materials (TCMs). The algorithm is derived from implicit stress-integration solutions within general displacement-based FE structural analyses for small deformations and uncoupled thermo-mechanical problems. A previously developed recursive-iterative algorithm for a stress-dependent hereditary integral model, due to Haj-Ali and Muliana, is modified to include time-temperature effects. The recursive formula avoids the need to store entire strain histories at each Gaussian integration point. Two iterative procedures, the fixed-point and Newton-Raphson methods, are examined within the recursive algorithm. Furthermore, a consistent tangent stiffness matrix is formulated to accelerate convergence and avoid divergence. The efficiency and accuracy of the proposed algorithm are evaluated using available experimental data and several structural analyses. The performance of the proposed algorithm under multi-axial conditions is verified against analytical solutions for the creep response of a plate with a hole. Next, the recursive-iterative algorithm is used to predict the overall response of a single lap joint. Numerical simulations of time-dependent crack propagation in adhesively bonded joints are also presented; for this purpose, the recursive algorithm is implemented in cohesive elements. A numerical assessment of TCM and thermorheologically simple material (TSM) behaviors is also performed. The results show that TCM models can describe thermo-viscoelastic behavior under general loading histories, while TSM models are limited to isothermal conditions. The proposed numerical algorithm can easily be used within a micromechanical model for predicting overall composite responses; examples are shown for composites reinforced with solid spherical particles. Detailed unit-cell FE models of the composite systems are generated to verify the capability of the micromechanical model for predicting the overall nonlinear viscoelastic behavior.
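A minimal sketch contrasting the two local iteration schemes named above, fixed-point versus Newton-Raphson, on a toy scalar residual standing in for a stress-update equation. The residual function, rearrangement and tolerance are assumptions for illustration, not the thesis's hereditary-integral equations.

```python
import math

# Toy residual r(x) = x - 0.5*exp(-x) - 1, whose root plays the role of the
# converged stress increment at a material point.
def residual(x):
    return x - 0.5 * math.exp(-x) - 1.0

def d_residual(x):
    return 1.0 + 0.5 * math.exp(-x)

def fixed_point(x=0.0, tol=1e-10, max_iter=100):
    for k in range(max_iter):
        x_new = 1.0 + 0.5 * math.exp(-x)   # residual rearranged as x = g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def newton_raphson(x=0.0, tol=1e-10, max_iter=100):
    for k in range(max_iter):
        dx = -residual(x) / d_residual(x)  # uses the tangent of the residual
        x += dx
        if abs(dx) < tol:
            return x, k + 1
    return x, max_iter

print("fixed point   :", fixed_point())      # (root, iterations)
print("Newton-Raphson:", newton_raphson())   # typically far fewer iterations
```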
268

Application of a spatially referenced water quality model to predict E. coli flux in two Texas river basins

, Deepti 15 May 2009 (has links)
Water quality models are applied to assess the various processes affecting the concentrations of contaminants in a watershed. SPAtially Referenced Regression On Watershed attributes (SPARROW) is a nonlinear-regression-based approach for predicting the fate and transport of contaminants in river basins. In this research, SPARROW was applied to the Guadalupe and San Antonio River Basins of Texas to assess E. coli contamination. Since SPARROW relies on measured contaminant concentrations collected at monitoring stations, the effect of the location and selection of the monitoring stations was analyzed. The results of the SPARROW application were studied in detail to evaluate the contributions from the statistically significant sources, and for verification the results were compared with the 2000 Clean Water Act 303(d) list. Further, a methodology for maintaining monitoring records of the highly contaminated areas in the watersheds was explored using a genetic algorithm. The importance of the available scale and detail of explanatory variables (sources, land-water delivery, and reservoir/stream attenuation factors) in predicting the water quality processes was also analyzed, and the effect of uncertainty in the monitored records on the SPARROW application was discussed. SPARROW and the genetic algorithm were then used together to design a monitoring network for the study area. The results of this study show that the SPARROW model can be used successfully to predict pathogen contamination of rivers and that SPARROW can be applied to design monitoring networks for the basins.
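SPARROW itself is a specialized nonlinear regression over watershed attributes; as a rough illustration of that model class, the sketch below fits a simple source-times-exponential-decay flux model with SciPy. The functional form, variable names and synthetic data are assumptions for illustration, not SPARROW's actual specification.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

# Synthetic stream reaches: a source term and a travel time along the network.
source = rng.uniform(1, 10, 80)       # e.g. a livestock-density proxy
travel_time = rng.uniform(0, 5, 80)   # days of in-stream transport

def flux_model(X, beta, k):
    """Load = beta * source * exp(-k * travel_time): delivery with first-order decay."""
    src, tt = X
    return beta * src * np.exp(-k * tt)

true_beta, true_k = 2.5, 0.4
flux = flux_model((source, travel_time), true_beta, true_k) * rng.lognormal(0, 0.2, 80)

(beta_hat, k_hat), _ = curve_fit(flux_model, (source, travel_time), flux, p0=(1.0, 0.1))
print(f"estimated source coefficient = {beta_hat:.2f}, decay rate k = {k_hat:.2f}")
```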
269

An algorithm for identifying clusters of functionally related genes in genomes

Yi, Gang Man 15 May 2009 (has links)
An increasing body of literature shows that genomes of eukaryotes can contain clusters of functionally related genes. Most approaches to identifying gene clusters utilize microarray data or metabolic pathway databases to find groups of genes on chromosomes that are linked by common attributes. A generalized method that can find gene clusters, regardless of the mechanism of origin, would provide researchers with an unbiased method for finding clusters and studying the evolutionary forces that give rise to them. I present an algorithm to identify gene clusters in eukaryotic genomes that utilizes functional categories defined in graph-based vocabularies such as the Gene Ontology (GO). Clusters identified in this manner need only have a common function and are not constrained by gene expression or other properties. I tested the algorithm by analyzing the genomes of a representative set of species, and identified species-specific variation in the percentage of clustered genes as well as in properties of gene clusters, including size distribution and functional annotation. These properties may be diagnostic of the evolutionary forces that lead to the formation of gene clusters. The approach finds all gene clusters in the data set and ranks them by their likelihood of occurrence by chance.
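A minimal sketch of the general idea, scanning an ordered gene list for windows in which several neighboring genes share a GO term. The gene order, annotations, window size and threshold are invented for illustration and do not reproduce the dissertation's algorithm or its statistical ranking.

```python
from collections import Counter

# Hypothetical genes in chromosomal order, each annotated with GO term IDs.
genome = [
    ("geneA", {"GO:0006412"}), ("geneB", {"GO:0006412", "GO:0008152"}),
    ("geneC", {"GO:0006412"}), ("geneD", {"GO:0007165"}),
    ("geneE", {"GO:0008152"}), ("geneF", {"GO:0008152"}),
    ("geneG", {"GO:0008152"}), ("geneH", {"GO:0007165"}),
]

def shared_term_windows(genes, window=3, min_hits=3):
    """Report windows in which at least `min_hits` genes share one GO term."""
    clusters = []
    for start in range(len(genes) - window + 1):
        block = genes[start:start + window]
        counts = Counter(term for _, terms in block for term in terms)
        for term, hits in counts.items():
            if hits >= min_hits:
                clusters.append((term, [name for name, _ in block]))
    return clusters

for term, members in shared_term_windows(genome):
    print(term, members)
```

A full method would also score each reported window against the chance of seeing that many shared annotations in a random gene ordering, which is how clusters can be ranked by likelihood of occurrence by chance.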
270

Model-based Pre-processing in Protein Mass Spectrometry

Wagaman, John C. 2009 December 1900 (has links)
The discovery of proteomic information through the use of mass spectrometry (MS) has been an active area of research in the diagnosis and prognosis of many types of cancer. This process involves feature selection through peak detection but is often complicated by many forms of non-biological bias. The need to extract biologically relevant peak information from MS data has resulted in the development of statistical techniques to aid in spectrum pre-processing. Baseline estimation and normalization are important pre-processing steps because the subsequent quantification of peak heights depends on the baseline estimate. This dissertation introduces a mixture model to estimate the baseline and peak heights simultaneously through the expectation-maximization (EM) algorithm and a penalized likelihood approach. Our model-based pre-processing performs well on raw, unnormalized data with few subjective inputs. We also propose a model-based normalization solution for use in subsequent classification procedures, where misclassification results compare favorably with existing methods of normalization. The performance of our pre-processing method is evaluated using popular matrix-assisted laser desorption/ionization (MALDI) and surface-enhanced laser desorption/ionization (SELDI) datasets as well as through simulation.
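The dissertation's mixture model estimates the baseline and peak heights jointly with a penalized likelihood; as a much simpler stand-in for the mixture idea, the sketch below separates low-intensity baseline-like readings from high-intensity peak-like readings in a synthetic spectrum with scikit-learn's GaussianMixture. All signal shapes and parameters are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Synthetic spectrum: a slowly decaying baseline plus a few Gaussian peaks.
mz = np.linspace(1000, 10000, 2000)
baseline = 50 * np.exp(-mz / 4000)
peaks = sum(a * np.exp(-0.5 * ((mz - c) / 15) ** 2)
            for a, c in [(40, 2500), (70, 4200), (55, 7300)])
intensity = baseline + peaks + rng.normal(0, 2, mz.size)

# Two-component mixture over intensities: roughly "baseline-like" vs "peak-like".
gm = GaussianMixture(n_components=2, random_state=0).fit(intensity.reshape(-1, 1))
peak_component = np.argmax(gm.means_.ravel())
labels = gm.predict(intensity.reshape(-1, 1))

print("points flagged as peak-like:", int(np.sum(labels == peak_component)))
```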
