  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
951

Automatically Identifying Configuration Files

Huang, Zhen 19 January 2010 (has links)
Systems can become misconfigured for a variety of reasons, such as operator errors or buggy patches. When a misconfiguration is discovered, usually the first order of business is to restore availability, often by undoing the misconfiguration. To simplify this task, we propose Ocasta to automatically determine which files contain configuration state. Ocasta uses a novel similarity metric to measure how similar a file's versions are to each other, and a set of filters to eliminate non-persistent files from consideration. These two mechanisms enable Ocasta to identify all 72 configuration files out of 2363 versioned files from 6 common applications in two user traces, while mistaking only 33 non-configuration files as configuration files. Ocasta allows a versioning file system to eliminate roughly 66% of non-configuration file versions from its logs, thus reducing the number of file versions that a user must manually examine to recover from a misconfiguration.
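The abstract does not give Ocasta's exact similarity metric; as a hypothetical sketch, a version-similarity score could be obtained by averaging the edit similarity between consecutive versions of a file (the function names, example file contents, and the choice of `SequenceMatcher` are illustrative assumptions, not from the thesis):

```python
from difflib import SequenceMatcher

def version_similarity(versions):
    """Average similarity (0..1) between consecutive versions of one file."""
    if len(versions) < 2:
        return 1.0
    ratios = [
        SequenceMatcher(None, a, b).ratio()
        for a, b in zip(versions, versions[1:])
    ]
    return sum(ratios) / len(ratios)

# Configuration files tend to change incrementally, so versions stay similar;
# log- or cache-like files churn, so their versions diverge.
config_versions = ["port=80\nhost=a\n", "port=8080\nhost=a\n", "port=8080\nhost=b\n"]
log_versions = ["entry 1\n", "totally different content\n", "xyz 999\n"]

assert version_similarity(config_versions) > version_similarity(log_versions)
```

A file whose score stays high across its version history would then be a candidate configuration file, subject to the non-persistence filters the abstract mentions.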
952

The Evaluator Effect in Heuristic Evaluation: A Preliminary Study of End-users as Evaluators

Weinstein, Peter 27 November 2012 (has links)
Heuristic Evaluation (HE) is a popular usability inspection method, yet little is known about the effect evaluators have on the outcome of HE. One potentially important characteristic of evaluators is their end-user status, that is, whether or not they are end-users for whom the interface is designed. I completed a detailed review of the HE literature, combined sources to develop an explicit method for conducting an HE, and used it to train HE novices from different work domains. Using these methods I conducted a preliminary randomized crossover study (n=6) of the effect of end-user status during the inspection and merging stages of HE. I estimate that a larger study of approximately 148 end-users would be needed to test hypotheses regarding end-user status. I also demonstrated a novel measure of the effect of end-user status for the merging stage of HE, which I call the measure of matching similarity (MMS).
953

Syntactic and Semantic Analysis and Visualization of Unstructured English Texts

Karmakar, Saurav 14 December 2011 (has links)
People have complex thoughts, and they often express those thoughts with complex sentences in natural language. This complexity may facilitate efficient communication among an audience with a shared knowledge base, but for a different or new audience such compositions become cumbersome to understand and analyze. Analyzing such compositions using syntactic or semantic measures is challenging and forms a foundational step for natural language processing. In this dissertation I explore and propose a number of new techniques to analyze and visualize the syntactic and semantic patterns of unstructured English texts. The syntactic analysis is done through a proposed visualization technique that categorizes and compares different English compositions based on their reading complexity metrics. For the semantic analysis I use Latent Semantic Analysis (LSA) to analyze the hidden patterns in complex compositions. I have used this technique to analyze comments from a social visualization web site to detect irrelevant ones (e.g., spam). The patterns of collaboration are also studied through statistical analysis. Word sense disambiguation is used to determine the correct sense of a word in a sentence or composition. Applying a textual similarity measure, based on different word similarity measures and word sense disambiguation, to collaborative text snippets from a social collaborative environment reveals a direction for untangling the complex hidden patterns of collaboration.
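As a minimal illustration of the LSA step described above (the tiny term-document matrix, vocabulary, and "spam" comment are invented for the example, not the dissertation's data), a truncated SVD projects comments into a latent semantic space where off-topic comments stand apart:

```python
import numpy as np

# Tiny term-document matrix: rows = terms, columns = comments.
# Comments 0 and 1 share on-topic vocabulary; comment 2 is spam-like.
terms = ["chart", "data", "color", "cheap", "pills"]
X = np.array([
    [2, 1, 0],   # chart
    [1, 2, 0],   # data
    [1, 1, 0],   # color
    [0, 0, 3],   # cheap
    [0, 0, 2],   # pills
], dtype=float)

# LSA: a rank-k truncated SVD gives a k-dimensional latent vector per document.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # shape (n_docs, k)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# On-topic comments are close in latent space; the spam-like one is not.
assert cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2])
```

Flagging comments whose latent-space similarity to the rest of the discussion falls below a threshold is one plausible route to the spam detection the abstract describes.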
954

SSIM-Inspired Quality Assessment, Compression, and Processing for Visual Communications

Rehman, Abdul January 2013 (has links)
Objective Image and Video Quality Assessment (I/VQA) measures predict image/video quality as perceived by human beings - the ultimate consumers of visual data. Existing research in the area is mainly limited to benchmarking and monitoring of visual data. The use of I/VQA measures in the design and optimization of image/video processing algorithms and systems is more desirable, challenging and fruitful but has not been well explored. Among the recently proposed objective I/VQA approaches, the structural similarity (SSIM) index and its variants have emerged as promising measures that show superior performance as compared to the widely used mean squared error (MSE) and are computationally simple compared with other state-of-the-art perceptual quality measures. In addition, SSIM has a number of desirable mathematical properties for optimization tasks. The goal of this research is to break the tradition of using MSE as the optimization criterion for image and video processing algorithms. We tackle several important problems in visual communication applications by exploiting SSIM-inspired design and optimization to achieve significantly better performance. Firstly, the original SSIM is a Full-Reference IQA (FR-IQA) measure that requires access to the original reference image, making it impractical in many visual communication applications. We propose a general purpose Reduced-Reference IQA (RR-IQA) method that can estimate SSIM with high accuracy with the help of a small number of RR features extracted from the original image. Furthermore, we introduce and demonstrate the novel idea of partially repairing an image using RR features. Secondly, image processing algorithms such as image de-noising and image super-resolution are required at various stages of visual communication systems, starting from image acquisition to image display at the receiver. 
We incorporate SSIM into the framework of sparse signal representation and non-local means methods and demonstrate improved performance in image de-noising and super-resolution. Thirdly, we incorporate SSIM into the framework of perceptual video compression. We propose an SSIM-based rate-distortion optimization scheme and an SSIM-inspired divisive optimization method that transforms the DCT domain frame residuals to a perceptually uniform space. Both approaches demonstrate the potential to largely improve the rate-distortion performance of state-of-the-art video codecs. Finally, in real-world visual communications, it is a common experience that end-users receive video with significantly time-varying quality due to the variations in video content/complexity, codec configuration, and network conditions. How human visual quality of experience (QoE) changes with such time-varying video quality is not yet well-understood. We propose a quality adaptation model that is asymmetrically tuned to increasing and decreasing quality. The model improves upon the direct SSIM approach in predicting subjective perceptual experience of time-varying video quality.
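For reference, the single-window form of the SSIM index can be sketched as follows (the real measure is computed over local windows and averaged; the constants K1 and K2 are the commonly used defaults, and the test images here are synthetic):

```python
import numpy as np

def ssim_global(x, y, L=255, K1=0.01, K2=0.03):
    """Single-window SSIM combining luminance, contrast, and structure terms."""
    x, y = x.astype(float), y.astype(float)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()   # covariance between the two images
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16))
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)

assert abs(ssim_global(img, img) - 1.0) < 1e-9   # identical images score 1
assert ssim_global(img, noisy) < 1.0             # distortion lowers the score
```

Unlike MSE, this expression is differentiable and bounded, which is what makes the SSIM-based rate-distortion and de-noising optimizations described above tractable.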
955

Molecular quantum similarity in QSAR: applications in computer-aided molecular design

Gallegos Saliner, Ana 29 June 2004 (has links)
The present thesis is centred on the use of Quantum Similarity Theory to calculate molecular descriptors. These descriptors are used as structural parameters to derive correlations between the structure and the experimental function or activity for a set of compounds. Quantitative Structure-Activity Relationship studies are of special interest for rational Computer-Aided Molecular Design and, in particular, for Computer-Aided Drug Design. The thesis is structured in four distinct parts. The first two blocks review the foundations of quantum similarity theory, as well as the topological approximation based on classical graph theory; both theories are used to calculate the molecular descriptors. The second block also covers the programming and implementation of software to compute the so-called Topological Quantum Similarity Indices. The third section details the basis of Quantitative Structure-Activity Relationships and, finally, the last section gathers the application results obtained for different biological systems.
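As a toy illustration of a quantum similarity measure of the kind used to build such descriptors (the 1-D Gaussians stand in for real molecular electron densities, and this is not the thesis's implementation), the Carbó index normalizes the overlap integral between two density functions:

```python
import numpy as np

# Model each molecule's electron density as a 1-D Gaussian: a toy stand-in
# for the first-order density functions used in quantum similarity theory.
def density(mu, sigma):
    return lambda x: np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) \
                     / (sigma * np.sqrt(2 * np.pi))

def overlap(f, g, x):
    """Numerical overlap integral Z_AB = integral of f(x) * g(x) dx."""
    return float(np.sum(f(x) * g(x)) * (x[1] - x[0]))

def carbo_index(f, g, x):
    """Carbo similarity index Z_AB / sqrt(Z_AA * Z_BB), in [0, 1]."""
    return overlap(f, g, x) / np.sqrt(overlap(f, f, x) * overlap(g, g, x))

x = np.linspace(-10, 10, 4001)
a = density(0.0, 1.0)
b = density(0.5, 1.0)   # structurally similar "molecule"
c = density(4.0, 1.0)   # dissimilar "molecule"

assert carbo_index(a, a, x) > 0.999
assert carbo_index(a, b, x) > carbo_index(a, c, x)
```

A matrix of such pairwise indices over a compound set is one standard way similarity-derived descriptors enter a QSAR regression.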
956

Multiple hypothesis testing and multiple outlier identification methods

Yin, Yaling 13 April 2010 (has links)
Traditional multiple hypothesis testing procedures, such as that of Benjamini and Hochberg, fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this thesis it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses, as proposed by Black, gives a procedure with superior power.

Multiple hypothesis testing can also be applied to regression diagnostics. In this thesis, a Bayesian method is proposed to test multiple hypotheses, of which the i-th null and alternative hypotheses are that the i-th observation is not an outlier versus that it is, for i = 1, ..., m. In the proposed Bayesian model, it is assumed that outliers have a mean shift, where the proportion of outliers and the mean shift follow a Beta prior distribution and a normal prior distribution, respectively. It is proved in the thesis that for the proposed model, when there exists more than one outlier, the marginal distributions of the deletion residual of the i-th observation under both null and alternative hypotheses are doubly noncentral t distributions.
The outlyingness of the i-th observation is measured by the marginal posterior probability that the i-th observation is an outlier given its deletion residual. An importance sampling method is proposed to calculate this probability. This method requires computing the density of the doubly noncentral F distribution, which is approximated using Patnaik's approximation. An algorithm is proposed in this thesis to examine the accuracy of Patnaik's approximation. Comparing this algorithm's output with Patnaik's approximation shows that the latter can save massive computation time without losing much accuracy.

The proposed Bayesian multiple outlier identification procedure is applied to some simulated data sets. Various simulation and prior parameters are used to study the sensitivity of the posteriors to the priors. The area under the ROC curve (AUC) is calculated for each combination of parameters. A factorial design analysis on AUC is carried out by choosing various simulation and prior parameters as factors. The resulting AUC values are high for the various selected parameters, indicating that the proposed method can identify the majority of outliers within tolerable error. The results of the factorial design show that the priors do not have much effect on the marginal posterior probability as long as the sample size is not too small.

In this thesis, the proposed Bayesian procedure is also applied to a real data set obtained by Kanduc et al. in 2008. The proteomes of thirty viruses examined by Kanduc et al. are found to share a high number of pentapeptide overlaps with the human proteome. In a linear regression analysis of the level of viral overlap with the human proteome against the length of the viral proteome, Kanduc et al. report that among the thirty viruses, human T-lymphotropic virus 1, Rubella virus, and hepatitis C virus present relatively higher levels of overlap with the human proteome than the predicted level. The results obtained using the proposed procedure indicate that the four viruses with extremely large sizes (Human herpesvirus 4, Human herpesvirus 6, Variola virus, and Human herpesvirus 5) are more likely to be the outliers than the three reported viruses. The results with the four extreme viruses deleted confirm the claim of Kanduc et al.
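The Benjamini-Hochberg step-up procedure discussed above can be sketched as follows (the p-values are invented for illustration):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean rejection mask controlling the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m       # q * i / m for the i-th smallest
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))      # largest i with p_(i) <= q*i/m
        reject[order[:k + 1]] = True               # reject all smaller p-values too
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.5, 0.7, 0.9]
mask = benjamini_hochberg(pvals, q=0.05)
# With these p-values, only the two smallest survive the step-up thresholds.
assert mask.sum() == 2
```

Storey's fixed-rejection-region variant instead fixes the rejection threshold and estimates the resulting FDR, which is the design difference the thesis's power comparison turns on.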
957

Analysis Of Koch Fractal Antennas

Irgin, Umit 01 June 2009 (has links) (PDF)
A fractal is a recursively generated object describing a family of complex shapes that possess an inherent self-similarity in their geometrical structure. When used in antenna engineering, fractal geometries provide multi-band characteristics and lower resonance frequencies by enhancing the space-filling property. Moreover, by utilizing fractal arrays, control of side-lobe levels and radiation patterns can be realized. In this thesis, the performance of the Koch curve as an antenna is investigated. Since fractals are complex shapes, there is no well-established mathematical formulation for obtaining the radiation properties and frequency response of Koch curve antennas directly. Koch curve antennas became famous because they exhibit better frequency response than their Euclidean counterparts. The effect of the parameters of the Koch geometry on antenna performance is studied in this thesis. Moreover, modified Koch geometries are generated to investigate the relation between fractal properties and antenna radiation and frequency characteristics.
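The space-filling property mentioned above comes from the Koch generator, which replaces each segment with four segments of one-third length. A minimal geometric sketch (the antenna electromagnetics are of course not modeled here):

```python
import math

def koch_segment(p1, p2, depth):
    """Recursively replace a segment with the 4-segment Koch generator."""
    if depth == 0:
        return [p1, p2]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
    a = (x1 + dx, y1 + dy)                  # 1/3 point
    b = (x1 + 2 * dx, y1 + 2 * dy)          # 2/3 point
    # Apex of the bump: rotate the middle third by 60 degrees about a.
    ang = math.radians(60)
    tx, ty = b[0] - a[0], b[1] - a[1]
    apex = (a[0] + tx * math.cos(ang) - ty * math.sin(ang),
            a[1] + tx * math.sin(ang) + ty * math.cos(ang))
    pts = []
    for s, e in [(p1, a), (a, apex), (apex, b), (b, p2)]:
        pts.extend(koch_segment(s, e, depth - 1)[:-1])
    pts.append(p2)
    return pts

def curve_length(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# Each iteration multiplies the wire length by 4/3 inside the same footprint,
# which is what lowers the resonance frequency of a Koch monopole.
iter3 = curve_length(koch_segment((0, 0), (1, 0), 3))
assert abs(iter3 - (4 / 3) ** 3) < 1e-9
```

Varying the iteration depth or the 60-degree indentation angle gives exactly the kind of geometry parameters whose effect on antenna performance the thesis studies.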
958

Machine Learning Methods For Promoter Region Prediction

Arslan, Hilal 01 June 2011 (has links) (PDF)
Promoter classification is the task of separating promoter from non-promoter sequences. Determining promoter regions, where transcription initiation takes place, is important for several reasons, such as improving genome annotation and defining transcription start sites. In this study, several promoter prediction methods, called ProK-means, ProSVM, and 3S1C, are proposed. In the ProSVM and ProK-means algorithms, structural features of DNA sequences are used to distinguish promoters from non-promoters. The obtained results are compared with those of ProSOM, an existing promoter prediction method, and it is shown that ProSVM achieves a greater recall rate than ProSOM. The other promoter prediction method proposed in this study is 3S1C. It differs from existing methods in using signal, similarity, structure, and context features of DNA sequences in an integrated and hierarchical manner. In addition to the features used by current promoter classification methods, a similarity feature, which compares promoter regions between human and other species, is added to the proposed system; we show that this similarity feature improves accuracy. To classify core promoter regions, the signal, similarity, structure, and context features are first extracted and classified separately using Support Vector Machines; the output predictions are then combined using a multilayer perceptron. The results of the 3S1C algorithm are very promising.
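The two-stage scheme described above (one SVM per feature group, with a multilayer perceptron merging their outputs) can be sketched with scikit-learn on synthetic data; the feature groups, their dimensions, and the class signal are invented stand-ins for the real signal, similarity, structure, and context features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)   # promoter (1) vs non-promoter (0), synthetic

# Four synthetic feature groups, each weakly informative about the class.
groups = [y[:, None] * 0.8 + rng.normal(0, 1.0, (n, 3)) for _ in range(4)]

# Stage 1: one SVM per feature group produces a per-group promoter score.
svms = [SVC(probability=True, random_state=0).fit(g, y) for g in groups]
stacked = np.column_stack(
    [s.predict_proba(g)[:, 1] for s, g in zip(svms, groups)]
)

# Stage 2: a multilayer perceptron combines the four SVM scores.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(stacked, y)
print("combined training accuracy:", mlp.score(stacked, y))
```

A real evaluation would of course use held-out sequences rather than scoring on the training data as this sketch does.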
959

Visual Object Representations: Effects Of Feature Frequency And Similarity

Eren Kanat, Selda 01 December 2011 (has links) (PDF)
The effects of feature frequency and similarity on object recognition have been examined through behavioral experiments, and a model of the formation of visual object representations and old/new recognition has been proposed. A number of experiments were conducted to test the hypothesis that the frequency and similarity of object features affect the old/new responses to test stimuli in a later recognition task. In the first experiment, when the feature frequencies were controlled, there was a significant increase in the percentage of "old" responses for unstudied objects as the number of frequently repeated features (FRFs) on the object increased. In the second experiment, where all features had equal frequency, the similarity of test objects did not affect old/new responses. An evaluation of existing models of object recognition and categorization against the experimental results showed that these models can only partially explain them. A comprehensive model for the formation of visual object representations and old/new recognition, called CDZ-VIS, developed on the Convergence-Divergence Zone framework of Damasio (1989), has been proposed. According to this framework, co-occurring object features converge to upper-layer units in the hierarchical representation, which act as binding units. As more objects are displayed, frequent object features cause grouping of these binding units, which converge to upper binding units. The performance of the CDZ-VIS model on the feature frequency and similarity experiments of the present study was shown to be closer to the performance of the human participants than that of two models from the categorization literature.
960

An Algorithmic Approach To Some Matrix Equivalence Problems

Harikrishna, V J 01 January 2008 (has links)
The analysis of similarity of matrices over fields, as well as over integral domains which are not fields, is a classical problem in Linear Algebra and has received considerable attention. A related problem is that of simultaneous similarity of matrices. Many interesting algebraic questions that arise in such problems are discussed by Shmuel Friedland [1]. A special case of this problem is that of Simultaneous Unitary Similarity of hermitian matrices, which we describe as follows: given a collection of m ordered pairs of similar n x n hermitian matrices denoted by {(H_l, D_l)}_{l=1}^m, (1) determine whether there exists a unitary matrix U such that U H_l U* = D_l for all l, and (2) in the case where such a U exists, find it (where U* is the conjugate transpose of U). The problem is easy for m = 1 and challenging for m > 1. The problem stated above is the algorithmic version of the problem of classifying hermitian matrices up to unitary similarity. Any problem involving classification of matrices up to similarity is considered to be "wild" [2]. The difficulty of classifying matrices up to unitary similarity is an indicator of the toughness of problems involving matrices in unitary spaces [3] (pp. 44-46). Suppose in the statement of the problem we replace the collection {(H_l, D_l)}_{l=1}^m by a collection of m ordered pairs of complex square matrices denoted by {(A_l, B_l)}_{l=1}^m; then we get the Simultaneous Unitary Similarity problem for square matrices. Suppose instead we consider k ordered pairs of complex rectangular m x n matrices denoted by {(Y_l, Z_l)}_{l=1}^k; then the Simultaneous Unitary Equivalence problem for rectangular matrices is the problem of determining whether there exist an m x m unitary matrix U and an n x n unitary matrix V such that U Y_l V* = Z_l for all l, and, in the case they exist, finding them. In this thesis we describe algorithms to solve these problems.
The Simultaneous Unitary Similarity problem for square matrices is challenging even for a single pair (m = 1) if the matrices involved, i.e. A_1 and B_1, are not normal. In an expository article, Shapiro [4] describes the methods available to solve this problem by arriving at a canonical form: A_1 or B_1 is used to arrive at a canonical form, and the matrices are unitarily similar if and only if the other matrix also leads to the same canonical form. In the second chapter of this thesis we propose an iterative algorithm to solve the Simultaneous Unitary Similarity problem for hermitian matrices. In each iteration we either get a step closer to "the simple case" or end up solving the problem. The simple case, which we describe in detail in the first chapter, corresponds to determining whether there exists a diagonal unitary matrix U such that U H_l U* = D_l for all l. Solving this case involves defining "paths" made up of the non-zero entries of H_l (or D_l). We use these paths to define an equivalence relation that partitions L = {1, ..., n}. Using these paths we associate scalars with each H_l(i,j) and D_l(i,j), denoted by pr(H_l(i,j)) and pr(D_l(i,j)) (pr indicates that these scalars are obtained as products of non-zero elements along the paths from i, j to their class representative). Suppose i (i in L) belongs to the class [d(i)] (d(i) in L); we denote by u_i^sol a modulus-one scalar expressed in terms of u_{d(i)} using the path from i to d(i). The free variable u_{d(i)} can be chosen to be any modulus-one scalar. Let U^sol be the diagonal unitary matrix given by U^sol = diag(u_1^sol, u_2^sol, ..., u_n^sol). We show that a diagonal U such that U H_l U* = D_l exists if and only if pr(H_l(i,j)) = pr(D_l(i,j)) for all l, i, j and U^sol H_l (U^sol)* = D_l. Solving the simple case sets the trend for solving the general case. In the general case, in each iteration we are looking for a unitary U of the form U = blk-diag(U_1, ..., U_r), where each U_i is a p_i x p_i unitary matrix (i in L = {1, ..., r}), such that U H_l U* = D_l.
Our aim in each iteration is to get at least a step closer to the simple case. Based on the p_i we partition the rows and columns of H_l and D_l to obtain p_i x p_j sub-matrices, denoted by F_l^{ij} in H_l and G_l^{ij} in D_l. The aim is to diagonalize either (F_l^{ij})* F_l^{ij} or F_l^{ij} (F_l^{ij})* and so get a step closer to the simple case. If the square sub-matrices are multiples of unitary matrices and the rectangular sub-matrices are zero, we say that the collection is in Non-reductive form, and in this case we cannot get a step closer to the simple case. In Non-reductive form, just as in the simple case, we define a relation on L using paths made up of these non-zero (multiple-of-unitary) sub-matrices, giving a partition of L. Using these paths we associate with F_l^{ij} and G_l^{ij} matrices denoted by pr(F_l^{ij}) and pr(G_l^{ij}) respectively, where pr(F_l^{ij}) and pr(G_l^{ij}) are multiples of unitary matrices. If there exist pr(F_l^{ij}) which are not multiples of the identity, then we diagonalize these matrices and move a step closer to the simple case; the collection is then said to be in Reduction form. If not, the collection is in Solution form. In Solution form we identify a unitary matrix U^sol = blk-diag(U_1^sol, U_2^sol, ..., U_r^sol), where U_i^sol is a p_i x p_i unitary matrix expressed in terms of U_{d(i)} using the path from i to [d(i)] (i in [d(i)], d(i) in L, U_{d(i)} free). We show that there exists U such that U H_l U* = D_l if and only if pr(F_l^{ij}) = pr(G_l^{ij}) and U^sol H_l (U^sol)* = D_l. Thus in a maximum of n steps the algorithm solves the Simultaneous Unitary Similarity problem for hermitian matrices. In the second chapter we also relate the Simultaneous Unitary Similarity problem for hermitian matrices to the simultaneous closed-system evolution problem for quantum states. In the third chapter we describe algorithms to solve the Unitary Similarity problem for square matrices (a single ordered pair) and the Simultaneous Unitary Equivalence problem for rectangular matrices.
These problems are related to the Simultaneous Unitary Similarity problem for hermitian matrices, and the algorithms described in this chapter are similar in flow to the algorithm of the second chapter. This shows that it is the fact that we are looking for unitary similarity that makes these forms possible; the hermitian (or normal) nature of the matrices is of secondary importance. Non-reductive form is the same as in the hermitian case. The definition of the paths changes a little, but once the paths are defined and the set L is partitioned, the definitions of Reduction form and Solution form are similar to their counterparts in the hermitian case. In the fourth chapter we analyze the worst-case complexity of the proposed algorithms. The main computations in all these algorithms are diagonalizing normal matrices, partitioning L, and calculating the products pr(F_l^{ij}) and pr(G_l^{ij}). Finding the partition of L is like partitioning an undirected graph in the square case and partitioning a bipartite graph in the rectangular case. Also in this chapter we demonstrate the working of the proposed algorithms by running through their steps on three examples. In the fifth and final chapter we show that determining whether a given collection of ordered pairs of normal matrices is simultaneously similar is the same as determining whether the collection is simultaneously unitarily similar. We also discuss why an algorithm to solve the Simultaneous Similarity problem, along the lines of the algorithms discussed in this thesis, may not exist.
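For the easy single-pair hermitian case (m = 1), a similarity U can be recovered directly from eigendecompositions; the following numpy sketch uses that textbook construction, not the thesis's path-based algorithm, and assumes distinct eigenvalues (with repeated eigenvalues U is determined only up to block rotations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a hermitian H and a unitarily similar D = W H W* via a random unitary W.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
W, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
D = W @ H @ W.conj().T

# If H = V_H L V_H* and D = V_D L V_D* (eigenvalues sorted the same way),
# then U = V_D V_H* satisfies U H U* = D.
wH, VH = np.linalg.eigh(H)
wD, VD = np.linalg.eigh(D)
assert np.allclose(wH, wD)        # equal spectra: the necessary condition

U = VD @ VH.conj().T
assert np.allclose(U @ U.conj().T, np.eye(4))    # U is unitary
assert np.allclose(U @ H @ U.conj().T, D)        # U H U* = D
```

For m > 1 this per-pair construction generally yields a different U for each pair, which is precisely why the simultaneous problem needs the iterative block-reduction machinery described above.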
