
An experimental study of a plane turbulent wall jet using particle image velocimetry

Dunn, Matthew 14 September 2010 (has links)
This thesis documents the design and fabrication of an experimental facility that was built to produce a turbulent plane wall jet. The target flow was two-dimensional with a uniform profile of the mean streamwise velocity and a low turbulence level at the slot exit. The design requirements for a flow conditioning apparatus that could produce this flow were determined. The apparatus was then designed and constructed, and measurements of the fluid flow were obtained using particle image velocimetry (PIV). The first series of measurements was along the slot width, the second series was along the slot centerline, and the third was at 46 slot heights off the centerline. The Reynolds number of the wall jet, based on the slot height and jet exit velocity, varied from 7594 to 8121. Data for the streamwise and transverse components of velocity and the three associated Reynolds stress components were analyzed and used to determine the characteristics of the wall jet.

This experimental facility was able to produce a profile of the mean streamwise velocity near the slot exit that was uniform over 71% of the slot height, with a streamwise turbulence level equal to 1.45% of the mean velocity. This initial velocity was maintained to 6 slot heights. The fully developed region for the centerline and off-centerline measurements was determined to extend from 50 to 100 slot heights and from 40 to 100 slot heights, respectively. This was based on self-similarity of the mean streamwise velocity profiles when scaled using the maximum streamwise velocity and the jet half-width. The off-centerline Reynolds stress profiles achieved a greater degree of collapse than did the centerline profiles.

The rate of spread of the wall jet along the centerline was 0.080 in the self-similar region from 50 to 100 slot heights, and the off-centerline growth rate was 0.077 in the self-similar region from 40 to 100 slot heights. The decay rate of the maximum streamwise velocity was -0.624 within the centerline self-similar region and -0.562 within the off-centerline self-similar region. These results for the spread and decay of the wall jet compared well with recent similar studies.

The two-dimensionality was initially assessed by measuring the mean streamwise velocity at 1 slot height along the entire slot width. It was further analyzed by comparing the centerline and off-centerline profiles of the mean streamwise velocity at 2/3, 4, 50, 80, and 100 slot heights, and by comparing the growth rates and decay rates. Although this facility was able to produce a wall jet that was initially two-dimensional, the two-dimensionality was compromised downstream of the slot, most likely due to the presence of return flow and spanwise spreading. Without further measurements, it is not yet clear exactly how the lack of complete two-dimensionality affects the flow characteristics noted above.
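The self-similar scaling above requires two quantities per profile: the maximum streamwise velocity and the jet half-width. As an illustrative sketch (not code from the thesis; the sampled profile and interpolation scheme are assumptions), the two scales can be extracted from a measured mean-velocity profile like this:

```python
import numpy as np

def jet_scales(y, u):
    """Return (u_max, y_half): the maximum streamwise velocity and the
    jet half-width, i.e. the wall-normal position beyond the velocity
    peak where u falls to u_max/2 (found by linear interpolation).
    y and u are arrays of wall-normal positions and mean velocities."""
    i_max = int(np.argmax(u))
    u_max = float(u[i_max])
    # search outward from the peak for the u_max/2 crossing
    for i in range(i_max, len(u) - 1):
        if u[i] >= 0.5 * u_max >= u[i + 1]:
            # linear interpolation between samples i and i+1
            frac = (u[i] - 0.5 * u_max) / (u[i] - u[i + 1])
            return u_max, float(y[i] + frac * (y[i + 1] - y[i]))
    raise ValueError("profile does not drop below u_max/2")
```

Profiles scaled as u/u_max versus y/y_half should then collapse in the self-similar region.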

Graph Similarity, Parallel Texts, and Automatic Bilingual Lexicon Acquisition

Törnfeldt, Tobias January 2008 (has links)
In this master's thesis we present a graph-theoretical method for automatic bilingual lexicon acquisition from parallel texts. We analyze the concept of graph similarity and give an interpretation of the parallel texts connected to the vector space model. We represent the parallel texts by a directed, tripartite graph and use the corresponding adjacency matrix, A, to compute the similarity of the graph. By solving the eigenvalue problem ρS = ASA^T + A^T SA we obtain the self-similarity matrix S and the Perron root ρ. A rank-k approximation of the self-similarity matrix is computed by implementations of the singular value decomposition and the non-negative matrix factorization algorithm GD-CLS. We construct an algorithm to extract the bilingual lexicon from the self-similarity matrix and apply a statistical model to estimate the precision, i.e. the correctness, of the translations in the bilingual lexicon. The best result is achieved with an application of the vector space model, with a precision of about 80%. This is a good result compared with the precision of about 60% reported in the literature.
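The eigenvalue problem ρS = ASA^T + A^T SA can be solved by a normalized fixed-point iteration in the style of Blondel-type graph similarity. A minimal numerical sketch (assuming this iterative formulation; it is not the report's actual implementation, and the initial matrix and stopping rule are illustrative):

```python
import numpy as np

def self_similarity(A, iters=100, tol=1e-10):
    """Iterate S <- (A S A^T + A^T S A) / ||.||_F until convergence.

    A is the adjacency matrix of the directed graph; at the fixed
    point, S satisfies rho * S = A S A^T + A^T S A, where rho (the
    Perron root) is the Frobenius norm of the unnormalized update.
    """
    n = A.shape[0]
    S = np.ones((n, n))            # uniform initial similarity
    rho = 1.0
    for _ in range(iters):
        M = A @ S @ A.T + A.T @ S @ A
        rho = np.linalg.norm(M)    # Frobenius norm for matrices
        S_new = M / rho
        if np.linalg.norm(S_new - S) < tol:
            S = S_new
            break
        S = S_new
    return S, rho
```

A rank-k approximation of the returned S (via SVD or NMF, as in the report) would then feed the lexicon-extraction step.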

The Impact of Swirl in Turbulent Pipe Flow

Islek, Akay A. (Akay Aydin) 01 December 2004 (has links)
The impact of swirl (i.e., flow with axial and azimuthal velocity components) on the turbulent flow in a pipe is studied using two-component laser-Doppler velocimetry (LDV). There are practical motivations for the flow geometry. For example, previous studies demonstrate that introducing swirl in the tube bank of a paper machine headbox can significantly increase mixing, and hence increase fiber dispersion and orientation isotropy in the finished paper product. The flow characteristics in a pipe downstream of a single straight tapered fin, a single fin with a 180° twist but otherwise identical geometry, and four twisted fins were therefore studied at a pipe-based Reynolds number of 80,000. Radial profiles of the mean and rms fluctuations of the streamwise and azimuthal velocity components are measured; results for the straight and twisted single fin are compared to determine the effects of fin geometry and swirl on the turbulent wake downstream of the fin. From a practical viewpoint, it is also desirable to have adjustable swirl, where swirl can be turned on or off depending upon the type of paper product being produced. The next-generation swirler concept consists of fins fabricated from two-way shape memory alloys. Using the two-way memory effect, the fins will be in their straight configuration when cold and in their twisted configuration (hence acting as a swirler) when hot. This study is the initial phase in developing new active control mechanisms, known as the Vortigen concept, for increasing productivity, and hence reducing wasted raw material and energy, in the pulp and paper industry.

A Riemannian Geometric Mapping Technique for Identifying Incompressible Equivalents to Subsonic Potential Flows

German, Brian Joseph 05 April 2007 (has links)
This dissertation presents a technique for the solution of incompressible equivalents to planar steady subsonic potential flows. Riemannian geometric formalism is used to develop a gauge transformation of the length measure followed by a curvilinear coordinate transformation to map a subsonic flow into a canonical Laplacian flow with the same boundary conditions. The method represents the generalization of the methods of Prandtl-Glauert and Karman-Tsien and gives exact results in the sense that the inverse mapping produces the subsonic full potential solution over the original airfoil, up to numerical accuracy. The motivation for this research was provided by the analogy between linear potential flow and the special theory of relativity that emerges from the invariance of the wave equation under Lorentz transformations. Whereas elements of the special theory can be invoked for linear and global compressibility effects, the question posed in this work is whether other techniques from relativity theory could be used for effects that are nonlinear and local. This line of thought leads to a transformation leveraging Riemannian geometric methods common to the general theory of relativity. The dissertation presents the theory and a numerical method for practical solutions of equivalent incompressible flows over arbitrary profiles. The numerical method employs an iterative approach involving the solution of the incompressible flow with a panel method and the solution of the coordinate mapping to the canonical flow with a finite difference approach. This method is demonstrated for flow over a circular cylinder and over a NACA 0012 profile. Results are validated with subcritical full potential test cases available in the literature. Two areas of applicability of the method have been identified. The first is airfoil inverse design leveraging incompressible flow knowledge and empirical data for the potential field effects on boundary layer transition and separation. 
The second is aerodynamic testing using distorted models.
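For context on the generalization claim, the classical Prandtl-Glauert mapping that the method subsumes can be stated as follows (a standard textbook result, not reproduced from the dissertation itself). With

$$
\bar{x} = x, \qquad \bar{y} = \beta y, \qquad \beta = \sqrt{1 - M_\infty^2},
$$

the linearized compressible potential equation $(1 - M_\infty^2)\,\phi_{xx} + \phi_{yy} = 0$ becomes Laplace's equation $\bar{\phi}_{\bar{x}\bar{x}} + \bar{\phi}_{\bar{y}\bar{y}} = 0$, and the pressure coefficient scales as $C_p = C_{p,0}/\beta$, where $C_{p,0}$ is the incompressible value. The dissertation's Riemannian construction replaces this single global scaling with a local, nonlinear metric transformation.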

Efficient Algorithms for the Block Edit Distance and Related Problems

Ann, Hsing-Yen 18 May 2010 (has links)
Computing the similarity of two strings or sequences is one of the most fundamental problems in computer science, and it has been widely studied for several decades. In the last decade it has attracted researchers' attention again because of improvements in hardware computational ability and the presence of huge amounts of data in biotechnology. In this dissertation, we focus on computing the edit distance between two sequences where block-edit operations are involved in addition to character-edit operations. Previous research shows that this problem is NP-hard if recursive block moves are allowed. Since we are interested in solving editing problems with polynomial-time optimization algorithms, we consider simplified versions of the edit distance problem. We first focus on the longest common subsequence (LCS) of run-length-encoded (RLE) strings, where the runs can be seen as a class of simplified blocks. Then we apply constraints to the problem, i.e. we find the constrained LCS (CLCS) of RLE strings. Besides, we show that problems involving block-edit operations can still be solved by polynomial-time optimization algorithms if some restrictions are applied. Let X and Y be two sequences of lengths n and m, respectively. Also, let N and M be the numbers of runs in the corresponding RLE forms of X and Y, respectively. In this dissertation, we first propose a simple algorithm for computing the LCS of X and Y in O(NM + min{p_1, p_2}) time, where p_1 and p_2 denote the numbers of elements in the bottom and right boundaries of the matched blocks, respectively. This new algorithm improves the previously known time bound O(min{nM, Nm}) and outperforms the time bounds O(NM log NM) and O((N+M+q) log (N+M+q)) in some cases, where q denotes the number of matched blocks.
Next, we give an efficient algorithm for solving the CLCS problem, which is to find a common subsequence Z of X and Y such that a given constraint sequence P is a subsequence of Z and the length of Z is maximized. Suppose X, Y and P are all in RLE format, and the lengths of X, Y and P are n, m and r, respectively. Let N, M and R be the numbers of runs in X, Y and P, respectively. We show that with RLE, the CLCS problem can be solved in O(NMr + min{q_1 r + q_4, q_2 r + q_5}) time, where q_1 and q_2 denote the numbers of elements in the south and east boundaries of the partially matched blocks on the first layer, respectively, and q_4 and q_5 denote the numbers of elements of the west and north pillars in the bottom boundaries of all fully matched cuboids in the DP lattice, respectively. When the input strings have good compression ratios, our work clearly outperforms the previously known DP algorithms and the Hunt-Szymanski-like algorithms. Finally, we consider variations of the block edit distance problem that involve character insertions, character deletions, block copies and block deletions for two given sequences X and Y. Three variations are defined with different measuring functions, namely P(EIS, C), P(EI, L) and P(EI, N). We then show that, with some preprocessing, the minimum block edit distances of these three variations can be obtained by dynamic programming in O(nm), O(nm log m) and O(nm^2) time, respectively, where n and m are the lengths of X and Y.
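For context, the two ingredients of the first problem, run-length encoding and the classic quadratic LCS recurrence that the proposed O(NM + min{p_1, p_2}) algorithm improves upon, can be sketched as follows (a textbook baseline, not the dissertation's algorithm):

```python
from itertools import groupby

def rle(s):
    """Run-length encode a string: 'aaabcc' -> [('a', 3), ('b', 1), ('c', 2)]."""
    return [(ch, len(list(g))) for ch, g in groupby(s)]

def lcs_length(x, y):
    """Classic O(nm) dynamic program for the LCS length, the baseline
    that RLE-based algorithms speed up on compressible inputs."""
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```

The RLE-based algorithms operate on the N×M grid of run pairs rather than the full n×m character lattice, which is where the savings come from when N << n and M << m.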

A Mixed Approach for Multi-Label Document Classification

Tsai, Shian-Chi 10 August 2010 (has links)
Unlike single-label document classification, where each document belongs to exactly one category, documents classified into two or more categories are known as multi-label documents, and how to classify such documents accurately has become a hot research topic in recent years. In this paper, we propose an algorithm named fuzzy similarity measure multi-label K nearest neighbors (FSMLKNN), which combines a fuzzy similarity measure with the multi-label K nearest neighbors (MLKNN) algorithm for multi-label document classification. The algorithm uses an improved fuzzy similarity measure to calculate the similarity between a document and a cluster center, and it can significantly improve the performance and accuracy of multi-label document classification. In the experiments, we compare FSMLKNN with existing classification methods, including the decision tree C4.5, support vector machines (SVM) and the MLKNN algorithm, and the experimental results show that the FSMLKNN method outperforms the others.
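A plain (non-fuzzy) k-nearest-neighbour multi-label baseline of the kind FSMLKNN builds on can be sketched as follows; the cosine similarity, voting threshold, and data layout are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity of two feature vectors (0 if either is zero)."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 0.0 if na == 0 or nb == 0 else float(a @ b / (na * nb))

def knn_multilabel(x, docs, labels, k=3, threshold=0.5):
    """Predict the label set of x by voting among its k most similar
    training documents (a plain k-NN baseline, without the fuzzy
    membership weighting of FSMLKNN).

    docs:   list of feature vectors
    labels: list of label sets, one per training document
    """
    nearest = sorted(range(len(docs)), key=lambda i: -cosine(x, docs[i]))[:k]
    votes = {}
    for i in nearest:
        for lab in labels[i]:
            votes[lab] = votes.get(lab, 0) + 1
    # keep labels supported by at least `threshold` of the k neighbours
    return {lab for lab, v in votes.items() if v / k >= threshold}
```

The fuzzy variant would replace the hard vote counts with graded memberships derived from document-to-cluster-center similarities.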

Performance Implications of Patent Status and Patent Similarity in the Micro-fluidic Biochips Industry: Network Theory Analysis

Ling, Yueh 16 July 2011 (has links)
The biochip industry is characterized by high technological entry barriers. For firms in this industry, owning law-protected patents to deter entry by potential competitors is a key strategy for competition and competitive advantage. A firm's patent analysis not only discloses the firm's knowledge base in the biochip industry, it also affects other firms' innovation activity and technology development strategy in this industry. The previous patent analysis literature usually focuses on the performance implications of a firm's patent count or patent citations for the focal firm. However, the possible performance implications of the patent contents between the focal firm and other firms in the biochip industry are relatively under-examined. From the viewpoints of network theory and resource-based theory, this study examines these performance implications by developing two patent indexes for patent content analysis: patent status and patent similarity. The results indicate that when the difference in patent status between two firms is smaller, or the patent similarity between them is larger, the performance difference between the dyad firms is smaller. In other words, patent status and patent similarity are solid indexes for predicting performance differences between firms in a highly competitive and highly innovative industry, such as the biochip industry in this sample. The results provide referable value in addressing the performance issues of patent content analysis from a network theory viewpoint. Moreover, they also provide complementary value to discussions of market commonality and resource similarity in competition.

Design and Manufacturing of Dieless Drawing Prototype Machine

Kuo, Tsung-Yu 31 August 2011 (has links)
In this study, a dieless drawing prototype machine was developed for tube or wire drawing. The machine uses a high-frequency heating apparatus as the heating source, with stainless steel SUS304 tubes as specimens. A series of experiments with different relative speeds between the heating source and the drawing grip was conducted. The motion is driven by two servo motors connected to screws. An infrared thermoscope and a cooling device are mounted on the high-frequency heating apparatus to control the specimen temperature. The forming stability under different relative speed ratios, drawing speeds and heating temperatures is discussed. The maximum stable reduction of area obtained can reach over 40 percent, and the drawing speed can reach 0.8 mm/s. FE analysis was also conducted to analyze the formability of the tube, and the validity of the analytical model was verified by comparing the uniformity and similarity between the FE analysis and the experimental values. A series of FE simulation results was used to understand the distribution of true stress and temperature in the tube dieless drawing process, aiming to improve the drawing speed and the reduction of area.

Detecting Near-Duplicate Documents using Sentence-Level Features and Machine Learning

Liao, Ting-Yi 23 October 2012 (has links)
Effectively finding near-duplicate documents in a large-scale collection has been a very important issue. In this paper, we propose a new method to detect near-duplicate documents in a large-scale dataset. Our method is divided into three parts: feature selection, similarity measure and discriminant derivation. In feature selection, each document is preprocessed before detection: punctuation signals, stop words and so on are removed. We measure the weight of each term in a sentence and then choose the terms with higher weights in the sentence; these terms are collected as features of the document, and the document's feature set is built from them. The similarity measure uses a similarity function to compute the similarity value between two feature sets. Discriminant derivation is based on a support vector machine (SVM), which trains a classifier to identify whether a document is a near-duplicate or not. The support vector machine is a supervised learning strategy that trains a classifier from training patterns. Among document characteristics, sentence-level features are more effective than term-level features. Besides, learning a discriminant with an SVM avoids the trial-and-error efforts required in conventional methods, where trial and error is used to find a threshold, i.e. a discriminant value, for deciding the documents' relation. In the final analysis of the experiments, our method is more effective in near-duplicate document detection than other methods.
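The first two parts of the pipeline can be sketched as follows; the term weighting (raw within-document frequency), the tiny stop-word list, and Jaccard as the similarity function are simplifying assumptions for illustration, not the paper's exact choices:

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "is", "of", "to", "and", "in"}

def sentence_features(doc, top=2):
    """For each sentence, keep the `top` highest-weight terms (weight =
    within-document frequency here, a stand-in for the paper's
    term-weighting scheme); the union of the kept terms is the
    document's feature set."""
    terms = [w for w in re.findall(r"[a-z]+", doc.lower()) if w not in STOP]
    tf = Counter(terms)
    features = set()
    for sent in re.split(r"[.!?]+", doc):
        words = [w for w in re.findall(r"[a-z]+", sent.lower()) if w not in STOP]
        words.sort(key=lambda w: -tf[w])   # highest-weight terms first
        features.update(words[:top])
    return features

def jaccard(f1, f2):
    """Similarity of two feature sets; thresholding this score is the
    conventional trial-and-error test that the SVM classifier replaces."""
    if not f1 and not f2:
        return 1.0
    return len(f1 & f2) / len(f1 | f2)
```

In the full method, pairs of feature-set similarity values become training patterns for the SVM, which learns the discriminant instead of a hand-tuned threshold.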

A Neuro-Fuzzy Approach for Classification

Lin, Wen-Sheng 08 September 2004 (has links)
We develop a neuro-fuzzy network technique to extract TSK-type fuzzy rules from a given set of input-output data for classification problems. Fuzzy clusters are generated incrementally from the training data set, and similar clusters are merged dynamically through input-similarity, output-similarity, and output-variance tests. The associated membership functions are defined with statistical means and deviations. Each cluster corresponds to a fuzzy IF-THEN rule, and the obtained rules can be further refined by a fuzzy neural network with a hybrid learning algorithm that combines a recursive SVD-based least squares estimator and the gradient descent method. The proposed technique has several advantages. The information about input and output data subspaces is considered simultaneously for cluster generation and merging. Membership functions match closely and properly describe the real distribution of the training data points. Redundant clusters are combined, and the sensitivity to the input order of the training data is reduced. Besides, regenerating the whole set of clusters from scratch can be avoided when new training data are considered.
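A minimal one-dimensional sketch of the incremental cluster-generation step (with Gaussian memberships built from running means and a fixed spread; the merging and output-space tests are omitted, and all parameters are illustrative assumptions, not the paper's settings):

```python
import math

def gaussian_membership(x, mean, sigma):
    """Membership of scalar x in a cluster with the given mean and
    spread; clusters are one-dimensional here for brevity."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def incremental_cluster(data, threshold=0.5, sigma=1.0):
    """Assign each point to the best-matching existing cluster if its
    membership exceeds `threshold`, otherwise start a new cluster; a
    simplified version of incremental generation only."""
    clusters = []   # list of (mean, count) pairs
    for x in data:
        best, best_mu = None, 0.0
        for i, (mean, _) in enumerate(clusters):
            mu = gaussian_membership(x, mean, sigma)
            if mu > best_mu:
                best, best_mu = i, mu
        if best is not None and best_mu >= threshold:
            mean, count = clusters[best]
            # update the running mean of the matched cluster
            clusters[best] = ((mean * count + x) / (count + 1), count + 1)
        else:
            clusters.append((x, 1))
    return clusters
```

In the full technique, each surviving cluster becomes the premise of one TSK rule, whose parameters are then refined by the hybrid learning algorithm.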
