221

Study of control actions reveals disturbance patterns for cross directional control of basis weight / Studier av styrutslag avslöjar störningsmönster hos tvärsprofilstyrningen av ytvikt

Broman, Patrik January 2004 (has links)
The purpose of this thesis was to examine the demand for cross-directional control of basis weight on a board machine. To analyse the demand, changes made by the control system are studied. Significant changes were expected to be present when a major event occurred on the machine. The events classified as major were changes of basis weight, grade, or coating blade; board breaks and machine stoppages were also included. These events can be seen as large disturbances to the machine. In order to identify the disturbances, a methodology had to be developed. The methodology developed analyses the output of a model whose inputs are the actuator positions of the control system and whose output is the measured basis weight. This output was analysed using the multivariate method of principal component analysis. The data analysed in this thesis were collected on-line from a board machine operating within the Stora Enso group. Over a period of three months, a total of 47 data sets were collected, each representing 12-14 hours of operation. The data analysis shows that the variations in the control system are greater than the variations in the measured basis weight. This is a strong indication that the control system is needed, and that in order to find disturbances in the cross-directional profile it is not enough to analyse only the final product; the control signals also have to be analysed. The large disturbances do not necessarily emerge from the major events as assumed. Other causes might have a larger impact on the process than first believed. One of the major obstacles in trying to explain the variations is that the basis weight is controlled by adjusting the centre layer of the board but measured on the final product. As a consequence, the errors seen by the measurement system can originate anywhere on the machine and be compensated for by the basis weight of the centre layer.
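As a sketch of the analysis step, principal component analysis can be applied to a matrix of control-signal profiles. The layout below (one row per scanner pass, one column per cross-directional actuator) and the synthetic data are assumptions for illustration, not the thesis's actual pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed layout: one row per scanner pass, one column per CD actuator.
# Synthetic stand-in for the on-line control-signal data.
rng = np.random.default_rng(0)
scans, actuators = 600, 120
profiles = rng.normal(size=(scans, actuators))
profiles += np.outer(np.sin(np.linspace(0, 6, scans)),      # slow drift ...
                     np.cos(np.linspace(0, 3, actuators)))  # ... with a CD shape

pca = PCA(n_components=5)
scores = pca.fit_transform(profiles)

# Components with large explained variance are candidate disturbance
# patterns; their scores over time show when each pattern was active.
print(pca.explained_variance_ratio_)
```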
222

Circuit Bases of Strongly Connected Digraphs

Gleiss, Petra M., Leydold, Josef, Stadler, Peter F. January 2001 (has links) (PDF)
The cycle space of a strongly connected digraph has a basis consisting of directed circuits. The concept of relevant circuits is introduced as a generalization of the relevant cycles in undirected graphs. A polynomial time algorithm for the computation of a minimum weight directed circuit basis is outlined. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
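As an illustration of the concept (not the paper's polynomial-time algorithm), the sketch below greedily builds a minimum-weight directed circuit basis by enumerating simple cycles — which is exponential in general — and keeping those whose edge-incidence vectors are linearly independent:

```python
import networkx as nx
import numpy as np

# Toy strongly connected digraph with weighted edges.
G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 1), (1, 2, 1), (2, 0, 1),
                           (2, 3, 2), (3, 0, 2), (1, 3, 3)])
assert nx.is_strongly_connected(G)

edges = list(G.edges)
idx = {e: i for i, e in enumerate(edges)}
dim = len(edges) - len(G) + 1            # cycle space dimension m - n + 1

def incidence(cycle):
    """0/1 edge-incidence vector of a directed circuit (node list)."""
    v = np.zeros(len(edges))
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        v[idx[(a, b)]] = 1.0
    return v

def weight(cycle):
    return sum(G[a][b]["weight"] for a, b in zip(cycle, cycle[1:] + cycle[:1]))

# Greedy: scan circuits in order of weight, keep those that add rank.
basis, mat = [], np.empty((0, len(edges)))
for c in sorted(nx.simple_cycles(G), key=weight):
    cand = np.vstack([mat, incidence(c)])
    if np.linalg.matrix_rank(cand) > len(basis):
        basis, mat = basis + [c], cand
    if len(basis) == dim:
        break

print(basis)    # independent directed circuits of minimum total weight
```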
223

Surface reconstruction using variational interpolation

Joseph Lawrence, Maryruth Pradeepa 24 November 2005 (has links)
Surface reconstruction of anatomical structures is an integral part of medical modeling. Contour information is extracted from serial cross-sections of tissue data and stored as "slice" files. Although there are several reasonably efficient triangulation algorithms that reconstruct surfaces from slice data, the models generated by them have a jagged or faceted appearance due to the large inter-slice distance created by the sectioning process. Moreover, inconsistencies in user input aggravate the problem. We therefore created a method that reduces the inter-slice distance and tolerates inconsistencies in the user input. Our method, called piecewise weighted implicit functions, is based on weighting smaller implicit functions; it takes only a few slices at a time to construct each implicit function. The method builds on a technique called variational interpolation.

Other approaches based on variational interpolation have the disadvantage of becoming unstable when the model is large, with more than a few thousand constraint points. Furthermore, tracing the intermediate contours becomes expensive for large models. Even though some fast fitting methods handle such instability problems, there is no apparent improvement in contour tracing time, because the value of each data point on the contour boundary is evaluated using a single large implicit function that essentially uses all constraint points. Our method handles both problems using a sliding-window approach. Because it uses only a local domain to construct each implicit function, it achieves a considerable run-time saving over the other methods. The resulting software produces interpolated models from large data sets in a few minutes on an ordinary desktop computer.
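A minimal sketch of the underlying variational interpolation step — fitting a thin-plate-spline implicit function to contour constraints from a few slices and tracing an intermediate contour — might look as follows. The synthetic circular contours and SciPy's global RBF fit are stand-ins for the thesis's piecewise weighted, sliding-window construction:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Two circular "slice" contours (z = 0 and z = 1) as on-surface constraints.
t = np.linspace(0, 2 * np.pi, 24, endpoint=False)

def ring(z, r):
    return np.c_[r * np.cos(t), r * np.sin(t), np.full_like(t, z)]

on_surface = np.vstack([ring(0.0, 1.0), ring(1.0, 1.3)])
interior = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])   # slice centres

pts = np.vstack([on_surface, interior])
vals = np.r_[np.zeros(len(on_surface)), np.ones(len(interior))]

# The thin-plate-spline RBF solves the variational interpolation problem;
# the zero level set of f is the reconstructed surface.
f = RBFInterpolator(pts, vals, kernel="thin_plate_spline")

# Trace an intermediate contour at z = 0.5 by scanning radii along a ray
# and locating where f crosses zero.
rs = np.linspace(0.01, 2.0, 400)
ray = np.c_[rs, np.zeros_like(rs), np.full_like(rs, 0.5)]
print("contour radius at z=0.5:", rs[np.argmin(np.abs(f(ray)))])
```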
224

The Effects of Retention Aid Dosage and Mechanical Energy Dissipation on Fiber Flocculation in a Flow Channel

Weseman, Brian D. 23 December 2004 (has links)
Formation plays an important role in the end-use properties of paper products, but before formation can be optimized to achieve superior properties, an understanding of its causes must be developed. Formation arises from variations in the basis weight of paper that result from fiber floc formation before and during sheet forming. This project is a first step in a larger research program aimed at studying formation. By observing the effects that mechanical energy dissipation (in the form of turbulence) and retention chemical dosage have on floc formation, we may develop a better understanding of how to control formation. In this study, a rectangular cross-section flow channel was constructed to aid the acquisition of digital images of a flowing fiber suspension. The furnish consisted of a 55:45 spruce:pine bleached market pulp mix from a Western Canadian mill. Turbulence was varied by changing the flow rate; the Reynolds numbers achieved ranged from 20,000 to 40,000. The retention aid used was a cationic polyacrylamide with a medium charge density, dosed at 0 to 2 pounds per ton of OD fiber. Digital images of the flowing fiber suspension were acquired with a professional digital SLR camera fitted with a forensics-quality lens. Three separate image analysis techniques were used to measure the flocculation state of the fiber suspension: morphological image operations, formation number analysis, and fast Fourier transform analysis. Morphological image analysis was capable of measuring the floc size increases seen in the acquired floc images; it was shown how floc diameter could increase even as total floc area and total floc number decreased. A regression model relating retention aid dosage and energy dissipation was constructed in an effort to predict flocculation, and was used to predict F2 (formation number squared) results from the study. The interaction effect RE was shown to differ across the retention aid dosage levels. As a result, this model and technique may prove a beneficial tool in optimizing retention aid applications.
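A hedged sketch of the morphological image-analysis technique — thresholding, closing, and region labelling to obtain floc count, total floc area, and mean floc diameter — is shown below; the specific operations and parameters are assumptions, not the study's exact pipeline:

```python
import numpy as np
from skimage import filters, measure, morphology

def floc_stats(gray_img):
    """Morphological floc measurement on one suspension image."""
    mask = gray_img > filters.threshold_otsu(gray_img)    # flocs are bright
    mask = morphology.binary_closing(mask, morphology.disk(3))
    mask = morphology.remove_small_objects(mask, min_size=25)
    regions = measure.regionprops(measure.label(mask))
    diam = np.mean([r.equivalent_diameter for r in regions]) if regions else 0.0
    return {"floc_count": len(regions),
            "total_floc_area": int(mask.sum()),
            "mean_floc_diameter": diam}

# Blobby synthetic stand-in for an acquired suspension image. Note that
# mean diameter can rise while count and total area fall, as observed
# in the study, e.g. when small flocs merge into fewer large ones.
img = filters.gaussian(np.random.default_rng(0).random((256, 256)), sigma=8)
print(floc_stats(img))
```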
225

The Quantitative Investigation of LCModel BASIS Using GAMMA Visual Analysis (GAVA) for in vivo 1H MR Spectroscopy

Huang, Chia-Min 05 August 2010 (has links)
Magnetic resonance imaging (MRI) and magnetic resonance spectroscopy (MRS) have been developed and applied in clinical studies owing to their non-invasive nature. Because of the growing clinical interest in MRS, many post-processing tools have been developed, among which LCModel is one of the most popular. LCModel estimates absolute metabolite concentrations in vivo according to a basis file, so basis files play an important role in the accuracy of the estimated concentrations. The default basis sets of LCModel were made from phantom experiments. However, some metabolites are difficult to obtain, so the default basis sets lack them. To avoid this problem, LCModel provides a method called "spectra offering". In this study, we use the GAMMA Visual Analysis (GAVA) software to create basis sets and compare the shapes of the LCModel default basis sets with those of the GAVA basis sets. Metabolites not included in the LCModel phantom experiments are also generated. Finally, we estimate absolute concentrations in normal subjects and patients using the two kinds of basis sets. Using the LCModel "spectra offering" method to append extra metabolites to LCModel basis sets is applicable to metabolites with singlet resonances but not to those with J-coupled resonances. Our results demonstrate that using GAVA-simulated basis sets leads to quantitative results different from those obtained with in vitro basis sets. We believe that GAVA-simulated basis sets would provide better consistency among metabolites and thus achieve more accurate MRS quantification.
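To illustrate the role of the basis set in quantification, the following toy example fits a spectrum as a non-negative linear combination of basis spectra, which is the core idea behind LCModel-style concentration estimates. The Lorentzian basis shapes and concentrations here are fabricated for illustration; the real LCModel additionally models baseline, lineshape, and phase:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_points, names = 512, ["NAA", "Cr", "Cho"]

def lorentzian(center, width=4.0):
    x = np.arange(n_points)
    return 1.0 / (1.0 + ((x - center) / width) ** 2)

# Columns of `basis` are the metabolite basis spectra.
basis = np.column_stack([lorentzian(c) for c in (100, 220, 260)])
true_conc = np.array([8.0, 6.0, 1.5])
spectrum = basis @ true_conc + 0.02 * rng.standard_normal(n_points)

# Non-negative least squares recovers the concentrations.
conc, _ = nnls(basis, spectrum)
for name, c in zip(names, conc):
    print(f"{name}: {c:.2f}")
```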
226

High Speed Scalar Multiplication Architecture for Elliptic Curve Cryptosystem

Hsu, Wei-Chiang 28 July 2011 (has links)
An important advantage of the Elliptic Curve Cryptosystem (ECC) is its shorter key length among public-key cryptographic systems: it provides adequate security once the key length exceeds 160 bits. It has therefore become popular in recent years. Scalar multiplication, also called point multiplication, is the core operation in ECC. In this thesis, we propose ECC architectures for two different irreducible polynomials: a trinomial in GF(2^167) and a pentanomial in GF(2^163). These architectures are based on Montgomery point multiplication in projective coordinates. We use a polynomial basis representation for the finite field arithmetic. All multiplication, squaring, and addition operations over the binary field complete within one clock cycle, and the critical path lies in multiplication. In addition, we use the Itoh-Tsujii algorithm combined with an addition chain to compute binary-field inversion through iterative squarings and multiplications. Because the double and add operations in point multiplication run for many iterations, improving this portion decreases the overall execution time. We propose two ways to improve the performance of point multiplication. The first is the Minus Cycle Version, in which we reschedule the double and add operations of the point multiplication algorithm; when the clock cycle time (i.e., the critical path) of multiplication is longer than that of addition and squaring, this method improves performance. The second is the Pipeline Version, which speeds up the multiplication operations by executing them in a pipeline, leading to a shorter clock cycle time. For the hardware implementation, a TSMC 0.13 µm library is employed and all modules are organized in a hierarchical structure. The implementation results show that the proposed 167-bit Minus Cycle Version requires 156.4K gates, executes a point multiplication in 2.34 µs, and reaches a maximum clock speed of 591.7 MHz. Moreover, we compare the Area × Time (AT) value of the proposed architectures with related work; the proposed 167-bit Minus Cycle Version is the best and saves up to 38% in AT value compared with the traditional design.
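The control structure of Montgomery point multiplication can be sketched abstractly as below; the group operations are passed in as parameters, whereas the thesis instantiates them with projective-coordinate arithmetic over GF(2^167) or GF(2^163):

```python
def montgomery_ladder(k, P, add, double, identity):
    """Constant-pattern Montgomery ladder computing k*P.

    The group operations are parameters of this sketch; a real ECC core
    would supply projective-coordinate point addition and doubling.
    """
    R0, R1 = identity, P                # invariant: R1 == R0 + P
    for bit in bin(k)[2:]:              # scan scalar bits MSB-first
        if bit == "1":
            R0, R1 = add(R0, R1), double(R1)
        else:
            R0, R1 = double(R0), add(R0, R1)
    return R0

# Sanity check in the additive group of integers (identity 0): 13*5 == 65.
assert montgomery_ladder(13, 5, add=lambda a, b: a + b,
                         double=lambda a: 2 * a, identity=0) == 65
```

One double and one add execute per scalar bit regardless of its value, which is what makes rescheduling them (the Minus Cycle Version) and pipelining the multiplier effective.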
227

Forward-Selection-Based Feature Selection for Genre Analysis and Recognition of Popular Music

Chen, Wei-Yu 09 September 2012 (has links)
In this thesis, a genre recognition approach for Japanese popular music using an SVM (support vector machine) with forward feature selection is proposed. First, various common acoustic features are extracted from the digital signals of popular music songs, including sub-bands, energy, rhythm, tempo, and formants. A set of the most appropriate features for genre identification is then selected by the proposed forward feature selection technique. Experiments conducted on a database of 296 Japanese popular music songs demonstrate that the proposed algorithm achieves an accuracy of approximately 78.81%, and that the accuracy remains stable as the number of test songs increases.
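A sketch of forward feature selection wrapped around an SVM, using scikit-learn as a stand-in for the thesis's implementation (the synthetic feature matrix and all parameter choices are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

# Stand-in data: rows are songs, columns are acoustic features
# (sub-band energies, tempo, formants, ...); labels are genres.
X, y = make_classification(n_samples=296, n_features=40,
                           n_informative=8, n_classes=3, random_state=0)

svm = SVC(kernel="rbf")
selector = SequentialFeatureSelector(svm, n_features_to_select=10,
                                     direction="forward", cv=5)
selector.fit(X, y)
print("selected feature indices:", selector.get_support(indices=True))
```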
228

Three Dimensional Controlled-source Electromagnetic Edge-based Finite Element Modeling of Conductive and Permeable Heterogeneities

Mukherjee, Souvik August 2010
The presence of cultural refuse has long posed a serious challenge to meaningful geological interpretation of near-surface controlled-source electromagnetic (CSEM) data. Cultural refuse, such as buried pipes, underground storage tanks, and unexploded ordnance, is often highly conductive and magnetically permeable. Interpretation of the CSEM response in the presence of cultural noise requires an understanding of electromagnetic field diffusion and of the effects of anomalous, highly conductive and permeable structures embedded in geologic media. While many numerical techniques have been used to evaluate the response of three-dimensional subsurface conductivity distributions, approaches that model the EM response incorporating variations in both subsurface conductivity σ and relative permeability μr are lacking. In this dissertation, I present a new three-dimensional edge-based finite element (FE) algorithm capable of modeling the CSEM response of buried conductive and permeable targets. A coupled-potential formulation for variable μ, using the vector magnetic potential A and scalar electric potential V, gives rise to an ungauged curl-curl equation. Using reluctivity (ν = 1/μ), a quantity new to geophysical applications in place of the traditional magnetic susceptibility, facilitates a separation of primary and secondary potentials. The resulting differential equation is solved using the finite element method (FEM) on a tetrahedral mesh with local refinement capabilities. The secondary A and V potentials are expressed in terms of vector edge basis functions and scalar nodal basis functions, respectively. The finite element matrix is solved using a Jacobi-preconditioned QMR solver. Post-processing steps to interpolate the vector potentials onto the nodes of the mesh are described. The algorithm is validated against a number of analytic and multidimensional numerical solutions. The code has been deployed to estimate the influence of magnetic permeability on the mutual coupling between multiple geological and cultural targets. Some limitations of the code with regard to speed and performance at high frequency, conductivity, and permeability values are noted, and directions for further improvement and for expanding the range of applicability are proposed.
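The linear-solve stage can be sketched as follows: a Jacobi (diagonal) preconditioner applied within SciPy's QMR solver on a stand-in sparse system. The actual edge-based FE matrix, coupling edge unknowns for A with nodal unknowns for V, is of course far more involved:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, qmr

# Stand-in sparse system K x = b for the assembled FE equations.
n = 500
K = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner; QMR also needs its transpose solve,
# which for a diagonal matrix is the same operation.
d = K.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / d, rmatvec=lambda v: v / d)

x, info = qmr(K, b, M1=M)
print("info:", info, "residual:", np.linalg.norm(b - K @ x))
```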
229

Investigation of Methods for Arbitrarily Profiled Cylindrical Dielectric Waveguides

Hong, Qing-long 07 July 2005 (has links)
Cylindrical dielectric waveguides such as the optical fiber and the photonic crystal fiber are very important passive devices in optical communication systems. Many kinds of commercial software and simulation methods exist at present. In this thesis, we propose the following four methods for analyzing arbitrarily profiled cylindrical dielectric waveguides; the first two are modified from published work, while the last two are entirely our own development.

1. Cylindrical ABCD matrix method: We take the four continuous electromagnetic field components as the main variables and derive the exact four-by-four matrix (involving Bessel functions) relating the four-component field vector within each homogeneous layer. The field components of the inner and outer layers can be propagated toward a selected interface of our choice using the ABCD matrix method. We can then solve this nonlinear inhomogeneous matrix equation for the β-value of the waveguide mode.

2. Runge-Kutta method: The Runge-Kutta method is mostly used to solve initial value problems for differential equations. In this thesis, we apply it to the first-order four-by-four differential equation for the electromagnetic field components and find the β-value of the cylindrical dielectric waveguide in a manner similar to that of method one.

3. Coupled Ez and Hz method: This method uses the axial electromagnetic field components to solve cylindrical dielectric waveguides. The formulation is similar to the cylindrical ABCD matrix method but requires fewer variables. The numerical solution obtained from this method is the most stable, but it is more complicated to derive and harder to program.

4. Simple basis expansion method: Simple trigonometric functions (sine or cosine) are chosen as the bases of the horizontal coupled magnetic field equation derived from the second-order differential equation for the transverse magnetic field components. We do not use the horizontal coupled electric field because the normal component of the electric field is discontinuous at the interfaces, whereas both the normal and tangential components of the magnetic field are continuous across them. The modal solution problem is converted into a linear matrix eigenvalue-eigenvector equation, which is solved by standard linear algebra routines.

We compare these four numerical methods with one another; the characteristics, advantages, and disadvantages of each method are studied and compared in detail.
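As an illustration of method 2, the sketch below integrates a first-order four-component field system radially with an adaptive Runge-Kutta scheme. The coupling matrix here is made up for illustration and is not the waveguide's true operator; a mode search would repeat the integration, scanning β until a boundary-matching condition vanishes:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, F, beta=6.0):
    """Toy radial ODE for a four-component field vector F = [E1, E2, H1, H2].

    The matrix below is an illustrative placeholder with the 1/r factor
    typical of cylindrical coordinates, not a derived waveguide operator.
    """
    n2 = 2.1 - 0.5 * r**2                       # toy graded-index profile
    M = np.array([[0.0,  beta, 0.0,  1.0],
                  [-beta, 0.0, -1.0, 0.0],
                  [0.0,  n2,   0.0,  beta],
                  [-n2,  0.0, -beta, 0.0]]) / max(r, 1e-6)
    return M @ F

# Integrate outward from near the axis to the outer boundary.
sol = solve_ivp(rhs, (1e-3, 1.0), [1.0, 0.0, 0.0, 0.0], method="RK45")
print("field components at the outer boundary:", sol.y[:, -1])
```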
230

Estimation of Parameters in Support Vector Regression

Chan, Yi-Chao 21 July 2006 (has links)
The selection and modification of kernel functions is a very important problem in the field of support vector learning, since the kernel function of a support vector machine greatly influences its performance. The kernel function projects the dataset from the original data space into a feature space, so problems that cannot be solved in the low-dimensional space may become solvable in a higher dimension through the kernel transformation. In this thesis, we adopt the FCM (fuzzy c-means) clustering algorithm to group data patterns into clusters, and then use a statistical approach to calculate the standard deviation of each pattern with respect to the other patterns in the same cluster. We can thereby properly estimate the distribution of the data patterns and assign a proper standard deviation to each pattern; this standard deviation plays the role of the variance of a radial basis function. We then have the original data patterns and the variance of each pattern for support vector learning. Experimental results show that our approach derives better kernel functions than other methods and achieves better learning and generalization abilities.
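A minimal sketch of the width-estimation idea — fuzzy c-means memberships followed by a per-pattern spread estimate used as the RBF width — under simplifying assumptions (a from-scratch FCM, scalar widths taken from each pattern's dominant cluster):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Minimal FCM: returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                   # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))             # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Per-pattern width: spread of the patterns sharing the dominant cluster,
# used here as the variance of that pattern's radial basis function.
X = np.random.default_rng(1).normal(size=(200, 2))
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)
widths = np.array([X[labels == labels[i]].std() for i in range(len(X))])
print(widths[:5])
```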
