401

A suboptimal SLM based on symbol interleaving scheme for PAPR reduction in OFDM systems

Liu, Yung-Fu 31 July 2012 (has links)
Orthogonal frequency division multiplexing (OFDM) is the standard for next-generation mobile communication, and one of the major drawbacks of OFDM systems is the high peak-to-average power ratio (PAPR). In this work, we propose a low-complexity selected mapping (SLM) scheme to reduce PAPR. In [27], Wang proposed a low-complexity SLM scheme using conversion vectors that have the form of a perfect sequence, solving the problem that the phase rotation vectors of the conversion vectors do not usually have equal magnitude in the frequency domain. This work proposes a low-complexity SLM scheme that is likewise based on perfect sequences and additionally applies symbol interleaving to reduce the correlation between candidate signals in the time domain. It is shown that the complementary cumulative distribution function (CCDF) of the proposed scheme is closer to that of the traditional SLM scheme than Wang's scheme in [27], at the cost of some additional complexity, while its computational complexity remains much lower than that of traditional SLM.
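As background, the Python sketch below illustrates conventional SLM, which the proposed scheme builds on: U candidate signals are generated with random +/-1 phase rotation vectors and the candidate with the lowest PAPR is transmitted. The subcarrier count N, candidate count U, and QPSK mapping are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a time-domain block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm_transmit(X, U=8):
    """Conventional SLM: try U random +/-1 phase rotation vectors and
    keep the candidate time-domain signal with the lowest PAPR."""
    best, best_papr = None, np.inf
    for _ in range(U):
        b = rng.choice([1, -1], size=X.size)   # phase rotation vector
        x = np.fft.ifft(X * b)                 # candidate OFDM symbol
        if papr_db(x) < best_papr:
            best, best_papr = x, papr_db(x)
    return best, best_papr

# QPSK data on N = 64 subcarriers (illustrative)
N = 64
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N) / np.sqrt(2)
print("plain PAPR:", papr_db(np.fft.ifft(X)))
print("SLM PAPR  :", slm_transmit(X)[1])
```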
402

A Novel Precoding Scheme for Systems Using Data-Dependent Superimposed Training

Chen, Yu-chih 31 July 2012 (has links)
To enable channel estimation without data-induced interference in the data-dependent superimposed training (DDST) scheme, the data sequence is shifted by subtracting a data-dependent sequence before the training sequence is added at the transmitter. The resulting distortion term causes a data identification problem (DIP) at the receiver. In this thesis, we propose two precoding schemes based on previous work. To maintain a low peak-to-average power ratio (PAPR), the precoding matrix is restricted to a diagonal matrix. The first scheme, termed the efficient diagonal scheme, is designed to enlarge the minimum distance between the closest codewords; conditions ensuring that the precoding matrix is efficient for M-ary phase shift keying (MPSK) and M-ary quadrature amplitude modulation (MQAM) are given. The second scheme pursues the lowest receiver complexity, reducing the size of the search set; it trades some bit error rate (BER) performance for lower complexity at the receiver. Simulation results show that the PAPR is improved and the DIP is solved in both schemes.
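For context, here is a minimal sketch of the plain DDST transmit step described above (not the proposed precoders): the periodic mean of the data block is subtracted so that the periodic training sequence can later be recovered free of data-induced interference. The block length N, training period P, and BPSK data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 64, 8                     # block length and training period (illustrative)

def ddst_transmit(d, c):
    """DDST: subtract the data's periodic mean (the data-dependent
    sequence) before adding the period-P training sequence c."""
    mean_per_phase = d.reshape(N // P, P).mean(axis=0)
    e = -np.tile(mean_per_phase, N // P)      # data-dependent sequence
    return d + e + np.tile(c, N // P)

d = rng.choice([1, -1], size=N).astype(float)  # BPSK data
c = rng.standard_normal(P)                     # training sequence, period P
s = ddst_transmit(d, c)

# The periodic average of s now equals c exactly, so the data no longer
# interferes with channel estimation at the training frequencies.
print(np.allclose(s.reshape(N // P, P).mean(axis=0), c))
```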
403

Investigation Of The Significance Of Periodicity Information In Speaker Identification

Gursoy, Secil 01 April 2008 (has links) (PDF)
In this thesis, general feature selection methods, and especially the use of periodicity and aperiodicity information, are investigated for the speaker identification task. A software system is constructed to obtain periodicity and aperiodicity information from speech. This information is obtained using a 16-channel filterbank and analyzing the channel outputs frame by frame according to the pitch of each frame; the pitch value of a frame is itself found using periodicity algorithms. A Parzen window (kernel density estimation) is used to represent each person's selected phoneme. The constructed method is tested on different phonemes in order to find out its usability across phonemes. Periodicity features are also used together with MFCC features to find out their contribution to the speaker identification problem.
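As an illustration of the density-modelling step, the following is a one-dimensional Parzen-window (Gaussian-kernel) estimator. The thesis's actual features are multichannel periodicity measures, so the data here are purely synthetic.

```python
import numpy as np

def parzen_density(x, samples, h=0.1):
    """Parzen-window (kernel density) estimate with a Gaussian kernel:
    the average of kernels of bandwidth h centred at each sample."""
    z = (x[:, None] - samples[None, :]) / h
    k = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / h

rng = np.random.default_rng(2)
feats = rng.normal(loc=1.0, scale=0.3, size=200)  # synthetic phoneme features
grid = np.linspace(0.0, 2.0, 5)
print(parzen_density(grid, feats, h=0.1))
```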
404

Controlling High Quality Manufacturing Processes: A Robustness Study Of The Lower-sided TBE EWMA Procedure

Pehlivan, Canan 01 September 2008 (has links) (PDF)
In quality control applications, time-between-events (TBE) observations may be monitored using exponentially weighted moving average (EWMA) control charts. A widely accepted model for TBE processes is the exponential distribution, and TBE EWMA charts are therefore designed under this assumption. Nevertheless, practical applications do not always conform to the theory, and it is common for the observations not to fit the exponential model. Control charts that are robust to departures from the assumed distribution are therefore desirable in practice. In this thesis, the robustness of the lower-sided TBE EWMA chart to the assumption of exponentially distributed observations is investigated. Weibull and lognormal distributions are considered in order to represent departures from the assumed exponential model, and a Markov chain approach is utilized for evaluating the performance of the chart. By analyzing the performance results, design settings are suggested for achieving robust lower-sided TBE EWMA charts.
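A minimal sketch of a lower-sided TBE EWMA chart follows, using the asymptotic lower control limit for exponential observations with mean mu0. The smoothing constant, limit width, and simulated run-length check are illustrative assumptions; the thesis evaluates run lengths with a Markov chain approach rather than by simulation.

```python
import numpy as np

def lower_tbe_ewma(x, lam=0.1, mu0=1.0, L=2.0):
    """Lower-sided EWMA on time-between-events data: the statistic
    dropping below the lower control limit signals that events are
    arriving faster than the in-control exponential(mu0) model."""
    sigma = mu0                                   # std of exponential(mu0)
    lcl = mu0 - L * sigma * np.sqrt(lam / (2 - lam))
    z = mu0                                       # start at the target
    for i, xi in enumerate(x):
        z = (1 - lam) * z + lam * xi
        if z < lcl:
            return i + 1                          # run length at first signal
    return None

rng = np.random.default_rng(3)
incontrol = rng.exponential(1.0, 5000)
shifted = rng.exponential(0.4, 5000)   # shorter gaps: deteriorated process
print(lower_tbe_ewma(incontrol), lower_tbe_ewma(shifted))
```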
405

Energy Preserving Methods For Korteweg De Vries Type Equations

Simsek, Gorkem 01 July 2011 (has links) (PDF)
Two well-known types of water waves are shallow water waves and solitary waves. The former are waves whose wavelength is larger than the local water depth; the latter are waves that retain their shape and speed after colliding with each other. The best-known equations for the latter are the Korteweg de Vries (KdV) equations, which are widely used in many branches of physics and engineering. These equations describe nonlinear long waves and are mathematically represented by partial differential equations (PDEs). For solving the KdV and KdV-type equations, several numerical methods have been developed in recent years that preserve their geometric structure, i.e. the Hamiltonian form, symplecticity, and the integrals. These methods are classified as symplectic and multisymplectic integrators. They produce stable solutions in long-term integration, but they do not preserve the Hamiltonian and the symplectic structure at the same time. This thesis concerns the application of the energy-preserving average vector field (AVF) integrator to nonlinear Hamiltonian partial differential equations in canonical and non-canonical forms. The Korteweg de Vries (KdV) equation, the modified KdV equation, Ito's system, and the KdV-KdV systems are discretized in space in a way that preserves the skew-symmetry of the Hamiltonian structure. The resulting ordinary differential equations (ODEs) are solved with the AVF method. Numerical examples confirm that the energy is preserved in long-term integration and that the other integrals are well preserved too. Soliton and traveling-wave solutions for the KdV-type equations are as accurate as those obtained by other methods. The preservation of the dispersive properties of the AVF method is also shown for each PDE.
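For reference, for the system of ODEs $\dot{y} = f(y)$ obtained from the spatial discretization, the AVF method with step size $h$ reads

\[
\frac{y_{n+1} - y_n}{h} \;=\; \int_0^1 f\bigl(\xi\, y_{n+1} + (1 - \xi)\, y_n\bigr)\, d\xi ,
\]

and when $f(y) = S\,\nabla H(y)$ with a constant skew-symmetric matrix $S$, the scheme conserves the Hamiltonian exactly: $H(y_{n+1}) = H(y_n)$.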
406

Reliability Cost Model Design and Worth Analysis for Distribution System Planning

Yang, Chin-Der 29 May 2002 (has links)
Reliability worth analysis is an important tool for distribution system planning and operations. The interruption cost model used in the analysis directly affects the accuracy of the reliability worth evaluation. In this dissertation, reliability worth analysis is carried out in two phases with two interruption cost models: an average or aggregated model (AAM) and a probabilistic distribution model (PDM). In the first phase, the dissertation presents a reliability cost model based on the AAM for distribution system planning. The reliability cost model is derived as a linear function of line flows for evaluating outages. The objective is to minimize the total cost, including the outage cost, feeder resistive loss, and fixed investment cost. Evolutionary programming (EP) is used to solve this complicated mixed-integer, highly nonlinear, and non-differentiable problem, and a real distribution network is modeled as the sample system for tests; the EP process also offers a higher chance of reaching the global optimum. In the second phase, the PDM interruption cost model is proposed using a radial basis function (RBF) neural network with the orthogonal least-squares (OLS) learning method. The residential and industrial interruption costs in the PDM are integrated by the proposed neural network technique. A Monte Carlo time-sequential simulation technique is adopted for worth assessment. The technique is tested by evaluating the reliability worth of a Taipower system for the installation of disconnect switches, lateral fuses, transformers, and alternative supplies. The results show that the two cost models yield very different interruption costs, and that the PDM may be more realistic in modeling the system.
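To make the PDM building block concrete, below is a minimal RBF regression sketch that uses fixed Gaussian centres and plain least squares instead of the OLS centre-selection used in the dissertation; the duration-to-cost relationship is entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf_design(x, centers, width):
    """Design matrix of Gaussian basis functions at the given centres."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# Toy target: interruption cost as a function of outage duration (hypothetical)
dur = rng.uniform(0, 8, 200)                        # hours
cost = 5 * dur + 2 * np.sin(dur) + rng.normal(0, 0.5, 200)

centers = np.linspace(0, 8, 12)                     # fixed Gaussian centres
Phi = rbf_design(dur, centers, width=0.8)
w, *_ = np.linalg.lstsq(Phi, cost, rcond=None)      # least-squares weights

test = np.array([1.0, 4.0, 7.0])
print(rbf_design(test, centers, 0.8) @ w)           # predicted costs
```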
407

Consecutive Orthogonal Arrays on Design of Power Electronic Circuits

Yen, Hau-Chen 16 January 2003 (has links)
An approach based on "consecutive orthogonal arrays" (COA) is proposed for solving design problems in power electronic circuits. The approach is conceptually based on the orthogonal array method, which has been successfully applied in quality engineering. The circuit parameters to be determined are assigned as the control variables of the orthogonal arrays. Incorporating the inferential rules, the average effects of the levels of each control variable are used as indices to determine the control variable levels of the subsequent orthogonal array. By manipulating the COA, circuit parameters with the desired circuit performance can be found from an effectively reduced number of numerical calculations or experimental tests. In this dissertation, the COA method is applied to four problems often encountered in the design of power electronic circuits. The first problem is finding the combination with the best performance from a great number of analyzed results; the illustrative example is the design of LC passive filters, where the COA method finds the desired component values effectively and efficiently with far fewer calculations. The second design problem arises from the nonlinearity of the circuit: an experienced engineer may be able to figure out circuit parameters with satisfactory performance based on prior knowledge of the circuit, but the question always remains whether a better choice exists. The typical case is the self-excited resonant electronic ballast, with the nonlinear characteristics of the saturated transformer and the power-transistor storage time; here, the average effects of the COA obtained from experimental tests are used as observational indices to search for a combination of circuit parameters giving the desired lamp power. The third problem is that circuit functions are mutually exclusive, and designers are greatly perplexed in deciding on circuit parameters with which all functions are met at the same time; the COA method is applied to design a filter circuit that achieves low EMI noise and high power factor simultaneously. Finally, one has to cope with the effects of uncontrolled variables, such as ambient temperature, divergence among manufacturers, and hours of use. By applying COA with inferential rules, electronic ballasts can be robustly designed to operate fluorescent lamps at satisfactory performance under the influence of these uncontrolled variables.
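The sketch below illustrates a single iteration of the underlying mechanism: evaluate an L4(2^3) orthogonal array, compute the average effect of each control-variable level, and keep the better level for the next, narrower array. The response function stands in for a circuit simulation or bench test and is purely hypothetical.

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs covering 3 two-level factors (levels 0/1)
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def performance(levels):
    """Hypothetical stand-in for a circuit simulation or measurement
    (quadratic deviation from a target; lower is better)."""
    vals = np.where(levels == 0, [1.0, 2.2, 4.7], [2.2, 4.7, 10.0])
    return (vals.sum() - 9.0) ** 2

y = np.array([performance(run) for run in L4])

# Average effect of each level; the better level seeds the next array.
for f in range(3):
    eff0 = y[L4[:, f] == 0].mean()
    eff1 = y[L4[:, f] == 1].mean()
    print(f"factor {f}: keep level {0 if eff0 < eff1 else 1} "
          f"(avg effects {eff0:.2f} vs {eff1:.2f})")
```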
408

Optimal filter design approaches to statistical process control for autocorrelated processes

Chin, Chang-Ho 01 November 2005 (has links)
Statistical Process Control (SPC), and in particular control charting, is widely used to achieve and maintain control of various processes in manufacturing. A control chart is a graphical display that plots quality characteristics versus the sample number or the time line. Interest in effective implementation of control charts for autocorrelated processes has increased in recent years. However, because of the complexities involved, few systematic design approaches have thus far been developed. Many control charting methods can be viewed as the charting of the output of a linear filter applied to the process data. In this dissertation, we generalize the concept of linear filters for control charts and propose new control charting schemes, the general linear filter (GLF) and the 2nd-order linear filter, based on the generalization. In addition, their optimal design methodologies are developed, where the filter parameters are optimally selected to minimize the out-of-control Average Run Length (ARL) while constraining the in-control ARL to some desired value. The optimal linear filters are compared with other methods in terms of ARL performance, and a number of their interesting characteristics are discussed for various types of mean shifts (step, spike, sinusoidal) and various ARMA process models (i.i.d., AR(1), ARMA(1,1)). Also, in this work, a new discretization approach for substantially reducing the computational time and memory use for the Markov chain method of calculating the ARL is proposed. Finally, a gradient-based optimization strategy for searching optimal linear filters is illustrated.
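A small sketch of the "control chart as linear filter" view described above: the chart statistic is the output of an FIR filter applied to the process data. The truncated EWMA weights and moving-average window are two illustrative impulse responses; the dissertation's GLF instead optimizes the weights against ARL constraints.

```python
import numpy as np

def linear_filter_chart(x, h):
    """Chart statistic = output of a linear (FIR) filter with impulse
    response h applied to the process data; EWMA and moving-average
    charts correspond to particular choices of h."""
    return np.convolve(x, h)[: len(x)]

lam, n = 0.2, 40
ewma_h = lam * (1 - lam) ** np.arange(n)   # truncated EWMA weights
ma_h = np.full(8, 1 / 8)                   # 8-sample moving average

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 100),          # in control
                    rng.normal(1.5, 1, 50)])        # step mean shift
print("EWMA filter, max before/after shift:",
      linear_filter_chart(x, ewma_h)[:100].max(),
      linear_filter_chart(x, ewma_h)[100:].max())
print("MA filter,   max before/after shift:",
      linear_filter_chart(x, ma_h)[:100].max(),
      linear_filter_chart(x, ma_h)[100:].max())
```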
409

The Impact of the Samantha Academy of Creative Education (SACE) on Students Placed At-Risk at a Suburban High School in Southwest Texas

Valdez, Patrick J. 16 January 2010 (has links)
Reducing student dropout is of extreme importance to the United States; the loss, in revenue as well as in human terms, is huge. Several problems complicate work with students placed at-risk of dropping out: there is no agreed-upon method of calculating dropout rates, opinions differ on the causes of school dropout, and the literature on educational approaches for keeping students placed at-risk in school is sparse. This study examined the impact of the Samantha Academy of Creative Education (SACE) on students placed at-risk at a suburban high school in Southwest Texas, together with the perceptions of the program held by the teachers working in it. The population of this mixed-methods study consisted of secondary general education students from a large suburban high school in Southwest Texas who had been placed at-risk. One group consisted of students who participated in the SACE program, while the other consisted of a similar group of students who did not. Statistical tests were conducted to determine whether the two groups differed in graduation rate, attendance rate, and core grade average, and the perceptions of teachers who worked within the SACE program were gathered. Results indicate that students placed at-risk who participated in the SACE program had higher core grade averages, higher graduation rates, and higher attendance rates than students placed at-risk within the same high school who did not participate. Teachers' perceptions that the SACE program was efficacious for students placed at-risk clustered around three broad themes. The study further demonstrated that effective programs for helping students placed at-risk can be developed within the context of a regular high school setting. Recommendations for further research and implications for practice are provided.
410

Evaluation of clusterings of gene expression data

Lubovac, Zelmina January 2000 (has links)
Recent literature has investigated the use of different clustering techniques for analysis of gene expression data. For example, self-organizing maps (SOMs) have been used to identify gene clusters of clear biological relevance in human hematopoietic differentiation and the yeast cell cycle (Tamayo et al., 1999). Hierarchical clustering has also been proposed for identifying clusters of genes that share common roles in cellular processes (Eisen et al., 1998; Michaels et al., 1998; Wen et al., 1998). Systematic evaluation of clustering results is as important as generating the clusters. However, this is a difficult task, which is often overlooked in gene expression studies. Several gene expression studies claim success of the clustering algorithm without showing a validation of complete clusterings, for example Ben-Dor and Yakhini (1999) and Törönen et al. (1999).

In this dissertation we propose an evaluation approach based on a relative entropy measure that uses additional knowledge about genes (gene annotations) besides the gene expression data. More specifically, we use gene annotations in the form of an enzyme classification hierarchy to evaluate clusterings. This classification is based on the main chemical reactions that are catalysed by enzymes. Furthermore, we evaluate clusterings with pure statistical measures of cluster validity (compactness and isolation).

The experiments include applying two types of clustering methods (SOMs and hierarchical clustering) on a data set for which good annotation is available, so that the results can be partly validated from the viewpoint of biological relevance.

The evaluation of the clusters indicates that clusters obtained from hierarchical average linkage clustering have much higher relative entropy values and lower compactness and isolation compared to SOM clusters. Clusters with high relative entropy often contain enzymes that are involved in the same enzymatic activity. On the other hand, the compactness and isolation measures do not seem to be reliable for evaluation of clustering results.
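One way to realize an annotation-based relative entropy measure of the kind described (the dissertation's exact formulation may differ) is to compute, for each cluster, the KL divergence between the cluster's annotation-class distribution and the overall class distribution; high values indicate enrichment for particular enzyme classes. The labels and classes below are toy data.

```python
import numpy as np

def cluster_relative_entropy(labels, annotations):
    """Per-cluster KL divergence between the cluster's annotation-class
    distribution and the background (whole data set) distribution."""
    classes = sorted(set(annotations))
    bg = np.array([annotations.count(c) for c in classes], float)
    bg /= bg.sum()
    out = {}
    for k in set(labels):
        members = [a for l, a in zip(labels, annotations) if l == k]
        p = np.array([members.count(c) for c in classes], float)
        p /= p.sum()
        mask = p > 0                      # 0 * log(0) terms contribute nothing
        out[k] = float(np.sum(p[mask] * np.log2(p[mask] / bg[mask])))
    return out

# Toy example: 2 clusters of genes annotated with enzyme classes
labels = [0, 0, 0, 0, 1, 1, 1, 1]
annotations = ["EC1", "EC1", "EC1", "EC2", "EC2", "EC2", "EC3", "EC3"]
print(cluster_relative_entropy(labels, annotations))
```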
