  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Throughput Scaling Laws in Point-to-Multipoint Cognitive Networks

Jamal, Nadia 07 1900
Simultaneous operation of different wireless applications in the same geographical region and the same frequency band gives rise to undesired interference. Since licensed (primary) applications have been granted priority access to the frequency spectrum, unlicensed (secondary) services should avoid imposing interference on the primary system. In other words, the secondary system's activity in the same bands should be controlled so that the primary system maintains its quality of service (QoS) requirements. In this thesis, we consider collocated point-to-multipoint primary and secondary networks that have simultaneous access to the same frequency band. In particular, we examine three different levels at which the two networks may coexist: the pure interference, asymmetric co-existence, and symmetric co-existence levels. At the pure interference level, both networks operate simultaneously regardless of their interference to each other. At the other two levels, at least one of the networks attempts to mitigate its interference to the other network by deactivating some of its users. Specifically, at the asymmetric co-existence level, the secondary network selectively deactivates its users based on knowledge of the interference and channel gains, whereas at the symmetric level, the primary network also schedules its users in the same way. Our aim is to derive the optimal sum-rates (i.e., throughputs) of both networks at each co-existence level as the number of users grows asymptotically, and to evaluate how the sum-rates scale with the network size. In order to obtain the asymptotic throughput results, we derive two propositions: one on the asymptotic behaviour of the largest order statistic and one on the asymptotic behaviour of the sum of lower order statistics. As a baseline comparison, we calculate the primary and secondary sum-rates under time division (TD) channel sharing.
Then, we compare the asymptotic secondary sum-rate under TD to that under simultaneous channel sharing, while ensuring the primary network maintains the same sum-rate in both cases. Our results indicate that simultaneous channel sharing at both the asymmetric and symmetric co-existence levels can outperform TD. Furthermore, this enhancement is achievable when user scheduling in uplink mode is based only on the interference gains to the opposite network and not on a network's own channel gains. In other words, the optimal secondary sum-rate is achievable by applying a scheduling strategy, referred to as the least interference strategy, which requires only knowledge of the interference gains and can be performed in a distributed way.
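The least interference strategy described above can be sketched as a simple selection rule. The snippet below is only an illustrative sketch under assumed notation (the function name, the use of scalar interference gains per user, and the fixed number of active users are assumptions, not the thesis's formulation):

```python
import numpy as np

def least_interference_schedule(interference_gains, num_active):
    """Keep the num_active users whose interference gains to the
    opposite network are smallest; deactivate the rest.
    (Hypothetical helper illustrating the 'least interference strategy'.)"""
    order = np.argsort(interference_gains)   # ascending interference gains
    active = order[:num_active]              # least-interfering users stay on
    return np.sort(active)

# toy example: 5 secondary users with given interference gains
gains = np.array([0.9, 0.1, 0.5, 0.05, 0.7])
print(least_interference_schedule(gains, 2))   # → [1 3]
```

Note that the rule needs only the interference gains to the opposite network, which is why, as the abstract observes, it can be carried out in a distributed fashion.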

Multiresolutional partial least squares and principal component analysis of fluidized bed drying

Frey, Gerald M. 14 April 2005
Fluidized bed dryers are used in the pharmaceutical industry for the batch drying of pharmaceutical granulate. Maintaining optimal hydrodynamic conditions throughout the drying process is essential to product quality. Due to the complex interactions inherent in the fluidized bed drying process, mechanistic models capable of identifying these optimal modes of operation are either unavailable or limited in their capabilities. Therefore, empirical models based on experimentally generated data are relied upon to study these systems.

Principal Component Analysis (PCA) and Partial Least Squares (PLS) are multivariate statistical techniques that project data onto linear subspaces that are the most descriptive of variance in a dataset. By modeling data in terms of these subspaces, a more parsimonious representation of the system is possible. In this study, PCA and PLS are applied to data collected from a fluidized bed dryer containing pharmaceutical granulate.

System hydrodynamics were quantified in the models using high frequency pressure fluctuation measurements. These pressure fluctuations have previously been identified as a characteristic variable of hydrodynamics in fluidized bed systems. As such, contributions from the macroscale, mesoscale, and microscales of motion are encoded into the signals. A multiresolutional decomposition using a discrete wavelet transformation was used to resolve these signals into components more representative of these individual scales before modeling the data.

The combination of multiresolutional analysis with PCA and PLS was shown to be an effective approach for modeling the conditions in the fluidized bed dryer. In this study, datasets from both steady state and transient operation of the dryer were analyzed. The steady state dataset contained measurements made on a bed of dry granulate and the transient dataset consisted of measurements taken during the batch drying of granulate from approximately 33 wt.% moisture to 5 wt.%.
Correlations involving several scales of motion were identified in both studies.

In the steady state study, deterministic behavior related to superficial velocity, pressure sensor position, and granulate particle size distribution was observed in PCA model parameters. It was determined that these properties could be characterized solely with the use of the high frequency pressure fluctuation data. Macroscopic hydrodynamic characteristics such as bubbling frequency and fluidization regime were identified in the low frequency components of the pressure signals and the particle scale interactions of the microscale were shown to be correlated to the highest frequency signal components. PLS models were able to characterize the effects of superficial velocity, pressure sensor position, and granulate particle size distribution in terms of the pressure signal components. Additionally, it was determined that statistical process control charts capable of monitoring the fluid bed hydrodynamics could be constructed using PCA.

In the transient drying experiments, deterministic behaviors related to inlet air temperature, pressure sensor position, and initial bed mass were observed in PCA and PLS model parameters. The lowest frequency component of the pressure signal was found to be correlated to the overall temperature effects during the drying cycle. As in the steady state study, bubbling behavior was also observed in the low frequency components of the pressure signal. PLS was used to construct an inferential model of granulate moisture content. The model was found to be capable of predicting the moisture throughout the drying cycle. Preliminary statistical process control models were constructed to monitor the fluid bed hydrodynamics throughout the drying process.
These models show promise but will require further investigation to better determine sensitivity to process upsets.

In addition to PCA and PLS analyses, Multiway Principal Component Analysis (MPCA) was used to model the drying process. Several key states related to the mass transfer of moisture and changes in temperature throughout the drying cycle were identified in the MPCA model parameters. It was determined that the mass transfer of moisture throughout the drying process affects all scales of motion and overshadows other hydrodynamic behaviors found in the pressure signals.
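As a rough illustration of the modeling pipeline described above, the sketch below combines a one-level Haar wavelet split (a minimal stand-in for the multiresolutional decomposition) with PCA computed via the SVD. All names, array sizes, and the random "pressure records" are illustrative assumptions, not the thesis's data:

```python
import numpy as np

def haar_level(x):
    # one level of a Haar discrete wavelet transform: low-pass
    # (approximation) and high-pass (detail) coefficients
    x = x[: len(x) // 2 * 2].reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return approx, detail

def pca(data, n_components):
    # PCA via SVD on mean-centred data (rows = observations)
    centred = data - data.mean(axis=0)
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    scores = centred @ vt[:n_components].T
    return scores, vt[:n_components], s

rng = np.random.default_rng(0)
pressure = rng.normal(size=(8, 64))      # 8 synthetic pressure records
features = np.array([np.concatenate(haar_level(row)) for row in pressure])
scores, loadings, _ = pca(features, 2)
print(scores.shape)                      # → (8, 2)
```

In the thesis the decomposition is multi-level and the data are real pressure fluctuation signals; this sketch only shows how wavelet coefficients can feed a subspace model.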

Global and Multi-Input-Multi-Output (MIMO) Extensions of the Algorithm of Mode Isolation (AMI)

Allen, Matthew Scott 18 April 2005
A wide range of dynamic systems can be approximated as linear and time invariant, and a wealth of tools is available to characterize or modify the dynamic characteristics of such systems. Experimental modal analysis (EMA) is a procedure whereby the natural frequencies, damping ratios, and mode shapes which parameterize vibratory, linear, time-invariant systems are derived from experimentally measured response data. EMA is commonly applied in a multitude of applications, for example, to generate experimental models of dynamic systems, to validate finite element models, and to characterize dissipation in vibratory systems. Recently, EMA has also been used to characterize damage or defects in a variety of systems. The Algorithm of Mode Isolation (AMI), presented by Drexel and Ginsberg in 2001, employs a unique strategy for modal parameter estimation in which modes are sequentially identified and subtracted from a set of FRFs; their natural frequencies, damping ratios, and mode vectors are then refined through an iterative procedure. This contrasts with conventional multi-degree-of-freedom (MDOF) identification algorithms, most of which attempt to identify all of the modes of a system simultaneously. This dissertation presents a hybrid multi-input-multi-output (MIMO) implementation of the algorithm of mode isolation that improves the performance of AMI for systems with very close or weakly excited modes. The algorithmic steps are amenable to semi-automatic identification, and many FRFs can be processed efficiently and without concern for ill-conditioning, even when many modes are identified. The performance of the algorithm is demonstrated on noise-contaminated analytical response data from two systems having close modes, one of which has localized modes while the other has globally responsive modes. The results are compared with other popular algorithms.
MIMO-AMI is also applied to experimentally obtained data from shaker excited tests of the Z24 highway bridge, demonstrating the algorithm's performance on a data set typical of many EMA applications. Considerations for determining the number of modes active in the frequency band of interest are addressed, and the results obtained are compared to those found by other groups of researchers.
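The subtract-and-refine idea behind mode isolation can be caricatured in a few lines. The sketch below is not Drexel and Ginsberg's algorithm: it omits the iterative refinement of frequencies, damping ratios, and mode vectors, and it assumes the damping ratio is known. It only shows the sequential isolate-and-subtract structure on a synthetic two-mode FRF:

```python
import numpy as np

def sdof_frf(w, wr, zr, A):
    # frequency response of one mode: complex residue A, natural
    # frequency wr, damping ratio zr
    return A / (wr**2 - w**2 + 2j * zr * wr * w)

def isolate_modes(w, H, n_modes, zeta_guess=0.01):
    # repeatedly pick the dominant peak, fit a single-mode model
    # there, and subtract it from the running residual
    residual = H.copy()
    modes = []
    for _ in range(n_modes):
        k = int(np.argmax(np.abs(residual)))
        wr = w[k]
        # residue chosen so the fit matches the residual at its peak
        A = residual[k] * (2j * zeta_guess * wr**2)
        modes.append((wr, A))
        residual = residual - sdof_frf(w, wr, zeta_guess, A)
    return modes, residual

w = np.linspace(1.0, 30.0, 2000)
H = sdof_frf(w, 10.0, 0.01, 1.0) + sdof_frf(w, 20.0, 0.01, 1.0)
modes, res = isolate_modes(w, H, 2)
print(sorted(round(wr, 1) for wr, _ in modes))   # → [10.0, 20.0]
```

Real AMI refines each subtracted mode against the full data set, which is what makes it robust for close or weakly excited modes; this sketch would degrade in exactly those cases.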

Type-2 Neuro-Fuzzy System Modeling with Hybrid Learning Algorithm

Yeh, Chi-Yuan 19 July 2011
We propose a novel approach for building a type-2 neuro-fuzzy system from a given set of input-output training data. For an input pattern, a corresponding crisp output of the system is obtained by combining the inferred results of all the rules into a type-2 fuzzy set, which is then defuzzified by applying a type reduction algorithm. Karnik and Mendel proposed an algorithm, called the KM algorithm, to compute the centroid of an interval type-2 fuzzy set efficiently. Based on this algorithm, Liu developed a centroid type-reduction strategy to perform type reduction for type-2 fuzzy sets: a type-2 fuzzy set is decomposed into a collection of interval type-2 fuzzy sets by α-cuts, and the KM algorithm is then applied to each interval type-2 fuzzy set iteratively. However, the switch point in each application of the KM algorithm is poorly initialized. In this thesis, we present an improvement to Liu's algorithm: we employ the result obtained previously to construct the starting values in the current application of the KM algorithm. Every application except the first then converges faster, and type reduction for type-2 fuzzy sets can be done more quickly. The efficiency of the improved algorithm is analyzed mathematically and demonstrated by experimental results. Constructing a type-2 neuro-fuzzy system involves two major phases: structure identification and parameter identification. We propose a method which incorporates a self-constructing fuzzy clustering algorithm and an SVD-based least squares estimator for structure identification in type-2 neuro-fuzzy modeling. The self-constructing fuzzy clustering method is used to partition the training data set into clusters through input-similarity and output-similarity tests. The membership function associated with each cluster is defined from the mean and deviation of the data points included in the cluster.
Then, applying the SVD-based least squares estimator, a type-2 TSK fuzzy IF-THEN rule is derived from each cluster to form a fuzzy rule base, after which a fuzzy neural network is constructed. In the parameter identification phase, the parameters associated with the rules are refined through learning. We propose a hybrid learning algorithm which incorporates particle swarm optimization and an SVD-based least squares estimator to refine the antecedent parameters and the consequent parameters, respectively. We demonstrate the effectiveness of our proposed approach in constructing type-2 neuro-fuzzy systems by showing the results for two nonlinear functions and two real-world benchmark datasets. In addition, we use the proposed approach to construct a type-2 neuro-fuzzy system to forecast the daily Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). Experimental results show that our forecasting system performs better than other methods.
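The KM type-reduction step discussed above has a compact iterative form. The following is a sketch of the standard Karnik-Mendel iteration for a sampled interval type-2 fuzzy set; the simple mid-membership initialization shown here is the kind of starting point the thesis improves upon by reusing results from the previous α-cut:

```python
import numpy as np

def km_centroid(x, lmf, umf, tol=1e-9):
    """Karnik-Mendel iteration for the centroid [yl, yr] of an interval
    type-2 fuzzy set sampled at points x, with lower and upper
    membership grades lmf and umf (a generic sketch)."""
    def endpoint(left):
        theta = (lmf + umf) / 2.0          # simple initialization
        while True:
            y = x @ theta / theta.sum()
            # switch rule: for yl use upper grades below y and lower
            # grades above; for yr the other way around
            theta_new = np.where(x <= y,
                                 umf if left else lmf,
                                 lmf if left else umf)
            y_new = x @ theta_new / theta_new.sum()
            if abs(y_new - y) < tol:
                return y_new
            theta = theta_new
    return endpoint(True), endpoint(False)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
umf = np.array([0.2, 0.6, 1.0, 0.6, 0.2])   # upper membership grades
lmf = np.array([0.1, 0.4, 0.8, 0.4, 0.1])   # lower membership grades
yl, yr = km_centroid(x, lmf, umf)
print(round(yl, 4), round(yr, 4))           # → 1.8095 2.1905
```

Because the example is symmetric about x = 2, the centroid interval is symmetric about 2 as well, which gives a quick sanity check on the iteration.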

Neuro-Fuzzy System Modeling with Self-Constructed Rules and Hybrid Learning

Ouyang, Chen-Sen 09 November 2004
Neuro-fuzzy modeling is an efficient computing paradigm for system modeling problems. It integrates two well-known approaches, neural networks and fuzzy systems, and therefore possesses their advantages: learning capability, robustness, human-like reasoning, and high understandability. Up to now, many approaches have been proposed for neuro-fuzzy modeling. However, many problems still remain to be solved. In this thesis we propose two self-constructing rule generation methods, similarity-based rule generation (SRG) and similarity-and-merge-based rule generation (SMRG), and one hybrid learning algorithm (HLA), for the structure identification and parameter identification phases, respectively, of neuro-fuzzy modeling. SRG and SMRG group the input-output training data into a set of fuzzy clusters incrementally, based on similarity tests in the input and output spaces. Membership functions associated with each cluster are defined according to the statistical means and deviations of the data points included in the cluster. Additionally, SMRG employs a merging mechanism to merge similar clusters dynamically. A zero-order or first-order TSK-type fuzzy IF-THEN rule is then extracted from each cluster to form an initial fuzzy rule base, which can be employed directly for fuzzy reasoning or be refined further in the subsequent phase of parameter identification. Compared with other methods, both SRG and SMRG have the advantages of generating fuzzy rules quickly, matching membership functions closely to the real distribution of the training data points, and avoiding regeneration of the whole set of clusters from scratch when new training data are considered. In addition, SMRG supports a more reasonable and quicker mechanism for cluster merging, which alleviates the problems of data-input-order bias and redundant clusters encountered in SRG and other incremental clustering approaches.
To refine the fuzzy rules obtained in the structure identification phase, a zero-order or first-order TSK-type fuzzy neural network is constructed accordingly in the parameter identification phase. We then develop an HLA, composed of a recursive SVD-based least squares estimator and the gradient descent method, to train the network. Our HLA has the advantage of alleviating the local minimum problem. Moreover, it learns faster, consumes less memory, and produces lower approximation errors than other methods. To verify the practicability of our approaches, we apply them to function approximation and classification. For function approximation, we apply our approaches to model several nonlinear functions and real cases from measured input-output datasets. For classification, our approaches are applied to a problem of human object segmentation. A fuzzy self-clustering algorithm is used to divide the base frame of a video stream into a set of segments, which are then categorized as foreground or background based on a combination of multiple criteria. Human objects in the base frame and in the remaining frames of the video stream are then precisely located by a fuzzy neural network which is constructed from the fuzzy rules obtained previously and is trained by our proposed HLA. Experimental results show that our approaches can improve the accuracy of human object identification in video streams and work well even when the human object presents no significant motion in an image sequence.
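The incremental, similarity-based clustering idea underlying SRG can be sketched as follows. The similarity measure, the threshold value, and the mean-only update are illustrative assumptions; the deviation updates and the merging mechanism of SMRG are omitted:

```python
import numpy as np

class IncrementalClusterer:
    """Sketch of similarity-based self-constructing clustering: each
    point joins the most similar existing cluster or starts a new one."""

    def __init__(self, rho=0.3):
        self.rho = rho                    # similarity threshold (assumed)
        self.means, self.counts = [], []

    def similarity(self, x, mean):
        # Gaussian-style similarity in [0, 1] (illustrative choice)
        return float(np.exp(-np.sum((x - mean) ** 2)))

    def add(self, x):
        x = np.asarray(x, dtype=float)
        if self.means:
            sims = [self.similarity(x, m) for m in self.means]
            j = int(np.argmax(sims))
            if sims[j] >= self.rho:       # similar enough: update cluster mean
                n = self.counts[j]
                self.means[j] = (self.means[j] * n + x) / (n + 1)
                self.counts[j] = n + 1
                return j
        self.means.append(x)              # otherwise start a new cluster
        self.counts.append(1)
        return len(self.means) - 1

clu = IncrementalClusterer()
for p in [[0, 0], [0.1, 0], [5, 5], [5.1, 5]]:
    clu.add(p)
print(len(clu.means))   # → 2
```

Because clusters are updated in place as points arrive, no regeneration from scratch is needed when new data appear, which is the property the abstract emphasizes.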

A Study of Using the Decomposed Theory of Planned Behavior on the Adoption of e-Dealer Management System in Motorcycle Business

LIN, CHEN-SHENG 26 July 2006
Today's motorcycle business has reached the saturation point in the market of Taiwan; consequently, the major motorcycle companies have recently competed with each other in building dealer management systems (DMS) using e-solutions. Through the deployment of an e-DMS (e-solution dealer management system) for motorcycle shops, the manufacturers hope that all the channels can become more competitive. The purpose of this research is to explore the factors influencing the adoption of e-DMS by motorcycle shops. Following a review of the literature and empirical studies, the research model is based on the Decomposed Theory of Planned Behavior (Taylor and Todd, 1995b). This research surveyed 250 motorcycle shops. The results indicated that the factors influencing the adoption of e-DMS by motorcycle shops are as follows: (1) "Behavioral Intention" was principally influenced by "Attitude" and "Perceived Behavioral Control", the latter being less important than the former; "Subjective Norms" showed no obvious influence. (2) "Attitude" was mainly influenced by "Perceived Usefulness", "Perceived Ease of Use", and "Compatibility"; the first two factors were more important than the last one. (3) "Perceived Behavioral Control" was chiefly influenced by "Self-efficacy" and "Technology Facilitating Conditions", the latter being less essential than the former; "Resource Facilitating Conditions" showed no apparent influence. Finally, this research compares the explanatory power of three acceptance models: TAM (Davis, 1989), TPB (Ajzen, 1985), and D-TPB (Taylor and Todd, 1995b). The explanatory power of the three models was nearly equal, but because D-TPB considers constructs from social psychology, it offered somewhat better explanation than the others.

Robust Control Charts

Çetinyürek, Aysun 01 January 2007
ABSTRACT. ROBUST CONTROL CHARTS. Çetinyürek, Aysun. M.Sc., Department of Statistics. Supervisor: Dr. Barış Sürücü. Co-Supervisor: Assoc. Prof. Dr. Birdal Senoglu. December 2006, 82 pages. Control charts are among the most commonly used tools in statistical process control. A prominent tool of statistical process control is the Shewhart control chart, which depends on the assumption of normality. However, violations of the underlying normality assumption are common in practice. For this reason, control charts for both long- and short-tailed symmetric distributions are constructed using least squares estimators and the robust estimators: modified maximum likelihood, trim, MAD, and wave. In order to evaluate the performance of the charts under the assumed distribution and to investigate their robustness properties, the probability of plotting outside the control limits is calculated via the Monte Carlo simulation technique.
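The Monte Carlo evaluation described in the abstract can be illustrated with one of the robust estimators mentioned, the MAD. The sketch below estimates the probability of plotting outside MAD-based three-sigma limits under normality; the sample sizes, the individuals-chart (subgroup-free) setup, and the phase I/phase II split are assumptions, not the thesis's design:

```python
import numpy as np

rng = np.random.default_rng(1)

def mad_sigma(x):
    # median absolute deviation, scaled by 1.4826 so that it
    # consistently estimates the standard deviation under normality
    return 1.4826 * np.median(np.abs(x - np.median(x)))

# phase I: robust centre and spread from in-control data
phase1 = rng.normal(0.0, 1.0, size=5000)
center, spread = np.median(phase1), mad_sigma(phase1)
lcl, ucl = center - 3 * spread, center + 3 * spread

# Monte Carlo: probability an in-control observation plots outside
phase2 = rng.normal(0.0, 1.0, size=200_000)
p_out = float(np.mean((phase2 < lcl) | (phase2 > ucl)))
print(round(p_out, 4))   # close to the nominal 0.0027 for a normal chart
```

Repeating the same experiment with long- or short-tailed symmetric alternatives in place of the normal draws is how the robustness comparison in the thesis would proceed.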

Constructing Panoramic Scenes From Aerial Videos

Erdem, Elif 01 December 2007
In this thesis, we address the problem of panoramic scene construction, in which a single image covering the entire visible area of the scene is constructed from an aerial video. In the literature, several algorithms have been developed for constructing the panoramic scene of a video sequence. These algorithms can be categorized as feature-based and featureless. In this thesis, we concentrate on the feature-based algorithms and compare them on aerial videos. The comparison is performed on video sequences captured by non-stationary cameras whose optical axes need not coincide. In addition, the matching and tracking performances of the algorithms are analyzed separately, their advantages and disadvantages are presented, and several modifications are proposed.
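Feature-based panorama construction ultimately reduces to estimating a homography between frames from matched features. The sketch below shows the standard direct linear transform (DLT) step on synthetic correspondences; it is a generic building block under assumed notation, not any specific algorithm compared in the thesis:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: estimate the 3x3 homography H mapping
    src points to dst points (>= 4 correspondences), a standard step
    in feature-based image stitching."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear constraints on H
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)       # null-space vector of the system
    return H / H[2, 2]             # normalize so H[2, 2] = 1

# pure translation by (10, 5): the estimated H should recover it
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(x + 10, y + 5) for x, y in src]
H = dlt_homography(src, dst)
print(np.round(H, 3))
```

In practice the correspondences come from a feature detector and matcher, and a robust loop such as RANSAC rejects mismatches before the DLT is applied.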

Joint Frequency Offset And Channel Estimation

Avan, Muhammet 01 December 2008
In this thesis, joint frequency offset and channel estimation methods for single-input single-output (SISO) systems are examined. The performance of the maximum likelihood estimates of the parameters is studied for different training sequences. Conventionally, training sequences are designed solely for the purpose of channel estimation. We present a numerical comparison of different training sequences for the joint estimation problem. The performance comparisons are made in terms of mean square estimation error (MSE) versus SNR and MSE versus total training energy. A novel estimation scheme using complementary sequences has been proposed and compared with existing schemes. The proposed scheme yields a lower estimation error than the others in almost all numerical simulations. The thesis also includes an extension of the joint channel and frequency offset estimation problem to multi-input multi-output (MIMO) systems, and a brief discussion of the multiple-frequency-offset case is also given.
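A common baseline for training-based frequency offset estimation is correlating the two halves of a repeated training sequence, whose correlation phase reveals the offset. This is a generic Moose-style sketch for context, not the thesis's proposed complementary-sequence scheme, and the noiseless channel is an assumption:

```python
import numpy as np

def estimate_cfo(rx, half_len):
    # correlate the two identical halves of the received training part;
    # the phase of the correlation encodes the frequency offset
    first, second = rx[:half_len], rx[half_len:2 * half_len]
    corr = np.vdot(first, second)                    # sum conj(first) * second
    return np.angle(corr) / (2 * np.pi * half_len)   # offset in cycles/sample

rng = np.random.default_rng(2)
N = 32
train = np.exp(1j * 2 * np.pi * rng.random(N))   # unit-modulus training symbols
tx = np.concatenate([train, train])              # two identical halves
eps = 0.004                                      # true offset, cycles/sample
rx = tx * np.exp(1j * 2 * np.pi * eps * np.arange(2 * N))
print(round(estimate_cfo(rx, N), 4))             # → 0.004
```

With noise added, the MSE of this estimate versus SNR is exactly the kind of curve the thesis uses to compare training sequence designs.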

Development Of A Multigrid Accelerated Euler Solver On Adaptively Refined Two- And Three-dimensional Cartesian Grids

Cakmak, Mehtap 01 July 2009
Cartesian grids offer a valuable option for simulating aerodynamic flows around complex geometries such as multi-element airfoils, aircraft, and rockets. Therefore, an adaptively-refined Cartesian grid generator and Euler solver are developed. For the mesh generation part of the algorithm, dynamic data structures are used to determine connectivity information between cells, and a uniform mesh is created in the domain. Marching squares and marching cubes algorithms are used to form the interfaces of cut and split cells. Geometry-based cell adaptation is applied in the mesh generation. After obtaining an appropriate mesh around the input geometry, the solution is obtained using either the flux vector splitting method or Roe's approximate Riemann solver with a cell-centered approach. Least squares reconstruction of the flow variables within each cell is used to determine high-gradient regions of the flow. A solution-based adaptation method is then applied to the current mesh in order to refine these regions and to coarsen regions where unnecessarily small cells exist. Multistage time stepping with local time steps and the FAS multigrid technique are used to increase the convergence rate. Implementation of geometry- and solution-based adaptation is clearly easier for Cartesian meshes than for other types of meshes, and the presented numerical results demonstrate the accuracy and efficiency of the algorithm, especially of its geometry- and solution-based adaptation. Finally, Euler solutions on Cartesian grids around airfoils, projectiles, and wings are compared with the experimental and numerical data available in the literature, and the accuracy and efficiency of the solver are verified.
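Geometry-based cell adaptation of a Cartesian grid can be sketched as a recursive quadtree refinement of cells cut by the body. The 2D circle geometry, the intersection test, and the refinement depth below are illustrative assumptions standing in for the thesis's marching-squares cut-cell machinery:

```python
def refine(cell, level, max_level, intersects, leaves):
    """Split every cell that intersects the body until max_level;
    uncut cells are kept as leaves (a 2D quadtree sketch)."""
    x, y, size = cell
    if level == max_level or not intersects(x, y, size):
        leaves.append(cell)
        return
    half = size / 2
    for dx in (0, half):
        for dy in (0, half):
            refine((x + dx, y + dy, half), level + 1, max_level,
                   intersects, leaves)

def circle_cut(x, y, size, cx=0.5, cy=0.5, r=0.3):
    # a cell is cut if the circle boundary passes through it, i.e.
    # the nearest cell point is inside the circle and the farthest
    # corner is outside
    nx = min(max(cx, x), x + size)
    ny = min(max(cy, y), y + size)
    near = ((nx - cx) ** 2 + (ny - cy) ** 2) ** 0.5
    far = max(((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
              for px in (x, x + size) for py in (y, y + size))
    return near < r < far

leaves = []
refine((0.0, 0.0, 1.0), 0, 4, circle_cut, leaves)
print(len(leaves))
```

Solution-based adaptation works the same way, except the refinement test flags high-gradient regions (e.g. from the least squares reconstruction) instead of geometric intersection.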
