731 |
Training of Template-Specific Weighted Energy Function for Sequence-to-Structure Alignment
Lee, En-Shiun Annie, January 2008 (has links)
Threading is a protein structure prediction method that uses a library of template protein structures in the following steps: first, the target sequence is matched against the template library and the best template structure is selected; second, the predicted structure of the target sequence is modelled on this selected template structure. The declining rate at which new folds are added to the Protein Data Bank suggests that the template structure library is nearing completion. This thesis uses a new set of template-specific weights to improve the energy function for sequence-to-structure alignment in the template selection step of the threading process. The weights are estimated using least squares methods, with the quality of the modelling step in the threading process as the label. These new weights show an average 12.74% improvement in estimating the label. Further family analysis shows a correlation between the performance of the new weights and the number of seed sequences in Pfam.
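A minimal sketch of the weight-estimation step described above, assuming each alignment is summarized by a vector of energy-term values and labelled by the modelling-step quality; the data, the number of energy terms, and all names here are illustrative, not taken from the thesis:

```python
# Sketch: estimate template-specific weights by ordinary least squares.
# Each row of `features` holds the energy-term values for one
# sequence-to-template alignment; `quality` is the modelling-step label.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 5))          # 5 energy terms per alignment
true_w = np.array([1.5, -0.7, 0.3, 0.0, 2.1]) # synthetic "true" weights
quality = features @ true_w + 0.1 * rng.normal(size=200)  # noisy labels

# w minimizes ||features @ w - quality||^2
w, residuals, rank, sv = np.linalg.lstsq(features, quality, rcond=None)
print("estimated weights:", np.round(w, 2))
```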
|
733 |
Multiresolutional partial least squares and principal component analysis of fluidized bed drying
Frey, Gerald M., 14 April 2005 (has links)
Fluidized bed dryers are used in the pharmaceutical industry for the batch drying of pharmaceutical granulate. Maintaining optimal hydrodynamic conditions throughout the drying process is essential to product quality. Due to the complex interactions inherent in the fluidized bed drying process, mechanistic models capable of identifying these optimal modes of operation are either unavailable or limited in their capabilities. Therefore, empirical models based on experimentally generated data are relied upon to study these systems.

Principal Component Analysis (PCA) and Partial Least Squares (PLS) are multivariate statistical techniques that project data onto linear subspaces that are the most descriptive of variance in a dataset. By modeling data in terms of these subspaces, a more parsimonious representation of the system is possible. In this study, PCA and PLS are applied to data collected from a fluidized bed dryer containing pharmaceutical granulate.

System hydrodynamics were quantified in the models using high frequency pressure fluctuation measurements. These pressure fluctuations have previously been identified as a characteristic variable of hydrodynamics in fluidized bed systems. As such, contributions from the macroscale, mesoscale, and microscale of motion are encoded into the signals. A multiresolutional decomposition using a discrete wavelet transformation was used to resolve these signals into components more representative of these individual scales before modeling the data.

The combination of multiresolutional analysis with PCA and PLS was shown to be an effective approach for modeling the conditions in the fluidized bed dryer. In this study, datasets from both steady state and transient operation of the dryer were analyzed. The steady state dataset contained measurements made on a bed of dry granulate, and the transient dataset consisted of measurements taken during the batch drying of granulate from approximately 33 wt.% moisture to 5 wt.%. Correlations involving several scales of motion were identified in both studies.

In the steady state study, deterministic behavior related to superficial velocity, pressure sensor position, and granulate particle size distribution was observed in the PCA model parameters. It was determined that these properties could be characterized solely with the use of the high frequency pressure fluctuation data. Macroscopic hydrodynamic characteristics such as bubbling frequency and fluidization regime were identified in the low frequency components of the pressure signals, and the particle scale interactions of the microscale were shown to be correlated to the highest frequency signal components. PLS models were able to characterize the effects of superficial velocity, pressure sensor position, and granulate particle size distribution in terms of the pressure signal components. Additionally, it was determined that statistical process control charts capable of monitoring the fluid bed hydrodynamics could be constructed using PCA.

In the transient drying experiments, deterministic behaviors related to inlet air temperature, pressure sensor position, and initial bed mass were observed in the PCA and PLS model parameters. The lowest frequency component of the pressure signal was found to be correlated to the overall temperature effects during the drying cycle. As in the steady state study, bubbling behavior was also observed in the low frequency components of the pressure signal. PLS was used to construct an inferential model of granulate moisture content. The model was found to be capable of predicting the moisture throughout the drying cycle. Preliminary statistical process control models were constructed to monitor the fluid bed hydrodynamics throughout the drying process. These models show promise but will require further investigation to better determine sensitivity to process upsets.

In addition to the PCA and PLS analyses, Multiway Principal Component Analysis (MPCA) was used to model the drying process. Several key states related to the mass transfer of moisture and changes in temperature throughout the drying cycle were identified in the MPCA model parameters. It was determined that the mass transfer of moisture throughout the drying process affects all scales of motion and overshadows other hydrodynamic behaviors found in the pressure signals.
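A rough sketch of the multiresolution-then-model idea, assuming PyWavelets and scikit-learn are available; the synthetic signals, the db4 wavelet choice, and the per-scale energy features stand in for the thesis's pressure data and are not taken from it:

```python
# Sketch: decompose each pressure signal into wavelet components,
# summarize each scale by its energy, and project the features with PCA.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
signals = rng.normal(size=(50, 1024))   # 50 synthetic pressure time series

def component_energies(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA5, cD5, ..., cD1]
    return np.array([np.sum(c**2) for c in coeffs])  # energy per scale

X = np.vstack([component_energies(s) for s in signals])
pca = PCA(n_components=2)
scores = pca.fit_transform(X)           # low-dimensional hydrodynamic state
print(pca.explained_variance_ratio_)
```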
|
734 |
Global and Multi-Input-Multi-Output (MIMO) Extensions of the Algorithm of Mode Isolation (AMI)
Allen, Matthew Scott, 18 April 2005 (has links)
A wide range of dynamic systems can be approximated as linear and time invariant, and a wealth of tools is available to characterize or modify their dynamic characteristics. Experimental modal analysis (EMA) is a procedure whereby the natural frequencies, damping ratios, and mode shapes that parameterize vibratory, linear, time-invariant systems are derived from experimentally measured response data. EMA is commonly applied in a multitude of applications, for example, to generate experimental models of dynamic systems, to validate finite element models, and to characterize dissipation in vibratory systems. Recently, EMA has also been used to characterize damage or defects in a variety of systems.
The Algorithm of Mode Isolation (AMI), presented by Drexel and Ginsberg in 2001, employs a unique strategy for modal parameter estimation in which modes are sequentially identified and subtracted from a set of FRFs. Their natural frequencies, damping ratios, and mode vectors are then refined through an iterative procedure. This contrasts with conventional multi-degree-of-freedom (MDOF) identification algorithms, most of which attempt to identify all of the modes of a system simultaneously. This dissertation presents a hybrid multi-input-multi-output (MIMO) implementation of the algorithm of mode isolation that improves the performance of AMI for systems with very close or weakly excited modes. The algorithmic steps are amenable to semi-automatic identification, and many FRFs can be processed efficiently and without concern for ill-conditioning, even when many modes are identified. The performance of the algorithm is demonstrated on noise-contaminated analytical response data from two systems having close modes, one of which has localized modes while the other has globally responsive modes, and the results are compared with other popular algorithms. MIMO-AMI is also applied to experimentally obtained data from shaker-excited tests of the Z24 highway bridge, demonstrating the algorithm's performance on a data set typical of many EMA applications. Considerations for determining the number of modes active in the frequency band of interest are addressed, and the results obtained are compared to those found by other groups of researchers.
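A toy illustration of the isolate-and-subtract strategy that distinguishes AMI, under strong simplifications (a single synthetic FRF, a crude half-power damping estimate, and no iterative refinement); all parameters and the peak-picking logic are illustrative only, not the dissertation's algorithm:

```python
# Sketch: fit the dominant single mode of an FRF, subtract its
# contribution, and repeat on the residual.
import numpy as np

def sdof_frf(w, wn, zeta, A):
    return A / (wn**2 - w**2 + 2j * zeta * wn * w)

w = np.linspace(1, 100, 2000)
# Synthetic two-mode FRF with close modes (parameters are illustrative)
H = sdof_frf(w, 30, 0.01, 1.0) + sdof_frf(w, 32, 0.015, 0.8)

residual = H.copy()
for _ in range(2):
    k = np.argmax(np.abs(residual))              # dominant peak
    wn = w[k]
    half = np.abs(residual[k]) / np.sqrt(2)      # half-power level
    band = w[np.abs(residual) >= half]           # crude bandwidth estimate
    zeta = (band.max() - band.min()) / (2 * wn)
    A = residual[k] * (2j * zeta * wn**2)        # amplitude from peak value
    print(f"mode: wn ~ {wn:.2f}, zeta ~ {zeta:.4f}")
    residual -= sdof_frf(w, wn, zeta, A)         # isolate-and-subtract step
```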
|
735 |
Type-2 Neuro-Fuzzy System Modeling with Hybrid Learning Algorithm
Yeh, Chi-Yuan, 19 July 2011 (has links)
We propose a novel approach for building a type-2 neuro-fuzzy system from a given set of input-output training data. For an input pattern, a corresponding crisp output of the system is obtained by combining the inferred results of all the rules into a type-2 fuzzy set, which is then defuzzified by applying a type reduction algorithm. Karnik and Mendel proposed an algorithm, called the KM algorithm, to compute the centroid of an interval type-2 fuzzy set efficiently. Based on this algorithm, Liu developed a centroid type-reduction strategy to perform type reduction for type-2 fuzzy sets: a type-2 fuzzy set is decomposed into a collection of interval type-2 fuzzy sets by α-cuts, and the KM algorithm is then called for each interval type-2 fuzzy set iteratively. However, the initialization of the switch point in each application of the KM algorithm is poor. In this thesis, we present an improvement to Liu's algorithm: we employ the result obtained for the previous α-cut to construct the starting values in the current application of the KM algorithm. Each application after the first then converges faster, and type reduction for type-2 fuzzy sets can be done more quickly. The efficiency of the improved algorithm is analyzed mathematically and demonstrated by experimental results.
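A simplified sketch of the KM iteration for the left centroid end-point, with an optional warm-started switch point in the spirit of the improvement described above; the discretization, the membership functions, and the update details are illustrative assumptions, not the thesis's code:

```python
# Sketch: KM iteration for the left end-point of an interval type-2
# centroid, reusing the previous alpha-cut's switch point as a warm start.
import numpy as np

def km_left(x, lower, upper, k_init=None):
    """Left centroid end-point; lower/upper bound the membership grades."""
    n = len(x)
    k = k_init if k_init is not None else n // 2   # switch point guess
    while True:
        # Upper memberships left of the switch point, lower to the right
        w = np.where(np.arange(n) <= k, upper, lower)
        c = np.dot(w, x) / np.sum(w)
        k_new = min(max(int(np.searchsorted(x, c)) - 1, 0), n - 2)
        if k_new == k:
            return c, k
        k = k_new

x = np.linspace(0, 10, 101)
lower = 0.6 * np.exp(-0.5 * ((x - 5) / 1.0) ** 2)
upper = np.exp(-0.5 * ((x - 5) / 1.5) ** 2)
c_prev, k_prev = km_left(x, lower, upper)            # first alpha-cut
c_next, _ = km_left(x, lower, upper, k_init=k_prev)  # warm-started next cut
print(c_prev, c_next)
```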
Constructing a type-2 neuro-fuzzy system involves two major phases, structure identification and parameter identification. We propose a method that incorporates a self-constructing fuzzy clustering algorithm and an SVD-based least squares estimator for the structure identification of type-2 neuro-fuzzy modeling. The self-constructing fuzzy clustering method is used to partition the training data set into clusters through input-similarity and output-similarity tests. The membership function associated with each cluster is defined with the mean and deviation of the data points included in the cluster. Then, applying the SVD-based least squares estimator, a type-2 TSK fuzzy IF-THEN rule is derived from each cluster to form a fuzzy rule base, after which a fuzzy neural network is constructed. In the parameter identification phase, the parameters associated with the rules are refined through learning. We propose a hybrid learning algorithm that incorporates particle swarm optimization and an SVD-based least squares estimator to refine the antecedent parameters and the consequent parameters, respectively. We demonstrate the effectiveness of our proposed approach in constructing type-2 neuro-fuzzy systems by showing the results for two nonlinear functions and two real-world benchmark datasets. In addition, we use the proposed approach to construct a type-2 neuro-fuzzy system to forecast the daily Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). Experimental results show that our forecasting system performs better than other methods.
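A minimal particle swarm optimization sketch of the antecedent-refinement step, with a stand-in fitness function in place of the system's actual output error; the particle count, coefficients, and target vector are illustrative choices, not values from the thesis:

```python
# Sketch: particles encode candidate antecedent parameters and move
# toward their personal bests and the global best.
import numpy as np

rng = np.random.default_rng(2)

def fitness(p):                    # stand-in for the network's output error
    return np.sum((p - np.array([1.0, -2.0, 0.5])) ** 2)

n_particles, dim, iters = 20, 3, 100
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

w_in, c1, c2 = 0.7, 1.5, 1.5       # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("refined parameters:", np.round(gbest, 3))
```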
|
736 |
Neuro-Fuzzy System Modeling with Self-Constructed Rules and Hybrid Learning
Ouyang, Chen-Sen, 09 November 2004 (has links)
Neuro-fuzzy modeling is an efficient computing paradigm for system modeling problems. It integrates two well-known approaches, neural networks and fuzzy systems, and therefore possesses their combined advantages: learning capability, robustness, human-like reasoning, and high understandability. Up to now, many approaches have been proposed for neuro-fuzzy modeling. However, many problems remain to be solved.
We propose in this thesis two self-constructing rule generation methods, i.e., similarity-based rule generation (SRG) and similarity-and-merge-based rule generation (SMRG), and one hybrid learning algorithm (HLA), for the structure identification and parameter identification, respectively, of neuro-fuzzy modeling. SRG and SMRG group the input-output training data into a set of fuzzy clusters incrementally, based on similarity tests on the input and output spaces. Membership functions associated with each cluster are defined according to the statistical means and deviations of the data points included in the cluster. Additionally, SMRG employs a merging mechanism to merge similar clusters dynamically. A zero-order or first-order TSK-type fuzzy IF-THEN rule is then extracted from each cluster to form an initial fuzzy rule base, which can be directly employed for fuzzy reasoning or further refined in the subsequent phase of parameter identification. Compared with other methods, both SRG and SMRG have the advantages of generating fuzzy rules quickly, matching membership functions closely to the real distribution of the training data points, and avoiding regeneration of the whole set of clusters from scratch when new training data are considered. Moreover, SMRG supports a more reasonable and quicker mechanism for cluster merging, which alleviates the problems of data-input-order bias and redundant clusters encountered in SRG and other incremental clustering approaches.
To refine the fuzzy rules obtained in the structure identification phase, a zero-order or first-order TSK-type fuzzy neural network is constructed accordingly in the parameter identification phase. We then develop an HLA, composed of a recursive SVD-based least squares estimator and the gradient descent method, to train the network. Our HLA has the advantage of alleviating the local minimum problem. Moreover, it learns faster, consumes less memory, and produces lower approximation errors than other methods.
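A sketch of the SVD-based least squares idea used for the consequent parameters: solving through the pseudo-inverse stays stable even when the regressor matrix is rank-deficient. The data here is synthetic, and the batch (non-recursive) form is shown for brevity rather than the recursive estimator the thesis describes:

```python
# Sketch: minimum-norm least squares via SVD with small singular
# values dropped, as a stand-in for the consequent-parameter solve.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 8))        # firing-strength-weighted regressors
A[:, 7] = A[:, 0]                    # deliberately rank-deficient column
y = rng.normal(size=100)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)   # drop near-zero singular values
c = Vt.T @ (s_inv * (U.T @ y))            # minimum-norm LS solution
print(np.round(c, 3))
```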
To verify the practicability of our approaches, we apply them to function approximation and classification problems. For function approximation, we apply our approaches to model several nonlinear functions and real cases from measured input-output datasets. For classification, our approaches are applied to a problem of human object segmentation. A fuzzy self-clustering algorithm is used to divide the base frame of a video stream into a set of segments, which are then categorized as foreground or background based on a combination of multiple criteria. Then, human objects in the base frame and the remaining frames of the video stream are precisely located by a fuzzy neural network, which is constructed with the fuzzy rules previously obtained and trained by our proposed HLA. Experimental results show that our approaches can improve the accuracy of human object identification in video streams and work well even when the human object presents no significant motion in an image sequence.
|
737 |
Robust Control Charts
Cetinyurek, Aysun, 01 January 2007 (has links) (PDF)
ABSTRACT
ROBUST CONTROL CHARTS
Çetinyürek, Aysun
M.Sc., Department of Statistics
Supervisor: Dr. Barış Sürücü
Co-Supervisor: Assoc. Prof. Dr. Birdal Şenoğlu
December 2006, 82 pages
Control charts are one of the most commonly used tools in statistical process control. A prominent feature of statistical process control is the Shewhart control chart, which depends on the assumption of normality. However, violations of the underlying normality assumption are common in practice. For this reason, control charts for both long- and short-tailed symmetric distributions are constructed using least squares estimators and the robust estimators: modified maximum likelihood, trim, MAD, and wave. In order to evaluate the performance of the charts under the assumed distribution and to investigate their robustness properties, the probability of plotting outside the control limits is calculated via the Monte Carlo simulation technique.
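A small sketch contrasting classical Shewhart limits with limits built from robust location and scale estimators (here the median and MAD, one of the estimators listed above); the contaminated sample is synthetic and the three-sigma-style limits are illustrative:

```python
# Sketch: classical vs. robust control limits on long-tailed data.
# The 1.4826 factor makes MAD consistent for the normal std deviation.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(10, 1, size=200)
x[::25] += 8                        # occasional outliers (long tails)

mean, sd = x.mean(), x.std(ddof=1)
med = np.median(x)
mad = 1.4826 * np.median(np.abs(x - med))

print("classical limits:", mean - 3 * sd, mean + 3 * sd)
print("robust limits:   ", med - 3 * mad, med + 3 * mad)
```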
|
738 |
Constructing Panoramic Scenes From Aerial Videos
Erdem, Elif, 01 December 2007 (has links) (PDF)
In this thesis, we address the problem of panoramic scene construction, in which a single image covering the entire visible area of the scene is constructed from an aerial video.
In the literature, several algorithms have been developed for constructing the panoramic scene of a video sequence. These algorithms can be categorized as feature-based and featureless algorithms. In this thesis, we concentrate on the feature-based algorithms and compare them on aerial videos. The comparison is performed on video sequences captured by non-stationary cameras whose optical axes do not have to coincide. In addition, the matching and tracking performances of the algorithms are analyzed separately, their advantages and disadvantages are presented, and several modifications are proposed.
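A sketch of one feature-based mosaicking step of the kind compared in the thesis, assuming OpenCV is available: detect and match ORB features between two frames, estimate a homography with RANSAC, and warp one frame into the other's plane. The file names and parameter values are placeholders, and ORB stands in for whichever specific feature algorithms the thesis evaluates:

```python
# Sketch: stitch two frames via feature matching and a homography.
import cv2
import numpy as np

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust estimate

# Warp frame 1 into frame 2's coordinate frame and paste frame 2 on top.
h, w = img2.shape
mosaic = cv2.warpPerspective(img1, H, (2 * w, h))
mosaic[0:h, 0:w] = img2
cv2.imwrite("mosaic.png", mosaic)
```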
|
739 |
Development Of A Multigrid Accelerated Euler Solver On Adaptively Refined Two- And Three-dimensional Cartesian Grids
Cakmak, Mehtap, 01 July 2009 (has links) (PDF)
Cartesian grids offer a valuable option for simulating aerodynamic flows around complex geometries such as multi-element airfoils, aircraft, and rockets. Therefore, an adaptively refined Cartesian grid generator and Euler solver are developed. For the mesh generation part of the algorithm, dynamic data structures are used to determine connectivity information between cells, and a uniform mesh is created in the domain. Marching squares and marching cubes algorithms are used to form the interfaces of cut and split cells. Geometry-based cell adaptation is applied during mesh generation. After an appropriate mesh is obtained around the input geometry, the solution is computed with a cell-centered approach using either a flux vector splitting method or Roe's approximate Riemann solver. Least squares reconstruction of the flow variables within each cell is used to determine high-gradient regions of the flow. A solution-based adaptation method is then applied to the current mesh to refine these regions and to coarsen regions where unnecessarily small cells exist. Multistage time stepping with local time steps is used to increase the convergence rate, and the FAS multigrid technique is employed to accelerate convergence further. Implementing geometry- and solution-based adaptation is easier on Cartesian meshes than on other mesh types. The presented numerical results demonstrate the accuracy and efficiency of the algorithm, particularly when geometry- and solution-based adaptation is used. Finally, Euler solutions on Cartesian grids around airfoils, projectiles, and wings are compared with experimental and numerical data available in the literature, and the accuracy and efficiency of the solver are verified.
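A minimal sketch of the least squares reconstruction step used to flag high-gradient cells, assuming a 2D cell with known neighbour centroids and cell-averaged values; the geometry and values are illustrative, not from the thesis:

```python
# Sketch: fit grad(u) in one cell from differences to its neighbours.
import numpy as np

center = np.array([0.0, 0.0])
neighbours = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.1], [0.1, -1.0]])
u_c = 1.0
u_n = np.array([1.8, 1.3, 0.2, 0.9])

dX = neighbours - center             # displacement to each neighbour
du = u_n - u_c                       # value difference to each neighbour
grad, *_ = np.linalg.lstsq(dX, du, rcond=None)
print("reconstructed gradient:", grad)
# Cells whose gradient magnitude exceeds a threshold would be tagged
# for solution-based refinement.
```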
|
740 |
Development Of A Two-dimensional Navier-Stokes Solver For Laminar Flows Using Cartesian Grids
Sahin, Serkan Mehmet, 01 March 2011 (has links) (PDF)
A fully automated Cartesian/quad grid generator and laminar flow solver have been developed for external flows using C++. After the input geometry is defined by nodal points, adaptively refined Cartesian grids are generated automatically. A quadtree data structure is used to connect the Cartesian cells to each other. In order to simulate viscous flows, body-fitted quad cells can optionally be generated. Connectivity is provided by cut and split cells such that the intersection points of the Cartesian cells are used as the corners of the quads in the outermost row. Geometry-based adaptation methods for cut cells, split cells, and highly curved regions are applied to the uniform mesh generated around the geometry. After a sufficient resolution is obtained in the domain, the solution is computed with a cell-centered approach using a multistage time stepping scheme. Solution-based grid adaptations are carried out during the execution of the program in order to refine regions with high gradients and obtain sufficient resolution there. Moreover, the multigrid technique is implemented to accelerate convergence significantly. Several tests are performed to verify and validate the accuracy and efficiency of the code for inviscid and laminar flows.
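A minimal sketch of the quadtree idea used to connect Cartesian cells, with refinement driven by a user-supplied predicate (here, proximity to a circular body); this is an illustrative toy in Python, not the thesis's C++ implementation:

```python
# Sketch: each cell is either a leaf or holds four children; refinement
# recurses where the predicate fires, up to a maximum level.
class Cell:
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = None          # None => leaf cell

    def refine(self, needs_refinement, max_level=6):
        if self.level < max_level and needs_refinement(self):
            h = self.size / 2
            self.children = [
                Cell(self.x + dx * h, self.y + dy * h, h, self.level + 1)
                for dx in (0, 1) for dy in (0, 1)
            ]
            for c in self.children:
                c.refine(needs_refinement, max_level)

    def leaves(self):
        if self.children is None:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()

# Refine cells near a circular "body" of radius 0.3 centered at (0.5, 0.5).
def near_body(cell):
    cx, cy = cell.x + cell.size / 2, cell.y + cell.size / 2
    dist = ((cx - 0.5) ** 2 + (cy - 0.5) ** 2) ** 0.5
    return abs(dist - 0.3) < cell.size

root = Cell(0.0, 0.0, 1.0)
root.refine(near_body)
print("leaf cells:", sum(1 for _ in root.leaves()))
```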
|