561 |
Higher-Order Spectral/HP Finite Element Technology for Structures and Fluid Flows
Vallala, Venkat Pradeep 16 December 2013 (has links)
This study deals with the use of high-order spectral/hp approximation functions in finite element models of various nonlinear boundary-value and initial-value problems arising in structural mechanics and in flows of viscous incompressible fluids. For many of these classes of problems, high-order (typically, polynomial order p >= 4) spectral/hp finite element technology offers many computational advantages over traditional low-order (i.e., p < 3) finite elements. For instance, higher-order spectral/hp finite element procedures allow us to develop robust structural elements for beams, plates, and shells in a purely displacement-based setting that avoid all forms of numerical locking. The higher-order spectral/hp basis functions also reduce interpolation error in the numerical schemes, making them accurate and stable. Furthermore, for fluid flows, when combined with least-squares variational principles, this technology allows us to develop efficient finite element models that always yield a symmetric positive-definite (SPD) coefficient matrix, so that robust direct or iterative solvers can be used. The least-squares formulation avoids the ad hoc stabilization methods employed with traditional low-order weak-form Galerkin formulations. The use of spectral/hp finite element technology also results in better conservation of physical quantities (e.g., dilatation, volume, and mass) and stable evolution of variables with time in the case of unsteady flows. The present study uses spectral/hp approximations in (1) weak-form Galerkin finite element models of viscoelastic beams, (2) weak-form Galerkin displacement finite element models of shear-deformable elastic shell structures under thermal and mechanical loads, and (3) least-squares formulations for the Navier-Stokes equations governing flows of viscous incompressible fluids.
Numerical simulations of several non-trivial benchmark problems, carried out with the developed technology, are presented to illustrate the robustness of the higher-order spectral/hp finite element technology.
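The SPD property claimed for the least-squares formulation can be illustrated with a minimal sketch (an assumed model problem, not the thesis formulation): discretizing -u'' = f on [0, 1] and minimizing the squared residual over nodal values yields the normal matrix A^T A, which is symmetric positive-definite whenever A has full column rank, so robust Cholesky or conjugate-gradient solvers apply.

```python
import numpy as np

# Illustrative sketch: least-squares discretization of -u'' = f with
# u(0) = u(1) = 0. The operator A below is a finite-difference stand-in
# for the spectral/hp discretization; the point is that the least-squares
# coefficient matrix K = A^T A is always symmetric positive-definite.
n = 16
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # second-difference operator
f = np.ones(n)

K = A.T @ A              # least-squares (normal-equation) matrix
b = A.T @ f
u = np.linalg.solve(K, b)

print(np.allclose(K, K.T))                       # symmetric
print(bool(np.all(np.linalg.eigvalsh(K) > 0)))   # positive-definite
```

Because K is SPD regardless of the underlying operator's symmetry, the same solver machinery works even for the nonsymmetric convective terms of the Navier-Stokes equations.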
|
562 |
Assessment of Strategic Management Practices in Small Agribusiness Firms in Tanzania
Dominic, Theresia 11 May 2015 (has links)
No description available.
|
563 |
Estimation In The Simple Linear Regression Model With One-fold Nested Error
Ulgen, Burcin Emre 01 June 2005 (has links) (PDF)
In this thesis, estimation in the simple linear regression model with one-fold nested error is studied.
To estimate the fixed-effect parameters, generalized least squares and maximum likelihood estimation procedures are reviewed. Moreover, the Minimum Norm Quadratic Estimator (MINQE), the Almost Unbiased Estimator (AUE) and the Restricted Maximum Likelihood Estimator (REML) of the variance of primary units are derived.
Confidence intervals for the fixed-effect parameters and the variance components are also studied. Finally, the aforementioned estimation techniques and confidence intervals are applied to real-life data and the results are presented.
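A hedged sketch of the model being estimated, with invented data (not the thesis code): y_ij = b0 + b1*x_ij + u_i + e_ij, where u_i is the primary-unit (nested) error. With the variance components assumed known, the fixed effects follow from generalized least squares; in practice those components would themselves be estimated, e.g. by REML or MINQE as reviewed above.

```python
import numpy as np

# Simulate the one-fold nested error model (illustrative values).
rng = np.random.default_rng(0)
groups, per_group = 5, 10
sigma_u2, sigma_e2 = 2.0, 1.0          # assumed-known variance components

x = rng.uniform(0, 10, groups * per_group)
u = np.repeat(rng.normal(0, np.sqrt(sigma_u2), groups), per_group)
y = 1.0 + 0.5 * x + u + rng.normal(0, np.sqrt(sigma_e2), x.size)

X = np.column_stack([np.ones_like(x), x])
# Within-group covariance: sigma_e^2 I + sigma_u^2 J; block-diagonal overall.
Vb = sigma_e2 * np.eye(per_group) + sigma_u2 * np.ones((per_group, per_group))
Vinv = np.kron(np.eye(groups), np.linalg.inv(Vb))

# GLS estimator: (X' V^-1 X)^-1 X' V^-1 y
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print(beta)  # (b0, b1); the slope estimate should be close to the true 0.5
```

Ordinary least squares would still be unbiased here, but GLS weights out the within-group correlation and gives valid standard errors.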
|
564 |
Fuzzy Statistical Analysis for Change Periods Detection in Nonlinear Time Series
陳美惠 Unknown Date (has links)
Many papers have been presented on change-point detection. Nonetheless, we would like to point out that in dealing with time series with switching regimes, we should also take the characteristics of change periods into account. Because many patterns of structural change in time series exhibit a certain duration, those phenomena should not be treated as a mere sudden turn at a single time point.
In this paper, we propose procedures for change-period detection in nonlinear time series. One of the statistical detection methods is an application of fuzzy classification and a generalization of Inclan and Tiao's result. Moreover, we develop a genetic-based searching procedure built on the concept of a leading genetic model. Simulation results show that these procedures are efficient and successful. Finally, two empirical applications of change-period detection, to monthly visitor arrivals in Taiwan and to the exchange rate, are demonstrated.
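The Inclan and Tiao result that the paper generalizes is the centered cumulative sum of squares for variance-change detection. A minimal sketch (illustrative data and thresholds, not the paper's fuzzy extension):

```python
import numpy as np

def inclan_tiao_dk(x):
    """Centered cumulative sum of squares: D_k = C_k / C_T - k / T,
    where C_k is the cumulative sum of squared observations."""
    c = np.cumsum(np.asarray(x, float) ** 2)
    k = np.arange(1, len(x) + 1)
    return c / c[-1] - k / len(x)

rng = np.random.default_rng(1)
# Variance switches from 1 to 9 at t = 200 (a sudden change point;
# the paper's contribution is handling gradual change *periods*).
x = np.concatenate([rng.normal(0, 1.0, 200), rng.normal(0, 3.0, 200)])
d = inclan_tiao_dk(x)
print(int(np.argmax(np.abs(d))) + 1)  # estimated change location, near 200
```

Under homogeneity, D_k hovers near zero; a pronounced extremum flags a variance change, and the fuzzy-classification step in the paper replaces this single argmax with membership grades over an interval.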
|
565 |
Critical Sets in Latin Squares and Associated Structures
Bean, Richard Winston Unknown Date (has links)
A critical set in a Latin square of order n is a set of entries in an n x n array which can be embedded in precisely one Latin square of order n, with the property that if any entry of the critical set is deleted, the remaining set can be embedded in more than one Latin square of order n. The number of critical sets grows super-exponentially as the order of the Latin square increases. It is difficult to find patterns in Latin squares of small order (order 5 or less) which can be generalised in the process of creating new theorems. Thus, I have written many algorithms to find critical sets with various properties in Latin squares of order greater than 5, and to deal with other related structures. Some algorithms used in the body of the thesis are presented in Chapter 3; results which arise from the computational studies, observations of the patterns, and subsequent results are presented in Chapters 4, 5, 6, 7 and 8. The cardinality of the largest critical set in any Latin square of order n is denoted by lcs(n). In 1978 Curran and van Rees proved that lcs(n) <= n^2 - n. In Chapter 4, it is shown that lcs(n) <= n^2 - 3n + 3. Chapter 5 provides new bounds on the maximum number of intercalates in Latin squares of orders m x 2^alpha (m odd, alpha >= 2) and m x 2^alpha + 1 (m odd, alpha >= 2 and alpha not equal to 3), and a new lower bound on lcs(4m). It also discusses critical sets in intercalate-rich Latin squares of orders 11 and 14. In Chapter 6 a construction is given which verifies the existence of a critical set of size n^2/4 + 1 when n is even and n >= 6. The construction is based on the discovery of a critical set of size 17 for a Latin square of order 8. In Chapter 7 the representation of Steiner trades of volume less than or equal to nine is examined. Computational results are used to identify those trades for which the associated partial Latin square can be decomposed into six disjoint Latin interchanges.
Chapter 8 focuses on critical sets in Latin squares of order at most six, and extensive computational routines are used to identify all the critical sets of different sizes in these Latin squares.
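The defining property can be checked directly by backtracking, in the spirit of the computational routines described above (this sketch is illustrative and is not code from the thesis): count completions of the partial square, requiring exactly one, then confirm that deleting any single entry admits more than one.

```python
from itertools import product

def completions(square, n, limit=2):
    """Count Latin-square completions of a partial square (0 = empty),
    stopping early once `limit` completions are found."""
    for r, c in product(range(n), repeat=2):
        if square[r][c] == 0:
            count = 0
            for v in range(1, n + 1):
                if all(square[r][j] != v for j in range(n)) and \
                   all(square[i][c] != v for i in range(n)):
                    square[r][c] = v
                    count += completions(square, n, limit)
                    square[r][c] = 0        # backtrack
                    if count >= limit:
                        return count
            return count
    return 1  # no empty cells: this filled square is one completion

def is_critical(entries, n):
    """entries: list of (row, col, value) triples."""
    sq = [[0] * n for _ in range(n)]
    for r, c, v in entries:
        sq[r][c] = v
    if completions([row[:] for row in sq], n) != 1:
        return False                        # not uniquely completable
    for r, c, v in entries:                 # deleting any entry must
        sq[r][c] = 0                        # destroy uniqueness
        multiple = completions([row[:] for row in sq], n) >= 2
        sq[r][c] = v
        if not multiple:
            return False
    return True

# Smallest non-trivial example: {(0,0) = 1} is a critical set of order 2.
print(is_critical([(0, 0, 1)], 2))
```

Exhaustive search of this kind is feasible only for small orders, which is why the thesis pairs such routines with constructions and bounds for larger n.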
|
566 |
Real-time power system disturbance identification and its mitigation using an enhanced least squares algorithm
Manmek, Thip, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006 (has links)
This thesis proposes, analyses and implements a fast and accurate real-time method for identifying power system disturbances, based on an enhanced linear least squares algorithm, for the mitigation and monitoring of various power quality problems such as current harmonics, grid unbalances and voltage dips. The enhanced algorithm imposes a low real-time computational burden and is thus called the 'efficient least squares algorithm'. The proposed efficient least squares algorithm does not require a matrix inversion operation and involves only real numbers. The number of real-time matrix multiplications is also reduced, by pre-performing some of the matrix multiplications to form a constant matrix. The proposed efficient least squares algorithm extracts the instantaneous sine and cosine terms of the fundamental and harmonic components by simply multiplying a set of sampled input data by the pre-calculated constant matrix. A power signal processing system based on the proposed efficient least squares algorithm is presented in this thesis. This system derives various power system quantities used for real-time monitoring and disturbance mitigation, including constituent components, symmetrical components and various power measurements. The properties of the proposed power signal processing system were studied using modelling and a practical implementation on a digital signal processor. These studies demonstrated that the proposed method is capable of extracting time-varying power system quantities quickly and accurately: its dynamic response time was less than half of a fundamental cycle. Moreover, the proposed method showed little sensitivity to noise pollution and to small variations in the fundamental frequency.
The performance of the proposed power signal processing system was compared with that of the popular DFT/FFT methods using computer simulations. The simulation results confirmed the superior performance of the proposed method under both transient and steady-state conditions. To investigate the practicability of the method, the proposed power signal processing system was applied to two real-life disturbance mitigation applications, namely an active power filter (APF) and a distribution static synchronous compensator (D-STATCOM). The validity and performance of the proposed signal processing system in both disturbance mitigation applications were investigated through simulation and experimental studies. These extensive modelling and experimental studies confirmed that the proposed signal processing system can be used for practical real-time applications requiring fast disturbance identification, such as mitigation control and power quality monitoring of power systems.
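The precomputed-constant-matrix idea can be sketched as follows (assumed parameter values; this is an illustration of the principle, not the thesis implementation): the sine/cosine design matrix for a fixed sample window depends only on the sampling grid and the selected harmonics, so its least-squares pseudo-inverse is computed once offline, and run-time extraction reduces to one matrix-vector product, with no online inversion and only real arithmetic.

```python
import numpy as np

# Illustrative window: 50 Hz fundamental, 3200 Hz sampling, 64-sample
# window (exactly one fundamental cycle), harmonics 1, 3, 5.
f0, fs, N = 50.0, 3200.0, 64
harmonics = (1, 3, 5)
t = np.arange(N) / fs

# Design matrix: sin/cos columns for each tracked harmonic.
A = np.column_stack([fn(2 * np.pi * f0 * h * t)
                     for h in harmonics for fn in (np.sin, np.cos)])
P = np.linalg.pinv(A)          # the pre-calculated constant matrix

# Synthetic distorted current: 10 A fundamental + 2 A fifth harmonic.
x = 10.0 * np.sin(2 * np.pi * f0 * t) + 2.0 * np.sin(2 * np.pi * 5 * f0 * t)

coeffs = P @ x                 # one matrix multiply per sample window
amps = np.hypot(coeffs[0::2], coeffs[1::2])
print(np.round(amps, 3))       # amplitudes of harmonics 1, 3, 5
```

For a sliding window, only the newest sample changes, so each update costs a handful of multiply-accumulates per tracked component, which is what makes DSP implementation practical.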
|
567 |
An assessment of using least squares adjustment to upgrade spatial data in GIS
Merritt, Roger, Surveying & Spatial Information Systems, Faculty of Engineering, UNSW January 2005 (has links)
The GIS industry has digitised cadastres from the best available paper maps over the last few decades, incorporating the inherent errors of those paper maps and of the digitising process. The advent of Global Positioning Systems, modern surveying instruments and advances in the computing industry have made it desirable and affordable to upgrade the placement (in terms of absolute and relative position) of these digital cadastres. The utility industry has used GIS software to place its assets relative to these digital cadastres, and is now finding those assets placed incorrectly when viewed against the upgraded digital cadastres. This thesis examines the processes developed in the software program called the 'Spatial Adjustment Engine', and documents a holistic approach to semi-automating the upgrading of the digital cadastre and the subsequent upgrading of the utility assets. It also documents the various pilot projects undertaken during the development of the Spatial Adjustment Engine, the topological scenarios found in each pilot and their solutions, and provides a framework of definitions needed to explore this field further. The results of each pilot project are given in context and lead to the conclusions. The conclusions indicate that the processes and procedures implemented in the Spatial Adjustment Engine are a suitable mechanism for the upgrade of digital cadastres and of spatially dependent themes such as utility assets, zoning themes, annotation layers, and some road centreline themes.
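One core ingredient of such an adjustment can be sketched with a least-squares 2D similarity (Helmert) transformation: fit the transformation on control points where both digitised and surveyed coordinates are known, then apply it to dependent features such as utility assets. The coordinates and parameter values below are invented for illustration and do not come from the thesis pilots.

```python
import numpy as np

# Control points: digitised (src) vs "surveyed" (dst) positions. Here dst
# is generated from a known similarity transform so the fit is checkable.
src = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
theta, s, tx, ty = np.deg2rad(1.0), 1.01, 5.0, -3.0   # assumed truth
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ (s * R).T + [tx, ty]

# Linear model: x' = a*x - b*y + tx,  y' = b*x + a*y + ty,
# where a = s*cos(theta), b = s*sin(theta). Solve for (a, b, tx, ty).
n = len(src)
A = np.zeros((2 * n, 4))
A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
params, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
print(np.round(params, 4))  # recovered (a, b, tx, ty)
```

With redundant control points the residuals of this fit also expose blunders and local distortions, which is where the topological scenarios discussed in the thesis come into play.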
|
568 |
Infrared spectroscopy and advanced spectral data analyses to better describe sorption of pesticides in soils.
Forouzangohar, Mohsen January 2009 (has links)
The fate and behaviour of hydrophobic organic compounds (e.g. pesticides) in soils are largely controlled by sorption processes. Recent findings suggest that the chemical properties of soil organic carbon (OC) significantly control the extent of sorption of such compounds in soil systems. However, there is currently no practical tool for integrating the effects of OC chemistry into sorption predictions. Therefore, the K_oc model, which relies on the soil OC content (f_oc), is used for predicting soil sorption coefficients (K_d) of pesticides. The K_oc model can be expressed as K_d = K_oc x f_oc, where K_oc is the OC-normalized sorption coefficient for the compound. Hence, there is a need for a prediction tool that can effectively capture the role of both the chemical structural variation of OC and f_oc. Infrared (IR) spectroscopy offers a potential alternative to the K_oc approach because IR spectra contain information on the amount and nature of both organic and mineral soil components. The potential of mid-infrared (MIR) spectroscopy for predicting K_d values of a moderately hydrophobic pesticide, diuron, was investigated. A calibration set of 101 surface soils from South Australia was characterized for reference sorption data (K_d and K_oc) and f_oc, as well as IR spectra. Partial least squares (PLS) regression was employed to harness the apparent complexity of the IR spectra by reducing the dimensionality of the data. The MIR-PLS model was developed and validated by dividing the initial data set into corresponding calibration and validation sets. The developed model showed promising performance in predicting K_d values for diuron and proved more efficacious than the K_oc model.
The significant statistical superiority of the MIR-PLS model over the K_oc model was caused by some calcareous soils that were outliers for the K_oc model. Apart from these samples, the performance of the two models was essentially similar. The existence of carbonate peaks in the loadings of the MIR-based model suggested that carbonate minerals may interfere with or affect sorption; this requires further investigation. Some concurrent studies reported excellent prediction of soil properties by NIR spectroscopy when applied to homogeneous samples. Next, therefore, the performance of visible near-infrared (VNIR) and MIR spectroscopy was thoroughly compared for predicting both f_oc and diuron K_d values in soils. Eleven calcareous soils were added to the initial calibration set in an attempt to further investigate the effect of carbonate minerals on sorption. MIR spectroscopy was clearly a more accurate predictor of f_oc and K_d in soils than VNIR spectroscopy. Close inspection of the spectra showed that MIR spectra contain more relevant and straightforward information regarding the chemistry of OC and minerals than VNIR spectra, and are thus more useful in modelling sorption and OC content. Moreover, MIR spectroscopy provided a better (though still not great) estimation of sorption in calcareous soils than either VNIR spectroscopy or the K_oc model. Separate research is recommended to fully explore the unusual sorption behaviour of diuron in calcareous soils. In the last experiment, two-dimensional (2D) nuclear magnetic resonance/infrared heterospectral correlation analyses revealed that MIR spectra contain specific and clear signals related to most of the major NMR-derived carbon types, whereas NIR spectra contain only a few broad and overlapped peaks weakly associated with aliphatic carbons.
2D heterospectral correlation analysis facilitated accurate assignment of bands in the MIR and NIR spectra to the NMR-derived carbon types in isolated soil organic matter (SOM). In conclusion, the greatest advantage of the MIR-PLS model is the direct estimation of K_d based on the integrated properties of organic and mineral components. In addition, MIR spectroscopy is increasingly being used to predict various soil properties, including f_oc, and therefore its simultaneous use for K_d estimation is a resource-effective and attractive practice. Moreover, it has the advantage of being fast and inexpensive with high repeatability, and unlike the K_oc approach, MIR-PLS shows better potential for extrapolation to data-poor regions. Where available, MIR spectroscopy is highly recommended over NIR spectroscopy. 2D correlation spectroscopy showed promising potential for providing rich insight into soil IR spectra. / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1415416 / Thesis (Ph.D.) - University of Adelaide, School of Earth and Environmental Sciences, 2009
|
570 |
The value and validity of software effort estimation models built from a multiple organization data set
Deng, Kefu January 2008 (links)
The objective of this research is to empirically assess the value and validity of a multi-organization data set in the building of prediction models for several ‘local’ software organizations; that is, smaller organizations that might have only a few project records but that are interested in improving their ability to accurately predict software project effort. Evidence to date in the research literature is mixed, due not to problems with the underlying research ideas but to limitations in the analytical processes employed:
• the majority of previous studies have used only a single organization as the ‘local’ sample, introducing the potential for bias;
• the degree to which the conclusions of these studies might apply more generally cannot be determined, because of a lack of transparency in the data analysis processes used.
The aim of this research is therefore to provide a more robust and visible test of the utility of the largest multi-organization data set currently available, that from the ISBSG, in terms of enabling smaller-scale organizations to build relevant and accurate models for project-level effort prediction. Stepwise regression is employed to construct ‘local’, ‘global’ and ‘refined global’ models of effort, which are then validated against actual project data from eight organizations. The results indicate that local data, that is, data collected for a single organization, is almost always more effective as a basis for the construction of a predictive model than data sourced from a global repository. That said, the accuracy of the models produced from the global data set, while worse than that achieved with local data, may be sufficient in the absence of reliable local data; this is an issue that could be investigated in future research.
The study concludes with recommendations for both software engineering practice – in setting out a more dynamic scenario for the management of software development – and research – in terms of implications for the collection and analysis of software engineering data.
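The model-building step described above can be sketched as forward stepwise regression (an illustrative implementation with invented features; the real ISBSG project attributes are not reproduced here): greedily add whichever predictor most reduces the residual sum of squares until a stopping rule is met.

```python
import numpy as np

def forward_stepwise(X, y, max_features=None):
    """Greedy forward selection: at each step, add the feature whose
    inclusion most reduces the residual sum of squares of an OLS fit."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    max_features = max_features or p
    while remaining and len(selected) < max_features:
        best, best_rss = None, None
        for j in remaining:
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            if best_rss is None or rss < best_rss:
                best, best_rss = j, rss
        selected.append(best)
        remaining.remove(best)
    return selected

# Six candidate "project attributes", only two of which drive effort.
rng = np.random.default_rng(42)
X = rng.normal(size=(80, 6))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.1 * rng.normal(size=80)
print(forward_stepwise(X, y, max_features=2))  # the two informative columns
```

A production procedure would add a stopping criterion (e.g. an F-test or adjusted R^2) and cross-validation, which is also where the ‘local’ versus ‘global’ comparison above gets its validation data.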
|