621

Hyperspectral Remote Sensing of Temperate Pasture Quality

Thulin, Susanne Maria, smthulin@telia.com January 2009 (has links)
This thesis describes the research undertaken for the degree of Doctor of Philosophy, testing the hypothesis that spectrometer data can be used to establish usable relationships for prediction of pasture quality attributes. The research data consisted of reflectance measurements of various temperate pasture types acquired at four different times (2000 to 2002) by three hyperspectral sensors: the in situ ASD, the airborne HyMap and the satellite-borne Hyperion. Corresponding ground-based pasture samples were analysed for content of chlorophyll, water, crude protein, digestibility, lignin and cellulose at three study sites in rural Victoria, Australia. This context was used to evaluate the effects of sensor differences, data processing and enhancement, analytical methods and sample variability on the predictive capacity of the derived prediction models. Although hyperspectral data analysis is being applied in many areas, very few studies on temperate pastures have been conducted, and hardly any encompass the variability and heterogeneity of these southern Australian examples. The research into the relationship between the spectrometer data and pasture quality attribute assays was designed using knowledge gained from assessment of other hyperspectral remote sensing and near-infrared spectroscopy research, including bio-chemical and physical properties of pastures, as well as practical issues of the grazing industries and carbon cycling/modelling. Processing and enhancement of the spectral data followed methods used by other hyperspectral researchers, with modifications deemed essential to produce better relationships with the pasture assay data. As many different methods are in use for the analysis of hyperspectral data, several alternative approaches were investigated and evaluated to determine their reliability, robustness and suitability for retrieval of temperate pasture quality attributes. The analyses employed included stepwise multiple linear regression (SMLR) and partial least squares regression (PLSR). The research showed that the spectral data had a higher potential to be used for prediction of crude protein and digestibility than for the plant fibres lignin and cellulose. Spectral transformations such as continuum removal and derivatives enhanced the results. By using a modified approach based on sample subsets identified by a matrix of subjective bio-physical and ancillary data parameters, the performance of the models was further enhanced. Prediction models developed with PLSR from ASD in situ spectral data, HyMap airborne imagery and Hyperion imagery, together with the corresponding pasture assays, showed potential for predicting the two important pasture quality attributes crude protein and digestibility in hyperspectral imagery at a few quantised levels corresponding to the levels currently used in commercial feed testing. It was concluded that imaging spectrometry has the potential to offer synoptic, simultaneous and spatially continuous information valuable to feed-based enterprises in temperate Victoria. The thesis provides a significant contribution to the field of hyperspectral remote sensing and good guidance for future hyperspectral researchers embarking on similar tasks. As the research is based on temperate pastures in Victoria, Australia, which are dominated by northern hemisphere species, the findings should be applicable to the analysis of temperate pastures elsewhere, for example in Western Australia, New Zealand, South Africa, North America, Europe and northern Asia (China).
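The thesis itself publishes no code, but the PLSR workflow it describes can be sketched roughly as follows: derivative-transform the reflectance spectra, fit a partial least squares regression and assess prediction error by cross-validation. The spectra, crude-protein values, band count and number of latent variables below are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch (not the thesis code): predicting crude protein from
# reflectance spectra with first-derivative preprocessing and PLS regression.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_bands = 60, 200                     # e.g. resampled sensor bands (assumed)
spectra = rng.random((n_samples, n_bands))       # hypothetical reflectance spectra
crude_protein = rng.uniform(8, 25, n_samples)    # hypothetical % dry matter values

# First-derivative transform along the wavelength axis (a common spectral enhancement)
d_spectra = np.gradient(spectra, axis=1)

pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, d_spectra, crude_protein, cv=10)
rmsep = np.sqrt(np.mean((pred.ravel() - crude_protein) ** 2))
print(f"RMSEP (crude protein): {rmsep:.2f}")
```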
622

Gender and Technologies of Knowledge in Development Discourse: Analysing United Nations Least Developed Country Policy 1971-2004

Goulding, Sarah, sarahgoulding@yahoo.com.au January 2006 (has links)
The United Nations category Least Developed Country (LDC) was created in 1971 to ameliorate conditions in countries the UN identified as the poorest of the poor. Its administration and operation within UN development discourse have not previously been explored in academic analysis. This thesis explores this rich archive of development discourse. It seeks to situate the LDC category as a vehicle that both produces and is a product of development discourse, and uses gender analysis as a critical tool to identify the ways in which LDC category discourse operates. The thesis draws on Foucauldian theory to develop and use the concept of ‘technologies of knowledge’, which places the dynamics of LDC discourse into relief. Three technologies of knowledge are identified: LDC policy, classification through criteria, and data. The way each of these technologies of knowledge operates is explored through detailed readings of over thirty years of UN policy documents that form the thesis’s primary source material. A central question within this thesis is: if the majority of the world’s poor are women, where are the women in the policy about the countries that are the poorest of the poor? In focusing the analysis on the representation of women in LDCs, I place women at the centre of the analytic stage, as opposed to the marginal position I have found they occupy within LDC discourse. Through this analysis of the reductionist representations of LDC women, I explore the gendered dynamics of development discourse. Exploring the operation of these three technologies of knowledge reveals some of the discursive boundaries of UN LDC category discourse, particularly through its inability to incorporate gender analysis. The discussion of these three technologies of knowledge – policy, classification through criteria, and data – is framed by discussions of development and gender. The discussion on development positions this analysis within post-development critiques of development policy, practice and theory. The discussion on gender positions this analysis within the trajectory of postmodern- and postcolonial-influenced feminist engagements with development as theory and praxis, particularly with debates about the representation of women in the third world. This case study of the operation of development discourse usefully highlights the gendered dynamics of discursive ways of knowing.
623

Real-time power system disturbance identification and its mitigation using an enhanced least squares algorithm

Manmek, Thip, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006 (has links)
This thesis proposes, analyses and implements a fast and accurate real-time power system disturbance identification method based on an enhanced linear least squares algorithm for mitigation and monitoring of various power quality problems such as current harmonics, grid unbalances and voltage dips. The enhanced algorithm imposes a lower real-time computational burden on the processing system and is thus called the ‘efficient least squares algorithm’. The proposed efficient least squares algorithm does not require a matrix inversion operation and contains only real numbers. The number of required real-time matrix multiplications is also reduced in the proposed method by pre-performing some of the matrix multiplications to form a constant matrix. The proposed efficient least squares algorithm extracts the instantaneous sine and cosine terms of the fundamental and harmonic components by simply multiplying a set of sampled input data by the pre-calculated constant matrix. A power signal processing system based on the proposed efficient least squares algorithm is presented in this thesis. This power signal processing system derives various power system quantities that are used for real-time monitoring and disturbance mitigation. These power system quantities include constituent components, symmetrical components and various power measurements. The properties of the proposed power signal processing system were studied using modelling and practical implementation in a digital signal processor. These studies demonstrated that the proposed method is capable of extracting time-varying power system quantities quickly and accurately. The dynamic response time of the proposed method was less than half a fundamental cycle. Moreover, the proposed method showed less sensitivity to noise pollution and small variations in the fundamental frequency. The performance of the proposed power signal processing system was compared to that of the popular DFT/FFT methods using computer simulations. The simulation results confirmed the superior performance of the proposed method under both transient and steady-state conditions. In order to investigate the practicability of the method, the proposed power signal processing system was applied to two real-life disturbance mitigation applications, namely an active power filter (APF) and a distribution static synchronous compensator (D-STATCOM). The validity and performance of the proposed signal processing system in both disturbance mitigation applications were investigated by simulation and experimental studies. The extensive modelling and experimental studies confirmed that the proposed signal processing system can be used for practical real-time applications which require fast disturbance identification, such as mitigation control and power quality monitoring of power systems.
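A minimal sketch of the idea as the abstract describes it (not the author's implementation): because the sine and cosine basis at the fundamental and tracked harmonics is fixed, the least-squares solution matrix can be computed once offline, and each sample window then needs only one real-valued matrix-vector product. The sampling rate, window length and harmonic orders below are assumptions.

```python
# Sketch: extract sine/cosine terms of the fundamental and harmonics from one
# window of samples using a constant matrix precomputed offline, so no matrix
# inversion is performed at run time.
import numpy as np

f0, fs, n = 50.0, 3200.0, 64            # fundamental, sampling rate, window length (assumed)
harmonics = [1, 3, 5, 7]                # harmonic orders to track (assumed)
t = np.arange(n) / fs

# Regression matrix: one cosine and one sine column per tracked harmonic
A = np.column_stack([f(2 * np.pi * h * f0 * t)
                     for h in harmonics for f in (np.cos, np.sin)])

# Offline: constant real-valued least-squares matrix (pseudo-inverse of A)
C = np.linalg.pinv(A)                   # shape: (2 * len(harmonics), n)

# Online: a single matrix-vector product per window gives all coefficients
x = 10 * np.sin(2 * np.pi * f0 * t) + 1.5 * np.sin(2 * np.pi * 5 * f0 * t)
coeffs = C @ x                          # cos/sin coefficients per harmonic
amplitudes = np.hypot(coeffs[0::2], coeffs[1::2])
print(dict(zip(harmonics, amplitudes.round(3))))
```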
624

The value and validity of software effort estimation models built from a multiple organization data set

Deng, Kefu January 2008 (has links)
The objective of this research is to empirically assess the value and validity of a multi-organization data set in the building of prediction models for several ‘local’ software organizations; that is, smaller organizations that might have a few project records but that are interested in improving their ability to accurately predict software project effort. Evidence to date in the research literature is mixed, due not to problems with the underlying research ideas but to limitations in the analytical processes employed:
• the majority of previous studies have used only a single organization as the ‘local’ sample, introducing the potential for bias
• the degree to which the conclusions of these studies might apply more generally cannot be determined because of a lack of transparency in the data analysis processes used.
It is the aim of this research to provide a more robust and visible test of the utility of the largest multi-organization data set currently available – that from the ISBSG – in terms of enabling smaller-scale organizations to build relevant and accurate models for project-level effort prediction. Stepwise regression is employed to enable the construction of ‘local’, ‘global’ and ‘refined global’ models of effort that are then validated against actual project data from eight organizations. The results indicate that local data, that is, data collected for a single organization, is almost always more effective as a basis for the construction of a predictive model than data sourced from a global repository. That said, the accuracy of the models produced from the global data set, while worse than that achieved with local data, may be sufficiently accurate in the absence of reliable local data – an issue that could be investigated in future research. The study concludes with recommendations for both software engineering practice – in setting out a more dynamic scenario for the management of software development – and research – in terms of implications for the collection and analysis of software engineering data.
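As a rough illustration of the stepwise-regression approach mentioned above, the sketch below performs forward selection of predictors for a synthetic effort target, adding a variable only while cross-validated error improves. The data, features and stopping rule are assumptions for illustration, not the ISBSG data or the study's exact procedure.

```python
# Minimal forward stepwise selection sketch for an effort model, assuming a
# feature matrix X (e.g. size and team attributes) and a log-effort target y.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((80, 6))                                       # synthetic project features
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 80)    # synthetic log-effort

selected, remaining = [], list(range(X.shape[1]))
best_score = -np.inf
while remaining:
    # Score each candidate feature when added to the current selection
    scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                 scoring="neg_root_mean_squared_error",
                                 cv=5).mean()
              for j in remaining}
    j_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:          # stop when no candidate improves CV error
        break
    selected.append(j_best)
    remaining.remove(j_best)
    best_score = s_best

print("selected feature indices:", selected)
```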
625

Sparse and adaptive polynomial chaos expansions for uncertainty propagation and sensitivity analysis

Blatman, Géraud 09 October 2009 (has links) (PDF)
This thesis falls within the general context of uncertainty propagation and sensitivity analysis of numerical simulation models, with a view to industrial applications. Its objective is to carry out such studies while minimising the number of potentially costly model evaluations. The present work relies on an approximation of the model response in a polynomial chaos (PC) basis, which allows post-processing to be carried out at a negligible computational cost. However, fitting the PC expansion may require a substantial number of model calls if the model depends on a large number of parameters (e.g. more than 10). To circumvent this problem, two algorithms are proposed for selecting only a small number of important terms in the PC representation, namely a stepwise regression procedure and a procedure based on the Least Angle Regression (LAR) method. The small number of coefficients associated with the resulting sparse PC expansions can thus be determined from a reduced number of model evaluations. The methods are validated on academic test cases in mechanics and then applied to the industrial case of the integrity analysis of a pressurised water reactor vessel. The results obtained confirm the efficiency of the proposed methods for treating high-dimensional uncertainty propagation and sensitivity analysis problems.
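To make the sparse-PC idea concrete, the following sketch builds a candidate basis of products of probabilists' Hermite polynomials and uses a LARS-based estimator to keep only a few terms. The input dimension, degree, synthetic model and the use of scikit-learn's LassoLarsCV are illustrative assumptions, not the algorithms developed in the thesis.

```python
# Schematic sparse polynomial chaos fit: Hermite basis for standard normal
# inputs, sparsity obtained via a LARS-based (lasso) selection.
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import LassoLarsCV

rng = np.random.default_rng(2)
n, dim, degree = 200, 3, 4
xi = rng.standard_normal((n, dim))                  # standard normal inputs
y = xi[:, 0] ** 2 + 0.5 * xi[:, 1] * xi[:, 2]       # synthetic model response

# Candidate basis: products of probabilists' Hermite polynomials He_k
multi_indices = [a for a in product(range(degree + 1), repeat=dim)
                 if 0 < sum(a) <= degree]

def he(x, k):
    # Evaluate He_k at x via a coefficient vector selecting the k-th polynomial
    return hermeval(x, [0] * k + [1])

Psi = np.column_stack([np.prod([he(xi[:, d], a[d]) for d in range(dim)], axis=0)
                       for a in multi_indices])

# LARS-based selection of a sparse set of PC coefficients
model = LassoLarsCV(cv=5).fit(Psi, y)
kept = [mi for mi, c in zip(multi_indices, model.coef_) if abs(c) > 1e-8]
print("retained multi-indices:", kept)
```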
626

Investigation of multivariate prediction methods for the analysis of biomarker data

Hennerdal, Aron January 2006 (has links)
The paper describes predictive modelling of biomarker data stemming from patients suffering from multiple sclerosis. Improvements of multivariate analyses of the data are investigated with the goal of increasing the capability to assign samples to the correct subgroups from the data alone.

The effects of different prior scalings of the data are investigated, and combinations of multivariate modelling methods and variable selection methods are evaluated. Attempts at merging the predictive capabilities of the method combinations through voting procedures are made. A technique for improving the results of PLS modelling, called bagging, is evaluated.

Of the multivariate analysis methods tried, partial least squares (PLS) and support vector machines (SVM) are found to perform best. It is concluded that the scaling has little effect on the prediction performance for most methods. The method combinations have interesting properties – the default variable selections of the multivariate methods are not always the best. Bagging improves performance, but at a high cost. No reasons for drastically changing the workflows of the biomarker data analysis are found, but slight improvements are possible. Further research is needed.
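The kind of comparison described can be illustrated schematically as follows: a PLS model used for discrimination (regressing the 0/1 class label and thresholding) against an SVM, both on autoscaled data and judged by cross-validated accuracy. The simulated biomarker matrix, class labels and model settings are placeholders, not the study's data or exact workflow.

```python
# Rough PLS-DA vs. SVM comparison on autoscaled data with cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 40))                                        # simulated biomarker intensities
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 100) > 0).astype(int)    # simulated subgroups

# PLS used for discrimination: regress the 0/1 label and threshold at 0.5
pls = make_pipeline(StandardScaler(), PLSRegression(n_components=3))
pls_pred = cross_val_predict(pls, X, y.astype(float), cv=10).ravel() > 0.5
print("PLS-DA accuracy:", (pls_pred == y).mean())

# SVM on the same autoscaled data
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("SVM accuracy:", cross_val_score(svm, X, y, cv=10).mean())
```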
627

Estimation Using Low Rank Signal Models

Mahata, Kaushik January 2003 (has links)
Designing estimators based on low rank signal models is a common practice in signal processing. Some of these estimators are designed to use a single low rank snapshot vector, while others employ multiple snapshots. This dissertation deals with both these cases in different contexts.

Separable nonlinear least squares is a popular tool to extract parameter estimates from a single snapshot vector. Asymptotic statistical properties of the separable nonlinear least squares estimates are explored in the first part of the thesis. The assumptions imposed on the noise process and the data model are general. Therefore, the results are useful in a wide range of applications. Sufficient conditions are established for consistency, asymptotic normality and statistical efficiency of the estimates. An expression for the asymptotic covariance matrix is derived and it is shown that the estimates are circular. The analysis is extended also to the constrained separable nonlinear least squares problems.

Nonparametric estimation of the material functions from wave propagation experiments is the topic of the second part. This is a typical application where a single snapshot vector is employed. Numerical and statistical properties of the least squares algorithm are explored in this context. Boundary conditions in the experiments are used to achieve superior estimation performance. Subsequently, a subspace based estimation algorithm is proposed. The subspace algorithm is not only computationally efficient, but is also equivalent to the least squares method in accuracy.

Estimation of the frequencies of multiple real valued sine waves is the topic in the third part, where multiple snapshots are employed. A new low rank signal model is introduced. Subsequently, an ESPRIT-like method named R-Esprit and a weighted subspace fitting approach are developed based on the proposed model. When compared to ESPRIT, R-Esprit is not only computationally more economical but is also equivalent in performance. The weighted subspace fitting approach shows significant improvement in the resolution threshold. It is also robust to additive noise.
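A small sketch of the separable (variable-projection) idea for a single real sinusoid in noise: the amplitude and phase enter linearly and are eliminated by a linear least-squares solve, leaving a one-dimensional search over frequency. The signal parameters and grid search below are illustrative assumptions, not the dissertation's estimators.

```python
# Separable nonlinear least squares for one real sine: solve the linear part
# (amplitude/phase) exactly for each candidate frequency, then pick the
# frequency with the smallest residual.
import numpy as np

rng = np.random.default_rng(4)
n, fs, f_true = 256, 1000.0, 123.4
t = np.arange(n) / fs
y = 1.3 * np.cos(2 * np.pi * f_true * t + 0.7) + 0.2 * rng.standard_normal(n)

def residual_norm(f):
    # Linear-in-parameters basis for frequency f; coefficients solved by least squares
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.linalg.norm(y - A @ coef)

freqs = np.linspace(50, 200, 4001)
f_hat = freqs[np.argmin([residual_norm(f) for f in freqs])]
print(f"estimated frequency: {f_hat:.2f} Hz (true {f_true} Hz)")
```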
628

A Study of Missing Data Imputation and Predictive Modeling of Strength Properties of Wood Composites

Zeng, Yan 01 August 2011 (has links)
Problem: Real-time process and destructive test data were collected from a wood composite manufacturer in the U.S. to develop real-time predictive models of two key strength properties (Modulus of Rupture (MOR) and Internal Bond (IB)) of a wood composite manufacturing process. Sensor malfunction and data “send/retrieval” problems led to null fields in the company’s data warehouse, which resulted in information loss. Many manufacturers attempt to build accurate predictive models by excluding entire records with null fields or by using summary statistics such as the mean or median in place of the null field. However, predictive model errors in validation may be higher in the presence of information loss. In addition, the selection of predictive modeling methods poses another challenge to many wood composite manufacturers.

Approach: This thesis consists of two parts addressing the above issues: 1) how to improve data quality using missing data imputation; 2) which predictive modeling method is better in terms of prediction precision (measured by root mean square error, or RMSE). The first part summarizes an application of missing data imputation methods in predictive modeling. After variable selection, two missing data imputation methods were selected after comparing six possible methods. Predictive models of imputed data were developed using partial least squares regression (PLSR) and compared with models of non-imputed data using ten-fold cross-validation. Root mean square error of prediction (RMSEP) and normalized RMSEP (NRMSEP) were calculated. The second part presents a series of comparisons among four predictive modeling methods using imputed data without variable selection.

Results: The first part concludes that the expectation-maximization (EM) algorithm and multiple imputation (MI) using Markov Chain Monte Carlo (MCMC) simulation achieved more precise results. Predictive models based on imputed datasets generated more precise prediction results (average NRMSEP of 5.8% for the MOR model and 7.2% for the IB model) than models based on non-imputed datasets (average NRMSEP of 6.3% for MOR and 8.1% for IB). The second part finds that Bayesian Additive Regression Trees (BART) produced more precise prediction results (average NRMSEP of 7.7% for the MOR model and 8.6% for the IB model) than the other three methods: PLSR, LASSO, and adaptive LASSO.
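A hedged sketch of the workflow described (not the thesis code): impute missing process variables, fit PLSR, and report RMSEP and NRMSEP from cross-validation. scikit-learn's IterativeImputer stands in here for the EM/MI imputation step, and the process data and MOR response are simulated placeholders.

```python
# Impute null fields, then predict a strength property with PLSR and compute
# RMSEP / NRMSEP via ten-fold cross-validation.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 20))                        # simulated process variables
mor = X[:, :4].sum(axis=1) + rng.normal(0, 0.5, 150)  # simulated MOR response
X[rng.random(X.shape) < 0.1] = np.nan                 # ~10% null fields

model = make_pipeline(IterativeImputer(max_iter=10, random_state=0),
                      PLSRegression(n_components=5))
pred = cross_val_predict(model, X, mor, cv=10).ravel()
rmsep = np.sqrt(np.mean((pred - mor) ** 2))
nrmsep = rmsep / (mor.max() - mor.min())
print(f"RMSEP = {rmsep:.3f}, NRMSEP = {nrmsep:.1%}")
```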
629

An Analysis of the Law, Practice and Policy of the WTO Agreement on Technical Barriers to Trade in relation to International Standards and the International Organization for Standardization: Implications for Least Developed Countries in Africa.

Okwenye, Tonny. January 2007 (has links)
This study examines the legal and policy objectives of the World Trade Organisation (WTO) Agreement on Technical Barriers to Trade (TBT) with specific reference to international standards and the International Organisation for Standardisation (ISO). The study sets out the history and development of the TBT Agreement and the relationship between the TBT Agreement and selected WTO Agreements. The study also explores the application and interpretation of the TBT Agreement under the WTO dispute settlement system. More importantly, the study addresses the legal, policy and practical implications of the TBT Agreement for Least Developed Countries (LDCs) in Africa. A central argument put forward in this study is that, although international standards have been recognised as an important tool for LDCs in Africa to gain access to foreign markets, there is no significant ‘political will’ and commitment from the key players in standardisation work, that is, the national governments, the private sector and the ISO. At the same time, some developed and developing countries tend to use their influence and involvement in the activities of the ISO as a means of promoting the use and adoption of their home-grown standards. The study proposes, among other things, that a more participatory approach which encompasses representatives from consumer groups, the private sector and non-governmental organisations (NGOs) from these LDCs in Africa should be adopted.
630

On Some Properties of Interior Methods for Optimization

Sporre, Göran January 2003 (has links)
This thesis consists of four independent papers concerning different aspects of interior methods for optimization. Three of the papers focus on theoretical aspects while the fourth one concerns some computational experiments.

The systems of equations solved within an interior method applied to a convex quadratic program can be viewed as weighted linear least-squares problems. In the first paper, it is shown that the sequence of solutions to such problems is uniformly bounded. Further, boundedness of the solution to weighted linear least-squares problems for more general classes of weight matrices than the one in the convex quadratic programming application is obtained as a byproduct.

In many linesearch interior methods for nonconvex nonlinear programming, the iterates can "falsely" converge to the boundary of the region defined by the inequality constraints in such a way that the search directions do not converge to zero, but the step lengths do. In the second paper, it is shown that the multiplier search directions then diverge. Furthermore, the direction of divergence is characterized in terms of the gradients of the equality constraints along with the asymptotically active inequality constraints.

The third paper gives a modification of the analytic center problem for the set of optimal solutions in linear semidefinite programming. Unlike the normal analytic center problem, the solution of the modified problem is the limit point of the central path, without any strict complementarity assumption. For the strict complementarity case, the modified problem is shown to coincide with the normal analytic center problem, which is known to give a correct characterization of the limit point of the central path in that case.

The final paper describes some computational experiments concerning possibilities of reusing previous information when solving systems of equations arising in interior methods for linear programming.

Keywords: interior method, primal-dual interior method, linear programming, quadratic programming, nonlinear programming, semidefinite programming, weighted least-squares problems, central path.

Mathematics Subject Classification (2000): Primary 90C51, 90C22, 65F20, 90C26, 90C05; Secondary 65K05, 90C20, 90C25, 90C30.
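As a toy illustration of the weighted linear least-squares subproblems mentioned in connection with the first paper, the sketch below solves min_x ||W^(1/2)(Ax - b)|| for an arbitrary positive diagonal weight matrix; the data and weights are random placeholders, not quantities taken from an interior-method iterate.

```python
# Weighted linear least squares solved two equivalent ways: weighted normal
# equations and ordinary least squares on row-scaled data.
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(30, 5))
b = rng.normal(size=30)
w = rng.uniform(0.1, 10.0, size=30)          # positive weights (arbitrary here)

# Weighted normal equations: (A^T W A) x = A^T W b
W = np.diag(w)
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Equivalent formulation: scale each row by sqrt(w_i) and solve by least squares
x_check = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * b, rcond=None)[0]
print(np.allclose(x, x_check))
```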
