About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11. Forensic speaker analysis and identification by computer : a Bayesian approach anchored in the cepstral domain

Khodai-Joopari, Mehrdad, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW, January 2007
This thesis advances understanding of the forensic value of automatic speech parameters by addressing the following question: what is the potential of the speech cepstrum as a forensic-acoustic parameter? Despite many advances in automatic speech and speaker recognition, robust and unconstrained progress in technical forensic speaker identification has been partly impeded by our incomplete understanding of the interaction and relation between forensic phonetics and the techniques employed in state-of-the-art automatic speech and speaker recognition. The posed question underlies the recurrent and longstanding issue of acoustic parameterisation in forensic phonetics, where 1) speaker identification must often be carried out under less than optimal conditions, and 2) views differ on the usefulness and trustworthiness of formant frequency measurements. To this end, a new formulation for the forensic evaluation of speech data was derived, which is effectively a spectral likelihood ratio with enhanced sensitivity to the local peaks of the formant structure of the speech spectrum of vowel sounds, while retaining the characteristics of the Bayesian framework. This new hybrid formula was used together with a novel approach founded on a statistically based matched-pairs technique to account for the various levels of variation inherent in speech recordings, thereby providing a spectrally meaningful measure of the variation between two speech spectra and hence of the true worth of speech samples as forensic evidence. The experimental results are based on a forensically realistic database of a relatively large population of 297 native speakers of Japanese. In sum, the research conducted in this thesis is a major step forward for the forensic-phonetic field and broadens the objective basis of forensic speaker identification. Beyond advancing knowledge in the field, the semi data-independent nature of the new formula has significant implications for technical forensic speaker identification. It also provides a valuable biometric tool with both academic and commercial potential for crime investigation, in a field that already suffers from a lack of adequate data.
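As a hedged illustration of the cepstral parameterisation that the thesis builds on (not the thesis's own formulation), the real cepstrum of a speech frame can be computed as the inverse Fourier transform of the log magnitude spectrum; the frame length, window and sampling rate below are illustrative assumptions.

```python
import numpy as np

def real_cepstrum(frame: np.ndarray) -> np.ndarray:
    """Real cepstrum of a speech frame: IFFT of the log magnitude spectrum."""
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # small offset avoids log(0)
    return np.fft.irfft(log_mag)

# Illustrative use: a 32 ms frame of a synthetic vowel-like signal at 16 kHz
fs = 16000
t = np.arange(int(0.032 * fs)) / fs
frame = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
cep = real_cepstrum(frame)
print(cep[:13])  # low-order cepstral coefficients summarise the spectral envelope
```

The low-order coefficients capture the spectral envelope, which is why cepstra are attractive as compact, speaker-dependent acoustic parameters.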
12. Automatic emotion recognition: an investigation of acoustic and prosodic parameters

Sethu, Vidhyasaharan, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW, January 2009
An essential step to achieving human-machine speech communication with the naturalness of communication between humans is developing a machine that is capable of recognising emotions based on speech. This thesis presents research addressing this problem, by making use of acoustic and prosodic information. At a feature level, novel group delay and weighted frequency features are proposed. The group delay features are shown to emphasise information pertaining to formant bandwidths and are shown to be indicative of emotions. The weighted frequency feature, based on the recently introduced empirical mode decomposition, is proposed as a compact representation of the spectral energy distribution and is shown to outperform other estimates of energy distribution. Feature level comparisons suggest that detailed spectral measures are very indicative of emotions while exhibiting greater speaker specificity. Moreover, it is shown that all features are characteristic of the speaker and require some of sort of normalisation prior to use in a multi-speaker situation. A novel technique for normalising speaker-specific variability in features is proposed, which leads to significant improvements in the performances of systems trained and tested on data from different speakers. This technique is also used to investigate the amount of speaker-specific variability in different features. A preliminary study of phonetic variability suggests that phoneme specific traits are not modelled by the emotion models and that speaker variability is a more significant problem in the investigated setup. Finally, a novel approach to emotion modelling that takes into account temporal variations of speech parameters is analysed. An explicit model of the glottal spectrum is incorporated into the framework of the traditional source-filter model, and the parameters of this combined model are used to characterise speech signals. An automatic emotion recognition system that takes into account the shape of the contours of these parameters as they vary with time is shown to outperform a system that models only the parameter distributions. The novel approach is also empirically shown to be on par with human emotion classification performance.
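The thesis's normalisation technique is not reproduced here; as a sketch of the general idea of removing speaker-specific variability before multi-speaker training, a simple per-speaker z-score normalisation (an assumed baseline, not the proposed method) looks like this:

```python
import numpy as np

def speaker_zscore(features: np.ndarray) -> np.ndarray:
    """Per-speaker z-score normalisation: subtract the speaker's own mean and
    divide by its standard deviation, feature dimension by feature dimension."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8          # guard against zero variance
    return (features - mean) / std

# Illustrative use: normalise each speaker's feature matrix independently
# before pooling data from several speakers for training.
speakers = {"spk1": np.random.randn(200, 13),
            "spk2": 2.0 * np.random.randn(150, 13) + 1.0}
normalised = {spk: speaker_zscore(feats) for spk, feats in speakers.items()}
```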
13. A Numerical Modelling Study of Tropical Cyclone Sidr (2007): Sensitivity Experiments Using the Weather Research and Forecasting (WRF) Model

Shepherd, Tristan James, January 2008
Tropical cyclones are majestic yet violent atmospheric weather systems occurring over tropical waters. Their majesty derives from the significant range of spatial scales over which they operate, from the mesoscale to the larger synoptic scale; their associated violent winds and seas, however, are often the cause of damage and destruction for settlements in their path. Between 10/11/07 and 16/11/07, tropical cyclone Sidr formed and intensified into a category 5 hurricane over the southeast tropical waters of the northern Indian Ocean. Sidr tracked west, then north, during the course of its life and eventually made landfall on 15/11/07 as a category 4 cyclone near the settlement of Barguna, Bangladesh. The storm affected approximately 2.7 million people in Bangladesh, of whom 4,234 were killed. In this study, the dynamics of tropical cyclone Sidr are simulated using version 2.2.1 of the Advanced Weather Research and Forecasting (WRF) model, a non-hydrostatic, two-way interactive, triply-nested-grid mesoscale model. Three experiments were developed, examining model sensitivity to ocean-atmosphere interaction, initialisation time, and choice of convective parameterisation scheme. All experiments were verified against analysed synoptic data. The ocean-atmosphere experiment involved a single simulation with a cold sea surface temperature fixed at 10 °C, run at 15 km grid resolution. The initialisation experiment involved three simulations with different model start times, 108, 72, and 48 hours before landfall respectively, also at 15 km grid resolution. The convective experiment consisted of four simulations: three used different implicit convective schemes (Kain-Fritsch, Betts-Miller-Janjic, and the Grell-Devenyi ensemble), and the fourth simulated convection explicitly. A nested domain with 5 km grid spacing was used in the convective experiment for high-resolution modelling. In all experiments, the Eta-Ferrier microphysics scheme and the Mellor-Yamada-Janjic planetary boundary layer scheme were used. As verified against available observations, the model showed considerable sensitivity in each of the experiments. The model was found to be well suited to representing ocean-atmosphere interactions: a cool sea surface caused cyclone Sidr to dissipate within 24 hours. The initialisation simulations indicated moderate model sensitivity to initialisation time, with variations found in both cyclone track and intensity. Of the three simulations, an initialisation time 108 hours prior to landfall was found to represent cyclone Sidr's track and intensity most accurately. Finally, the convective simulations showed considerable differences in cyclone track, intensity, and structure between convective schemes. The Kain-Fritsch scheme produced the most accurate cyclone track and structure, but the rainfall rate was spurious at the sub-grid scale. The Betts-Miller-Janjic scheme resolved realistic rainfall on both domains, but cyclone intensity was poor. Of particular significance was that explicit convection produced results similar to the Grell-Devenyi ensemble at both model domain resolutions. Overall, the results suggest that the modelled cyclone is highly sensitive to changes in initial conditions. In particular, in the context of other studies, it appears that the combination of convective scheme, microphysics scheme, and boundary layer scheme is most significant for accurate track and intensity prediction.
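As a small, generic illustration of track verification against analyses (not the verification procedure used in the thesis), the great-circle distance between modelled and best-track cyclone centres can be computed with the haversine formula; the positions below are made up.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

# Illustrative (made-up) 6-hourly positions: modelled versus best-track centre
modelled = [(15.0, 89.0), (16.2, 89.3), (17.5, 89.5)]
observed = [(15.1, 88.8), (16.0, 89.1), (17.8, 89.6)]
errors = [haversine_km(mlat, mlon, olat, olon)
          for (mlat, mlon), (olat, olon) in zip(modelled, observed)]
print("mean track error (km):", round(float(np.mean(errors)), 1))
```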
14. Maximal controllability via reduced parameterisation model predictive control

Medioli, Adrian, January 2008
Research Doctorate - Doctor of Philosophy (PhD) / This dissertation presents some new approaches to addressing the main issues encountered by practitioners in the implementation of linear model predictive control (MPC), namely stability, feasibility, complexity, and the size of the region of attraction. When stability-guaranteeing techniques are applied, nominal feasibility is also guaranteed. The most common technique for guaranteeing stability is to apply a special weighting to the terminal state of the MPC formulation and to constrain the state to a terminal region where certain properties hold. However, the combination of terminal state constraints and the complexity of the MPC algorithm results in regions of attraction that are relatively small. Small regions of attraction are a major problem for practitioners, and the main approaches to this issue work either via the reduction of complexity or via the enlargement of the terminal region. Although the ultimate goal is to enlarge the region of attraction, none of these techniques explicitly considers the upper bound of this region. Ideally, the goal is to achieve the largest possible region of attraction, which for constrained systems is the null controllable set. For systems with a single unstable pole or a single non-minimum-phase zero, the null controllable set is defined by simple bounds, which can be thought of as implicit constraints. We show in this thesis that adding implicit constraints to MPC can produce maximally controllable systems, that is, systems whose region of attraction is the null controllable set. For higher-dimensional open-loop unstable systems with more than one real unstable mode, the null controllable sets belong to a class of polytopes called zonotopes. In this thesis, the properties of these highly structured polytopes are used to implement a new variant of MPC, which we term reduced parameterisation MPC (RP MPC). The proposed strategy dynamically determines a set of contractive positively invariant sets that require only a small number of parameters for the optimisation problem posed by MPC. The worst-case complexity of the RP MPC strategy is polylogarithmic with respect to the prediction horizon, which outperforms the most efficient on-line implementations of MPC, whose worst-case complexity is linear in the horizon. Hence, the reduced complexity allows the resulting closed-loop system to have a region of attraction approaching the null controllable set, and thus to approach the goal of maximal controllability.
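For context, a minimal sketch of the baseline linear MPC formulation described above, with a terminal weighting and a terminal region constraint, is given below as a quadratic program in cvxpy. The system matrices, weights, horizon and constraint values are illustrative assumptions; this is not the RP MPC algorithm itself.

```python
import cvxpy as cp
import numpy as np

# Illustrative double-integrator system and weights (assumptions, not from the thesis)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q, R, P = np.eye(2), np.eye(1), 10 * np.eye(2)   # stage costs and terminal weighting
N = 10                                            # prediction horizon
x0 = np.array([3.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0]       # input constraint
cost += cp.quad_form(x[:, N], P)                  # terminal state weighting
constraints += [cp.norm(x[:, N], "inf") <= 0.1]   # simple terminal region

problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()
print("first control move:", u.value[:, 0])
```

In RP MPC, the decision variables of this kind of optimisation are replaced by a much smaller parameterisation built from contractive positively invariant sets, which is where the complexity reduction described in the abstract comes from.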
15. Geometrical representations for efficient aircraft conceptual design and optimisation

Sripawadkul, Vis, January 2012
Geometrical parameterisation plays an important role in the aircraft design process because of its impact on computational efficiency and on the accuracy with which different configurations can be evaluated. In the early design stages, an aircraft geometrical model is normally described parametrically with a small number of design parameters, which allows fast computation. However, this provides only a coarse approximation, which is generally limited to conventional configurations for which the models have already been validated. An efficient parameterisation method is therefore required to allow rapid synthesis and analysis of novel configurations. Within this context, the main objectives of this research are to: 1) develop an economical geometrical parameterisation method that captures sufficient detail for aerodynamic analysis and optimisation in the early design stages, and 2) close the gap between the conceptual and preliminary design stages by bringing more detailed information earlier into the design process. Research efforts were initially focused on the parameterisation of two-dimensional curves, by evaluating five widely cited methods for airfoils against five desirable properties. Several metrics are proposed to measure these properties, based on airfoil fitting tests. The comparison suggested that the Class-Shape Functions Transformation (CST) method is the most suitable, and it was therefore chosen as the two-dimensional curve generation method. A set of blending functions is introduced and combined with the two-dimensional curves to generate a three-dimensional surface. These surfaces form wing or body sections, which are assembled through a proposed joining algorithm. An object-oriented structure for aircraft components is also proposed. This allows modelling of the main aircraft surfaces with a sufficient level of accuracy while using a parsimonious number of intuitive design parameters.
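To make the CST representation mentioned above concrete, a minimal sketch follows: the airfoil surface is the product of a class function x^N1 (1 - x)^N2 and a Bernstein-polynomial shape function, plus a trailing-edge term. The coefficient values are illustrative, not taken from the thesis.

```python
import numpy as np
from math import comb

def cst_surface(x: np.ndarray, shape_coeffs: np.ndarray,
                n1: float = 0.5, n2: float = 1.0, dz_te: float = 0.0) -> np.ndarray:
    """Class-Shape Transformation: y(x) = C(x) * S(x) + x * dz_te, with
    C(x) = x^n1 (1-x)^n2 and S(x) a Bernstein polynomial of the coefficients."""
    n = len(shape_coeffs) - 1
    class_fn = x**n1 * (1.0 - x)**n2
    shape_fn = sum(a * comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, a in enumerate(shape_coeffs))
    return class_fn * shape_fn + x * dz_te

# Illustrative upper-surface coefficients (assumed values)
x = np.linspace(0.0, 1.0, 101)
y_upper = cst_surface(x, np.array([0.17, 0.16, 0.14, 0.12]))
```

A handful of Bernstein coefficients per surface is what makes the representation parsimonious while still allowing smooth, realistic airfoil shapes.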
16. Geometrical representations for efficient aircraft conceptual design and optimisation

Sripawadkul, Vis 06 1900
Geometrical parameterisation plays an important role in the aircraft design process because of its impact on computational efficiency and on the accuracy with which different configurations can be evaluated. In the early design stages, an aircraft geometrical model is normally described parametrically with a small number of design parameters, which allows fast computation. However, this provides only a coarse approximation, which is generally limited to conventional configurations for which the models have already been validated. An efficient parameterisation method is therefore required to allow rapid synthesis and analysis of novel configurations. Within this context, the main objectives of this research are to: 1) develop an economical geometrical parameterisation method that captures sufficient detail for aerodynamic analysis and optimisation in the early design stages, and 2) close the gap between the conceptual and preliminary design stages by bringing more detailed information earlier into the design process. Research efforts were initially focused on the parameterisation of two-dimensional curves, by evaluating five widely cited methods for airfoils against five desirable properties. Several metrics are proposed to measure these properties, based on airfoil fitting tests. The comparison suggested that the Class-Shape Functions Transformation (CST) method is the most suitable, and it was therefore chosen as the two-dimensional curve generation method. A set of blending functions is introduced and combined with the two-dimensional curves to generate a three-dimensional surface. These surfaces form wing or body sections, which are assembled through a proposed joining algorithm. An object-oriented structure for aircraft components is also proposed. This allows modelling of the main aircraft surfaces with a sufficient level of accuracy while using a parsimonious number of intuitive design parameters ... [cont.]
17. Automating Geographic Object-Based Image Analysis and Assessing the Method's Transferability : A Case Study Using High Resolution Geografiska Sverigedata™ Orthophotos

Hast, Isak, Mehari, Asmelash, January 2016
Geographic object-based image analysis (GEOBIA) is an innovative image classification technique that treats spatial features in an image as objects rather than as pixels, thus more closely resembling human perception of geographic space. However, the process of a GEOBIA application allows for multiple interpretations, and particularly sensitive parts of the process include image segmentation and training data selection. The multiresolution segmentation algorithm (MSA) is commonly applied, and the performance of segmentation depends primarily on the algorithm's scale parameter, since scale controls the size of the image objects produced. The fact that the scale parameter is unitless makes it challenging to select a suitable value, leaving the analyst with a method of trial and error that can introduce bias. Additionally, apart from the segmentation, training area selection usually means that data have to be collected manually, which is not only time consuming but also prone to subjectivity. To overcome these challenges, we tested a GEOBIA scheme involving automatic methods for MSA scale parameterisation and training area selection, which enabled us to classify images more objectively. Three study areas within Sweden were selected. The data used were high-resolution Geografiska Sverigedata (GSD) orthophotos from the Swedish mapping agency, Lantmäteriet. We objectively determined the scale for each classification using a previously published technique embedded as a tool in the eCognition software. Based on the orthophoto inputs, the tool calculated local variance and its rate of change at different scales, and these figures helped us determine the scale value for the MSA segmentation. Moreover, in this study we developed a novel method for automatic training area selection. The method is based on thresholded feature statistics layers computed from the orthophoto band derivatives; thresholds were detected by Otsu's single and multilevel algorithms, and the layers were run through a filtering process that left only those fit for use in the classification process. We also tested the transferability of classification rule-sets for two of the study areas, which helped us investigate the degree to which automation can be realised. In this study we have made progress toward a more objective approach to object-based image classification, realised by automating the scheme. Particularly noteworthy is the proposed algorithm for automatic training area selection, which, compared to manual selection, restricts human intervention to a minimum. The classification results show overall well-delineated classes, in particular the border between open areas and forest, to which the elevation data contributed. On the other hand, some challenges persist in separating deciduous from coniferous forest. Furthermore, although water was accurately classified in most instances, in one of the study areas the water class showed contradictory results between its thematic and positional accuracy, stressing the importance of assessing results on more than thematic accuracy alone. From the transferability test we noted the importance of considering the spatial and spectral characteristics of an area before transferring rule-sets, as these factors are key to determining whether a transfer is possible.
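As a hedged sketch of the Otsu-based threshold detection applied to the feature statistics layers (the layers themselves and the subsequent filtering are not reproduced here), scikit-image's single and multilevel Otsu functions can be used as follows; the synthetic layer is an assumption standing in for a real band derivative.

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_multiotsu

# Illustrative "feature statistics" layer, e.g. a band ratio or local variance image
rng = np.random.default_rng(0)
layer = np.concatenate([rng.normal(0.2, 0.05, 5000),
                        rng.normal(0.6, 0.05, 5000)]).reshape(100, 100)

t_single = threshold_otsu(layer)                  # one threshold, two classes
t_multi = threshold_multiotsu(layer, classes=3)   # two thresholds, three classes

candidate_mask = layer > t_single                 # candidate training areas for one class
```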
18. Periodically integrated models : estimation, simulation, inference and data analysis

Hamadeh, Lina, January 2016
Periodically correlated time series arise in several fields, including hydrology, climatology, economics and finance, and are commonly modelled using periodic autoregressive (PAR) models. For a time series with a stochastic periodic trend, for which a unit root is expected, a periodically integrated autoregressive (PIAR) model with periodic and/or seasonal unit roots has been shown to be satisfactory. The existing theory uses a multivariate methodology to study PIAR models. However, this theory is convoluted, the majority of it has been developed only for quarterly time series, and its generalisation to time series with a larger number of periods is quite cumbersome. This thesis studies the existing theory and highlights its restrictions and flaws. It provides a coherent presentation of the steps for analysing PAR and PIAR models for different numbers of periods, presents the different unit root representations, and compares the performance of the different unit root tests available in the literature. The restrictions of existing studies gave us the impetus to develop a unified theory that gives a clear understanding of integration and unit roots in periodic models. This theory is based on the spectral information of the multi-companion matrix of the periodic models. It is more general than the existing theory, since it can be applied to any number of periods, whereas the existing methods were developed for quarterly time series. Using the multi-companion method, we specify and estimate periodic models without the need to extract complicated restrictions on the model parameters corresponding to the unit roots, as required by the NLS method. The multi-companion estimation method performed well, and its performance is equivalent to that of the NLS estimation method used in the literature. Analysing integrated multivariate models is a problematic issue in time series analysis, and the multi-companion theory provides a more general approach than the error correction method commonly used to analyse such series. A modified state space representation for the seasonal periodically integrated autoregressive (SPIAR) model with periodic and seasonal unit roots is presented, together with an alternative state space representation from which the state space representations of the PAR, PIAR and seasonal periodic autoregressive (SPAR) models can be obtained directly. The seasons of the parameters in these representations are clearly specified, which guarantees correctly estimated parameters. The Kalman filter has been used to estimate the parameters of these models, and better estimation results are obtained when the initial values are estimated rather than given.
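As an illustration of what periodic integration means in the simplest case (independent of the multi-companion machinery developed in the thesis), a quarterly PIAR(1) process has season-dependent autoregressive coefficients whose product equals one; the simulation below uses illustrative coefficients satisfying that restriction.

```python
import numpy as np

def simulate_piar1(phis, n_years, sigma=1.0, seed=0):
    """Simulate a PIAR(1): y_t = phi_s * y_{t-1} + eps_t, where s is the season of t.
    Periodic integration requires the product of the seasonal coefficients to be 1."""
    assert abs(np.prod(phis) - 1.0) < 1e-8
    rng = np.random.default_rng(seed)
    y = np.zeros(n_years * len(phis))
    for t in range(1, len(y)):
        season = t % len(phis)
        y[t] = phis[season] * y[t - 1] + sigma * rng.standard_normal()
    return y

# Illustrative quarterly coefficients: 1.25 * 0.8 * 1.25 * 0.8 = 1
series = simulate_piar1([1.25, 0.8, 1.25, 0.8], n_years=50)
```

Because the coefficients multiply to one over a full year, shocks accumulate from year to year even though each individual seasonal coefficient differs from one, which is the periodic analogue of an ordinary unit root.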
19. Study of piezoelectricity on III/V semiconductors from atomistic simulations to computer modelling

Tse, Geoffrey, January 2012
High-quality, accurate computational data were obtained through first-principles quantum mechanical calculations based on density functional theory, without the inclusion of empirical data (ab initio). The support of the NGS computing facility allowed us to carry out research involving large-scale atomistic simulations. The data we obtained clearly show that piezoelectricity in GaAs and InAs is non-linear in relation to a general strain. The high-order fitting equation obtained through the parameterisation procedure allowed us to evaluate higher-order piezoelectric coefficients directly. By comparing with other linear and non-linear models and with experimental data, we reached the conclusion that our model is valid in the limit of small shear strain, particularly in the case of (111)-grown semiconductors. This limitation is not restrictive for pseudomorphic growth in the (001) direction, where the shear strain is typically small. We further validate our model through elasticity theory, demonstrating that the sign of the polarization is opposite to bulk values for an InAs semiconductor layer grown in the (001) direction and subject to a 6-7% lattice mismatch. This is additionally supported by experimental evidence (optical absorption spectra). Furthermore, our model provides a direct way of evaluating the polarization for any crystal structure described at the atomic level, which is mainly beneficial to researchers who use molecular dynamics and empirical methods for predicting band structure. The fundamental performance of semiconductor devices can be improved through the use of the small polarization created by strain, which is likely to bring advantages in future photovoltaic devices.
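As a schematic of the kind of higher-order fitting procedure described above (the actual strain components and coefficient values are in the thesis and are not reproduced here), a polarization-versus-strain dataset can be fitted with a low-order polynomial to separate the linear and quadratic piezoelectric contributions:

```python
import numpy as np

# Illustrative data: polarization P (C/m^2) versus strain, with an assumed
# quadratic dependence P = e1 * strain + e2 * strain**2 (made-up coefficients).
strain = np.linspace(-0.04, 0.04, 9)
polarization = -0.23 * strain + 2.0 * strain**2

# Fit a second-order polynomial; the linear term plays the role of a first-order
# piezoelectric coefficient and the quadratic term a second-order correction.
coeffs = np.polynomial.polynomial.polyfit(strain, polarization, deg=2)
print("constant, linear, quadratic terms:", coeffs)
```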
20. Hydraulic-hydromorphologic analysis as an aid for improving peak flow predictions

Åkesson, Anna, January 2010
Conventional hydrological compartmental models have been shown to exhibit a high degree of uncertainty in predictions of peak flows, such as the design floods used in the design of hydropower infrastructure. One reason for these uncertainties is that conventional models are parameterised using statistical methods based on how catchments have responded in the past. Because of the rare occurrence of peak flows, these are underrepresented in the periods used for calibration, which implies that the model has to be extrapolated beyond the discharge intervals for which it has been calibrated. In this thesis, hydromechanical approaches are used to investigate the properties of stream networks, reflecting mechanisms including stage dependency, damming effects, interactions between tributaries (network effects), and the topography of the stream network. It is further investigated how these properties can be incorporated into the streamflow response functions of compartmental hydrological models. The response of the stream network was shown to vary strongly and non-linearly with stage, an effect that is commonly not accounted for in model formulation. The non-linearity is particularly linked to the flooding of stream channels and interactions with the flow on floodplains. An evaluation of the significance of using physically based response functions for discharge predictions in a few sub-catchments in southern Sweden shows improvements over a conventional model, particularly when modelling peak discharges. An additional benefit of replacing statistical parameterisation methods with physical ones is the possibility of hydrological modelling under non-stationary conditions, such as ongoing climate change.
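To illustrate the stage-dependent, non-linear channel response discussed above, a generic textbook sketch (not the thesis's response functions) computes discharge from Manning's equation for a rectangular channel that spills onto a rougher floodplain above bankfull stage; the geometry and roughness values are illustrative assumptions.

```python
import numpy as np

def manning_discharge(area, wetted_perimeter, n, slope):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2), with R = A / P."""
    radius = area / wetted_perimeter
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5

def compound_channel_q(stage, b=10.0, bankfull=2.0, b_fp=100.0,
                       n_ch=0.035, n_fp=0.10, slope=1e-3):
    """Divided-channel approximation: the main channel carries flow over its full
    depth, and a rougher floodplain is added once bankfull stage is exceeded."""
    if stage <= bankfull:
        return manning_discharge(b * stage, b + 2 * stage, n_ch, slope)
    h_fp = stage - bankfull
    q_main = manning_discharge(b * stage, b + 2 * bankfull, n_ch, slope)
    q_flood = manning_discharge(b_fp * h_fp, b_fp + h_fp, n_fp, slope)
    return q_main + q_flood

stages = np.linspace(0.1, 4.0, 8)
print([round(compound_channel_q(s), 1) for s in stages])
# The stage-discharge relation bends sharply at bankfull stage, the kind of
# non-linearity that statistically calibrated response functions tend to miss
# when overbank flows are rare in the calibration record.
```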
