131 |
Advanced speech processing and coding techniques. Al-Naimi, Khaldoon Taha. January 2002
Over the past two decades there has been substantial growth in speech communications and new speech-related applications. Bandwidth constraints led researchers to investigate ways of compressing speech signals whilst maintaining speech quality and intelligibility, so as to increase the possible number of customers for a given bandwidth. Because of this, a variety of speech coding techniques have been proposed over this period. At the heart of any proposed speech coding method is quantisation of the speech production model parameters that need to be transmitted to the decoder. Quantisation is a controlling factor for the targeted bit rates and for meeting quality requirements. The objectives of the research presented in this thesis are twofold. The first is to enable the development of a very low bit rate speech coder which maintains quality and intelligibility. This includes increasing robustness to various operating conditions as well as enhancing the estimation and improving the quantisation of the speech model parameters. The second objective is to provide a method for enhancing the performance of an existing speech-related application. The first objective is tackled with the aid of three techniques. Firstly, various novel estimation techniques are proposed which are such that the resultant estimated speech production model parameters contain less redundant information and are highly correlated. This leads to easier quantisation (due to the higher correlation) and therefore to bit savings. The second technique exploits the joint quantisation of the spectral parameters (i.e. LSFs and spectral amplitudes), given their large impact on the overall bit allocation required. Work towards the first objective also includes a third technique which enhances the estimation of a speech model parameter (i.e. the pitch) through a robust statistics-based post-processing (or tracking) method which operates in noise-contaminated environments. Work towards the second objective focuses on an application where speech plays an important role, namely echo-canceller and noise-suppressor systems. A novel echo-cancellation method is proposed which resolves most of the weaknesses present in existing echo-canceller systems and improves overall system performance.
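The pitch post-processing stage lends itself to a brief illustration. The sketch below applies a generic running-median/MAD outlier rule to a frame-level pitch contour; it is only an assumed stand-in for the robust-statistics tracker described in the abstract, and the function name, window size and threshold are hypothetical choices (numpy assumed available).

```python
# Minimal sketch: robust post-processing of a frame-level pitch track.
# The running-median/MAD outlier rule below is a generic illustration of
# robust-statistics smoothing, not the specific tracker proposed in the thesis.
import numpy as np

def smooth_pitch_track(f0, half_window=2, k=3.0):
    """Replace outlying pitch estimates with a local median.

    f0          : per-frame pitch estimates in Hz (0 for unvoiced frames)
    half_window : frames on each side used for the local statistics
    k           : how many scaled MADs from the median counts as an outlier
    """
    f0 = np.asarray(f0, dtype=float)
    out = f0.copy()
    for i in range(len(f0)):
        if f0[i] == 0.0:                      # leave unvoiced frames untouched
            continue
        lo, hi = max(0, i - half_window), min(len(f0), i + half_window + 1)
        voiced = f0[lo:hi][f0[lo:hi] > 0.0]   # neighbouring voiced frames only
        if len(voiced) < 3:
            continue
        med = np.median(voiced)
        mad = 1.4826 * np.median(np.abs(voiced - med))  # robust spread estimate
        if mad > 0 and abs(f0[i] - med) > k * mad:
            out[i] = med                      # gross error (e.g. octave jump): snap to median
    return out

# Example: a noisy track with one octave-type error at frame 3
print(smooth_pitch_track([100, 102, 101, 205, 103, 104, 0, 0, 99]))
```

Gross errors such as octave jumps show up as isolated deviations from the local median, which is why a robust location/spread pair (median and MAD) is preferred over the mean and standard deviation in noise-contaminated conditions.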
132 |
A knowledge-based approach to modelling fast response catchments. Wedgwood, Owen. January 1993
This thesis describes research into flood forecasting on rapid response catchments, using knowledge-based principles. Extensive use was made of high-resolution single-site radar data from the radar site at Hameldon Hill in North West England. Actual storm events and synthetic precipitation data were used in an attempt to identify 'knowledge' of the rainfall-runoff process. Modelling was carried out with the use of transfer functions, and an analysis is presented of the problems in using this type of model in hydrological forecasting. A 'physically realisable' transfer function model is outlined, and storm characteristics were analysed to establish information about model tuning. The knowledge gained was built into a knowledge-based system (KBS) to enable real-time optimisation of model parameters. A rainfall movement forecasting program was used to provide input to the system. Forecasts using the KBS-tuned parameters proved better than those from a naive transfer function model in most cases. In order to further improve flow forecasts, a simple catchment wetness procedure was developed and included in the system, based on an antecedent precipitation index, using radar rainfall input. A new method of intensity-duration-frequency analysis was developed using distributed radar data at a 2 km by 2 km resolution. This allowed a new application of return periods in real time, in assessing storm severity as it occurs. A catchment transposition procedure was developed allowing subjective catchment placement in front of an approaching event, to assess rainfall 'risk', in terms of catchment history, before the event reaches it. A knowledge-based approach working in real time was found to be successful. The main drawback is the initial procurement of knowledge, or information about thresholds, linkages and relationships.
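The two modelling ingredients named above, the antecedent precipitation index and the transfer function model, can be sketched compactly. The recursion forms and coefficients below are generic textbook choices used only for illustration; they are not the calibrated models from the thesis, and the function names and rainfall values are hypothetical.

```python
# Minimal sketch of two ideas mentioned above, under assumed forms and coefficients:
# (1) an antecedent precipitation index (API) as a decayed running sum of rainfall, and
# (2) a discrete linear transfer-function model mapping rainfall to flow.

def antecedent_precipitation_index(rainfall, decay=0.9, api0=0.0):
    """API_t = decay * API_{t-1} + P_t, a simple catchment-wetness proxy."""
    api, out = api0, []
    for p in rainfall:
        api = decay * api + p
        out.append(api)
    return out

def transfer_function_flow(rainfall, a=(0.7,), b=(0.2, 0.1), q0=0.0):
    """q_t = sum_i a_i * q_{t-1-i} + sum_j b_j * r_{t-j} (ARX-style transfer function)."""
    q = [q0]
    for t in range(1, len(rainfall)):
        ar = sum(ai * q[t - 1 - i] for i, ai in enumerate(a) if t - 1 - i >= 0)
        ma = sum(bj * rainfall[t - j] for j, bj in enumerate(b) if t - j >= 0)
        q.append(ar + ma)
    return q

radar_rain = [0, 5, 12, 8, 2, 0, 0, 1]          # mm per time step (hypothetical)
print(antecedent_precipitation_index(radar_rain))
print(transfer_function_flow(radar_rain))
```

In a real-time setting the radar field would supply the per-timestep catchment-average rainfall, and a KBS of the kind described above would adjust the transfer-function coefficients rather than leaving them fixed as here.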
133 |
Time-varying Markov models of school enrolment. Magalhaes, M. M. M. P. de. January 1987
No description available.
134 |
Predicted risk of harm versus treatment benefit in large randomised controlled trials. Thompson, Douglas David. January 2015
Most drugs come with unwanted, and perhaps harmful, side-effects. Depending on the size of the treatment benefit, such harms may be tolerable. In acute stroke, treatment with aspirin and treatment with alteplase have both proven to be effective in reducing the odds of death or dependency at follow-up. However, in both cases, treated patients are subject to a greater risk of haemorrhage, a serious side-effect which could result in early death or greater dependency. Current treatment licences are restricted so as to avoid treating those with certain traits or risk factors associated with bleeding. It is plausible, however, that a weighted combination of all these factors would achieve better discrimination than an informal assessment of each individual risk factor. This has the potential to help target treatment to those most likely to benefit and to avoid treating those at greater risk of harm. This thesis will therefore: (i) explore how predictions of harm and benefit are currently made; (ii) seek to make improvements by adopting more rigorous methodological approaches in model development; and (iii) investigate how the predicted risk of harm and treatment benefit could be used to strike an optimal balance. Statistical prediction is not an exact science. Before clinical utility can be established it is essential that the performance of any prediction method be assessed at the point of application. A prediction method must attain certain desirable properties to be of any use, namely: good discrimination, which quantifies how well the prediction method can separate events from non-events; and good calibration, which measures how closely the predicted risks match those observed. A comparison of informal predictions made by clinicians and formal predictions made by clinical prediction models is presented using a prospective observational study of stroke patients seen at a single-centre hospital in Edinburgh. These results suggest that both prediction methods achieve similar discrimination. A stratified framework based on predicted risks obtained from clinical prediction models is considered using data from large randomised trials. First, with three of the largest aspirin trials it is shown that there is no evidence to suggest that the benefit of aspirin in reducing six-month death or dependency varies with the predicted risk of benefit or with the predicted risk of harm. Second, using data from the third International Stroke Trial (IST3), a similar question is posed of the effect of alteplase and the predicted risk of symptomatic intracranial haemorrhage. It was found that this relationship corresponded strongly with the relationship associated with stratifying patients according to their predicted risk of death or dependency in the absence of treatment: those at the highest predicted risk of either event stand to experience the largest absolute benefit from alteplase, with no indication of harm amongst those at lower predicted risk. It is concluded that prediction models for harmful side-effects based on simple clinical variables measured at baseline in randomised trials appear to offer little use in targeting treatments. Better separation between harmful events like bleeding and overall poor outcomes is required. This may be possible through the identification of novel (bio)markers unique to haemorrhage post treatment.
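Since discrimination and calibration recur throughout this summary, a minimal sketch of how each is typically computed is given below. The c-statistic and the quantile-group calibration table are standard measures rather than the specific analyses performed in the thesis, and the function names, predicted risks and outcomes are entirely hypothetical.

```python
# Minimal sketch of the two performance measures described above, on hypothetical data:
# discrimination as the c-statistic (area under the ROC curve) and calibration as the
# agreement between mean predicted risk and observed event rate within risk groups.
import numpy as np

def c_statistic(risk, outcome):
    """Probability that a randomly chosen event has a higher predicted risk
    than a randomly chosen non-event (ties count one half)."""
    risk, outcome = np.asarray(risk, float), np.asarray(outcome, int)
    events, non_events = risk[outcome == 1], risk[outcome == 0]
    greater = (events[:, None] > non_events[None, :]).sum()
    ties = (events[:, None] == non_events[None, :]).sum()
    return (greater + 0.5 * ties) / (len(events) * len(non_events))

def calibration_table(risk, outcome, n_groups=4):
    """Mean predicted risk vs observed event rate per risk quantile group."""
    risk, outcome = np.asarray(risk, float), np.asarray(outcome, int)
    order = np.argsort(risk)
    groups = np.array_split(order, n_groups)
    return [(float(risk[g].mean()), float(outcome[g].mean())) for g in groups]

# Hypothetical predicted risks of haemorrhage and observed outcomes
risk    = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.55, 0.70]
outcome = [0,    0,    0,    1,    0,    1,    1,    1   ]
print("c-statistic:", c_statistic(risk, outcome))
print("calibration (mean predicted, observed):", calibration_table(risk, outcome))
```

A c-statistic of 0.5 corresponds to chance-level separation of events from non-events, while calibration is judged by how closely the paired columns of the table agree.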
135 |
A hydrological study of Northern Ireland catchments with particular reference to the low flow regime in natural river channels. Wright, G. D. January 1978
No description available.
136 |
The relative efficiency of clinical and actuarial methods in the prediction of University freshman success. Simmons, Helen. January 1957
The purpose of this study was to investigate the relative efficiency of clinical and actuarial methods in predicting the success of University freshmen. The clinical predictions were based on the judgements of two University counselors. The information available to the counselors consisted of an interview report sheet; scores from tests intended to measure general learning ability, English placement, mathematical ability, and an expression of vocational interests; as well as identifying information such as name, age, and other similar kinds of data.
The actuarial prediction was based on a regression equation built on scores of two tests, one of mathematical ability, and one of English placement. These scores were among those available to the counselors. The regression equation was cross-validated in the study, since it was originally built on the scores obtained on these two tests by a previous sample of University freshmen, chosen on the same set of criteria.
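As a rough illustration of that actuarial procedure (an editorial sketch, not part of the original study), the code below fits a two-predictor least-squares equation on one sample and applies it unchanged to a second sample, dichotomising the predicted average into pass or fail. All scores, coefficients and the pass mark are hypothetical.

```python
# Minimal sketch of the actuarial procedure described above: a two-predictor
# least-squares regression fitted on an earlier freshman sample, then applied
# unchanged (cross-validated) to a new sample and dichotomised to pass/fail.
import numpy as np

# Earlier sample: columns are (mathematics score, English placement score);
# the target is a numeric first-year average used only to fit the equation.
X_old = np.array([[52, 60], [70, 65], [45, 50], [80, 72], [60, 58]], float)
y_old = np.array([55, 68, 48, 75, 60], float)

# Fit y = b0 + b1*math + b2*english by ordinary least squares.
A = np.column_stack([np.ones(len(X_old)), X_old])
coef, *_ = np.linalg.lstsq(A, y_old, rcond=None)

def predict_pass(math, english, pass_mark=50.0):
    """Apply the fixed regression equation to a new freshman and dichotomise."""
    predicted_average = coef[0] + coef[1] * math + coef[2] * english
    return "pass" if predicted_average >= pass_mark else "fail"

# New (cross-validation) sample
for math, english in [(48, 45), (72, 66)]:
    print(math, english, predict_pass(math, english))
```

Because the coefficients are frozen before the second sample is seen, the second sample provides the cross-validation of the equation's predictive accuracy.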
Predictions were made for 158 Arts freshmen registered at the University of British Columbia for courses amounting to exactly 15 units of credit (including two laboratory sciences and an introductory mathematics course at the freshman level). Each case included in the sample had availed himself of University counselling, and each counselor predicted only for those students he had personally counselled. One counselor predicted for 78 subjects and the other for 80 subjects. Predictions of success were made in terms of a "pass-fail" dichotomy.
The relative efficiency of the two methods was tested against four criteria: better-than-chance prediction, homogeneity, relatedness, and "hit" predictions. In testing for better-than-chance prediction, the accuracy of each method was compared with "chance" accuracy, where chance accuracy was considered to be 50 per cent. In testing for homogeneity, the number of cases assigned to either category of the success dichotomy by each method was compared. In testing relatedness of predictions, a comparison was made in terms of agreement by the two methods in assigning a given subject to one or the other category of the dichotomy. Finally, in comparing "hit" predictions, the number of correct predictions made by each method was used as the basis for comparison.
Analysis of the obtained results showed no significant difference in the efficiency of the actuarial and clinical methods in predicting the success of University freshmen. Both methods were shown to be significantly better than chance predictors. / Arts, Faculty of / Psychology, Department of / Graduate
137 |
A critical analysis of the thesis of the symmetry between explanation and prediction: including a case study of evolutionary theory. Lee, Robert Wai-Chung. January 1979
One very significant characteristic of Hempel's covering-law models of scientific explanation, that is, the deductive-nomological model and the inductive-statistical model, is the supposed symmetry between explanation and prediction. In brief, the symmetry thesis asserts that explanation and prediction have the same logical structure; in other words, if an explanation of an event had been taken account of in time, then it could have served as a basis for predicting the event in question, and vice versa. The present thesis is a critical analysis of the validity of this purported symmetry between explanation and prediction.
The substance of the thesis begins with a defence against some common misconceptions of the symmetry thesis, for example, the idea that the symmetry concerns statements but not arguments. Specifically, Grunbaum's interpretation of the symmetry thesis as pertaining to the logical inferability rather than the epistemological symmetry between explanation and prediction is examined.
The first sub-thesis of the symmetry thesis, namely that "Every adequate explanation is a potential prediction," is then analyzed. Purported counterexamples such as evolutionary theory and the paresis case are critically examined and consequently dismissed. Since there are conflicting views regarding the nature of explanation and prediction in evolutionary theory, a case study of the theory is also presented.
Next, the second sub-thesis of the symmetry thesis, namely that "Every adequate prediction is a potential explanation," is discussed. In particular, the barometer case is discharged as a counterexample to the second sub-thesis when the explanatory power of indicator laws is properly understood.
Finally, Salmon's current causal-relevance model of explanation, which claims to be an alternative to Hempel's inductive-statistical model, is critically analyzed. A modified inductive-statistical model of explanation is also proposed. This modified model retains the nomological ingredient of Hempel's original inductive-statistical model, but it is immune to criticisms raised against the latter.
In conclusion, I maintain that there is indeed a symmetry between explanation and prediction. But since deductive-nomological explanation and prediction are essentially different from inductive-statistical explanation and prediction, the form the symmetry takes between deductive-nomological explanation and prediction differs from the form it exhibits between inductive-statistical explanation and prediction. / Arts, Faculty of / Philosophy, Department of / Graduate
138 |
Adaptive Software Fault Prediction Approach Using Object-Oriented Metrics. Babic, Djuradj. 09 November 2012
As users continually request additional functionality, software systems will continue to grow in their complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults, as a consequence, has been substantial.
Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonably accurate predictions of faulty modules using software metrics. However, as context-specific metrics differ from project to project, predicting across projects is difficult. Prediction models obtained from one project's experience are ineffective in their ability to predict fault-prone modules when applied to other projects. Hence, the ability to take full advantage of the existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
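As a rough illustration of what adapting an existing model across projects can look like, the sketch below trains a scikit-learn logistic-regression fault predictor on one project's object-oriented metrics and then re-calibrates only its output score on a small labelled slice of a second project. The recalibration step, the synthetic metric data and all parameter choices are assumptions made for illustration; they are not the dissertation's specific adaptation approach.

```python
# Minimal sketch of reusing a fault-prediction model across projects: a logistic-
# regression model trained on one project's object-oriented metrics is
# re-calibrated on a small labelled slice of a second project.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Project A: rows are modules, columns are OO metrics (e.g. WMC, CBO, RFC); 1 = faulty.
X_a = rng.normal(size=(200, 3))
y_a = (X_a @ np.array([1.0, 0.8, 0.5]) + rng.normal(scale=0.5, size=200) > 0).astype(int)
base = LogisticRegression().fit(X_a, y_a)

# Project B has a shifted metric distribution, so raw scores from `base` are miscalibrated.
X_b = rng.normal(loc=0.7, size=(120, 3))
y_b = (X_b @ np.array([1.0, 0.8, 0.5]) + rng.normal(scale=0.5, size=120) > 1.0).astype(int)

# Adaptation step: keep the learned score, refit only an intercept/slope on a small
# labelled sample from project B (logistic recalibration of the score).
score_b = base.decision_function(X_b).reshape(-1, 1)
recal = LogisticRegression().fit(score_b[:30], y_b[:30])

adapted_risk = recal.predict_proba(score_b)[:, 1]
print("adapted fault probabilities, first 5 modules of project B:", adapted_risk[:5].round(2))
```

The appeal of score-level adaptation is that the relative ranking learned on the first project is retained, while the handful of labelled modules from the new project corrects the scale of the predicted fault probabilities.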
139 |
The problem of drop-outs in public schools. Unknown Date
Since Union School is rural, located about ten miles from Live Oak, and also a comparatively small elementary school, it has been under consideration four times for consolidation by the Suwannee County Board of Public Instruction upon the recommendation of the county school superintendent during the writer's four years as principal of the aforesaid school. / Typescript. / "August, 1953." / "Submitted to the Graduate Council of Florida State University in partial fulfillment of the requirements for the degree of Master of Arts." / Advisor: Virgil E. Strickland, Professor Directing Paper. / Includes bibliographical references (leaf 41).
140 |
Machine learning for corporate failure prediction: an empirical study of South African companies. Kornik, Saul. January 2004
Includes bibliographical references (leaves 255-266). / The research objective of this study was to construct an empirical model for the prediction of corporate failure in South Africa through the application of machine learning techniques using information generally available to investors. The study began with a thorough review of the corporate failure literature, breaking the process of prediction model construction into the following steps:
* Defining corporate failure
* Sample selection
* Feature selection
* Data pre-processing
* Feature subset selection
* Classifier construction
* Model evaluation
These steps were applied to the construction of a model, using a sample of failed companies that were listed on the JSE Securities Exchange between 1 January 1996 and 30 June 2003. A paired sample of non-failed companies was selected. Pairing was performed on the basis of year of failure, industry and asset size (total assets per the company financial statements, excluding intangible assets). A minimum of two years and a maximum of three years of financial data were collated for each company. Such data was mainly sourced from BFA McGregor RAID Station, although the BFA McGregor Handbook and JSE Handbook were also consulted for certain data items. A total of 75 financial and non-financial ratios were calculated for each year of data collected for every company in the final sample. Two databases of ratios were created: one for all companies with at least two years of data and another for those companies with three years of data. Missing and undefined data items were rectified before all the ratios were normalised. The set of normalised values was then imported into MatLab Version 6 and input into a Population-Based Incremental Learning (PBIL) algorithm. PBIL was then used to identify those subsets of features that best separated the failed and non-failed data clusters for a one, two and three year forward forecast period. Thornton's Separability Index (SI) was used to evaluate the degree of separation achieved by each feature subset.
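To make the final two steps more concrete, the sketch below runs a simplified PBIL search over binary feature masks and scores each candidate subset with Thornton's Separability Index, computed as the fraction of observations whose nearest neighbour (using only the selected ratios) shares the same failed/non-failed label. The synthetic data, population size and learning rate are hypothetical, and the loop omits refinements such as mutation; it illustrates the technique rather than reproducing the thesis implementation.

```python
# Minimal sketch of the feature-subset search described above: a Population-Based
# Incremental Learning (PBIL) loop whose fitness is Thornton's Separability Index.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical normalised ratios for non-failed (0) and failed (1) companies.
X = rng.normal(size=(60, 8))
y = np.array([0] * 30 + [1] * 30)
X[y == 1, :3] += 1.5          # only the first three ratios carry a class signal

def separability_index(X, y, mask):
    """Thornton's SI: share of points whose nearest neighbour has the same label."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    Xs = X[:, cols]
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # exclude each point itself
    nearest = d.argmin(axis=1)
    return float((y[nearest] == y).mean())

def pbil_feature_selection(X, y, pop=20, iters=40, lr=0.1):
    p = np.full(X.shape[1], 0.5)                # probability each feature is selected
    best_mask, best_fit = None, -1.0
    for _ in range(iters):
        samples = rng.random((pop, X.shape[1])) < p
        fits = [separability_index(X, y, m) for m in samples]
        winner = samples[int(np.argmax(fits))]
        if max(fits) > best_fit:
            best_fit, best_mask = max(fits), winner.copy()
        p = (1 - lr) * p + lr * winner          # shift probabilities toward the winner
    return best_mask, best_fit

mask, fit = pbil_feature_selection(X, y)
print("selected ratio indices:", np.flatnonzero(mask), "SI:", round(fit, 3))
```

Each iteration nudges the per-feature selection probabilities toward the best mask sampled so far, so the search gradually concentrates on ratio subsets that keep failed and non-failed companies in well-separated clusters.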