261 |
EVERYDAY SPEECH PRODUCTION ASSESSMENT MEASURE (E-SPAM): RELIABILITY AND VALIDITY
Watts, Tracy N., 01 January 2011
Purpose: The Everyday Speech Production Assessment Measure (E-SPAM) is a novel test for assessing changes in clients' speech production skills after intervention. This study provides information on the reliability and validity of the test and overviews its clinical application.
Method & Procedures: E-SPAM, oral reading, and sequential motion rate tasks were administered to 15 participants with motor speech disorders (MSDs). E-SPAM responses were scored using a 5-point system by four graduate students to assess inter-scorer and temporal reliability and to determine validity for E-SPAM.
Results: Findings of this study indicate that the E-SPAM can be scored with sufficient reliability for clinical use, that it yields stable scores on repeat administrations, and that its results correlate highly with other accepted measures of speech production ability, specifically sentence intelligibility and severity.
Conclusions: While the results of this study must be considered preliminary because of the small sample size, the E-SPAM does appear to provide information about aspects of speech production, such as intelligibility, efficiency, and speech naturalness, that are important when treatment focuses on improving speech. The E-SPAM also appears to be a “clinician-friendly” test, as it is quick to administer and score and can be administered to patients across the severity continuum.
|
262 |
Theoretical Foundations for Practical ‘Totally Functional Programming’
Colin Kemp, Unknown Date
Interpretation is an implicit part of today’s programming; it has great power but is overused and has significant costs. For example, interpreters are typically hard to understand and hard to reason about. The methodology of “Totally Functional Programming” (TFP) is a reasoned attempt to redress the problem of interpretation. It incorporates an awareness of the undesirability of interpretation with observations that definitions and a certain style of programming appear to offer alternatives to it. Application of TFP is expected to lead to a number of significant outcomes, theoretical as well as practical. Primary among these are novel programming languages to lessen or eliminate the use of interpretation in programming, leading to better-quality software. However, TFP contains a number of lacunae in its current formulation, which hinder development of these outcomes. Among others, formal semantics and type-systems for TFP languages are yet to be discovered, the means to reduce interpretation in programs is to be determined, and a detailed explication is needed of interpretation, definition, and the differences between the two. Most important of all, however, is the need to develop a complete understanding of the nature of interpretation. In this work, suitable type-systems for TFP languages are identified, and guidance is given regarding the construction of appropriate formal semantics. Techniques, based around the ‘fold’ operator, are identified and developed for modifying programs so as to reduce the amount of interpretation they contain. Interpretation as a means of language-extension is also investigated. Finally, the nature of interpretation is considered. Numerous hypotheses relating to it are considered in detail. Combining the results of those analyses with discoveries from elsewhere in this work leads to the proposal that interpretation is not, in fact, symbol-based computation, but something more fundamental: computation that varies with input. We discuss in detail various implications of this characterisation, including its practical application. An often more useful property, ‘inherent interpretiveness’, is also motivated and discussed in depth. Overall, our inquiries act to give conceptual and theoretical foundations for practical TFP.
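To make the contrast between interpretation and fold-based definition concrete, the sketch below (an illustration only, written in Python rather than a TFP language; the names eval_expr and fold_expr are hypothetical) evaluates a small expression language first with a conventional interpreter and then via a generic fold, where the case analysis is written once and the concrete behaviours are supplied as ordinary functions.

```python
# Interpretive style: expressions are symbolic data, and an interpreter
# re-examines the symbols every time a value is needed.
def eval_expr(expr):
    tag = expr[0]
    if tag == "lit":
        return expr[1]
    if tag == "add":
        return eval_expr(expr[1]) + eval_expr(expr[2])
    if tag == "mul":
        return eval_expr(expr[1]) * eval_expr(expr[2])
    raise ValueError(f"unknown tag: {tag}")

# Fold (catamorphism) over the same structure: the traversal is written once,
# and the meaning of each constructor is passed in as a plain function.
def fold_expr(expr, lit, add, mul):
    tag = expr[0]
    if tag == "lit":
        return lit(expr[1])
    if tag == "add":
        return add(fold_expr(expr[1], lit, add, mul),
                   fold_expr(expr[2], lit, add, mul))
    if tag == "mul":
        return mul(fold_expr(expr[1], lit, add, mul),
                   fold_expr(expr[2], lit, add, mul))
    raise ValueError(f"unknown tag: {tag}")

e = ("add", ("lit", 2), ("mul", ("lit", 3), ("lit", 4)))
assert eval_expr(e) == 14
assert fold_expr(e, lambda n: n, lambda a, b: a + b, lambda a, b: a * b) == 14
# In a "totally functional" style the expression would simply *be* its fold,
# e.g. e = lambda lit, add, mul: add(lit(2), mul(lit(3), lit(4))),
# so no symbolic interpreter is needed at all.
```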
|
263 |
Development and Application of Statistical and Machine Learning Techniques in Probabilistic Astronomical Catalogue-Matching Problems
David Rohde, Unknown Date
Advances in the development of detector and computer technology have led to a rapid increase in the availability of large datasets to the astronomical community. This has created opportunities to do science that would otherwise be difficult or impossible. At the same time, astronomers have acknowledged that this influx of data creates new challenges in the development of tools and practice to facilitate usage of this technology by the international community. A worldwide effort known as the Virtual Observatory has developed to this end, involving collaborations between astronomers, computer scientists and statisticians. Different telescopes survey the sky in different wavelengths, producing catalogues of objects containing observations of both positional and non-positional properties. Because multiple catalogues exist, a common situation is that there are two catalogues containing observations of the same piece of sky (e.g. one sparse catalogue with relatively few objects per unit area, and one dense catalogue with many more objects per unit area). Identifying matches, i.e. different observations of the same object in different catalogues, is an important step in building a multi-wavelength understanding of the universe. Positional properties of objects can be used in some cases to perform catalogue matching; however, in other cases position alone is insufficient to determine matching objects. This thesis applies machine learning and statistical methods to explore the usefulness of non-positional properties in identifying these matching objects common to two different catalogues. A machine learning classification system is shown to be able to identify these objects in a particular problem domain. It is shown that non-positional inputs can be very beneficial in identifying matches for a particular problem. The result is that supervised learning is shown to be a viable method to be applied in difficult catalogue matching problems. The use of probabilistic outputs is developed as an enhancement in order to give a means of identifying the uncertainty in the matches. Something that distinguishes this problem from standard pattern classification problems is that one class, the matches, belongs to a high-dimensional distribution, whereas the non-matches belong to a lower-dimensional distribution. This assumption is developed in a probabilistic framework. The result of this is a class of probability models useful for catalogue matching and a number of tests for the suitability of the computed probabilities. The tests were applied on a problem and showed a good classification rate, good results obtained by scoring rules and good calibration. Visual inspection of the output also suggested that the algorithm was behaving in a sensible way. While reasonable results are obtained, it is acknowledged that the question of whether a probability is a good probability is philosophically awkward. One goal of analysing astronomical matched or unmatched catalogues is to make accurate inferential statements on the basis of the available data. A silent assumption is often made that the first step in analysing unmatched catalogues is to find the best match between them, then to plot this best-match data assuming it to be correct. This thesis shows that this assumption is false; inferential statements based on the best-match data can potentially be quite misleading. To address this problem a new framework for catalogue matching, based on Bayesian statistics, is developed.
In this Bayesian framework it is unnecessary for the method to commit to a single matched dataset; rather, the ensemble of all possible matches can be used. This method compares favourably to other methods based upon choosing the most likely match. The result of this is the outline of a method for analysing astronomical datasets not by a scatter plot obtained from a perfectly known pre-matched list of data, but rather using predictive distributions which need not be based on a perfect list and indeed might be based upon unmatched or partly matched catalogues.
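To give a flavour of such probabilistic outputs, the following Python sketch (a generic illustration under assumed Gaussian error models, not the probability models developed in the thesis) assigns posterior probabilities to candidate counterparts of one sparse-catalogue object, combining positional offsets with a non-positional magnitude difference and reserving probability mass for a "no match" outcome; the function name match_probabilities and all priors and error scales are assumptions.

```python
import numpy as np

def match_probabilities(dx, dy, dmag, pos_sigma=0.5, mag_sigma=0.8,
                        prior_match=0.9, field_density=1e-3):
    # Positional likelihood: isotropic Gaussian in the offsets (arcsec).
    pos_like = np.exp(-0.5 * (dx**2 + dy**2) / pos_sigma**2) / (2 * np.pi * pos_sigma**2)
    # Non-positional likelihood: Gaussian in the magnitude difference.
    mag_like = np.exp(-0.5 * dmag**2 / mag_sigma**2) / np.sqrt(2 * np.pi * mag_sigma**2)
    like = pos_like * mag_like
    # Split the match prior evenly over the candidates; the remaining mass is
    # "no true counterpart", modelled as a uniform background of chance alignments.
    n = len(dx)
    weights = np.append(prior_match * like / n, (1 - prior_match) * field_density)
    post = weights / weights.sum()
    return post[:-1], post[-1]   # per-candidate probabilities, P(no match)

# Three candidates: positional offsets in arcsec and magnitude differences.
p_cand, p_none = match_probabilities(np.array([0.2, 1.5, 0.4]),
                                     np.array([0.1, 0.9, 1.2]),
                                     np.array([0.1, 2.0, 0.3]))
print(p_cand, p_none)
```

Averaging downstream inferences over these probabilities, rather than keeping only the most probable candidate, is the kind of ensemble treatment the Bayesian framework above argues for.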
|
264 |
Automated product configuration in a custom mast building environment
Twidale, Z., Unknown Date
No description available.
|
265 |
Digital immigrant teachers learning for the information age
Senjov-Makohon, Natalie, January 2009
This study investigated how experienced teachers learned Information and Communication Technologies (ICT) during their professional development. With the introduction of ICT, experienced teachers encountered change, becoming virtually displaced persons – digital immigrants, new settlers – endeavouring to obtain digital citizenship in order to survive in the information age. In the process, these teachers moved from learning how to push buttons, to applying software, and finally to changing their practice. They learned collectively and individually, in communities and networks, like immigrants and adult learners: by doing, experimenting and reflecting on ICT. Unfortunately, for these teachers-as-pedagogues, the focus on pedagogical theory during the action research they conducted was not fully investigated or embraced during the year-long study. This study used a participant observation qualitative methodology to follow teachers in their university classroom. Interviews were conducted and documentation collected and verified by the teacher educator. The application of Kolb's, Gardner's, and Vygotsky's work allowed for the observation of these teachers within their sociocultural contexts. Kolb's work helped to understand their learning processes, and Gardner's work indicated the learning abilities that these teachers valued in the new ICT environment. Meanwhile, Vygotsky's work – and in particular three concepts, uchit, perezhivanija, and mislenija – presented a richer and more informed basis to understand immigration and change. Finally, this research proposes that teachers learn ICT through what is termed a hyperuchit model, consisting of developments, action, interaction, and reflection. The recommendation is that future university ICT professional learning for teachers incorporate this hyperuchit model.
|
266 |
Investigation on the quality of videoconferencing over the Internet and intranet environments
Danthuluri, Ravi, January 2003
This study deals with the scope and feasibility of video conferencing on the Internet and intranet for a real-time implementation of a classroom atmosphere linking different universities. I have considered the effects of various factors on video conferencing, and different tests have been performed to study the data transfer during the online sessions. Readings of send rate, received rate and CPU load have been recorded during these tests, and the results have been plotted in the form of graphs. The study also gives conclusions at regular intervals on the tests performed and the limitations of various video conferencing sessions. From the statistics collected, I have drawn conclusions about the hardware requirements for optimized performance of video conferencing over the Internet. The study also states the scope of research to be undertaken in future for much better performance and understanding of different types of protocols. This thesis includes the study of various network-monitoring tools.
|
267 |
A Statistical Approach to Automatic Process Control (regulation schemes)
Venkatesan, Gopalachary, January 1997
Automatic process control (APC) techniques have been applied to process variables such as feed rate, temperature, pressure, viscosity, and to product quality variables as well. Conventional practices of engineering control use the potential for step changes to justify an integral term in the controller algorithm to give (long-run) compensation for a shift in the mean of a product quality variable. Application of techniques from the fields of time series analysis and stochastic control to tackle product quality control problems is also common. The focus of this thesis is on the issues of process delay ('dead time') and dynamics ('inertia'), which provide the opportunity to utilise technologies from both statistical process control (SPC) and APC. A presentation of the application of techniques from both SPC and APC is made in an approach to control the quality of a product (product variability) at the output. The thesis considers the issues of process control in situations where some form of feedback control is necessary and yet where stability in the feedback control loop cannot be easily attained. 'Disturbances' afflict a process control system and, together with issues of dynamics and dead time (time delay), compound the control problem. An explanation of proportional, integral and derivative (PID) controllers, time series controllers, minimum variance (mean square error) control and MMSE (minimum mean square error) controllers is given after a literature review of stochastic process control and 'dead-time compensation' methods. The dynamic relationship between the (output) controlled and (input) manipulative variables is described by a second-order dynamic model (transfer function), as is the process dead time. An ARIMA (0,1,1) stochastic time series model is used to characterize and forecast the drifting behaviour of process disturbances. A feedback control algorithm is developed which minimizes the variance of the output controlled variable by making an adjustment at every sample point that exactly compensates for the forecasted disturbance. An expression is derived for the input control adjustment required to exactly cancel the output deviation, by imposing feedback control stability conditions. The (dead-time) simulation of the stochastic feedback control algorithm and EWMA process control are critiqued. The feedback control algorithm is simulated to find the CESTDDVN (control error standard deviation), or control error sigma (product variability), and the adjustment frequency of the time series controller. An analysis of the time series controller performance results and a discussion follow the simulation. Time series controller performance is discussed and an outline of a process regulation scheme is given. The thesis enhances some of the methodologies that have been recently suggested in the literature on integrating SPC and APC and concludes with details of some suggestions for further research. Solutions to the problems of statistical process monitoring and feedback control adjustment connected with feedback (closed-loop) stability, controller limitations and adequate compensation of dead time in achieving minimum variance control are found by the application of both process control techniques. By considering the dynamic behaviour of the process and by manipulating the inputs during non-stationary conditions, dynamic optimization is achieved. The IMA parameter, suggested as an on-line tuning parameter to compensate for dead time, leads to adaptive (self-tuning) control.
It is demonstrated that the time series controller is superior to the EWMA and CUSUM controllers and provides minimum variance control even in the face of dead time and dynamics. Several papers have appeared in Technometrics, Volume 34, No. 3, 1992, in relation to statistical process monitoring and feedback adjustment (pp. 251-267), ASPC (pp. 286-297), and the integration of SPC and APC (pp. 268-285). By exploiting the time series controller's one-step-ahead forecasting feature and considering closed-loop (feedback) stability and dead-time compensation, this thesis adds further to these contributions.
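As a point of reference for the forecast-and-adjust idea above, here is a minimal Python simulation sketch (illustrative only: it assumes a responsive process with unit gain and no dead time or second-order dynamics, unlike the process treated in the thesis). The disturbance follows an ARIMA(0,1,1) model, the controller forecasts it with the matching EWMA (lambda = 1 - theta), and each adjustment cancels the forecast, so only the unforecastable white-noise shock remains in the output.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, lam, sigma_a, n = 0.6, 0.4, 1.0, 5000   # lam = 1 - theta is the MMSE choice

a = rng.normal(0.0, sigma_a, n)
d = np.zeros(n)                                # ARIMA(0,1,1) (IMA) disturbance
for t in range(1, n):
    d[t] = d[t - 1] + a[t] - theta * a[t - 1]

x_prev, d_hat, y = 0.0, 0.0, np.zeros(n)
for t in range(n):
    y[t] = d[t] + x_prev                       # output deviation from target
    d_rec = y[t] - x_prev                      # reconstructed disturbance level
    d_hat = d_hat + lam * (d_rec - d_hat)      # EWMA one-step-ahead forecast
    x_prev = -d_hat                            # adjustment cancels the forecast

print("uncontrolled sigma:", d.std(), " controlled sigma:", y.std())
# With lam = 1 - theta the controlled sigma approaches sigma_a, the minimum
# attainable (minimum mean square error) output standard deviation.
```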
|
268 |
Nonlinear Methods in the Study of Singular Partial Differential Equations
Cirstea, Florica-Corina, January 2005
Nonlinear singular partial differential equations arise naturally when studying models from such areas as Riemannian geometry, applied probability, mathematical physics and biology. The purpose of this thesis is to develop analytical methods to investigate a large class of nonlinear elliptic PDEs underlying models from physical and biological sciences. These methods advance the knowledge of qualitative properties of the solutions to equations of the form Δu = f(x,u) in Ω, where Ω is a smooth domain in R^N (bounded or possibly unbounded) with compact (possibly empty) boundary ∂Ω. A non-negative solution of the above equation subject to the singular boundary condition u(x) → ∞ as dist(x,∂Ω) → 0 (if Ω ≠ R^N), or u(x) → ∞ as |x| → ∞ (if Ω = R^N), is called a blow-up or large solution; in the latter case the solution is called an entire large solution. Issues such as existence, uniqueness and asymptotic behavior of blow-up solutions are the main questions addressed and resolved in this dissertation. The study of similar equations with homogeneous Dirichlet boundary conditions, along with that of ODEs, supplies basic tools for the theory of blow-up. The treatment is based on devices used in Nonlinear Analysis such as the maximum principle and the method of sub and super-solutions, which is one of the main tools for finding solutions to boundary value problems. The existence of blow-up solutions is examined not only for semilinear elliptic equations, but also for systems of elliptic equations in R^N and for singular mixed boundary value problems. Such a study is motivated by applications in various fields and stimulated by very recent trends in research at the international level. The influence of the nonlinear term f(x,u) on the uniqueness and asymptotics of the blow-up solution is very delicate and still eludes researchers, despite a very extensive literature on the subject. This challenge is met in a general setting capable of modelling competition near the boundary (that is, 0 · ∞ near ∂Ω), which is very suitable to applications in population dynamics. As a special feature, we develop innovative methods linking, for the first time, the topic of blow-up in PDEs with regular variation theory (or Karamata's theory) arising in applied probability. This interplay between PDEs and probability theory plays a crucial role in proving the uniqueness of the blow-up solution in a setting that removes previous restrictions imposed in the literature. Moreover, we unveil the intricate pattern of the blow-up solution near the boundary by establishing the two-term asymptotic expansion of the solution and its variation speed (in terms of Karamata's theory). The study of singular phenomena is significant because computer modelling is usually inefficient in the presence of singularities or fast oscillation of functions. Using the asymptotic methods developed by this thesis one can find the appropriate functions modelling the singular phenomenon. The research outcomes prove to be of significance through their potential applications in population dynamics, Riemannian geometry and mathematical physics.
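For orientation, a classical model case (far more special than the nonlinearities f(x,u) treated in this thesis, and stated here only as standard background) already exhibits the objects involved: for f(u) = u^p with p > 1 on a bounded smooth domain, the Keller-Osserman integral condition holds and guarantees a blow-up solution, whose first-order boundary behaviour is explicit; the two-term expansions obtained in the thesis refine asymptotics of exactly this kind.

```latex
% Classical model case \Delta u = u^{p}, p > 1 (background, not a result of the thesis):
\[
  \int^{\infty} \bigl(2F(t)\bigr)^{-1/2}\,dt < \infty,
  \qquad F(t) = \int_{0}^{t} s^{p}\,ds = \frac{t^{p+1}}{p+1},
\]
\[
  u(x) \sim \Bigl(\frac{2(p+1)}{(p-1)^{2}}\Bigr)^{\!1/(p-1)} d(x)^{-2/(p-1)}
  \qquad \text{as } d(x) := \operatorname{dist}(x,\partial\Omega) \to 0 .
\]
```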
|
269 |
Channel Estimation for OFDM Systems With Transmitter Diversity
Tolochko, Igor Aleksandrovich, January 2005
Orthogonal Frequency-Division Multiplexing (OFDM) is now regarded as a feasible alternative to conventional single-carrier modulation techniques for high data rate communication systems, mainly because of its inherent equalisation simplicity. Transmitter diversity can effectively combat multipath channel impairments due to the dispersive wireless channel, which can cause deep fades in some subchannels. The combination of the two techniques, OFDM and transmitter diversity, can further enhance data rates in a frequency-selective fading environment. However, this enhancement requires accurate and computationally efficient channel state information when coherent detection is involved. A good choice for high-accuracy channel estimation is the linear minimum mean-squared error (LMMSE) technique, but it requires a large number of processing operations. In this thesis, a thorough study, based on mathematical analysis and simulations in MATLAB, is carried out to find new and effective channel estimation methods for OFDM in a transmit diversity environment. As a result, three novel LMMSE-based channel estimation algorithms are evaluated: real-time LMMSE, LMMSE by significant weight catching (SWC) and low-complexity LMMSE with the power delay profile approximated as uniform. The new techniques and their combinations can significantly reduce the full LMMSE processor complexity, by 50% or more, while the estimation accuracy loss remains within 1-2 dB over a wide range of channel delay spreads and signal-to-noise ratios (SNR). To further enhance the channel estimator performance, pilot symbol structures are investigated and methods for statistical parameter estimation in real time are also presented.
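For reference, the following NumPy sketch contrasts least-squares and LMMSE channel estimates on one OFDM symbol with a single transmit antenna (an illustration of the standard LMMSE estimator only; the transmit-diversity pilot structures and the three reduced-complexity variants evaluated in the thesis are not modelled). The subcarrier count, exponential power delay profile and SNR are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, snr_db = 64, 8, 10                          # subcarriers, channel taps, SNR
snr = 10 ** (snr_db / 10)

# Random multipath channel with an exponential power delay profile.
pdp = np.exp(-np.arange(L) / 3.0); pdp /= pdp.sum()
h = np.sqrt(pdp / 2) * (rng.normal(size=L) + 1j * rng.normal(size=L))
H = np.fft.fft(h, N)                              # true channel frequency response

# Known constant-modulus pilots on every subcarrier, observed in noise.
X = (1 + 1j) / np.sqrt(2) * (2 * rng.integers(0, 2, N) - 1)
noise = np.sqrt(1 / (2 * snr)) * (rng.normal(size=N) + 1j * rng.normal(size=N))
Y = X * H + noise

H_ls = Y / X                                      # least-squares estimate

# Frequency-domain channel correlation implied by the power delay profile:
# R[k, m] = sum_l pdp[l] * exp(-j*2*pi*(k - m)*l / N).
k = np.arange(N)
R = np.array([[np.sum(pdp * np.exp(-2j * np.pi * (ki - mi) * np.arange(L) / N))
               for mi in k] for ki in k])
# LMMSE smoothing of the LS estimate: H_lmmse = R (R + I/SNR)^(-1) H_ls.
H_lmmse = R @ np.linalg.solve(R + np.eye(N) / snr, H_ls)

print("LS MSE:   ", np.mean(np.abs(H_ls - H) ** 2))
print("LMMSE MSE:", np.mean(np.abs(H_lmmse - H) ** 2))
```

The matrix inversion above is what makes full LMMSE expensive; the reduced-complexity variants named in the abstract work by approximating or truncating exactly this weighting step.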
|
270 |
Innovation and change in the Information Systems curriculum of an Australian University: a socio-technical perspective.
Tatnall, Arthur, Unknown Date
Information Systems is a relatively new curriculum area and one that is still growing in size and importance. It involves applied studies that are concerned with the ways people build and use computer-based systems in their organisations to produce useful information. Information Systems is, of necessity, a socio-technical discipline that has to deal with issues involving both people and machines; with the multitude of human and non-human entities that comprise an information system. This thesis reports an investigation of how Information Systems curriculum is made and how the choices of individual lecturers or groups of lecturers to adopt or ignore a new concept or technology are formed. It addresses this issue by describing a study into how the programming language Visual Basic entered the Information Systems curriculum of an Australian university, and how it has retained its place there despite challenges from other programming languages. It is a study of curriculum innovation that involves an important but small change in the curriculum of a single department in a particular university. Little of the literature on innovation deals with university curriculum and most reported work is focussed on research, development and diffusion studies of the adoption, or otherwise, of centrally developed curriculum innovations in primary and secondary schools. The innovation described here is of a different order being developed initially by a single university lecturer in one of the subjects for which he had responsibility. It is important primarily because it examines something that does not appear to have been reported on before: the negotiations and alliances that allow new material, in this case the programming language Visual Basic, to enter individual subjects of a university curriculum, and to obtain a durable place there. The research investigates a single instance of innovation, and traces the associations between various human and non-human entities including Visual Basic, the university, the student laboratories, the Course Advisory Committee and the academic staff that made this happen. It follows the formation of alliances and complex networks of association, and how their interplay resulted in the curriculum change that allowed Visual Basic to enter the Information Systems curriculum, and to fend off challenges from other programming languages in order to retain its place there. I argue that in this curriculum innovation no pre-planned path was followed, and that representations of events like this as straightforward or well planned hide the complexity of what took place. The study reveals the complex set of negotiations and compromises made by both human and non-human actors in allowing Visual Basic to enter the curriculum. The study draws on the sociology of translations, more commonly known as actor-network theory (ANT) as a framework for its analysis. I show that innovation translation can be used to advantage to trace the progress of technological innovations such as this. My analysis maps the progress of Visual Basic from novelty to ‘obvious choice’ in this university’s Information Systems curriculum.
|