1

The development of a hybrid intelligent maintenance optimisation system

Jeon, J. January 2000
No description available.
2

From visual saliency to video behaviour understanding

Hung, Hayley Shi Wen January 2007
In a world of ever-increasing amounts of video data, we are forced to abandon traditional, fully manual methods of scene interpretation. Under such circumstances some form of automation is highly desirable, but this is an open-ended problem of considerable complexity. Dealing with such large amounts of data is a non-trivial task that requires efficient, selective extraction of the parts of a scene that have the potential to develop a higher semantic meaning, alone or in combination with others. In particular, the types of video data in need of automated analysis tend to be outdoor scenes with high levels of activity generated by either foreground or background. Such dynamic scenes add considerable complexity to the problem, since we cannot rely on motion energy alone to detect regions of interest. Furthermore, the behaviour of these regions of motion can differ greatly while still being highly dependent, both spatially and temporally, on the movement of other objects within the scene. Modelling these dependencies, whilst eliminating as much redundancy as possible from the feature extraction process, is the challenge addressed by this thesis.

In the first half, the thesis investigates how to extract and represent meaningful features from dynamic scenes with no prior knowledge. Meaningful or salient information is treated as the parts of a scene that stand out or seem unusual or interesting to us. The novelty of the work is that it selects salient scales in both space and time at which a particular spatio-temporal volume is considered interesting relative to the rest of the scene. By quantifying the temporal saliency of regions of motion, it is possible to consider their importance over both the long and the short term. Variations in entropy over spatio-temporal scales are used to select a context-dependent measure of the local scene dynamics: a method of quantifying temporal saliency is devised based on the variation of the entropy of the intensity distribution in a spatio-temporal volume over increasing scales. Entropy is preferred over traditional filter methods because the stability or predictability of the intensity distribution over scales of a local spatio-temporal region can be defined more robustly relative to the context of its neighbourhood, even for regions exhibiting high intensity variation due to strong texture. Results show that it is possible to extract both locally salient features and globally salient temporal features from contrasting scenarios.

In the second part of the thesis, the focus shifts towards binding these spatio-temporally salient features together so that semantic meaning can be inferred from their interaction. Interaction, in this sense, refers to any form of temporally correlated behaviour between salient regions of motion in a scene. Feature binding as a mechanism for understanding interactive behaviour is particularly important if we consider that regions of interest may not be significant individually, but carry much richer semantics when considered in combination. Temporally correlated behaviour is identified and classified using accumulated co-occurrences of salient features at two levels. Firstly, co-occurrences are accumulated for spatio-temporally proximate salient features to form a local representation. Then, at the next level, the co-occurrences of these locally bound features are accumulated again in order to discover unusual behaviour in the scene. The novelty of this work is that no assumptions are made about whether interacting regions should be spatially proximate, and no prior knowledge of the scene topology is used. Results show that it is possible to detect unusual interactions between regions of motion, which in turn suggest higher levels of semantics.

In the final part of the thesis, a more specific investigation of human behaviour is addressed through the classification and detection of interactions between two human subjects. Here, further modifications are made to the feature extraction process in order to quantify the spatio-temporal saliency of a region of motion. These features are grouped to find the people in the scene. A loose pose distribution model is then extracted for each person, and canonical correlation analysis is used to find salient correlations between the poses of two interacting people. The canonical factors can be formed into trajectories and used for classification, with the Levenshtein distance used to categorise the features. The novelty of the work is that interactions do not have to be spatially connected or proximate to be recognised, and the data used is outdoor footage cluttered with a non-stationary background. Results show that co-occurrence techniques have the potential to provide a more generalised, compact and meaningful representation of dynamic, interactive scene behaviour.
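To make the entropy-over-scales idea concrete, the sketch below (an illustration only, not code from the thesis) computes the Shannon entropy of the intensity distribution of a spatio-temporal volume at increasing scales and summarises how much it varies. The function names, scale set, and histogram bin count are assumptions chosen for the example.

    import numpy as np

    def patch_entropy(volume, bins=32):
        # Shannon entropy of the intensity histogram of a spatio-temporal patch
        hist, _ = np.histogram(volume, bins=bins, range=(0.0, 1.0))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def saliency_over_scales(video, t, y, x, scales=(2, 4, 8, 16)):
        # Entropy of a centred spatio-temporal volume at increasing scales;
        # large variation across scales is taken as a cue that the local
        # dynamics stand out from their surroundings.
        entropies = []
        for s in scales:
            vol = video[max(t - s, 0):t + s + 1,
                        max(y - s, 0):y + s + 1,
                        max(x - s, 0):x + s + 1]
            entropies.append(patch_entropy(vol))
        return np.var(entropies)  # one scalar summary of the entropy variation

    # Toy usage on a random grey-level video normalised to [0, 1]
    video = np.random.rand(40, 64, 64)
    print(saliency_over_scales(video, t=20, y=32, x=32))

In the thesis the saliency measure is defined relative to the context of the neighbourhood rather than by a simple variance, so this should be read only as a sketch of the entropy-over-scales idea.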
3

Precoding and the Accuracy of Automated Analysis of Child Language Samples

Winiecke, Rachel Christine 01 May 2015
Language sample analysis is accepted as the gold standard in child language assessment. Unfortunately, it is often viewed as too time-consuming for the practicing clinician. Over the last 15 years a great deal of research has been invested in the automated analysis of child language samples to make the process more time-efficient. One step in the analysis process may be precoding the sample, as is done in the Systematic Analysis of Language Transcripts (SALT) software. However, a claim has been made (MacWhinney, 2008) that such precoding in fact leads to lower accuracy because of manual coding errors. No data on this issue have been published. The current research measured the accuracy of language samples analyzed with and without SALT precoding. This study also compared the accuracy of current software to an older version called GramCats (Channell & Johnson, 1999). The results presented support the use of precoding schemes such as SALT and suggest that the accuracy of automated analysis has improved over time.
4

Circuit breaker monitoring application using wireless communication

Ved, Nitin 25 April 2007
Circuit breakers are used in the power system to break or make current flow through power apparatus. Reliable operation of circuit breakers is critical to the well-being of the power system and can be achieved by regular inspection and maintenance. A low-cost automated circuit breaker monitoring system is developed to monitor circuit breaker control signals. An interface is designed on top of which different local and system-wide applications that utilize the data recorded by the system can be developed. Some possible applications are proposed. Lab and field evaluation of the designed system is performed and the results are presented.
5

Computational Methods for Comparative Analysis of Rare Cell Subsets in Flow Cytometry

Frelinger, Jacob Jeffrey January 2013
Automated analysis techniques for flow cytometry data can address many of the limitations of manual analysis by providing an objective approach for the identification of cellular subsets. While automated analysis has the potential to significantly improve on manual analysis, challenges remain for automated methods in cross-sample analysis for large-scale studies. This thesis presents new methods for data normalization, sample enrichment for rare events of interest, and cell subset relabeling. These methods build upon and extend the use of Gaussian mixture models in automated flow cytometry analysis to enable practical large-scale cell subset identification. / Dissertation
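As a rough illustration of the Gaussian-mixture approach mentioned above (not the thesis's pipeline, which additionally handles normalization, enrichment for rare events, and relabeling), the sketch below fits a two-component mixture to synthetic two-marker events that include a rare subset. The marker values, component count, and random seed are arbitrary assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic two-marker events: a common subset plus a rare one (~1% of events)
    common = rng.normal(loc=[2.0, 2.0], scale=0.4, size=(9900, 2))
    rare = rng.normal(loc=[5.0, 1.0], scale=0.2, size=(100, 2))
    events = np.vstack([common, rare])

    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(events)

    # Each mixture component is treated as a candidate cell subset
    for k, weight in enumerate(gmm.weights_):
        print(f"component {k}: weight={weight:.3f}, "
              f"events={np.sum(labels == k)}, mean={gmm.means_[k].round(2)}")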
6

Comparison of Spatial Resolution and Contrast Uniformity of Various Printers

Madhavji, Milan 12 January 2011
For several common inkjet, laser, and thermal dye printers, a method of evaluating prints that does not depend on the observer's level of dental expertise is introduced. In addition, an automated analysis that mimics the observations made by human observers is tested. The factors evaluated in this study include spatial resolution, contrast uniformity, the type of paper, and overall observer preference. The results demonstrate that observer preference is associated with high print contrast uniformity and with the use of glossy paper, but not with increased spatial resolution. The automated analysis produced results in general agreement with the observer data for spatial resolution, both concluding that the Lexmark C543DN printer produced prints with the highest spatial resolution. A thermal dye printer (Kodak CMI1000) produced prints with the highest contrast uniformity, and the print most favored by observers overall was produced by the Kodak ESP-9 inkjet printer on Kodak Everyday Glossy Photo paper.
7

Evaluating tool based automated malware analysis through persistence mechanism detection

Webb, Matthew S. January 1900
Master of Science / Department of Computer Science / Eugene Vasserman

Since 2014 there have been over 120 million new malicious programs registered every year. Because of the amount of new malware appearing every year, analysts have automated large sections of the malware reverse engineering process. Many automated analysis systems are created by re-implementing analysis techniques rather than by automating existing tools that use the same techniques. New implementations take longer to create and lack the proven quality of a tool that has evolved alongside malware for many years. The goal of this study is to assess the efficiency and effectiveness of using existing tools for automated malware analysis. The study focuses on the problem of discovering how malware persists on an infected system. Six tools are chosen based on their usefulness in manual analysis for revealing different persistence techniques employed by malware. The functions of these tools are automated in a fashion that emulates how they would be used manually, producing information about each tested sample. The six tools are tested against a collection of real malware samples drawn from families known for employing various persistence techniques, and the findings are scanned for indicators of persistence. The results of these tests are used to determine the smallest tool subset that discovers the largest range of persistence mechanisms. For each tool, implementation difficulty is compared to the number of indicators discovered, to reveal the effectiveness of similar tools for future analysis applications.

The conclusion is that while the tools covered a wide range of persistence mechanisms, standalone tools designed with scripting in mind were more effective than those with multiple system requirements or only a graphical interface. It was also found that the automation process limits the functionality of some tools, because they are designed for analyst interaction; restoring the functionality lost to automation, so that the tools can be used for other reverse engineering applications, could be cumbersome and could require significant implementation overhauls. Finally, the more successful tools were able to detect a broader range of techniques, while some less successful tools could detect only a portion of the same techniques. The study concludes that while an analysis system can be created by automating existing tools, the characteristics of the chosen tools affect the workload required to automate them. A well-documented tool that is controllable through a command-line interface and offers many configuration options will require less work for an analyst to automate than a tool with little documentation that can only be controlled through a graphical interface.
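The general pattern the study describes, wrapping an existing command-line tool and scanning its output for persistence indicators, might look like the hedged sketch below. The tool command, report file name, and indicator strings are placeholders, not the six tools or signatures actually evaluated.

    import subprocess

    # Placeholder command for an existing command-line analysis tool; the tools
    # evaluated in the study are not reproduced here.
    TOOL_CMD = ["some_forensics_tool", "--report", "sample_memory_dump.bin"]

    # Example substrings associated with common Windows persistence mechanisms
    PERSISTENCE_INDICATORS = [
        r"CurrentVersion\Run",          # autorun registry keys
        r"TaskCache\Tasks",             # scheduled tasks
        r"CurrentControlSet\Services",  # service installation
        "Startup",                      # startup folder entries
    ]

    def scan_for_persistence(cmd, indicators):
        # Run the tool, capture its report, and flag lines that mention
        # known persistence locations.
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
        return [line.strip()
                for line in result.stdout.splitlines()
                if any(ind.lower() in line.lower() for ind in indicators)]

    if __name__ == "__main__":
        try:
            for hit in scan_for_persistence(TOOL_CMD, PERSISTENCE_INDICATORS):
                print(hit)
        except FileNotFoundError:
            print("placeholder tool not installed; substitute a real analysis tool")

Tools that expose this kind of scriptable command-line interface are exactly the ones the study found easiest to automate.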
8

Automated Identification of Relative Clauses in Child Language Samples

Ehlert, Erika E. 14 June 2013
Relative clauses are grammatical constructions that are of relevance in both typical and impaired language development. Thus, the accurate identification of these structures in child language samples is clinically important. In recent years, computer software has been used to assist in the automated analysis of clinical language samples. However, this software has had only limited success when attempting to identify relative clauses. The present study explores the development and clinical importance of relative clauses and investigates the accuracy of the software used for automated identification of these structures. Two separate collections of language samples were used. The first collection included 10 children with language impairment, ranging in age from 7;6 to 11;1 (years;months), 10 age-matched peers, and 10 language-matched peers. A second collection contained 30 children considered to have typical speech and language skills who ranged in age from 2;6 to 7;11. Language samples were manually coded for the presence of relative clauses (including those containing a relative pronoun, those without a relative pronoun, and reduced relative clauses). These samples were then tagged using computer software, and the two codings were tabulated and compared for accuracy. ANCOVA revealed a significant difference in the frequency of relative clauses containing a relative pronoun, but not for those without a relative pronoun nor for reduced relative clauses. None of the structures were significantly correlated with age; however, the frequencies of relative clauses both with and without relative pronouns were correlated with mean length of utterance. Kappa values revealed that agreement between manual and automated coding was relatively high for each relative clause type and highest for relative clauses containing relative pronouns.
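Agreement between manual and automated coding of this kind is typically reported with Cohen's kappa; the small self-contained sketch below (with made-up per-utterance codes, not the study's data) shows the calculation.

    def cohens_kappa(a, b):
        # Cohen's kappa for two binary coding passes over the same utterances
        n = len(a)
        p_o = sum(x == y for x, y in zip(a, b)) / n     # observed agreement
        p_a1, p_b1 = sum(a) / n, sum(b) / n
        p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)     # chance agreement
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical codes: 1 = relative clause identified in the utterance, 0 = not
    manual    = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]
    automated = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1]
    print(round(cohens_kappa(manual, automated), 2))    # 0.66 for this toy data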
9

Automated Analysis of Gamma Ray Spectra

Tervo, Richard 07 1900
Contemporary approaches to data analysis suffer from being both time-consuming and subjective; however, the application of numerical techniques to the automated (non-interactive) analysis of gamma ray spectra often leads to considerably improved performance. The foundations and limitations of such techniques lie in the applicability of certain mathematical operations such as deconvolution, and in the careful study of stochastic models. The use of digital filters as a method of enhancing detector response has been applied to a triple-coincidence counting arrangement, after modelling undesired physical effects. An objective background estimation method has been described based on the statistical nature of nuclear measurements. Finally, the application of such techniques is demonstrated with a package of FORTRAN programs designed to be used in a variety of situations with minimal modifications. / Thesis / Master of Science (MSc)
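The thesis's FORTRAN package is not reproduced here, but the flavour of an objective, non-interactive background estimate can be sketched in Python with a simple iterative-clipping baseline. The spectrum, peak positions, and iteration count below are invented for the example, and the clipping method is a rough stand-in, not the statistical approach described in the thesis.

    import numpy as np

    def clip_background(counts, iterations=300):
        # Very rough iterative-clipping baseline: each pass lowers a channel to
        # the mean of its neighbours when that mean is smaller, gradually
        # suppressing photopeaks while keeping the smooth continuum.
        bg = counts.astype(float).copy()
        for _ in range(iterations):
            neighbour_mean = 0.5 * (np.roll(bg, 1) + np.roll(bg, -1))
            bg = np.minimum(bg, neighbour_mean)  # edge channels wrap; fine for a toy
        return bg

    # Toy spectrum: decaying continuum, two Gaussian photopeaks, Poisson noise
    ch = np.arange(1024)
    continuum = 200.0 * np.exp(-ch / 600.0)
    peaks = 500.0 * np.exp(-0.5 * ((ch - 300) / 3.0) ** 2) \
          + 300.0 * np.exp(-0.5 * ((ch - 700) / 4.0) ** 2)
    spectrum = np.random.poisson(continuum + peaks)

    net = spectrum - clip_background(spectrum)
    print("net counts near the first peak:", int(net[280:320].sum()))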
