971

Automatisation of programming of a PLC code : a thesis presented in partial fulfilment of the requirements of the degree of Masters of Engineering in Mechatronics

Mastilovich, Nikola January 2010 (has links)
Appendix D, CD content can be found with the print thesis held at Turitea library, Palmerston North. Content: Empty APCG program; Empty RSLogix5000 l5k file; Empty RSLogix5000 ACD file; Real Life project - APCG program (only partial); Real Life project - RSLogix5000 l5k file (only partial); Real Life project - RSLogix5000 ACD file (only partial) / A competitive edge is one of the requirements of a successful business. Tools which increase an engineer's productivity and minimize cost can be considered a competitive edge. The objective of this thesis was to design, create, and implement the Automatic PLC Code Generator (APCG) software. A secondary objective was to demonstrate that use of the APCG software leads to improved project efficiency and an enhanced profit margin. To create the APCG software, MS Excel and Visual Basic for Applications (VBA) were used as the platform. MS Excel sheets serve as the user interface, while VBA creates the PLC code from the information entered by the engineer. The PLC code created by the APCG software follows the PLC structure of Realcold Milmech Pty. Ltd., as well as the research 'Automatic generation of PLC code beyond the nominal sequence' by Guttel et al. [1]. The APCG software was used to design and create a PLC code for one of the projects undertaken by Realcold Milmech Pty. Ltd. By using the APCG software, the time to design, create, and test the PLC code was improved compared to the budgeted time. In addition, the project's profit margin was increased. Based on the results of this thesis it is expected that the APCG software will be useful for programmers who handle a variety of projects on a regular basis, where programming in a modular way is not appropriate.
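The abstract above describes generating PLC code from data an engineer enters into spreadsheet cells. A minimal sketch of that idea in Python is below; the tag names and the rung format are purely illustrative (the actual APCG software emitted RSLogix5000 L5K/ACD content via Excel/VBA, which is not reproduced here).

```python
# Hypothetical sketch of the APCG idea: generate ladder-logic rung text
# from tabular I/O definitions, mirroring the spreadsheet-driven approach
# the thesis describes. The rung syntax below is simplified for
# illustration and is not the real RSLogix5000 L5K format.

def generate_rungs(io_rows):
    """Turn (input_tag, output_tag) rows into simple ladder rung strings."""
    rungs = []
    for i, (input_tag, output_tag) in enumerate(io_rows):
        # Each rung: examine the input contact (XIC), energise the output coil (OTE).
        rungs.append(f"RUNG {i}: XIC({input_tag}) OTE({output_tag});")
    return rungs

# Example rows as an engineer might enter them in the spreadsheet interface.
rows = [("Start_PB", "Motor_Run"), ("Level_High", "Alarm_Lamp")]
for rung in generate_rungs(rows):
    print(rung)
```

The appeal of the approach is that the repetitive, error-prone part of PLC programming is reduced to filling in a table, while the generator enforces one consistent code structure.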
972

Adaptation of colour perception through dynamic ICC profile modification : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Albany (Auckland), New Zealand

Kloss, Guy Kristoffer January 2010 (has links)
Digital colour cameras are dramatically falling in price, making them affordable for ubiquitous appliances in many applications. Changes in colour perception with changing light conditions induce errors that may escape a user's awareness. Colour constancy algorithms are based on inferring light properties (usually the white point) to correct colour. Other attempts using more data for colour correction, such as (ICC based) colour management, characterise a capturing device under given conditions through an input device profile. This profile can be applied to correct for deviating colour perception. However, the profile is only valid for the specific conditions at the time of characterisation and fails when the light changes. This research presents a solution to the problem of long-time observations with changes in the scene's illumination for common natural (overcast or clear, blue sky) and artificial sources (incandescent or fluorescent lamps). Colour measurements for colour based reasoning need to be represented in a robustly defined way. One such suitable and well defined description is given by the CIE LAB colour space, a device-independent, visually linearised colour description. Colour transformations using ICC profiles are also based on CIE colour descriptions. Therefore, the corrective colour processing has also been based on ICC colour management. To verify the viability of CIE LAB based corrective colour processing, colour constancy algorithms (White Patch Retinex and Grey World Assumption) have been modified to operate on L*a*b* colour tuples. Results were compared visually and numerically (using colour indexing) against those using the same algorithms operating on RGB colour tuples. We can take advantage of the fact that we are dealing with image streams over time, adding another dimension usable for analysis. A solution to the problem of slowly changing light conditions in scenes with a static camera perspective is presented.
It takes advantage of the small (frame-to-frame) changes in the appearance of colour within the scene over time. Recurring objects or (background) areas of the scene are tracked to gather data points for an analysis. As a result, a suitable colour space distortion model has been devised through a first-order Taylor approximation (affine transformation). By performing a multidimensional linear regression analysis on the tracked data points, parameterisations for the affine transformations were derived. Finally, the device profile is updated by amalgamating the corrections from the model into the ICC profile for a single, comprehensive transformation. Subsequent applications of the ICC colour profiles are very fast and can be used in real time at the camera's capturing frame rate (for current normal web cameras and low-spec desktop computers). As light conditions usually change on a much slower time scale than the capturing rate of a camera, the computationally expensive profile adaptation generally proved usable for many frames. The goal was to set out and find a solution for consistent colour capturing using digital cameras, capable of coping with changing light conditions. Theoretical backgrounds and strategies for such a system have been devised and implemented successfully.
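The core numerical step described above — fitting an affine (first-order Taylor) colour-space distortion model to tracked data points by linear regression — can be sketched as a single least-squares problem. The data below are synthetic and for illustration only; the thesis applied the fit to CIE L*a*b* tuples tracked across frames.

```python
import numpy as np

# Sketch of the affine colour-correction fit: given tracked reference
# colours in Lab coordinates under old light (src) and new light (dst),
# recover the affine model dst ≈ src @ A + b by linear least squares.
# The values here are randomly generated for demonstration.

def fit_affine(src, dst):
    """Fit dst ≈ src @ A + b; src, dst are (N, 3) arrays of Lab tuples."""
    X = np.hstack([src, np.ones((src.shape[0], 1))])  # augment with bias column
    # One least-squares solve yields the 4x3 parameter matrix [A; b].
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params[:3], params[3]  # A is 3x3, b is length-3

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (20, 3))                     # tracked colours, old light
A_true = np.eye(3) + 0.05 * rng.standard_normal((3, 3))
b_true = np.array([1.0, -2.0, 0.5])
dst = src @ A_true + b_true                            # same colours, new light
A, b = fit_affine(src, dst)
print(np.allclose(A, A_true), np.allclose(b, b_true))
```

Once `A` and `b` are known, the correction can be folded into the ICC profile, so that applying the updated profile costs no more per frame than applying the original one.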
975

Real-time adaptive noise cancellation for automatic speech recognition in a car environment : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Engineering at Massey University, School of Engineering and Advanced Technology, Auckland, New Zealand

Qi, Ziming January 2008 (has links)
This research is mainly concerned with a robust method for improving the performance of real-time speech enhancement and noise cancellation for Automatic Speech Recognition (ASR). The thesis, titled “Real-time adaptive beamformer for automatic speech recognition in a car environment”, presents an application of a beamforming method together with ASR. A novel solution is presented to the question: how can the driver’s voice control the car using ASR? The solution in this thesis is an ASR front end built as a hybrid system of an acoustic beamformer, a Voice Activity Detector (VAD), and an adaptive Wiener filter. The beamforming approach is based on the normalised least-mean-squares (NLMS) algorithm to improve the Signal to Noise Ratio (SNR). The microphone array has been implemented with a Voice Activity Detector (VAD) which uses time-delay estimation together with magnitude-squared coherence (MSC). An experiment clearly shows the ability of the composite system to reduce noise outside of a defined active zone. In a real-time environment a speech recognition system in a car has to receive the driver’s voice only, whilst suppressing background noise, e.g. voice from the radio. Therefore, this research presents a hybrid real-time adaptive filter which operates within a geometrical zone defined around the head of the desired speaker. Any sound outside of this zone is considered to be noise and is suppressed. As this defined geometrical zone is small, it is assumed that only the driver's speech comes from within it. The technique uses three microphones to define a geometry-based voice-activity detector (VAD) which cancels unwanted speech coming from outside the zone. When only unwanted speech arrives from outside the desired zone, it is muted at the output of the hybrid noise canceller.
When unwanted and desired speech arrive at the same time, the proposed VAD cannot distinguish between them. In that situation an adaptive Wiener filter is switched on for noise reduction, improving the SNR by as much as 28 dB. To assess the quality of the signal filtered by the Wiener filter, a template-matching speech recognition system that uses a Wiener filter was designed for testing. In this thesis, a commercial speech recognition system is also applied to test the proposed beamforming-based noise cancellation and the adaptive Wiener filter.
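The NLMS update at the heart of the beamformer described above is a standard adaptive-filter recursion. The sketch below is a generic textbook NLMS in a system-identification setting (not the author's multi-microphone implementation); the filter length, step size, and test signal are illustrative choices.

```python
import numpy as np

# Minimal normalised least-mean-squares (NLMS) adaptive filter: adapt
# weights w so that w applied to recent inputs tracks a desired signal d.
# The normalisation by the input power makes the step size scale-invariant.

def nlms(x, d, num_taps=8, mu=0.5, eps=1e-8):
    """Return adapted weights and the per-sample error signal."""
    w = np.zeros(num_taps)
    errors = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # newest input sample first
        e = d[n] - w @ u                      # a-priori estimation error
        w += mu * e * u / (u @ u + eps)       # normalised gradient step
        errors[n] = e
    return w, errors

# Toy check: identify a known 3-tap FIR system from white-noise input.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.6, -0.3, 0.1])
d = np.convolve(x, h)[:len(x)]
w, e = nlms(x, d)
print(np.round(w[:3], 2))  # close to [0.6, -0.3, 0.1]
```

In the beamforming context the same recursion runs on microphone signals, with the VAD deciding when adaptation is allowed, so the filter does not adapt onto the desired speech.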
977

Interaction between existing social networks and information and communication technology (ICT) tools : evidence from rural Andes

Diaz Andrade, Antonio January 2007 (has links)
This exploratory and interpretive research examines the anticipated consequences of information and communication technology (ICT) on six remote rural communities, located in the northern Peruvian Andes, which were provided with computers connected to the Internet. Instead of looking for economic impacts of the now-available technological tools, this research investigates how local individuals use (or do not use) computers, and analyses the mechanisms by which computer-mediated information, obtained by those who use computers, is disseminated through their customary face-to-face interactions with their compatriots. A holistic multiple-case study design was the basis for the data collection process. Data were collected during four and a half months of fieldwork. Grounded theory informed both the method of data analysis and the technique for theory building. As a result of an inductive thinking process, two intertwined core themes emerged. The first theme, individuals’ exploitation of ICT, is related to how some individuals overcome difficulties and try to make the most of the now-available ICT tools. The second theme, complementing existing social networks through ICT, reflects the interaction between the new ICT-mediated information and virtual networks and the existing local social networks. However, these two themes were not evenly distributed across the communities studied. The evidence revealed that dissimilarities in social cohesion among the communities and, to some extent, disparities in physical infrastructure are contributing factors that explain the unevenness. But social actors – termed ‘activators of information’ – become the key triggers of the process of disseminating fresh and valuable ICT-mediated information throughout their communities. These findings were compared to the relevant literature to produce theoretical generalisations.
In conclusion, it is suggested that any ICT intervention in a developing country requires at least three elements to be effective: a tolerable physical infrastructure, a strong degree of social texture, and an activator of information.
978

Model-based strategies for automated segmentation of cardiac magnetic resonance images

Lin, Xiang, 1971- January 2008 (has links)
Segmentation of the left and right ventricles is vital to clinical magnetic resonance imaging studies of cardiac function. A single cardiac examination results in a large amount of image data. Manual analysis by experts is time consuming and also susceptible to intra- and inter-observer variability. This leads to the urgent requirement for efficient image segmentation algorithms to automatically extract clinically relevant parameters. Present segmentation techniques typically require at least some user interaction or editing, and do not deal well with the right ventricle. This thesis presents mathematical model based methods to automatically localize and segment the left and right ventricular endocardium and epicardium in 3D cardiac magnetic resonance data without any user interaction. An efficient initialization algorithm was developed which used a novel temporal Fourier analysis to determine the size, orientation and position of the heart. Quantitative validation on a large dataset containing 330 patients showed that the initialized contours had only ~ 5 pixels (modified Hausdorff distance) error on average in the middle short-axis slices. A model-based graph cuts algorithm was investigated and achieved good results on the midventricular slices, but was not found to be robust on other slices. Instead, automated segmentation of both the left and right ventricular contours was performed using a new framework, called SMPL (Simple Multi-Property Labelled) atlas based registration. This framework was able to integrate boundary, intensity and anatomical information. A comparison of similarity measures showed the sum of squared difference was most appropriate in this context. The method improved the average contour errors of the middle short-axis slices to ~ 1 pixel. The detected contours were then used to update the 3D model using a new feature-based 3D registration method. 
These techniques were iteratively applied to both short-axis and long-axis slices, resulting in a 3D segmentation of the patient’s heart. This automated model-based method showed a good agreement with expert observers, giving average errors of ~ 1–4 pixels on all slices.
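The initialisation step described above — a temporal Fourier analysis to localise the heart — exploits the fact that, in a cine MRI series, intensity over the beating ventricles varies periodically with the cardiac cycle while static tissue does not, so the first temporal harmonic has high magnitude over the heart. A sketch on synthetic data (the image dimensions and the moving region are invented for illustration):

```python
import numpy as np

# Sketch of temporal-Fourier heart localisation: take a per-pixel FFT
# along the time axis of an image series and keep the magnitude of the
# first harmonic (one oscillation per cardiac cycle). Moving, periodic
# regions light up; static background stays near zero.

def first_harmonic_map(frames):
    """frames: (T, H, W) image series -> (H, W) first-harmonic magnitude."""
    spectrum = np.fft.fft(frames, axis=0)   # per-pixel temporal FFT
    return np.abs(spectrum[1])              # bin 1 = one cycle per series

T, H, W = 20, 32, 32
t = np.arange(T)
frames = np.zeros((T, H, W))
# Synthetic "heart": a block whose intensity oscillates once over the cycle.
frames[:, 10:16, 10:16] = np.sin(2 * np.pi * t / T)[:, None, None]
h_map = first_harmonic_map(frames)
cy, cx = np.unravel_index(np.argmax(h_map), h_map.shape)
print(cy, cx)  # peak lies inside the moving 10..15 block
```

Thresholding or centroiding this map gives the approximate position (and, from its extent, the size) of the heart, which is what the initialisation contours are seeded from.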
979

Design and evaluation of software obfuscations

Majumdar, Anirban January 2008 (has links)
Software obfuscation is a protection technique for making code unintelligible to automated program comprehension and analysis tools. It works by performing semantics-preserving transformations such that the difficulty of automatically extracting the computational logic out of code is increased. Obfuscating transforms in the existing literature have been designed with the ambitious goal of being resilient against all possible reverse engineering attacks. Even though some of the constructions are based on intractable computational problems, we do not know, in practice, how to generate hard instances of obfuscated problems such that all forms of program analysis would fail. In this thesis, we address the problem of software protection by developing a weaker notion of obfuscation under which it is not required to guarantee absolute black-box security. Using this notion, we develop provably-correct obfuscating transforms using dependencies existing within program structures and indeterminacies in communication characteristics between programs in a distributed computing environment. We show how several well-known static analysis tools can be used for reverse engineering obfuscating transforms that derive resilience from computationally hard problems. In particular, we restrict ourselves to one common and potent static analysis tool, the static slicer, and use it as our attack tool. We show the use of derived software engineering metrics to indicate the degree of success or failure of a slicer attack on a piece of obfuscated code. We address the issue of proving the correctness of obfuscating transforms by adapting existing proof techniques for functional program refinement and communicating sequential processes. The results of this thesis could be used for future work in two ways: first, future researchers may extend our proposed techniques to design obfuscations using a wider range of dependencies that exist between dynamic program structures.
Our restricted attack model using one static analysis tool can also be relaxed, and obfuscations capable of withstanding a broader class of static and dynamic analysis attacks could be developed on the same principles. Second, our obfuscatory strength evaluation techniques could guide anti-malware researchers in the development of tools to detect obfuscated strains of polymorphic viruses. / Whole document restricted, but available by request; use the feedback form to request access.
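A classic example of the semantics-preserving transformations the abstract refers to is an opaque predicate: a condition the obfuscator knows always evaluates one way, but which a static analyser cannot trivially resolve. The sketch below is a generic illustration of the concept, not a transform from the thesis itself; the function name and the bogus branch are invented.

```python
# Illustrative opaque-predicate obfuscation. The predicate
# n*(n+1) % 2 == 0 holds for every integer n, because one of any two
# consecutive integers is even — so the "else" branch is dead code that
# only serves to confuse automated analysis (e.g. a static slicer).

def opaquely_obfuscated_double(x, n=7):
    if (n * (n + 1)) % 2 == 0:     # opaque predicate: always true
        return x + x               # the real computation
    else:
        return x * 31 - 4          # bogus branch, never executed

print(opaquely_obfuscated_double(21))  # → 42, identical to plain 2*x
```

The transform is provably semantics-preserving (the function still doubles its argument for every input), which is exactly the correctness property the thesis argues must be established for each obfuscating transform.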
980

Accelerating classifier training using AdaBoost within cascades of boosted ensembles : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Sciences at Massey University, Auckland, New Zealand

Susnjak, Teo January 2009 (has links)
This thesis seeks to address current problems encountered when training classifiers within the framework of cascades of boosted ensembles (CoBE). At present, a significant challenge facing this framework is inordinate classifier training runtimes. In some cases, it can take days or weeks (Viola and Jones, 2004; Verschae et al., 2008) to train a classifier. The protracted training runtimes are an obstacle to the wider use of this framework (Brubaker et al., 2006). They also hinder the process of producing effective object detection applications and make the testing of new theories and algorithms, as well as the verification of others' research, a considerable challenge (McCane and Novins, 2003). An additional shortcoming of the CoBE framework is its limited ability to train classifiers incrementally. Presently, the most reliable method of integrating new dataset information into an existing classifier is to re-train the classifier from the beginning using the combined new and old datasets. This process is inefficient. It lacks scalability and discards valuable information learned in previous training. To deal with these challenges, this thesis extends the research by Barczak et al. (2008) and presents alternative CoBE frameworks for training classifiers. The alternative frameworks reduce training runtimes by an order of magnitude over common CoBE frameworks and introduce additional tractability to the process. They achieve this while preserving the generalization ability of their classifiers. This research also introduces a new framework for incrementally training CoBE classifiers and shows how this can be done without re-training classifiers from the beginning. However, the incremental framework for CoBEs has some limitations. Although it is able to improve the positive detection rates of existing classifiers, it is currently unable to lower their false detection rates.
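The cascade structure the abstract refers to can be sketched in a few lines. This is a generic Viola–Jones-style evaluation loop, not the thesis's training framework; the stages, stumps, and thresholds below are toy values chosen for illustration.

```python
# Minimal cascade-of-boosted-ensembles evaluation: each stage is a
# weighted ensemble of decision stumps, and a candidate must pass every
# stage's threshold to be accepted. Most negatives are rejected by the
# cheap early stages, which is what makes cascades fast at detection time.

def cascade_classify(stages, x):
    """stages: list of (ensemble, threshold); ensemble: list of (weight, stump)."""
    for ensemble, threshold in stages:
        score = sum(w * stump(x) for w, stump in ensemble)
        if score < threshold:
            return False      # rejected early: no later stage is evaluated
    return True               # survived all stages: accepted

# Toy stages over a single scalar feature; stumps vote +1 or -1.
stage1 = ([(1.0, lambda x: 1 if x > 0.2 else -1)], 0.0)
stage2 = ([(0.7, lambda x: 1 if x > 0.5 else -1),
           (0.3, lambda x: 1 if x > 0.4 else -1)], 0.0)
print(cascade_classify([stage1, stage2], 0.9))   # passes both stages
print(cascade_classify([stage1, stage2], 0.1))   # rejected at stage 1
```

The training cost the thesis attacks comes from building the ensembles inside each stage (boosting re-weights the whole training set once per stump), which is why reducing per-stage training work yields order-of-magnitude runtime gains.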
