961

Interaction between existing social networks and information and communication technology (ICT) tools : evidence from rural Andes

Diaz Andrade, Antonio January 2007 (has links)
This exploratory and interpretive research examines the anticipated consequences of information and communication technology (ICT) on six remote rural communities in the northern Peruvian Andes that were provided with computers connected to the Internet. Rather than looking for economic impacts of the newly available technological tools, this research investigates how local individuals use (or do not use) computers, and analyses the mechanisms by which computer-mediated information, obtained by those who use computers, is disseminated through their customary face-to-face interactions with their compatriots. A holistic multiple-case study design was the basis for the data collection process, and data were collected during four and a half months of fieldwork. Grounded theory informed both the method of data analysis and the technique for theory building. Two intertwined core themes emerged from this inductive process. The first theme, individuals' exploitation of ICT, relates to how some individuals overcome difficulties and try to make the most of the newly available ICT tools. The second theme, complementing existing social networks through ICT, reflects the interaction between the new ICT-mediated information and virtual networks and the existing local social networks. These two themes were not, however, evenly distributed across the communities studied. The evidence revealed that dissimilarities in social cohesion among the communities and, to some extent, disparities in physical infrastructure are contributing factors that explain the unevenness. Social actors, termed 'activators of information', become the key triggers of the process that disseminates fresh and valuable ICT-mediated information throughout their communities. These findings were compared with the relevant literature to produce theoretical generalisations. In conclusion, it is suggested that any ICT intervention in a developing country requires at least three elements to be effective: a tolerable physical infrastructure, a strong degree of social texture and an activator of information.
962

Model-based strategies for automated segmentation of cardiac magnetic resonance images

Lin, Xiang, 1971- January 2008 (has links)
Segmentation of the left and right ventricles is vital to clinical magnetic resonance imaging studies of cardiac function. A single cardiac examination produces a large amount of image data, and manual analysis by experts is time consuming and susceptible to intra- and inter-observer variability; efficient image segmentation algorithms are therefore urgently needed to automatically extract clinically relevant parameters. Existing segmentation techniques typically require at least some user interaction or editing, and do not deal well with the right ventricle. This thesis presents mathematical model-based methods that automatically localize and segment the left and right ventricular endocardium and epicardium in 3D cardiac magnetic resonance data without any user interaction. An efficient initialization algorithm was developed that uses a novel temporal Fourier analysis to determine the size, orientation and position of the heart. Quantitative validation on a large dataset of 330 patients showed that the initialized contours had an average error of only ~5 pixels (modified Hausdorff distance) in the middle short-axis slices. A model-based graph cuts algorithm was investigated and achieved good results on the mid-ventricular slices, but was not found to be robust on other slices. Instead, automated segmentation of both the left and right ventricular contours was performed using a new framework, called SMPL (Simple Multi-Property Labelled) atlas-based registration, which integrates boundary, intensity and anatomical information. A comparison of similarity measures showed that the sum of squared differences was most appropriate in this context. The method reduced the average contour error in the middle short-axis slices to ~1 pixel. The detected contours were then used to update the 3D model using a new feature-based 3D registration method. These techniques were applied iteratively to both short-axis and long-axis slices, resulting in a 3D segmentation of the patient's heart. This automated model-based method showed good agreement with expert observers, giving average errors of ~1-4 pixels on all slices.
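
The contour errors quoted in this abstract are modified Hausdorff distances in pixels. As a point of reference, below is a minimal sketch of that metric in the usual Dubuisson-Jain sense (the larger of the two mean closest-point distances); the contour representation and the toy example are illustrative assumptions, not the thesis's evaluation code.

```python
import numpy as np

def modified_hausdorff(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Modified Hausdorff distance between two (N, 2) arrays of pixel coordinates."""
    # All pairwise Euclidean distances between points of the two contours.
    dists = np.linalg.norm(contour_a[:, None, :] - contour_b[None, :, :], axis=2)
    d_ab = dists.min(axis=1).mean()  # mean distance from each point of A to its nearest point of B
    d_ba = dists.min(axis=0).mean()  # mean distance from each point of B to its nearest point of A
    return max(d_ab, d_ba)

# Toy example: two 100-point segments, one shifted 2 pixels perpendicular to the other.
xs = np.linspace(0.0, 99.0, 100)
seg_a = np.stack([xs, np.zeros_like(xs)], axis=1)
seg_b = seg_a + np.array([0.0, 2.0])
print(modified_hausdorff(seg_a, seg_b))  # 2.0 for a pure perpendicular shift
```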
963

Design and evaluation of software obfuscations

Majumdar, Anirban January 2008 (has links)
Software obfuscation is a protection technique that makes code unintelligible to automated program comprehension and analysis tools. It works by applying semantics-preserving transformations that increase the difficulty of automatically extracting the computational logic from code. Obfuscating transforms in the existing literature have been designed with the ambitious goal of being resilient against all possible reverse engineering attacks. Even though some of the constructions are based on intractable computational problems, we do not know, in practice, how to generate hard instances of obfuscated problems such that all forms of program analysis would fail. In this thesis, we address the problem of software protection by developing a weaker notion of obfuscation under which absolute black-box security need not be guaranteed. Using this notion, we develop provably correct obfuscating transforms that exploit dependencies within program structures and indeterminacies in the communication characteristics between programs in a distributed computing environment. We show how several well-known static analysis tools can be used to reverse engineer obfuscating transforms that derive their resilience from computationally hard problems. In particular, we restrict ourselves to one common and potent static analysis tool, the static slicer, and use it as our attack tool, employing derived software engineering metrics to indicate the degree of success or failure of a slicer attack on a piece of obfuscated code. We address the issue of proving the correctness of obfuscating transforms by adapting existing proof techniques for functional program refinement and communicating sequential processes. The results of this thesis could inform future work in two ways. First, researchers may extend the proposed techniques to design obfuscations using a wider range of dependencies between dynamic program structures; the restricted attack model using one static analysis tool could also be relaxed, and obfuscations capable of withstanding a broader class of static and dynamic analysis attacks could be developed on the same principles. Second, the obfuscatory strength evaluation techniques could guide anti-malware researchers in developing tools to detect obfuscated strains of polymorphic viruses. / Whole document restricted, but available by request; use the feedback form to request access.
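
The abstract does not reproduce the thesis's constructions, but a small generic example helps fix the idea of a semantics-preserving transform. The sketch below inserts a textbook opaquely true predicate (x² + x is always even) guarding a dead branch; the functions and constants are invented for illustration and are not transforms from the thesis.

```python
# A classic semantics-preserving obfuscation: an opaquely true predicate.
# The identity x*x + x ≡ 0 (mod 2) holds for every integer, so the 'else'
# branch is dead code, yet an analyser that cannot prove the identity must
# treat both paths as reachable.

def original(n: int) -> int:
    return n * 3 + 1

def obfuscated(n: int) -> int:
    x = n * 7 + 5              # arbitrary value feeding the opaque predicate
    if (x * x + x) % 2 == 0:   # opaquely true for all integers x
        return n * 3 + 1       # the real computation
    else:
        return n - 42          # bogus branch, never executed

assert all(original(i) == obfuscated(i) for i in range(-100, 100))
```

Transforms of this kind complicate automated analyses such as slicing, which is precisely the style of resilience (and attack target) the thesis evaluates.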
964

Personality effect in the design of adaptive e-learning systems : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information System at Massey University

Al-Dujaily, Amal Unknown Date (has links)
This PhD thesis is a theoretical and practical study of the user model for adaptive e-learning systems. The research is two-fold. First, it explores the personality aspect of the user model, which has been overlooked in the previous literature on the design of adaptive e-learning systems, to see whether learners with different personality types differ in their learning performance with such systems. Second, it investigates how to embody personality features in the current user model, proposing that including personality in the user model for adaptive e-learning systems leads to better learning performance. The thesis considers the personality aspect in four parts. PART I reviews the theoretical and empirical literature on adaptive e-learning systems, from which the main research questions are constructed, and explains how the study derives an overarching model for including personality type in effective e-learning systems. PART II consists of experiments that empirically explore the importance of identifying personality in the user model for adaptive e-learning and its effect on individual learning; the main theme of the thesis hypothesises that different personality types influence performance with e-learning systems. PART III shows the effects of personality type on groups of learners performing collaborative learning activities, and suggests practical implications for designing collaborative learning technologies in conjunction with the personality feature. Finally, PART IV includes personality in the proposed user model and tests the primary hypothesis that "the personality may influence the learning performance of students using adaptive e-learning systems".
966

A calculation of colours: towards the automatic creation of graphical user interface colour schemes : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand

Moretti, Giovanni S. January 2010 (has links)
Interface colour scheme design is complex, but important. Most software allows users to choose the colours of single items individually and out of context, but does not acknowledge colour schemes or aid in their design. Creating colour schemes by picking individual colours can be time-consuming, error-prone, and frustrating, and the results are often mediocre, especially for those without colour design skills. Further, as colour harmony arises from the interactions between all of the coloured elements, anticipating the overall effect of changing the colour of any single element can be difficult. This research explores the feasibility of extending artistic colour harmony models to include factors pertinent to user interface design. An extended colour harmony model is proposed and used as the basis for an objective function that can algorithmically assess the colour relationships in an interface colour scheme. Its assessments have been found to agree well with human evaluations and have been used as part of a process to automatically create harmonious and usable interface colour schemes. A three-stage process for the design of interface colour schemes is described. In the first stage, the designer specifies, in broad terms and without requiring colour design expertise, colouring constraints such as grouping and distinguishability that are needed to ensure that the colouring of interface elements reflects their semantics. The second stage is an optimisation process that chooses colour relationships to satisfy the competing requirements of harmonious colour usage, any designer-specified constraints, and readability. It produces sets of coordinates that constitute abstract colour schemes: they define only relationships between coloured items, not real colours. In the third and final stage, a user interactively maps an abstract scheme to one or more real colour schemes. The colours can be fine-tuned as a set (but not altered individually), to allow for such "soft" factors as personal, contextual and cultural considerations, while preserving the integrity of the design embodied in the abstract scheme. The colours in the displayed interface are updated continuously, so users can interactively explore a large number of colour schemes, all of which have readable text, distinguishable controls, and conform to the principles of colour harmony. Experimental trials using a proof-of-concept implementation called the Colour Harmoniser have been used to evaluate a method of holistic colour adjustment and the resulting colour schemes. The results indicate that the holistic controls are easy to understand and effective, and that the automatically produced colour schemes, prior to fine-tuning, are comparable in quality to many manually created schemes, and after fine-tuning, are generally better. By designing schemes that incorporate colouring constraints specified by the user prior to scheme creation, and enabling the user to interactively fine-tune the schemes after creation, there is no need to specify or incorporate the subtle and not well understood factors that determine whether any particular set of colours is "suitable". Instead, the approach used produces broadly harmonious schemes, and defers to the developer in the choice of the final colours.
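
The abstract describes an objective function that algorithmically assesses colour relationships, but the extended harmony model itself is not given there. The sketch below is therefore only a toy objective in the same spirit: it rewards text/background lightness contrast (readability) and hue pairs that are roughly analogous or complementary (a crude harmony term). All term definitions and weights are assumptions made for illustration, not the thesis's model.

```python
import colorsys

def score_scheme(colours, w_readability=0.6, w_harmony=0.4):
    """colours: list of (r, g, b) tuples in [0, 1]; the first is the
    background, the second the text colour. Higher scores are better."""
    # Readability term: reward lightness contrast between text and background.
    def lightness(c):
        return colorsys.rgb_to_hls(*c)[1]
    readability = abs(lightness(colours[0]) - lightness(colours[1]))

    # Toy harmony term: penalise hue pairs that are neither close together
    # (analogous) nor nearly opposite (complementary) on the colour wheel.
    hues = [colorsys.rgb_to_hls(*c)[0] for c in colours]
    penalty, pairs = 0.0, 0
    for i in range(len(hues)):
        for j in range(i + 1, len(hues)):
            d = abs(hues[i] - hues[j])
            d = min(d, 1.0 - d)                # circular hue distance, in [0, 0.5]
            penalty += min(d, abs(0.5 - d))    # zero when analogous or complementary
            pairs += 1
    harmony = 1.0 - (penalty / pairs if pairs else 0.0)
    return w_readability * readability + w_harmony * harmony

# Dark text on a light background with a blue accent scores well.
print(score_scheme([(0.95, 0.95, 0.9), (0.1, 0.1, 0.15), (0.2, 0.5, 0.8)]))
```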
967

VERTIPH : a visual environment for real-time image processing on hardware : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Systems Engineering at Massey University, Palmerston North, New Zealand

Johnston, Christopher Troy January 2009 (has links)
This thesis presents VERTIPH, a visual programming language for the development of image processing algorithms on FPGA hardware. The research began with an examination of the whole design cycle, with a view to identifying requirements for implementing image processing on FPGAs. Based on this analysis, a design process was developed in which a selected software algorithm is matched to a hardware architecture tailor-made for its implementation; the algorithm and architecture are then transformed into a design suitable for an FPGA. It was found that in most cases the most efficient mapping for image processing algorithms is a streamed processing approach. This constrains how data are presented and requires most existing algorithms to be extensively modified; the resulting designs are therefore heavily streamed and pipelined. A visual notation was developed to complement this design process, as both streaming and pipelining are well represented by data-flow visual languages. The notation has three views, each of which represents and supports a different part of the design process. An architecture view gives an overview of the design's main blocks and their interconnections. A computational view represents lower-level details by describing each block with a set of computational expressions and low-level controls; this includes a novel visual representation of pipelining that simplifies latency analysis, multiphase design, priming, flushing and stalling, and the detection of sequencing errors. A scheduling view adds a state machine for high-level control of processing blocks; this view extends state objects to allow for the priming and flushing of pipelined operations. User evaluations of an implementation of the key parts of this language (the architecture view and the computational view) found that both were generally good visualisations and aided design (especially the type interface, pipeline and control notations). The evaluations yielded several suggestions for improving the language; in particular, the evaluators would have preferred to use the diagrams as a verification tool for a textual representation rather than as the primary data capture mechanism. A cognitive dimensions analysis showed that the language scores highly on thirteen of the twenty dimensions considered, particularly those related to making details of the design clearer to the developer.
969

An investigation of system integrations and XML applications within a NZ government agency : a thesis submitted in partial fulfillment of the requirements for the degree of Master of Information Systems at Massey University, New Zealand

Li, Steven January 2009 (has links)
With the evolution of information technology, and especially the Internet, system integration is becoming a common way to expand IT systems within and beyond an enterprise network. Although system integration is increasingly common within large organizations, the literature review found that IS research in this area has been insufficient, especially regarding the development of integration solutions within large organizations; this makes research such as this study, conducted within a large NZ government agency, necessary. Four system integration projects were selected and studied using case study research methodology. The case study was designed and conducted mainly following the guidelines in R. K. Yin's (2002) well-known book Case Study Research. The research sought answers to a series of research questions related to the requirements of system integration and the challenges of solution development. Special attention was given to XML applications, as the literature review found system integration and XML to be coupled in many integration solutions and frameworks. Data were first gathered from the four projects one by one, and the bulk of the analysis was then performed on the summarized data, using methods including chain of evidence, root cause analysis and pattern matching. The principles of interpretive research proposed by Klein and Myers (1999) and triangulation were observed. In conclusion, a set of models was derived from the research: a model for clarifying integration requirements, a model for integration solution architecture, a model for the integration development life cycle and a model of critical success factors for integration projects. A development framework for small to medium-sized integration projects has also been proposed based on these models. The research also found that XML applications play an important role in system integration, and that the critical success factors for XML applications include suitable development tools, development skills and methodologies.
970

e-Process selection using decision making methods : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand

Albertyn, Erina Francina January 2010 (has links)
The key objective of this research is to develop a selection methodology that can effectively support the selection of development processes for e-Commerce Information Systems (eCIS) using various decision-making methods. The methodology supports developers in their choice of an e-Commerce Information System Development Process (e-Process) by providing several decision-making methods for choosing between defined e-Processes, using a set of quality aspects to compare and evaluate the options. It also provides historical data from previous selections that can be used to further support a specific choice. The research was motivated by the fast-growing information technology environment, in which e-Commerce Information Systems are a relatively new development area: developers of these systems may be using new development methods, may have difficulty deciding on the process best suited to developing a new eCIS, and need documentary support for their choices. The e-Process Selection Methodology allows for the comparison of existing development processes as well as processes defined by the developers themselves. Four decision-making methods are used to solve the problem of selecting among e-Commerce development methodologies: the Value-Benefit Method (weighted scoring), the Analytic Hierarchy Process, Case-Based Reasoning and a Social Choice method. The Value-Benefit Method, when applied to the selection of an e-Process from a set of e-Processes, uses multiple quality aspects: experts assign values to each aspect for each e-Process, the importance of each aspect to the eCIS is expressed as a weight, and the selected e-Process is the one with the highest score when the values and weights are multiplied and summed. The Analytic Hierarchy Process quantifies a selection of quality aspects and uses them to evaluate alternative e-Processes, determining the best matching solution by ranking and establishing the relative worth of each quality aspect. Case-Based Reasoning captures the knowledge gained from previously solved cases in a case database, so that the concrete factual knowledge of earlier individual cases can be reused in the decision process; users can apply the selection methodology, the case base, or both to resolve their problems. Social Choice methods are based on voting: individuals vote for their preferences among a set of e-Processes, and the results are aggregated to indicate which e-Process is preferred. The e-Process Selection Methodology is demonstrated and validated through the development of a prototype tool that can be used to select the most suitable solution for a case at hand. The thesis describes the factors that motivated the research and the process that was followed, summarises the e-Process Selection Methodology, and discusses its strengths and weaknesses. The contribution to knowledge is explained, future developments are proposed and, to conclude, the lessons learnt and reinforced are considered.
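
Of the four methods, the Value-Benefit (weighted scoring) method is the most direct to sketch: weight each quality aspect, multiply the weights by the expert scores, sum per e-Process, and take the maximum. The aspects, weights and scores below are invented for illustration; they are not data from the thesis.

```python
def value_benefit(scores: dict[str, dict[str, float]],
                  weights: dict[str, float]) -> str:
    """scores maps e-Process name -> {quality aspect: expert score};
    weights maps quality aspect -> importance weight. Returns the
    e-Process with the highest weighted sum."""
    totals = {
        name: sum(weights[aspect] * value for aspect, value in aspects.items())
        for name, aspects in scores.items()
    }
    return max(totals, key=totals.get)

# Hypothetical aspects, weights and expert scores for two candidate e-Processes.
weights = {"flexibility": 0.5, "tool support": 0.3, "documentation": 0.2}
scores = {
    "e-Process A": {"flexibility": 7, "tool support": 5, "documentation": 8},
    "e-Process B": {"flexibility": 6, "tool support": 9, "documentation": 6},
}
print(value_benefit(scores, weights))  # "e-Process B" (6.9 vs 6.6)
```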
