351

A novel framework for the implementation and evaluation of type-1 and interval type-2 ANFIS

Chen, Chao January 2018 (has links)
This thesis explores a novel framework for implementing and evaluating type-1 (T1) and interval type-2 (IT2) Adaptive Network Fuzzy Inference System (ANFIS) models. A fundamental requirement for this research is the capability to implement ANFIS models reliably and efficiently. Over the last ten years, many studies have been devoted to creating IT2 ANFIS models; however, a clear architecture for IT2 ANFIS has not yet been presented, and this has been an obstacle to research on IT2 ANFIS and its application to real-world problems. In this thesis, we introduce an extended ANFIS architecture that can be used for both T1 and IT2 models. A further obstacle to the use of IT2 fuzzy systems in general (including IT2 ANFIS) is that IT2 models are often more computationally expensive than T1 models. A particular bottleneck for IT2 ANFIS is aggregating the output of each rule, which the inference process performs with the Karnik-Mendel (KM) algorithm. Many enhanced algorithms have been proposed to improve the computational efficiency of the KM algorithm, but all of them still rely on iterative procedures to determine the switch points required for the lower and upper bounds of defuzzification. This thesis introduces a 'direct approach' that determines these switch points from derivatives, without the need for multiple iterations. When comparing models (including T1 and IT2 ANFIS models), it is necessary to conduct fair comparisons. Partly to address this issue, a new accuracy measure is proposed which combines the best features of various alternative measures without their common drawbacks. Experimental comparisons are made between T1 and IT2 ANFIS using the novel accuracy measure, in addition to the commonly used RMSE, on both synthetic and real-world data. Finally, it is shown that IT2 ANFIS models are not easy to optimise from scratch, due to difficulties with the output intervals that do not arise in T1 ANFIS models. Detailed experiments are carried out to evaluate the comparative performance of IT2 ANFIS models, including the best method for initialising the IT2 membership functions. In summary, a coherent framework for efficiently implementing IT2 ANFIS models and fairly evaluating their comparative performance is presented. This framework allows IT2 ANFIS to be implemented in any application context and its resulting performance to be considered carefully, since a clear performance improvement over T1 ANFIS may not always be found.
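To make the bottleneck concrete, the following is a minimal Python sketch of the standard iterative KM type-reduction step that the thesis's direct approach replaces. The variable names (y, w_lo, w_hi) and the convergence test are assumptions for illustration, not the thesis's notation:

```python
import numpy as np

def km_lower_bound(y, w_lo, w_hi, max_iter=100):
    """Iterative Karnik-Mendel search for the lower output bound y_l.

    y    : rule consequents, sorted ascending
    w_lo : lower firing strengths, aligned with y
    w_hi : upper firing strengths, aligned with y
    """
    # Start from the midpoints of the firing intervals.
    w = (w_lo + w_hi) / 2.0
    yl = np.dot(w, y) / np.sum(w)
    for _ in range(max_iter):
        # Switch point k: consequents below the current average sit
        # to its left (y_k <= yl <= y_{k+1}).
        k = np.searchsorted(y, yl)
        # Minimise the weighted average: upper strengths left of the
        # switch point, lower strengths right of it.
        w = np.where(np.arange(len(y)) < k, w_hi, w_lo)
        yl_new = np.dot(w, y) / np.sum(w)
        if np.isclose(yl_new, yl):
            break
        yl = yl_new
    return yl
```

The upper bound y_r is obtained symmetrically by swapping the roles of w_lo and w_hi around the switch point; the thesis's direct approach instead characterises the switch point via derivatives, avoiding the iteration.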
352

Governing open source communities through boundary decisions

Al Bulushi, Wisal Abbas Jaffer January 2018 (has links)
Governing open source software (OSS) communities is defined in the literature as the formal and informal means of controlling and coordinating collective effort towards common objectives (Markus, 2007). OSS communities are not based on a fixed structure; instead, the structure emerges through collaboration. Participants, technical artefacts, ideas, resources, and interactions are fluid (Faraj et al., 2011) in the sense that they are reconfigured over time, depending on the context of the community. This has raised governance challenges in terms of determining "how open is open enough" (West, 2003). Governing a fluid, complex, technically-mediated ecosystem such as an OSS community requires deciding whether to keep the boundaries open to all, which may risk the quality of the deliverables, or to restrict contributions to an elite population, which restrains collaboration (Ferraro and O'Mahony, 2012). In this thesis, I argue that OSS governance is a boundary decision to determine and legitimise the practices that best govern the collective effort in a particular context (Ferraro and O'Mahony, 2012). The current literature has focused on two types of boundaries: the external boundary, which separates OSS communities from the commercial world, and the role-based boundary, which identifies the roles and responsibilities of individuals (Chen and O'Mahony, 2009). The former has been discussed extensively in the literature, with a focus on how firms reap the benefits of OSS products without exploiting the collective effort; the latter focuses on individuals as the main actors of the community. The current views on OSS governance have two main limitations. First, current accounts focus on creating a governance structure that facilitates collaboration among dispersed individuals, neglecting issues of fluidity and dynamicity. As a result, scholars continue to build their studies on taken-for-granted assumptions, overlooking the transformations that have occurred in the overall settings of OSS communities. One of the overlooked areas is the emergence of vertical (i.e. domain-specific) OSS communities, which is the main interest of this thesis. Second, technology, in the context of OSS, is considered either an end product or a medium of governance. Current studies have failed to address the materiality of technology, where materiality refers to the ways in which the properties of technology are arranged and rearranged in relation to each other to accomplish governance practices in a particular context. The materiality of technology entails different possibilities for governance practices, which is not sufficiently addressed in the literature. Therefore, I argue and demonstrate that any attempt to explain OSS governance without addressing materiality is incomplete. In this thesis, I demonstrate that OSS communities are governed through boundary decisions, where decisions refer to delineating the boundaries of the community. This is achieved by identifying the actors, actions, and resources required to control and coordinate the collaborative effort in a particular context. Boundary decisions entail remaining sensitive to changes in the context and adjusting the boundaries accordingly. I adopt a grounded theory approach to conduct a case study of Kuali, a vertical OSS community that develops an ERP system for the higher education sector.
The research findings contribute to the OSS governance literature by developing a theoretical foundation that explains OSS governance as a boundary decision. The emergent theory explains OSS governance in terms of context, control, resources, and materiality, and I illustrate through empirical evidence how these constructs interact to govern the collective effort. The thesis contributes to the OSS literature by bringing to the fore the dynamicity and materiality of OSS governance. It also has implications for boundary management: OSS communities represent a non-traditional organisational setting, and thus provide novel theoretical insights with regard to boundary management.
353

An algorithm for computing short-range forces in molecular dynamics simulations with non-uniform particle densities

Law, Timothy R. January 2017 (has links)
We develop the projection sorting algorithm for computing pairwise short-range interaction forces between particles in molecular dynamics simulations. We contrast this algorithm with the state of the art and discuss situations where it may be particularly effective. We then explore efficient implementations of projection sorting in both on-node (shared-memory parallel) and off-node (distributed-memory parallel) environments, providing AVX, AVX2, KNC and AVX-512 intrinsic implementations of the force-calculation kernel. We use the modern multi- and many-core architectures Intel Haswell, Broadwell, Knights Corner (KNC) and Knights Landing (KNL) as a representative slice of modern High Performance Computing (HPC) installations. In the course of implementation, we use our algorithm to optimise a contemporary biophysical molecular dynamics simulation of chromosome condensation. We compare state-of-the-art Molecular Dynamics (MD) algorithms with projection sorting, and experimentally demonstrate the performance gains possible with our algorithm in both single- and multi-node configurations. We observe speedups of up to 5x over the state of the art, and up to 10x over the original unoptimised simulation. These optimisations have directly improved the ability of domain scientists to carry out their work.
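The abstract does not give the algorithm itself; the sketch below illustrates the sort-and-sweep idea the name "projection sorting" suggests, with an assumed pair_force callback. It is a plain Python rendering, not the thesis's vectorised intrinsic kernels:

```python
import numpy as np

def short_range_forces(pos, cutoff, pair_force):
    """Sketch of projection sorting for pairwise short-range forces.

    pos        : (N, 3) array of particle positions
    cutoff     : interaction range
    pair_force : callable f(r_vec, r) giving the force on i due to j
    """
    n = len(pos)
    forces = np.zeros_like(pos)
    order = np.argsort(pos[:, 0])   # project particles onto the x-axis
    x = pos[order, 0]
    for a in range(n):
        b = a + 1
        # Sweep forward only while the projected separation can still
        # lie within the cutoff; all later pairs are pruned.
        while b < n and x[b] - x[a] <= cutoff:
            i, j = order[a], order[b]
            r_vec = pos[j] - pos[i]
            r = np.linalg.norm(r_vec)
            if r <= cutoff:
                f = pair_force(r_vec, r)
                forces[i] += f
                forces[j] -= f          # Newton's third law
            b += 1
    return forces
```

Because the sweep window adapts to the local particle distribution along the projection axis, the pruning remains effective under the non-uniform densities the title highlights.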
354

A visual adaptive authoring framework for adaptive hypermedia

Khan, Javed Arif January 2018 (has links)
In a linear hypermedia system, all users are offered the same series of hyperlinks. Adaptive Hypermedia (AH) tailors what the user sees to the user's goals, abilities, interests, knowledge and preferences. AH is said to be the answer to the 'lost in hyperspace' phenomenon, where the user has too many hyperlinks to choose from and too little knowledge to select the most appropriate one; AH instead offers the selection of links and content most appropriate to the current user. In an Adaptive Educational Hypermedia (AEH) course, a student's learning experience can be personalised using a User Model (UM), which can include information such as the student's knowledge level, preferences and culture. Besides these basic components, a Goal Model (GM) can represent the goals users should meet, and a Domain Model (DM) represents the knowledge domain. Adaptive strategies are sets of adaptive rules applied to these models to personalise the course for students according to their needs (a hypothetical example of such a rule is sketched below). From the many interacting elements, it is clear that the authoring process is a bottleneck in adaptive course creation, and it needs to be improved in terms of interoperability, usability and reuse of adaptive behaviour (strategies). Authoring of Adaptive Hypermedia is considered difficult and time-consuming, so there is great scope for improving authoring tools in AEH systems, to help already burdened authors create adaptive courses easily. Adaptation specifications are very useful for creating adaptive behaviours that support the needs of a group of learners, but authors often lack the time or the skills needed to create new adaptation specifications from scratch: doing so requires the author to know and remember a programming language's syntax, which creates a knowledge barrier. LAG is a complete and useful programming language which, however, is considered too complex for authors to deal with directly. This thesis therefore proposes a visual framework (LAGBlocks) for the LAG adaptation language and an authoring tool (VASE) that uses the framework to create adaptation specifications by manipulating visual elements. It is shown that the VASE authoring tool, together with the visual framework, enables authors to create adaptation specifications with ease and assists them in creating specifications that promote the "separation of concerns". The VASE authoring tool offers code completeness and correctness at design time, and allows the resulting adaptive strategies to be used within other tools for adaptive hypermedia. The goal is thus to make adaptation specifications easier to create and share for authors with little or no programming knowledge or experience. This thesis looks at three aspects of authoring in adaptive educational hypermedia systems: the problems faced by the author of an adaptive hypermedia system; the findings gathered from investigating previously developed authoring tools; and the proposal, implementation and evaluation of a new authoring tool that improves the authoring process for authors with different knowledge, backgrounds and experience.
The purpose of the new tool, VASE, is to enable authors to create adaptive strategies in a puzzle-building manner; moreover, the adaptive strategies it creates are compatible with other adaptive hypermedia systems that use the LAG programming language.
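For illustration only, the following hypothetical Python sketch shows the kind of user-model-driven adaptive rule an adaptation specification encodes. It is not LAG syntax, and the field names and 0.7 threshold are invented for the example:

```python
# Hypothetical illustration only (not LAG syntax): show advanced
# material once the user model records enough prerequisite knowledge.

def adapt_concept(user_model, concept):
    """Return the content variants of a concept this user should see."""
    shown = [concept["intro"]]                    # everyone sees the intro
    if user_model.get(concept["prerequisite"], 0.0) >= 0.7:
        shown.append(concept["advanced"])         # unlock advanced content
    return shown

user_model = {"variables": 0.9}                   # knowledge levels in [0, 1]
concept = {"intro": "What is a loop?",
           "advanced": "Loop invariants",
           "prerequisite": "variables"}
print(adapt_concept(user_model, concept))         # both variants shown
```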
355

Towards a model of giftedness in programming : an investigation of programming characteristics of gifted students at University of Warwick

Qahmash, Ayman January 2018 (has links)
This study investigates characteristics related to learning programming among gifted first-year computer science students. These characteristics include mental representations, knowledge representations, coding strategies, and attitudes and personality traits. The study was motivated by the goal of developing a theoretical framework to define giftedness in programming; in doing so, it aims to close the gap between gifted education and computer science education, allowing gifted programmers to be supported. Previous studies indicated a lack of theoretical foundation for gifted education in computer science, especially for identifying gifted programmers, which may have led to concerns about the identification process and/or inappropriate support. The study starts by investigating the relationship between mathematics and programming. We collected 3060 records of raw data on students' grades from 1996 to 2015, and analysed them using descriptive statistics and the Pearson product-moment correlation test. The results indicate a statistically significant positive correlation between mathematics and programming in general, and between specific mathematics and programming modules. The study then investigates other programming-related characteristics using a case study methodology, collecting quantitative and qualitative data. A sample of n=9 gifted students was selected and interviewed. In addition, we collected the students' grades, code-writing problems and project (Witter) source code, and analysed these data using analysis procedures specific to each method. The results indicate that gifted student programmers may possess a single characteristic or multiple, largely overlapping characteristics. We introduce a model of giftedness in programming that consists of three profiles: mathematical ability, creativity and personal traits, each consisting of sub-characteristics.
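The correlation step is standard; a minimal sketch using SciPy is shown below. The file name and column names are hypothetical, not the thesis's data layout:

```python
import pandas as pd
from scipy.stats import pearsonr

# File and column names are hypothetical; the thesis analysed 3060
# grade records spanning 1996-2015.
grades = pd.read_csv("grades.csv")
r, p = pearsonr(grades["maths_grade"], grades["programming_grade"])
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```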
356

High-dimensional-output surrogate models for uncertainty and sensitivity analyses

Triantafyllidis, Vasileios January 2018 (has links)
Computational models that describe complex physical phenomena tend to be computationally expensive and time-consuming. Models based on partial differential equations (PDEs) in particular produce spatio-temporal data sets in high-dimensional output spaces. Repeated calls to such models for tasks such as sensitivity analysis, uncertainty quantification and design optimisation can therefore become computationally infeasible. While constructing an emulator is one way to approximate the outcome of an expensive computer model, emulators are not always capable of dealing with high-dimensional data sets. To deal with high-dimensional data, this thesis combines emulation strategies (Gaussian processes (GPs), artificial neural networks (ANNs) and support vector machines (SVMs)) with linear and non-linear dimensionality-reduction techniques (kernel PCA, Isomap and diffusion maps) to develop efficient emulators. For variance-based sensitivity analysis, a probabilistic framework is developed to account for emulator uncertainty, and the method is extended to multivariate outputs, with new semi-analytical results derived for performing rapid sensitivity analysis of univariate or multivariate outputs. The developed emulators are also used to extend reduced-order models (ROMs) based on proper orthogonal decomposition to parameter-dependent PDEs, including an extension of the discrete empirical interpolation method to non-linear PDE systems.
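The reduce-emulate-reconstruct pattern the thesis builds on can be sketched as follows, here with plain PCA and GPs for brevity (the thesis also uses kernel PCA, Isomap and diffusion maps, whose inverse maps require pre-image approximations). The synthetic data stands in for PDE output:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# X: (n_runs, n_params) model inputs; Y: (n_runs, n_grid) field outputs.
rng = np.random.default_rng(0)
X = rng.uniform(size=(80, 3))
Y = np.sin(X @ rng.uniform(size=(3, 200)))   # stand-in for PDE output

pca = PCA(n_components=5).fit(Y)             # compress the output space
Z = pca.transform(Y)                         # latent coordinates

# One independent GP emulator per latent coordinate.
gps = [GaussianProcessRegressor(kernel=RBF()).fit(X, Z[:, k])
       for k in range(Z.shape[1])]

def emulate(x_new):
    """Predict the full-field output at new parameter values."""
    z = np.column_stack([gp.predict(x_new) for gp in gps])
    return pca.inverse_transform(z)          # map back to physical space

print(emulate(rng.uniform(size=(2, 3))).shape)   # (2, 200)
```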
357

A parser modification of the Euclid compiler : automatic generation of syntax error recovery

Evans, Grace January 2010 (has links)
Typescript (photocopy). Digitized by Kansas Correctional Industries.
358

University space planning : projections for Kansas State University

Chandrashekar, K January 2010 (has links)
Digitized by Kansas Correctional Industries.
359

Recognition of mathematical handwriting on whiteboards

Sabeghi Saroui, Behrang January 2015 (has links)
Automatic recognition of handwritten mathematics has seen significant improvements in the past decades. In particular, online recognition of mathematical formulae has seen a number of important advances. In practice, however, most mathematics is still taught and developed on ordinary whiteboards, and offline recognition remains an open and challenging task in this area. In this thesis we develop methods to recognise mathematics from static images of handwritten expressions on whiteboards, leveraging the strength of online recognition systems by transforming offline data into online information. Our approach is based on trajectory recovery techniques that allow us to reconstruct the stroke information necessary for online recognition. To this end we develop a novel recognition process designed specifically for whiteboards, which carefully extracts information from colour images. To evaluate our methods we use an online recogniser trained specifically for the recognition of mathematical symbols. We present experiments with images of varying quality and provenance. In particular, we have used our approach successfully in a set of experiments using Google Glass to capture images of whiteboards, in which we achieve accuracies of up to 88.03% for segmentation and 84.54% for recognition of mathematical symbols.
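A plausible first step of such a pipeline, binarising the ink and thinning it to a one-pixel skeleton from which strokes can be traced and ordered for an online recogniser, is sketched below using scikit-image. The file name is hypothetical and the thesis's actual colour-image processing is not reproduced:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.io import imread
from skimage.morphology import skeletonize

image = imread("whiteboard.png")         # hypothetical input photo
gray = rgb2gray(image[..., :3])          # drop any alpha channel
ink = gray < threshold_otsu(gray)        # dark ink on a light board
skeleton = skeletonize(ink)              # one-pixel-wide stroke medial axis
ys, xs = np.nonzero(skeleton)
print(f"{len(xs)} skeleton pixels to trace and order into strokes")
```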
360

Coherent minimisation : aggressive optimisation for symbolic finite state transducers

Al-Zobaidi, Zaid January 2014 (has links)
The size of an automaton is one of the key factors that drives the cost of computation, and minimisation is the standard way to reduce it. Most conventional minimisation techniques are based on the notion of bisimulation to determine equivalent states that can be identified. Although minimisation of automata is an established topic of research, the optimisation of automata operating in constrained environments is a novel idea, which we examine in this dissertation along with a motivating, non-trivial application to efficient tamper-proof hardware compilation. This thesis introduces a new notion of equivalence between states of a transducer, coherent equivalence. It is weaker than the usual notions of bisimulation, so it identifies more states as equivalent. This new equivalence relation can be used to optimise transducers aggressively by reducing their number of states, a technique we call coherent minimisation, which always outperforms conventional minimisation algorithms. The main result of this thesis is that coherent minimisation is sound and compositional. To support more realistic applications to hardware synthesis, we also introduce a refined model of transducers, which we call symbolic finite state transducers, that can model systems involving very large or infinite data types.
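Coherent equivalence itself is the thesis's contribution and is not reproduced here; the sketch below shows the standard bisimulation-style partition refinement that it weakens, for a simple Mealy-style transducer interface:

```python
def minimise(states, alphabet, step):
    """Partition refinement: step(state, symbol) -> (output, next_state).

    Returns the set of bisimulation equivalence classes.
    """
    partition = {frozenset(states)}          # start with a single block
    while True:
        def signature(s):
            # One-step behaviour: the output and the successor's block
            # for every input symbol.
            sig = []
            for a in alphabet:
                out, nxt = step(s, a)
                block = next(b for b in partition if nxt in b)
                sig.append((a, out, block))
            return tuple(sig)

        refined = set()
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            refined.update(frozenset(g) for g in groups.values())
        if refined == partition:             # stable: fixpoint reached
            return partition
        partition = refined

# Two states with identical one-step behaviour collapse into one block.
print(minimise({"p", "q"}, ["a"], lambda s, a: (0, "p")))
```

States equated by coherent equivalence but separated by bisimulation would survive this refinement, which is why the thesis's coherent minimisation can produce strictly smaller transducers.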
