171

End user software product line support for smart spaces

Tzeremes, Vasilios 29 March 2017 (has links)
Smart spaces are physical environments equipped with pervasive technology that senses and reacts to human activities and changes in the environment. End User Development (EUD) skills vary significantly among the end users who want to design, develop, and deploy software applications for their smart spaces. Typical end user development is opportunistic: requirements are usually unplanned and undocumented, applications are simplistic in nature, design is ad hoc, reuse is limited, and software testing is typically haphazard, leading to many quality issues. On the other hand, technical end users with advanced EUD skills and domain expertise have the ability to create sophisticated software applications for smart spaces that are well designed and tested.

This research presents a systematic approach for adopting reuse in end user development for smart spaces by using Software Product Line (SPL) concepts. End User (EU) SPL Designers (who are technical end users and domain experts) design and develop EU SPLs for smart spaces, whereas less technical end users derive their individual smart space applications from these SPLs. Incorporating SPL concepts in EUD for smart spaces makes it easier for novice end users to derive applications for their spaces without having to interface directly with devices, networks, programming logic, etc.; end users only have to select and configure the EU SPL features needed for their space. Another benefit of this approach is that it promotes reuse: end user requirements are mapped to product line features that are realized by common, optional, and variant components available in smart spaces. Product line features and the corresponding component product line architecture can then be used to derive EU applications, which can be deployed to different smart spaces, thereby avoiding end users having to create EU applications from scratch. Finally, the proposed approach has the potential to improve software quality, since testing is an integral part of the EU SPL process.

In particular, this research has: (a) defined a systematic approach for EU SPL Designers to design and develop EU SPLs; (b) provided an EU SPL application derivation approach to enable end users to derive software applications for their spaces; (c) designed an EU SPL meta-model to capture the underlying representation of EU SPL and derived application artifacts in terms of meta-classes and relationships, supporting different EUD platforms; (d) designed and implemented an EUD development environment that supports EU SPL development and application derivation; and (e) provided a testing approach and framework for systematic testing of EU SPLs and derived applications.
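The feature-selection-and-derivation workflow described above lends itself to a small illustration. The following is a minimal sketch, not the thesis's actual tooling: it assumes a hypothetical feature tree with common and optional features and derives an application from an end user's selections; all class and feature names are invented.

```python
# Minimal sketch (not the thesis's tooling): deriving a smart-space
# application by selecting features from a product-line feature tree.
# All feature names and classes here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    kind: str            # "common", "optional", or "variant"
    children: list = field(default_factory=list)

@dataclass
class DerivedApplication:
    selected: list

def derive(root: Feature, choices: set) -> DerivedApplication:
    """Walk the feature tree; keep common features always, and
    optional/variant features only when the end user selected them."""
    selected = []
    def walk(f: Feature):
        if f.kind == "common" or f.name in choices:
            selected.append(f.name)
            for c in f.children:
                walk(c)
    walk(root)
    return DerivedApplication(selected)

# Hypothetical smart-space product line: lighting is common,
# presence-based automation and a voice interface are optional.
lighting = Feature("lighting", "common")
presence = Feature("presence_automation", "optional")
voice = Feature("voice_control", "optional")
home = Feature("smart_home", "common", [lighting, presence, voice])

app = derive(home, {"presence_automation"})
print(app.selected)   # ['smart_home', 'lighting', 'presence_automation']
```

The point of the sketch is the division of labour the abstract describes: the EU SPL Designer authors the feature tree once, and a novice end user only supplies the selection set.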
172

Intelligent emotion recognition from facial and whole-body expressions using adaptive ensemble models

Zhang, Yang January 2015 (has links)
Automatic emotion recognition has been widely studied and applied to various computer vision tasks (e.g. health monitoring, driver state surveillance, personalized learning, and security monitoring). With the great potential provided by current advanced 3D scanner technology (e.g. the Kinect), we shed light on robust emotion recognition based on users’ facial and whole-body expressions. As revealed by recent psychological and behavioral research, facial expressions are good at communicating categorical emotions (e.g. happy, sad, surprise, etc.), while bodily expressions could contribute more to the perception of dimensional emotional states (e.g. the arousal and valence dimensions). Thus, we propose two novel emotion recognition systems applying adaptive ensemble classification and regression models based on the facial and bodily modalities respectively. The proposed real-time 3D facial Action Unit (AU) intensity estimation and emotion recognition system automatically selects 16 motion-based facial feature sets to estimate the intensities of 16 diagnostic AUs. A set of six novel adaptive ensemble classifiers is then proposed for robust classification of the six basic emotions and for detection of newly arrived unseen emotion classes (emotions that are not included in the training set). In both off-line and on-line real-time evaluation, the system shows the highest recognition accuracy in comparison with other related work, as well as flexibility and good adaptation for novel emotion detection (e.g. ‘contempt’, which is not included in the six basic emotions). The second system focuses on continuous and dimensional affect prediction from users’ bodily expressions using adaptive regression. Both static posture and dynamic motion bodily features are extracted and subsequently selected by a Genetic Algorithm to identify their most discriminative combinations for both the valence and arousal dimensions. An adaptive ensemble regression model is then proposed to robustly map subjects’ emotional states onto a continuous arousal-valence affective space using the identified feature subsets. Experimental results show that the proposed system outperforms other benchmark models and achieves promising performance compared to other state-of-the-art research reported in the literature. Furthermore, we also propose a novel semi-feature-level bimodal fusion framework that integrates facial and bodily information to draw a more comprehensive and robust dimensional interpretation of subjects’ emotional states. By combining the optimal discriminative bodily features and the derived AU intensities as inputs, the proposed adaptive ensemble regression model achieves remarkable improvements in comparison to applying the bodily features alone.
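As a rough illustration of the feature-selection stage described above, the sketch below runs a genetic algorithm over binary feature masks on synthetic data. The adaptive ensemble regression stage is reduced here to a single least-squares fit inside the fitness function, so this is a stand-in for the thesis's models, not a reproduction of them.

```python
# Illustrative only: GA feature selection for a continuous affect score
# (e.g. valence). Data, fitness, and GA settings are all assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                # 200 samples, 20 bodily features
w_true = np.zeros(20); w_true[:5] = 1.0       # only 5 features are informative
y = X @ w_true + 0.1 * rng.normal(size=200)   # synthetic valence score

def fitness(mask):
    """Negative squared error of a least-squares fit on the chosen subset,
    with a small penalty on subset size."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -np.mean((Xs @ coef - y) ** 2) - 0.01 * mask.sum()

pop = rng.integers(0, 2, size=(30, 20))       # population of binary masks
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]               # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, 20)
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        flip = rng.random(20) < 0.05                      # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```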
173

An investigation into the use of knowledge representation in computer-integrated manufacturing

Beesley, Simon A. S. January 1988 (has links)
In a certain automobile factory, batch-painting of the body types in colours is controlled by an allocation system. This tries to balance production with orders, whilst making optimally-sized batches of colours. Sequences of cars entering painting cannot be optimised for easy selection of colour and batch size. `Over-production' is not allowed, in order to reduce buffer stocks of unsold vehicles. Paint quality is degraded by random effects. This thesis describes a toolkit which supports IKBS in an object-centred formalism. The intended domain of use for the toolkit is flexible manufacturing. A sizeable application program was developed, using the toolkit, to test the validity of the IKBS approach in solving the real manufacturing problem above, for which an existing conventional program was already being used. A detailed statistical analysis of the operating circumstances of the program was made to evaluate the likely need for the more flexible type of program for which the toolkit was intended. The IKBS program captures the many disparate and conflicting constraints in the scheduling knowledge and emulates the behaviour of the program installed in the factory. In the factory system, many possible newly-discovered heuristics would be awkward to represent, and many new extensions would be impossible to make. The representation scheme is capable of admitting changes to the knowledge, relying on the inherent encapsulating properties of object-centred programming to protect and isolate data. The object-centred scheme is supported by an enhancement of the `C' programming language and runs under BSD 4.2 UNIX. The structuring technique, using objects, provides a mechanism for separating control of the expression of rule-based knowledge from the knowledge itself, allowing explicit `contexts' within which appropriate expression of knowledge can be done. Facilities are provided for acquisition of knowledge in a consistent manner.
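The abstract's central idea, separating rule-based knowledge from the control of its expression via explicit `contexts', can be caricatured briefly. The sketch below is in Python rather than the thesis's extended `C', and every rule and state variable in it is an invented stand-in.

```python
# Rough sketch of context-gated, object-encapsulated rules (illustrative
# names only): rules live inside an object, and a context decides which
# rules may fire on a given step.

class Rule:
    def __init__(self, name, condition, action, contexts):
        self.name, self.condition, self.action = name, condition, action
        self.contexts = contexts          # contexts in which this rule applies

class PaintScheduler:
    """Object encapsulating (toy) scheduling knowledge for batch painting."""
    def __init__(self):
        self.state = {"batch_size": 0, "colour": None, "orders": {"red": 7}}
        self.rules = [
            Rule("start_batch",
                 lambda s: s["colour"] is None,
                 lambda s: s.update(colour="red", batch_size=1),
                 contexts={"normal"}),
            Rule("close_batch",
                 lambda s: s["batch_size"] >= s["orders"].get(s["colour"], 0),
                 lambda s: s.update(colour=None, batch_size=0),
                 contexts={"normal", "degraded_paint_quality"}),
        ]

    def step(self, context):
        for rule in self.rules:
            if context in rule.contexts and rule.condition(self.state):
                rule.action(self.state)   # only context-appropriate rules fire
                return rule.name
        return None

sched = PaintScheduler()
print(sched.step("normal"))   # 'start_batch'
```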
174

A constructive learning algorithm based on back-propagation

Lowton, Andrew D. January 1995 (has links)
There has been a resurgence of interest in the neural networks field in recent years, provoked in part by the discovery of the properties of multi-layer networks. This interest has in turn raised questions about the possibility of making neural network behaviour more adaptive by automating some of the processes involved. Prior to these particular questions, the process of determining the parameters and network architecture required to solve a given problem had been a time-consuming activity. A number of researchers have attempted to address these issues by automating these processes, concentrating in particular on the dynamic selection of an appropriate network architecture. The work presented here specifically explores the area of automatic architecture selection; it focuses upon the design and implementation of a dynamic algorithm based on the Back-Propagation learning algorithm. The algorithm constructs a single hidden layer as the learning process proceeds, using individual pattern error as the basis of unit insertion. This algorithm is applied to several problems of differing type and complexity and is found to produce near-minimal architectures that are shown to have a high level of generalisation ability. (DX 187, 339)
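A hedged sketch of the constructive scheme the abstract outlines: train a single hidden layer with back-propagation and insert a new hidden unit whenever the error plateaus, seeding it from the worst-fit pattern. The seeding rule, thresholds, and task below are assumptions for illustration, not the thesis's exact algorithm.

```python
# Constructive back-propagation sketch (assumed details, toy XOR-like task).

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(2, 1)); b1 = np.zeros(1)   # start: 1 hidden unit
W2 = rng.normal(scale=0.5, size=(1, 1)); b2 = np.zeros(1)

prev_err, lr = np.inf, 0.5
for epoch in range(3000):
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    err = out - y
    # standard back-propagation updates
    d2 = err * out * (1 - out)
    d1 = (d2 @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d2 / len(X); b2 -= lr * d2.mean(0)
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(0)

    if epoch % 500 == 499:                     # periodic plateau check
        mse = float((err ** 2).mean())
        if prev_err - mse < 1e-3 and W1.shape[1] < 8:
            worst = np.argmax((err ** 2).sum(1))          # worst-fit pattern
            # new unit roughly centred on the worst pattern (our assumption)
            W1 = np.hstack([W1, X[worst].reshape(-1, 1)])
            b1 = np.append(b1, 0.0)
            W2 = np.vstack([W2, rng.normal(scale=0.1, size=(1, 1))])
        prev_err = mse

print("hidden units:", W1.shape[1], "final MSE:", round(prev_err, 4))
```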
175

A novel entropy measure for analysis of the electrocardiogram

Woodcock, Dan January 2007 (has links)
There has been much recent research into extracting useful diagnostic features from the electrocardiogram, with numerous studies claiming impressive results. However, the robustness and consistency of the methods employed in these studies is rarely, if ever, mentioned. Hence, we propose two new methods: a biologically motivated time series derived from consecutive P-wave durations, and a mathematically motivated regularity measure. We investigate the robustness of these two methods when compared with current corresponding methods. We find that the new time series performs admirably as a complement to the current method, and the new regularity measure consistently outperforms the current measure in numerous tests on real and synthetic data.
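The thesis's regularity measure is not reproduced in the abstract, so the sketch below substitutes the standard sample entropy statistic to show how a regularity measure is applied to a beat-to-beat P-wave duration series; the data are synthetic.

```python
# Sample entropy as a stand-in regularity measure on synthetic
# P-wave duration series (milliseconds). Not the thesis's measure.

import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Classic sample entropy: -log(A/B), tolerance r scaled by the std."""
    x = np.asarray(x, dtype=float)
    r *= x.std()
    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1      # exclude the self-match
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(2)
regular = 100 + np.sin(np.arange(300) * 0.3)        # near-periodic durations
irregular = 100 + rng.normal(scale=1.0, size=300)   # noisy durations
print(sample_entropy(regular), sample_entropy(irregular))  # low vs. high
```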
176

Automated document retrieval based on distributed processing

Jamieson, S. H. January 1981 (has links)
No description available.
177

Automatic refinement of constructive solid geometry models

Afshari-Aliabad, Esfandyar January 1991 (has links)
Geometric information relating to most engineering products is available in the form of orthographic drawings or 2D data files. For many recent computer-based applications, such as Computer Integrated Manufacturing (CIM), these data are required in the form of a sophisticated model based on Constructive Solid Geometry (CSG) concepts. A recent novel technique in this area transfers 2D engineering drawings directly into a 3D solid model called `the first approximation'. In many cases, however, this does not represent the real object. In this thesis, a new method is proposed and developed to enhance this model. This method uses the notion of expanding an object in terms of other solid objects, which are either primitive or first approximation models. To achieve this goal, in addition to the prepared subroutine that calculates the first approximation model of the input data, two other wireframe models are found for extraction of sub-objects. One is the wireframe representation of the input, and the other is the wireframe of the first approximation model. A new fast method is developed for the latter special-case wireframe, which is named the `first approximation wireframe model'; this method avoids the use of a solid modeller. Detailed descriptions of algorithms and implementation procedures are given. In these techniques, utilisation of dashed-line information is also considered in improving the model. Different practical examples are given to illustrate the functioning of the program. Finally, a recursive method is employed to automatically modify the output model towards the real object. Some suggestions for further work are made to increase the domain of objects covered and to provide a commercially usable package. It is concluded that the current method promises the production of accurate models for a large class of objects.
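The CSG representation the abstract builds on can be illustrated compactly: a solid is a tree whose leaves are primitives and whose interior nodes are boolean operators, and refining a first-approximation model amounts to editing this tree. The primitives and the example object below are invented for illustration.

```python
# Toy CSG tree with point-membership classification (illustrative only).

from dataclasses import dataclass

@dataclass
class Block:                     # axis-aligned box primitive
    x0: float; y0: float; z0: float
    x1: float; y1: float; z1: float
    def contains(self, p):
        x, y, z = p
        return (self.x0 <= x <= self.x1 and
                self.y0 <= y <= self.y1 and
                self.z0 <= z <= self.z1)

@dataclass
class Op:                        # interior CSG node
    kind: str                    # "union", "intersect", or "difference"
    left: object
    right: object
    def contains(self, p):
        l, r = self.left.contains(p), self.right.contains(p)
        return {"union": l or r,
                "intersect": l and r,
                "difference": l and not r}[self.kind]

# A plate with a rectangular slot: a first approximation minus a sub-object.
plate = Block(0, 0, 0, 10, 10, 2)
slot = Block(4, 0, 0, 6, 10, 2)
model = Op("difference", plate, slot)

print(model.contains((2, 5, 1)))   # True: inside the plate
print(model.contains((5, 5, 1)))   # False: inside the slot
```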
178

Statistical physics of error-correcting codes

Vicente, Renato January 2000 (has links)
In this thesis we use statistical physics techniques to study the typical performance of four families of error-correcting codes based on very sparse linear transformations: Sourlas codes, Gallager codes, MacKay-Neal codes and Kanter-Saad codes. We map the decoding problem onto an Ising spin system with many-spin interactions. We then employ the replica method to calculate averages over the quenched disorder represented by the code constructions, the arbitrary messages and the random noise vectors. We find, as the noise level increases, a phase transition between successful decoding and failure phases. This phase transition coincides with upper bounds derived in the information theory literature in most of the cases. We connect the practical decoding algorithm known as probability propagation with the task of finding local minima of the related Bethe free energy. We show that the practical decoding thresholds correspond to noise levels where suboptimal minima of the free energy emerge. Simulations of practical decoding scenarios using probability propagation agree with theoretical predictions of the replica symmetric theory. The typical performance predicted by the thermodynamic phase transitions is shown to be attainable in computation times that grow exponentially with the system size. We use the insights obtained to design a method to calculate the performance and optimise parameters of the high performance codes proposed by Kanter and Saad.
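The decoding-to-spin-system mapping the abstract describes has a standard textbook form, sketched below in our own notation (which need not match the thesis's).

```latex
% Bits $x_i \in \{0,1\}$ become Ising spins $\sigma_i = (-1)^{x_i}$; each
% parity check $\mu$ over a sparse index set $\mathcal{L}(\mu)$ becomes a
% many-spin coupling:
H(\boldsymbol{\sigma}) = -\sum_{\mu} J_{\mu} \prod_{i \in \mathcal{L}(\mu)} \sigma_i
% where the couplings $J_{\mu}$ carry the received (noisy) information.
% Decoding then corresponds to computing thermal averages of the spins at
% the temperature matching the channel noise (the Nishimori condition),
% and the phase transition of this spin system marks the decoding threshold.
```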
179

Document retrieval based on a cognitive model of dialogue

Ofori-Dwumfuo, George O. January 1982 (has links)
Owing to the rise in the volume of literature, problems arise in the retrieval of required information. Various retrieval strategies have been proposed, but most of them are not flexible enough for their users. Specifically, most of these systems assume that users know exactly what they are looking for before approaching the system, and that users are able to precisely express their information needs according to laid-down specifications. There has, however, been described a retrieval program, THOMAS, which aims at satisfying incompletely-defined user needs through a man-machine dialogue that does not require any rigid queries. Unlike most systems, THOMAS attempts to satisfy the user's needs from a model which it builds of the user's area of interest. This model is a subset of the program's "world model", a database in the form of a network whose nodes represent concepts. Since various concepts have various degrees of similarity and association, this thesis contends that instead of models which assume equal levels of similarity between concepts, the links between the concepts should have values assigned to them to indicate the degree of similarity between the concepts. Furthermore, the world model of the system should be structured such that concepts which are related to one another are clustered together, so that a user interaction would involve only the relevant clusters rather than the entire database, such clusters being determined by the system, not the user. This thesis also attempts to link the design work with the current notion in psychology centred on the use of the computer to simulate human cognitive processes. In this case, an attempt has been made to model a dialogue between two people: the information seeker and the information expert. The system, called THOMAS-II, has been implemented and found to require less effort from the user than THOMAS.
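The two structural proposals, weighted similarity links and system-determined clusters, can be shown in miniature. In the sketch below all concepts, weights, and clusters are invented; it only illustrates how clustering confines an interaction to the relevant part of the world model.

```python
# Toy weighted concept network with clusters (all content invented).

concept_links = {                       # similarity in [0, 1] on each link
    ("retrieval", "indexing"): 0.8,
    ("retrieval", "query"): 0.9,
    ("indexing", "thesaurus"): 0.6,
    ("painting", "automobile"): 0.7,    # an unrelated cluster
}
clusters = {"ir": {"retrieval", "indexing", "query", "thesaurus"},
            "manufacturing": {"painting", "automobile"}}

def related(concept, threshold=0.7):
    """Return concepts linked above `threshold`, restricted to the cluster
    containing `concept`, so unrelated clusters are never touched."""
    home = next(c for c in clusters.values() if concept in c)
    out = {}
    for (a, b), w in concept_links.items():
        if w >= threshold and {a, b} <= home:
            if a == concept:
                out[b] = w
            if b == concept:
                out[a] = w
    return out

print(related("retrieval"))   # {'indexing': 0.8, 'query': 0.9}
```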
180

Evolutionary neural networks : models and applications

Williams, Bryn V. January 1995 (has links)
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems, the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case-study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX), is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
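LMX itself is not specified in the abstract, so the sketch below shows only the general flavour of linkage-aware recombination: a linkage map groups genes that should be inherited together, and crossover copies each group intact from one parent. The grouping and the recombination rule are our assumptions, not the thesis's operator.

```python
# Generic linkage-aware crossover sketch (not the thesis's LMX).

import random
random.seed(3)

linkage_map = [[0, 1], [2, 3, 4], [5]]   # gene indices assumed co-adapted

def linkage_crossover(parent_a, parent_b):
    """Inherit each linkage group intact from one parent, chosen at random,
    instead of cutting at an arbitrary point as one-point crossover does."""
    child = [None] * len(parent_a)
    for group in linkage_map:
        donor = parent_a if random.random() < 0.5 else parent_b
        for i in group:
            child[i] = donor[i]
    return child

a = [0, 0, 0, 0, 0, 0]
b = [1, 1, 1, 1, 1, 1]
print(linkage_crossover(a, b))   # e.g. [0, 0, 1, 1, 1, 0]
```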
