201

Feature-based approach to bridge the information technology and business gap

Alazemi, Fayez January 2014 (has links)
The gap between business goals (problem domain), such as cost reduction, new business processes and increasing competitive advantage, and the supporting Information Technology infrastructure (solution domain), such as the ability to implement software solutions to achieve these goals, is complex and challenging to bridge. This gap emerges for many reasons; for instance, inefficient communication, misunderstanding of domain terminology, or external factors such as business change. As most business and software products can be described by a set of features, a promising solution would be to link the problem and solution domains based on these features. Thus, the proposed approach aims to bridge the gap between the problem and the solution domains by using a feature-based technique in order to provide a quick and efficient means for understanding the relationships between IT solutions and business goals. The novelty of the proposed framework stems from the three characteristics of the business-IT gap: the problem domain, the solution domain and the matching process. Besides the proposed feature-based IT-business framework, other contributions are made: a feature extraction method and feature matching algorithms. The proposed approach proceeds in three phases. The first phase decomposes business needs and transforms them into a feature model (presented in UML diagrams); this is represented as a top-to-middle process. The second phase is a reverse engineering process: a system's program code is sliced into modules and transformed into feature-based models (again, in UML diagrams); this is represented as a bottom-to-middle process. The third phase is a model-driven engineering process, which uses model comparison techniques to match the UML feature models of the top-to-middle and bottom-to-middle phases. The approach presented in this research shows that features elicited from business goals can be matched to features extracted from the IT side. The proposed approach is feasible and able to provide a quick and efficient means for feature-based business-IT matching. Two case studies are presented to demonstrate that the feature-oriented view from the users' perspective can be matched to the feature-oriented view on the IT side. This matching can serve to remove ambiguities that may cause difficulties during system maintenance or system evolution, in particular when requirements change, as is to be expected with any business change.
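The matching algorithms themselves are not reproduced in the abstract. As a purely illustrative sketch, the matching step can be pictured as pairing business-side feature names with IT-side feature names by similarity scoring; the feature names, the use of lexical similarity, and the threshold below are invented assumptions, not the thesis's actual algorithms.

```python
# Minimal sketch of matching business-side features to IT-side features by
# lexical similarity. All names and the threshold are illustrative
# assumptions, not the thesis's matching algorithm.
from difflib import SequenceMatcher

def match_features(business_features, it_features, threshold=0.6):
    """Pair each business feature with the best-scoring IT feature."""
    matches = {}
    for bf in business_features:
        best, best_score = None, 0.0
        for itf in it_features:
            score = SequenceMatcher(None, bf.lower(), itf.lower()).ratio()
            if score > best_score:
                best, best_score = itf, score
        # Keep only pairs above the similarity threshold
        matches[bf] = best if best_score >= threshold else None
    return matches

# Hypothetical example: features elicited from goals vs. features sliced from code
print(match_features(["generate invoice", "track order status"],
                     ["InvoiceGeneration", "OrderStatusTracker"]))
```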
202

Features interaction detection and resolution in smart home systems using agent-based negotiation approach

Alghamdi, Ahmed Saeed January 2015 (has links)
Smart home systems (SHS) have become an increasingly important technology in modern life. Apart from safety, security, convenience and entertainment, they offer significant potential benefits for the elderly, disabled and others who cannot live independently. Furthermore, smart homes are environmentally friendly. SHS functionality is based on perceiving residents' needs and desires, then offering services accordingly. In order to be smart, homes have to be equipped with sensors, actuators and intelligent devices and appliances, as well as connectivity and control mechanisms. A typical SHS comprises heterogeneous services and appliances that are designed by many different developers and which may meet for the first time in the home network. The heterogeneous nature of these systems, in addition to the dynamic environment in which they are deployed, exposes them to undesirable interactions between services, known as Feature Interaction (FI). Another cause of FI is the divergence between the policies, needs and desires of different residents. Proposed approaches to FI detection and resolution should take these different types of interaction into account. Negotiation is an effective mechanism for addressing FI, as conflicting features can negotiate with each other to reach a compromise agreement. The ultimate goal of this study is to develop an Agent-Based Negotiation Approach (ABNA) to detect and resolve feature interaction in an SHS. A smart home architecture incorporating the components of the ABNA is proposed. The backbone of the proposed approach is a hierarchy in which features are organised according to their importance in terms of their functional contribution to the overall service. Thus, features are categorised according to their priority, those which are essential for the service to function having the highest priority. An agent model of the ABNA is proposed and comprehensive definitions of its components are presented. A computational model of the system has also been proposed, which is used to explain the behaviour of different components when a proposal to perform a task is raised. To clarify the system requirements and to aid the design and implementation of its properties, a formal specification of the ABNA is presented using the mathematical notation of the Calculus of Context-aware Ambients (CCA). To evaluate the approach, a case study is reported involving two services within the SHS: ventilation and air conditioning. For the purpose of evaluation, the execution environment of CCA is used to execute and analyse the ABNA.
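The thesis formalises the ABNA in CCA; the toy sketch below only illustrates the priority idea behind it — features ranked by functional contribution, with conflicts resolved in favour of the more essential feature. The class layout, priority values and proposals are invented for illustration and are not the ABNA itself.

```python
# Illustrative sketch of priority-based conflict resolution between feature
# agents, loosely following the idea that features essential to a service
# outrank optional ones. All names and values are invented.
from dataclasses import dataclass

@dataclass
class FeatureAgent:
    name: str
    priority: int          # 1 = essential to the service, larger = less critical
    proposal: str          # requested actuator state, e.g. "window: open"

def negotiate(agents):
    """Resolve interacting proposals: the highest-priority agent wins;
    ties would require a real negotiation round (omitted here)."""
    return min(agents, key=lambda a: a.priority)

ventilation = FeatureAgent("ventilation", priority=2, proposal="window: open")
air_con     = FeatureAgent("air_conditioning", priority=1, proposal="window: closed")

winner = negotiate([ventilation, air_con])
print(f"Conflict resolved for {winner.name}: {winner.proposal}")
```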
203

The representation of motherhood and mother-daughter relationships in films

Lee, Yuen-kwan, 李婉君. January 2000 (has links)
Master of Arts, Literary and Cultural Studies (published or final version)
204

Visual control of multi-rotor UAVs

Duncan, Stuart Johann Maxwell January 2014 (has links)
Recent miniaturization of computer hardware, MEMS sensors, and high-energy-density batteries has enabled highly capable mobile robots to become available at low cost. This has driven the rapid expansion of interest in multi-rotor unmanned aerial vehicles. Another area which has expanded simultaneously is small powerful computers, in the form of smartphones, which nearly always have a camera attached and many of which now contain an OpenCL-compatible graphics processing unit. By combining the results of these two developments, a low-cost multi-rotor UAV can be produced with a low-power onboard computer capable of real-time computer vision. The system should also use general-purpose computer vision software to facilitate a variety of experiments. To demonstrate this I have built a quadrotor UAV based on control hardware from the Pixhawk project, and paired it with an ARM-based single-board computer similar to those in high-end smartphones. The quadrotor weighs 980 g and has a flight time of 10 minutes. The onboard computer is capable of running a pose estimation algorithm above the 10 Hz requirement for stable visual control of a quadrotor. A feature tracking algorithm was developed for efficient pose estimation, which relaxed the requirement for outlier rejection during matching. Compared with a RANSAC-only algorithm, the pose estimates were less variable, with a Z-axis standard deviation of 0.2 cm compared with 2.4 cm for RANSAC. Processing time per frame was also faster with tracking, with 95% confidence that tracking would process a frame within 50 ms, while for RANSAC the 95% confidence time was 73 ms. The onboard computer ran the algorithm with a total system load of less than 25%. All computer vision software uses the OpenCV library for common computer vision algorithms, fulfilling the requirement for running general-purpose software. The tracking algorithm was used to demonstrate the capability of the system by performing visual servoing of the quadrotor (after manual takeoff). Response to external perturbations was poor, however, requiring manual intervention to avoid crashing. This was due to poor visual controller tuning, and to variations in image acquisition and attitude estimate timing caused by free-running image acquisition. The system and the tracking algorithm serve as proof of concept that visual control of a quadrotor is possible using small low-power computers and general-purpose computer vision software.
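The thesis reports using OpenCV, but its code is not shown in the abstract. The following is a generic, hedged sketch of tracking-based pose estimation in OpenCV: features are carried between frames with pyramidal Lucas-Kanade tracking (so per-frame outlier rejection can be relaxed), then pose is recovered with PnP. The function layout and the assumption of known 3D model points and camera matrix are my own illustrative choices, not the author's implementation.

```python
# Generic sketch of tracking-based pose estimation with OpenCV. Not the
# thesis's code: the 3D object points and camera matrix K are assumed known,
# and prev_pts is an Nx1x2 float32 array of previously detected features.
import cv2

def estimate_pose(prev_gray, gray, prev_pts, object_pts, K):
    # Track 2D features from the previous frame into the current one
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    ok = status.ravel() == 1
    img_pts, obj_pts = pts[ok], object_pts[ok]
    # Recover rotation/translation of the calibrated camera; RANSAC here
    # only guards against residual tracking failures
    _, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return rvec, tvec, pts[ok]
```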
205

Actionable Knowledge Discovery using Multi-Step Mining

DharaniK, Kalpana Gudikandula 01 December 2012 (has links)
Data mining at the enterprise level operates on huge amounts of data from sources such as government transactions, banks, insurance companies and so on. Inevitably, these businesses produce complex data that might be distributed in nature. When such data is mined in a single step, the resulting business intelligence reflects only a particular aspect. However, this is not sufficient in enterprises, where different aspects and standpoints are to be considered before taking business decisions. Enterprises are therefore required to perform mining based on multiple features, data sources and methods; this is known as combined mining. Combined mining can produce patterns that reflect all aspects of the enterprise, so the derived intelligence can be used to take business decisions that lead to profits. This kind of knowledge is known as actionable knowledge. / Data mining is a process of obtaining trends or patterns in historical data. Such trends form business intelligence that in turn leads to taking well-informed decisions. However, data mining with a single technique does not yield actionable knowledge, because enterprises have huge databases that are heterogeneous in nature. They also have complex data, and mining such data needs multi-step mining instead of single-step mining. When multiple approaches are involved, they provide business intelligence in all aspects, and that kind of information can lead to actionable knowledge. Data mining has recently seen tremendous usage in the real world. The drawback of existing approaches is that they yield insufficient business intelligence in the case of large enterprises. This paper presents a combination of existing works and algorithms, working on multiple data sources, multiple methods and multiple features. The combined patterns thus obtained from complex business data provide actionable knowledge. A prototype application has been built to test the efficiency of the proposed framework, which combines multiple data sources, multiple methods and multiple features in the mining process. The empirical results revealed that the proposed approach is effective and can be used in the real world.
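As a minimal sketch of the combined-mining idea, pattern sets mined independently from several sources or methods can be merged, with only patterns supported across sources promoted to "actionable". The data, names and the support rule below are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of combined mining: patterns mined independently from
# several data sources/methods are merged, and only patterns supported by
# more than one source are promoted to "actionable". Illustrative only.
from collections import Counter

def combine_patterns(pattern_sets, min_sources=2):
    support = Counter()
    for patterns in pattern_sets:          # one pattern set per source/method
        support.update(set(patterns))      # count each source at most once
    return {p for p, n in support.items() if n >= min_sources}

# Hypothetical pattern sets from three enterprise data sources
bank_txn  = ["late_payment->churn", "high_balance->upsell"]
insurance = ["late_payment->churn", "multi_policy->retain"]
web_logs  = ["late_payment->churn", "high_balance->upsell"]

print(combine_patterns([bank_txn, insurance, web_logs]))
# -> {'late_payment->churn', 'high_balance->upsell'}
```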
206

Linear Feature Extraction with Emphasis on Face Recognition

Mahanta, Mohammad Shahin 15 February 2010 (has links)
Feature extraction is an important step in the classification of high-dimensional data such as face images. Furthermore, linear feature extractors are more prevalent due to their computational efficiency and preservation of Gaussianity. This research proposes a simple and fast linear feature extractor approximating the sufficient statistic for Gaussian distributions. The method preserves the discriminatory information in both the first and second moments of the data and yields linear discriminant analysis as a special case. Additionally, an accurate upper bound on the error probability of a plug-in classifier can be used to approximate the number of features that minimizes the error probability. Therefore, tighter error bounds are derived in this work based on the Bayes error or the classification error on the trained distributions. These bounds can also be used as a performance guarantee and to determine the number of training samples required to approach the performance of the Bayes classifier.
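Since the proposed extractor yields linear discriminant analysis as a special case, the textbook Fisher LDA computation is sketched below as a generic point of reference. This is standard LDA via scatter matrices, not the thesis's extended extractor.

```python
# Generic illustration of the LDA special case: project onto directions
# maximising between-class over within-class scatter (textbook Fisher LDA,
# not the thesis's sufficient-statistic extractor).
import numpy as np

def lda_directions(X, y, n_components):
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Leading eigenvectors of pinv(Sw) @ Sb give the projection directions
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]
```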
207

Adaptive division of feature space for rapid detection of near-duplicate video segments

Ide, Ichiro, Suzuki, Shugo, Takahashi, Tomokazu, Murase, Hiroshi 28 June 2009 (has links)
No description available.
208

Developing integrated data fusion algorithms for a portable cargo screening detection system

Ayodeji, Akiwowo January 2012 (has links)
Towards a one-size-fits-all solution for cocaine detection at borders, this thesis proposes a systematic cocaine detection methodology that can use the raw data output of a fibre optic sensor to produce a set of unique features whose decisions can be combined to yield a reliable output. This multidisciplinary research makes use of real data sourced from a cocaine-analyte-detecting fibre optic sensor developed by one of the collaborators, City University London. This research advocates a two-step approach. In the first step, the raw sensor data are collected and stored; level-one fusion, i.e. analysis, pre-processing and feature extraction, is performed at this stage. In the second step, using experimentally pre-determined thresholds, each feature decides on the detection of cocaine or otherwise, with a corresponding posterior probability. High-level sensor fusion is then performed locally on this output to combine these decisions and their probabilities at time intervals. The output from every time interval is stored in the database and used as prior data for the next time interval. The final output is a decision on the detection of cocaine. The key contributions of this thesis include investigating the use of data fusion techniques as a solution for overcoming challenges in the real-time detection of cocaine using fibre optic sensor technology, together with an innovative user interface design. A generalizable sensor fusion architecture is suggested and implemented using the Bayesian and Dempster-Shafer techniques. The results from the implemented experiments show great promise with this architecture, especially in overcoming sensor limitations. A 5-fold cross-validation system using a 12-13-1 neural network was used to validate the feature selection process. This validation step yielded true positive and false alarm rates of 89.5% and 10.5% respectively, with a correlation coefficient of 0.8. Using the Bayesian technique it is possible to achieve 100% detection, whilst the Dempster-Shafer technique achieves 95% detection using the same features as inputs to the data fusion (DF) system.
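The sequential fusion step described above can be sketched as follows: each feature reports a posterior probability of cocaine for the current interval, the decisions are fused assuming conditional independence, and the fused posterior is carried forward as the next interval's prior. This is a minimal illustration of Bayesian fusion in general, with invented probabilities, not the thesis's implemented architecture.

```python
# Sketch of sequential Bayesian decision fusion: per-feature posteriors are
# combined via likelihood-ratio updates (assuming independence), and the
# fused posterior becomes the next interval's prior. Rates are illustrative.
def fuse_interval(prior, feature_posteriors):
    odds = prior / (1.0 - prior)
    for p in feature_posteriors:
        odds *= p / (1.0 - p)          # independent likelihood-ratio update
    return odds / (1.0 + odds)

prior = 0.5                             # uninformative prior at start-up
for interval in [[0.8, 0.7, 0.6], [0.9, 0.75, 0.55]]:
    prior = fuse_interval(prior, interval)
    print(f"posterior after interval: {prior:.3f}")
```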
209

The influence of human factors on user's preferences of web-based applications : a data mining approach

Clewley, Natalie Christine January 2010 (has links)
As the Web is fast becoming an integral feature in many of our daily lives, designers are faced with the challenge of designing Web-based applications for an increasingly diverse user group. In order to develop applications that successfully meet the needs of this user group, designers have to understand the influence of human factors upon users' needs and preferences. To address this issue, this thesis presents an investigation that analyses the influence of three human factors, namely cognitive style, prior knowledge and gender differences, on users' preferences for Web-based applications. In particular, two applications are studied: Web search tools and Web-based instruction tools. Previous research has suggested a number of relationships between these three human factors, so this thesis was driven by three research questions. Firstly, to what extent are the two cognitive style dimensions of Witkin's Field Dependence/Independence and Pask's Holism/Serialism similar? Secondly, to what extent do computer experts have the same preferences as Internet experts, and computer novices the same preferences as Internet novices? Finally, to what extent are Field Independent users, experts and males alike, and Field Dependent users, novices and females alike? As traditional statistical analysis methods would struggle to capture such relationships effectively, this thesis proposes an integrated data mining approach that combines feature selection and decision trees to capture users' preferences. From this, a framework is developed that integrates the combined effect of the three human factors and can be used to inform system designers. The findings suggest, firstly, that there are links between these three human factors. In terms of cognitive style, the relationship between Field Dependent users and Holists can be seen more clearly than the relationship between Field Independent users and Serialists. In terms of prior knowledge, although a link between computer experience and Internet experience is shown, computer experts are shown to have similar preferences to Internet novices. In terms of the relationship between all three human factors, the links between cognitive style and gender and between cognitive style and system experience were found to be stronger than the relationship between system experience and gender. This work contributes both theory and methodology to multiple academic communities, including human-computer interaction, information retrieval and data mining. In terms of theory, it has helped to deepen the understanding of the effects of single and multiple human factors on users' preferences for Web-based applications. In terms of methodology, an integrated data mining analysis approach was proposed and shown to be able to capture users' preferences.
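A minimal sketch of the combination the abstract describes — a feature-selection stage feeding a decision tree whose branches expose preference rules — is given below using scikit-learn. The column meanings, data and pipeline are invented assumptions for illustration, not the thesis's actual analysis.

```python
# Minimal sketch of an integrated feature-selection + decision-tree approach.
# Columns and data are invented; scikit-learn is an assumed tool.
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.pipeline import Pipeline

X = [[0, 1, 3], [1, 0, 1], [0, 1, 2], [1, 1, 0]]   # cognitive style, gender, experience
y = [0, 1, 0, 1]                                   # preferred application variant

model = Pipeline([
    ("select", SelectKBest(chi2, k=2)),            # keep the 2 most informative factors
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
])
model.fit(X, y)
print(export_text(model.named_steps["tree"]))      # human-readable preference rules
```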
210

Binding of visual features in human perception and memory

Jaswal, Snehlata January 2010 (has links)
The leitmotif of this thesis is that the binding of visual features is a process that begins with the input of stimulation and ends with the emergence of an object in working memory, where it can be further manipulated for higher cognitive processes. The primary focus was on the binding process from 0 to 2500 ms, with stimuli defined by location, colour, and shape. The initial experiments explored the relative roles of top-down and bottom-up factors. Task relevance was compared by asking participants to detect swaps in the bindings of two features in a change detection task, whilst the third feature was either unchanged or made irrelevant by randomization from study to test. The experiments also studied the differences among the three defining features across experiments in which each feature in turn was randomized whilst the binding between the other two was tested. Results showed that though features were processed on different time scales, they were treated in the same way by visual working memory processes: relevant features were consolidated and irrelevant features were inhibited. Later experiments confirmed that consolidation was aided by iconic memory and that the inhibitory process was primarily a post-perceptual active inhibition.
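The change-detection paradigm described above can be pictured with a toy trial generator: two features may swap their binding between study and test while a third, irrelevant feature is re-randomised. Display sizes, feature values and the choice of location as the irrelevant feature are invented, not the thesis's exact design.

```python
# Toy sketch of a binding change-detection trial: colour-shape bindings may
# swap between study and test, while the irrelevant feature (location here)
# is re-randomised from study to test. All parameters are invented.
import random

COLOURS = ["red", "green", "blue"]
SHAPES  = ["circle", "square", "triangle"]

def make_trial(swap=False):
    study = [{"loc": i, "colour": c, "shape": s}
             for i, (c, s) in enumerate(zip(random.sample(COLOURS, 2),
                                            random.sample(SHAPES, 2)))]
    test = [dict(item) for item in study]
    if swap:  # exchange the colour-shape bindings of the two items
        test[0]["colour"], test[1]["colour"] = test[1]["colour"], test[0]["colour"]
    for item in test:  # irrelevant feature re-randomised from study to test
        item["loc"] = random.randint(0, 9)
    return study, test

study, test = make_trial(swap=True)
print(study, test, sep="\n")
```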
