671

Artificial intelligence for the control of hybrid power filters

Van Schoor, George 15 August 2012 (has links)
D.Ing. / Training data are developed for an ANN controlling a laboratory-scale HPC. Special attention is given to the development of a cost function to determine the optimal state of the HPC for a particular input state. The cost function uses the reactive power compensation efficiency (η_Q), the distortion compensation efficiency (η_d) and the losses in the HPC (P_LHPC) as optimisation parameters. After a process of optimisation the ANN is trained with a randomised training set of size 2000. A 5:10:6 ANN topology, representing 5 input-layer neurons, 10 hidden-layer neurons and 6 output-layer neurons, is used. The optimisation results in shorter training times as well as more effective training. A laboratory-scale experiment, which demonstrates in practice that an ANN can make meaningful choices in terms of HPC control, is conducted. The adaptive behaviour of the ANN controller for the HPC is evaluated by means of the interactive integrated state-space model. It is found that the ANN controller can sensibly adapt its output under conditions of line-impedance change as well as load changes of users sharing the same point of coupling as the consumer being compensated. The conclusion from this research is that it is viable to apply AI in the control of an HPC. A non-linear, time-varying system such as this lends itself ideally to ANN control. The total cost of the HPC is expected to be minimised while minimum compensation standards are still maintained. The performance of such an ANN controller is, however, strongly dependent on the integrity of the training data. Using an actual system to set up the training data would be ideal for refining the ANN model. Devising a strategy to continually update the training of the ANN, so that it remains relevant across the controller's dynamic range, is recommended as an area for further research.
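The abstract gives enough detail to sketch the controller's shape. Below is a minimal illustrative sketch of the 5:10:6 feedforward network and a cost function combining the three stated optimisation parameters; the activation functions, weighting factors and all numeric values are assumptions, not taken from the thesis.

```python
# Minimal sketch (not the author's implementation): a 5:10:6 feedforward
# network and an illustrative HPC cost function. The weights w_q, w_d, w_p
# and the tanh activations are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# 5 input neurons -> 10 hidden neurons -> 6 output neurons, as in the abstract.
W1, b1 = rng.standard_normal((10, 5)), np.zeros(10)
W2, b2 = rng.standard_normal((6, 10)), np.zeros(6)

def forward(x):
    """Forward pass of the 5:10:6 ANN with tanh hidden activations."""
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2)

def hpc_cost(eta_q, eta_d, p_loss, w_q=1.0, w_d=1.0, w_p=0.1):
    """Illustrative cost: reward the two compensation efficiencies, penalise losses."""
    return -(w_q * eta_q + w_d * eta_d) + w_p * p_loss

x = rng.standard_normal(5)          # one input state of the HPC
print(forward(x))                   # candidate control outputs
print(hpc_cost(0.95, 0.90, 120.0))  # cost of a hypothetical HPC state
```

In the thesis the cost function is used to label the optimal HPC state for each input state when generating the randomised training set; the sketch above only shows the two ingredients, not that labelling loop.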
672

Applying Evolutionary Computation and Ensemble Approaches to Protein Contact Map and Protein Function Determination

Chapman, Samuel D. 13 January 2017 (has links)
Proteins are important biological molecules that perform many different functions in an organism. They are composed of sequences of amino acids that play a large part in determining both their structure and function. In turn, the structures of proteins are related to their functions. Using computational methods for protein study is a popular approach, offering the possibility of being faster and cheaper than experimental methods. These software-based methods can take information such as the protein sequence and other empirical data and output predictions such as protein structure or function.

In this work, we have developed a set of computational methods applied to protein structure prediction and protein function prediction. For protein structure prediction, we evolve logic circuits to produce classifiers that predict the protein contact map of a protein from high-dimensional feature data. The diversity of the evolved logic circuits allows ensembles of classifiers to be created, and the answers from these ensembles are combined to produce more accurate answers. We also apply a number of ensemble algorithms to our results.

Our protein function prediction work is based on six existing computational protein function prediction methods, of which four were optimized for use on a benchmark dataset, along with two others developed by collaborators. We used a similar ensemble framework, combining the answers from the six methods into an ensemble using an algorithm, CONS, that we helped develop.

Our contact map prediction study demonstrated that it is possible to evolve logic circuits for this purpose, and that ensembles of the classifiers improve performance. The results fell short of state-of-the-art methods, and additional ensemble algorithms failed to improve performance further. However, the method was also able to work as a feature detector, discovering salient features from the high-dimensional input data, a computationally intractable problem. In our protein function prediction work, the combination of methods similarly led to a robust ensemble. The CONS ensemble, while not performing as well as the best individual classifier in absolute terms, was nevertheless very close in performance. More intriguingly, there were many specific cases where it performed better than any single method, indicating that the ensemble provides valuable information not captured by any single method.

To our knowledge, this is the first time the evolution of logic circuits has been used on any bioinformatics problem, and it is expected that results will improve as the method matures. It is also expected that the feature-detection aspect of this method can be used in other studies. The function prediction study also marks, to our knowledge, the most comprehensive ensemble classification for protein function prediction. Finally, we expect that the ensemble classification methods used and developed in our protein structure and function work will pave the way towards stronger ensemble predictors in the future.
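The core idea — many diverse boolean-circuit classifiers combined by vote — can be sketched compactly. The circuit representation and gate set below are invented toy stand-ins, not the evolved circuits from the dissertation, and no evolution step is shown; only the ensemble voting is illustrated.

```python
# Minimal sketch (toy circuits, no evolutionary search): random boolean
# "circuits" over binary features, combined by majority vote as in the
# ensemble approach described above.
import numpy as np

rng = np.random.default_rng(1)

def make_circuit(n_features, n_gates=8):
    """A toy 'circuit': a chain of random gates, each combining the running
    value with one randomly chosen binary input feature."""
    gates = [(rng.choice(["and", "or", "xor"]), int(rng.integers(n_features)))
             for _ in range(n_gates)]
    def classify(x):
        val = bool(x[gates[0][1]])
        for op, i in gates:
            a = bool(x[i])
            if op == "and":
                val = val and a
            elif op == "or":
                val = val or a
            else:
                val = val != a
        return int(val)
    return classify

circuits = [make_circuit(n_features=20) for _ in range(25)]  # a diverse pool
x = rng.integers(0, 2, size=20)          # one binary feature vector
votes = sum(c(x) for c in circuits)      # each circuit casts one vote
print("contact" if votes > len(circuits) / 2 else "no contact")
```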
673

Organizational factors contributing to an effective information technology intelligence system.

Taskov, Konstantin 12 1900 (has links)
The purpose of this dissertation is to investigate the organizational factors that contribute to effective emerging information technology intelligence processes and products. Emerging information technology is defined as a technology that is little commercialized and currently adopted by no more than twenty percent of companies within a given industry. By definition, information technology intelligence is a subdivision of competitive intelligence and business intelligence. I discovered evidence that the information technology intelligence process includes assessment of consumers' information technology intelligence needs, collection of data from internal and external sources, analysis of the collected data, and distribution of the analyzed data to the consumers. Exploratory factor analysis confirmed the existence of all the variables in the proposed research model. I found empirical evidence that the final technology intelligence product contributes to better decisions by consumers, improved environmental scanning, and increased funding for information technology departments in organizations of different industries and sizes.
674

Creativity and the Guilford-Zimmerman Temperament Survey

Martin, Donald Wesley 08 1900 (has links)
The purposes of this study are as follows: 1) to investigate the similarities and differences in the temperaments of a higher creative group and a lower creative group and 2) to investigate the effectiveness of the Guilford-Zimmerman Temperament Survey in identifying higher creative individuals and lower creative individuals, as measured by the AC Test of Creative Ability.
675

Local search algorithms for geometric object recognition: Optimal correspondence and pose

Beveridge, J. Ross 01 January 1993 (has links)
Recognizing an object by its shape is a fundamental problem in computer vision, and typically involves finding a discrete correspondence between object model and image features as well as the pose--position and orientation--of the camera relative to the object. This thesis presents new algorithms for finding the optimal correspondence and pose of a rigid 3D object. They utilize new techniques for evaluating geometric matches and for searching the combinatorial space of possible matches. An efficient closed-form technique for computing pose under weak-perspective (four-parameter 2D affine) is presented, and an iterative non-linear 3D pose algorithm is used to support matching under full 3D perspective. A match error ranks matches by summing a fit error, which measures the quality of the spatial fit between corresponding line segments forming an object model and line segments extracted from an image, and an omission error, which penalizes matches that leave portions of the model omitted or unmatched. Inclusion of omission is crucial to success when matching to corrupted and partial image data. New optimal matching algorithms use a form of combinatorial optimization called local search, which relies on iterative improvement and random sampling to probabilistically find globally optimal matches. A novel variant, subset-convergent local search, has been developed; it finds optimal matches with high probability on problems known to be difficult for other techniques. Specifically, it does well on a test suite of highly fragmented and cluttered data, symmetric object models, and multiple model instances. The problem search space grows exponentially in the number of potentially paired features n, yet empirical performance suggests computation is bounded by $n^2$. Using the 3D pose algorithm during matching, local search solves problems involving significant amounts of 3D perspective; no previous work on geometric matching has generalized in this way. Our hybrid algorithm combines the closed-form weak-perspective pose and iterative 3D pose algorithms to efficiently solve matching problems involving perspective. For robot navigation, this algorithm recognizes 3D landmarks, and thereby permits a mobile robot to successfully update its estimated pose relative to these landmarks.
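The match-error structure (fit plus omission) and the local-search strategy (iterative improvement with random restarts) can be sketched generically. The fit costs, omission penalty and problem sizes below are hypothetical placeholders; in the thesis the fit error comes from projecting the model under an estimated pose, which is omitted here.

```python
# Minimal sketch (hypothetical data and error terms): local search over
# model-to-image line-segment correspondences, minimising a match error
# that sums a spatial-fit term and an omission penalty, with random
# restarts for probabilistic global optimisation.
import random

random.seed(0)
N_MODEL, N_IMAGE = 6, 10
# fit_cost[m][i]: assumed precomputed fit error of pairing model segment m
# with image segment i (derived from pose and projection in the thesis).
fit_cost = [[random.random() for _ in range(N_IMAGE)] for _ in range(N_MODEL)]
OMIT = 0.8  # penalty for leaving a model segment unmatched

def match_error(match):
    """match[m] is an image-segment index, or None if segment m is omitted."""
    return sum(OMIT if i is None else fit_cost[m][i]
               for m, i in enumerate(match))

def local_search(restarts=20):
    best, best_err = None, float("inf")
    for _ in range(restarts):                      # random sampling
        match = [random.choice([None] + list(range(N_IMAGE)))
                 for _ in range(N_MODEL)]
        improved = True
        while improved:                            # iterative improvement
            improved = False
            for m in range(N_MODEL):
                for i in [None] + list(range(N_IMAGE)):
                    trial = match[:m] + [i] + match[m + 1:]
                    if match_error(trial) < match_error(match):
                        match, improved = trial, True
        err = match_error(match)
        if err < best_err:
            best, best_err = match, err
    return best, best_err

print(local_search())
```

Subset-convergent local search refines this basic scheme; the sketch shows only plain iterative improvement with restarts.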
676

Information extraction as a basis for portable text classification systems

Riloff, Ellen Michele 01 January 1994 (has links)
Knowledge-based natural language processing systems have achieved considerable success on many tasks, but they often require many person-months of effort to build an appropriate knowledge base. As a result, they are not portable across domains. This knowledge-engineering bottleneck must be addressed before knowledge-based systems will be practical for real-world applications. This dissertation addresses the knowledge-engineering bottleneck for a natural language processing task called "information extraction". A system called AutoSlog is presented that automatically constructs dictionaries for information extraction, given an appropriate training corpus. In the domain of terrorism, AutoSlog created a dictionary from a training corpus with five person-hours of effort that achieved 98% of the performance of a hand-crafted dictionary that took approximately 1500 person-hours to build. This dissertation also describes three algorithms that use information extraction to support high-precision text classification. As more information becomes available on-line, intelligent information retrieval will be crucial for navigating the information highway efficiently and effectively. The approach presented here represents a compromise between keyword-based techniques and in-depth natural language processing. The text classification algorithms classify texts with high accuracy by using an underlying information extraction system to represent linguistic phrases and contexts. Experiments in the terrorism domain suggest that increasing the amount of linguistic context can improve performance. Both AutoSlog and the text classification algorithms are evaluated in three domains: terrorism, joint ventures, and microelectronics. An important aspect of this dissertation is that AutoSlog and the text classification systems can be easily ported across domains.
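The flavor of corpus-driven dictionary construction can be conveyed with a toy example: record the linguistic context around a tagged answer phrase, keep it as an extraction pattern, and apply the learned patterns to new text. The single passive-verb heuristic and the sentences below are invented for illustration; AutoSlog's actual heuristics cover many more syntactic contexts.

```python
# Minimal sketch (toy heuristic, not AutoSlog itself): learn extraction
# patterns from training examples by recording the verb context of a
# tagged answer phrase, then apply the learned dictionary to new text.
import re

training = [
    ("the embassy was bombed by terrorists", "the embassy"),
    ("a bank was bombed yesterday", "a bank"),
]

def learn_pattern(sentence, answer):
    """If the answer is the subject of a passive verb, keep '<x> was VERB'."""
    m = re.match(re.escape(answer) + r" was (\w+)", sentence)
    return f"was {m.group(1)}" if m else None

dictionary = {p for s, a in training if (p := learn_pattern(s, a))}

def extract(sentence):
    """Return phrases that fill any learned pattern in a new sentence."""
    return [m.group(1) for p in dictionary
            for m in re.finditer(r"(\w+(?: \w+)?) " + re.escape(p), sentence)]

print(dictionary)                               # {'was bombed'}
print(extract("the station was bombed at dawn"))  # ['the station']
```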
677

Large-scale dynamic optimization using teams of reinforcement learning agents

Crites, Robert Harry 01 January 1996 (has links)
Recent algorithmic and theoretical advances in reinforcement learning (RL) are attracting widespread interest. RL algorithms have appeared that approximate dynamic programming (DP) on an incremental basis. Unlike traditional DP algorithms, these algorithms do not require knowledge of the state transition probabilities or reward structure of a system. This allows them to be trained using real or simulated experiences, focusing their computations on the areas of state space that are actually visited during control, making them computationally tractable on very large problems. RL algorithms can be used as components of multi-agent algorithms. If each member of a team of agents employs one of these algorithms, a new collective learning algorithm emerges for the team as a whole. In this dissertation we demonstrate that such collective RL algorithms can be powerful heuristic methods for addressing large-scale control problems. Elevator group control serves as our primary testbed. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable and they are non-stationary due to changing passenger arrival rates. As a way of streamlining the search through policy space, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale stochastic dynamic optimization problem of practical utility.
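The team architecture — one learning agent per elevator car, all updated from a shared global signal — can be sketched with tabular Q-learning. The state and action spaces, reward, and transition rule below are toy stand-ins; the dissertation uses neural-network function approximation and a discrete-event elevator simulator rather than this tabular loop.

```python
# Minimal sketch (toy state/action spaces, hypothetical reward): a team of
# tabular Q-learning agents, one per elevator car, each updating from the
# same global reinforcement signal as in the approach described above.
import random
from collections import defaultdict

random.seed(0)
ACTIONS = ["up", "down", "stop"]
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

class CarAgent:
    def __init__(self):
        self.q = defaultdict(float)            # (state, action) -> value

    def act(self, state):
        if random.random() < EPS:              # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, s, a, global_reward, s_next):
        target = global_reward + GAMMA * max(self.q[(s_next, b)] for b in ACTIONS)
        self.q[(s, a)] += ALPHA * (target - self.q[(s, a)])

team = [CarAgent() for _ in range(4)]          # four elevator cars
state = 0
for step in range(1000):
    actions = [agent.act(state) for agent in team]
    reward = -random.random()                  # stand-in: negative waiting time
    next_state = (state + 1) % 10              # stand-in for simulator dynamics
    for agent, a in zip(team, actions):        # one global signal, all agents
        agent.update(state, a, reward, next_state)
    state = next_state
```

The key point the sketch preserves is that each agent sees the same global reward, which therefore looks noisy from any single agent's perspective.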
678

Three-dimensional reconstruction under varying constraints on camera geometry for robotic navigation scenarios

Zhang, Zhongfei 01 January 1996 (has links)
3D reconstruction is an important research area in computer vision. Given the wide spectrum of camera-geometry constraints, a general solution remains open. In this dissertation, the topic of 3D reconstruction is addressed under several special constraints on camera geometry, and the 3D reconstruction techniques developed under these constraints are applied to a robotic navigation scenario. The robotic navigation problems addressed include automatic camera calibration, visual servoing for navigation control, obstacle detection, and 3D model acquisition and extension. The problem of visual servoing control is investigated under the assumption of a structured environment where parallel path boundaries exist. A visual servoing control algorithm has been developed based on geometric variables extracted from this structured environment. This algorithm has been used for both automatic camera calibration and navigation servoing control, with close-to-real-time performance. The problem of qualitative and quantitative obstacle detection is addressed through three proposed algorithms. The first two are purely qualitative in the sense that they only return yes/no answers; the third is quantitative in that it recovers height information for all points in the scene. Three different constraints on camera geometry are employed: the first algorithm assumes known relative pose between cameras, the second is based on completely unknown camera relative pose, and the third assumes partial calibration. Experimental results are presented for real and simulated data, and the performance of the three algorithms under different noise levels is compared in simulation. Finally, the problem of model acquisition and extension is studied by proposing a 3D reconstruction algorithm using homography mapping. It is shown that, given four coplanar correspondences, 3D structure can be recovered up to two solutions and with only one uniform scale factor: the distance from the camera center to the 3D plane formed by the four 3D points corresponding to the given correspondences in the two camera planes. It is also shown that this algorithm is optimal in terms of the minimum number of required correspondences and in terms of the assumption of internal calibration.
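The starting point of the homography-based reconstruction — four coplanar point correspondences determine a 3x3 homography — can be shown with the standard direct linear transform. The points below are hypothetical; the thesis's subsequent decomposition of H into the two structure solutions is not shown.

```python
# Minimal sketch (standard DLT, hypothetical points): estimating the
# homography induced by four coplanar point correspondences, the input
# from which the reconstruction algorithm above starts.
import numpy as np

def homography_from_4pts(src, dst):
    """Direct linear transform: solve Ah = 0 for the 3x3 homography H."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    h = vt[-1]                       # null-space vector of the 8x9 system
    return (h / h[-1]).reshape(3, 3)

src = [(0, 0), (1, 0), (1, 1), (0, 1)]                   # plane points, image 1
dst = [(0.1, 0.2), (1.2, 0.1), (1.3, 1.1), (0.0, 1.0)]   # same points, image 2
H = homography_from_4pts(src, dst)
p = H @ np.array([1.0, 1.0, 1.0])
print(p[:2] / p[2])                  # maps (1, 1) to approximately (1.3, 1.1)
```

Four correspondences give eight linear constraints on the nine entries of H, which is exactly enough to fix H up to scale — consistent with the abstract's claim that four coplanar correspondences are the minimum required.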
679

On integrating apprentice learning and reinforcement learning

Clouse, Jeffery Allen 01 January 1996 (has links)
Apprentice learning and reinforcement learning are methods that have each been developed to endow computerized agents with the capacity to learn to perform multiple-step tasks, such as problem-solving tasks and control tasks. To achieve this end, each method takes a different approach, with disparate assumptions, objectives, and algorithms. In apprentice learning, the autonomous agent tries to mimic a training agent's problem-solving behavior, learning from examples of the trainer's action choices. In an attempt to learn to perform its task optimally, the learner in reinforcement learning changes its behavior based on scalar feedback about the consequences of its own actions. We demonstrate that a careful integration of the two learning methods can produce a more powerful method than either one alone. An argument based on the characteristics of the individual methods maintains that a hybrid will be an improvement because of the complementary strengths of its constituents. Although existing hybrids of apprentice learning and reinforcement learning perform better than their individual components, those hybrids have left many questions unanswered. We consider the following questions in this dissertation. How do the learner and trainer interact during training? How does the learner assimilate the trainer's expertise? How does the proficiency of the trainer affect the learner's ability to perform the task? And when during training should the learner acquire information from the trainer? In our quest for answers, we develop the ASK FOR HELP integrated approach and use it in our empirical study. With the new integrated approach, the learning agent is significantly faster at learning to perform optimally than learners employing either apprentice learning alone or reinforcement learning alone. The study further indicates that the learner can learn to perform optimally even when its trainer cannot; thus, the learner can outperform its trainer. Two strategies for determining when to acquire the trainer's aid show that simple approaches work well. The results of the study demonstrate that the ASK FOR HELP approach is effective for integrating apprentice learning and reinforcement learning, and support the conclusion that an integrated approach can be better than its individual components.
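One plausible reading of the integration is a Q-learner that defers to the trainer only when its own value estimates are too close to distinguish. The uncertainty rule, task, and trainer policy below are all assumptions for illustration, not the dissertation's actual ASK FOR HELP mechanism.

```python
# Minimal sketch (hypothetical uncertainty rule): a Q-learning agent that
# asks a trainer for an action when its own Q-values are nearly tied,
# illustrating one possible "ask for help" integration strategy.
import random

random.seed(0)
ACTIONS = [0, 1]                         # 0 = left, 1 = right on a 5-state corridor
ALPHA, GAMMA, TIE = 0.1, 0.9, 0.05

q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

def trainer_policy(state):
    """Stand-in trainer: always moves right; a real trainer may be imperfect."""
    return 1

def choose(state):
    values = [q[(state, a)] for a in ACTIONS]
    if max(values) - min(values) < TIE:  # uncertain: defer to the trainer
        return trainer_policy(state)
    return max(ACTIONS, key=lambda a: q[(state, a)])  # confident: act alone

state = 0
for step in range(500):
    a = choose(state)
    next_state = min(state + 1, 4) if a == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == 4 else 0.0          # goal at the right end
    q[(state, a)] += ALPHA * (reward + GAMMA * max(q[(next_state, b)]
                                                   for b in ACTIONS) - q[(state, a)])
    state = 0 if next_state == 4 else next_state      # restart after the goal
```

Because the trainer's advice only seeds early action choices while the Q-update continues from the agent's own experience, the learner can eventually exceed an imperfect trainer, matching the abstract's finding.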
680

A trainable approach to coreference resolution for information extraction

McCarthy, Joseph Francis 01 January 1996 (has links)
This dissertation presents a new approach to solving the coreference resolution problem for a natural language processing (NLP) task known as information extraction. It describes a new system, named RESOLVE, that uses machine learning techniques to determine when two phrases in a text co-refer, i.e., refer to the same thing. RESOLVE can be used as a component within an information extraction system--a system that extracts information automatically from a corpus of texts that all focus on the same topic area--or as a stand-alone system to evaluate the relative contribution of different types of knowledge to the coreference resolution process. RESOLVE represents an improvement over previous approaches to the coreference resolution problem in that it uses a machine learning algorithm to handle some of the work that had previously been performed manually by a knowledge engineer. RESOLVE can achieve performance as good as that of a manually constructed system for the same task, when both systems are given access to the same knowledge and tested on the same data. The machine learning algorithm used by RESOLVE can be given access to different types of knowledge, some portions of which are very specific to a particular topic area or domain, while other portions are more general or domain-independent. An ablation experiment shows that domain-specific knowledge is very important to coreference resolution: the performance degradation when the domain-specific features are disabled is significantly worse than when a similarly sized set of domain-independent features is disabled. However, even though domain-specific knowledge is important for coreference resolution, domain-independent features alone enable RESOLVE to achieve 80% of the performance it achieves when domain-specific features are available. One explanation for why domain-independent knowledge can be used so effectively is illustrated in another domain, where the machine learning algorithm discovers domain-specific knowledge by assembling the domain-independent features into domain-specific patterns. This ability of RESOLVE to compensate for missing or insufficient domain-specific knowledge is a significant advantage for redeploying the system in new domains.
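The learned-classifier formulation — a decision tree over features of a phrase pair — can be sketched briefly. The feature set and training pairs below are invented, and scikit-learn's decision tree stands in for the dissertation's own learner; only the overall shape of the approach is illustrated.

```python
# Minimal sketch (invented features and toy data): a decision-tree
# classifier over domain-independent phrase-pair features, in the spirit
# of learning coreference decisions rather than hand-coding them.
from sklearn.tree import DecisionTreeClassifier

# Features per phrase pair: [same head noun, number agreement, both proper names]
X = [
    [1, 1, 0],   # "the attack" / "the attack"    -> coreferent
    [0, 1, 1],   # "John Smith" / "Mr. Smith"     -> coreferent
    [0, 0, 0],   # "the bomb" / "the victims"     -> not coreferent
    [0, 1, 0],   # "a device" / "the explosion"   -> not coreferent
]
y = [1, 1, 0, 0]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[1, 1, 0]]))      # unseen pair with matching head nouns
```

The tree-learning step is where the abstract's observation applies: given only domain-independent features, the learner can still assemble them into patterns that behave like domain-specific knowledge.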
