1001

Adaptive game AI

Spronck, Pieter Hubert Marie. January 2005 (has links)
Doctoral thesis, Universiteit Maastricht. / With index and bibliography. - With a summary in Dutch.
1002

Diffusion approximations for three-stage transfer lines with unreliable machines and finite buffers

January 1982 (has links)
by David A. Castanon, Bernard C. Levy, Stanley B. Gershwin. / "August, 1982." / Bibliography: leaf [2]
1003

Machine Learning Approaches to Modeling the Physiochemical Properties of Small Peptides

Jensen, Kyle, Styczynski, Mark, Stephanopoulos, Gregory 01 1900 (has links)
Peptide and protein sequences are most commonly represented as strings: a series of letters selected from the twenty-character alphabet of abbreviations for the naturally occurring amino acids. Here, we experiment with representations of small peptide sequences that incorporate more physiochemical information. Specifically, we develop three different physiochemical representations for a set of roughly 700 HIV-1 protease substrates. These representations are used as input to an array of six different machine learning models, which predict whether or not a given peptide is likely to be an acceptable substrate for the protease. Our results show that, in general, higher-dimensional physiochemical representations tend to perform better than representations incorporating fewer dimensions selected on the basis of high information content. We contend that such representations are more biologically relevant than simple string-based representations and are likely to capture more accurately those peptide characteristics that are functionally important. / Singapore-MIT Alliance (SMA)
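The representational idea is easy to sketch. Below is a minimal, hypothetical Python example of a low-dimensional physiochemical encoding: each residue is mapped to a handful of property indicators rather than a 20-way one-hot vector. The property groupings are standard amino-acid classes chosen for illustration only; the thesis's actual descriptors and trained models are not reproduced here.

```python
# Minimal sketch: encode a peptide with physicochemical class indicators
# instead of a plain one-hot string encoding. The groupings below are
# standard amino-acid classes, used purely for illustration.
CHARGED = set("DEKRH")
AROMATIC = set("FWY")
ALIPHATIC = set("AVLIM")

def encode(peptide: str) -> list[float]:
    """Per-residue 3-dimensional physicochemical encoding, flattened."""
    feats = []
    for aa in peptide.upper():
        feats += [float(aa in CHARGED),
                  float(aa in AROMATIC),
                  float(aa in ALIPHATIC)]
    return feats

# Example: an 8-residue substrate window (a known HIV-1 protease
# cleavage-site sequence from Gag).
print(encode("SQNYPIVQ"))  # 24 numbers instead of a 20x8 one-hot block
```

A vector like this feeds directly into any of the standard classifiers the abstract mentions; the trade-off it illustrates is dimensionality versus information content per dimension.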
1004

Étude analytique du fonctionnement de la machine synchrone autopilotée à double étoile à commutation naturelle de courant [Analytical study of the operation of the self-controlled double-star synchronous machine with natural current commutation].

Moustafa, Ehab Mohamed, January 1900 (has links)
Doctoral engineering thesis--Electrical engineering--Grenoble--I.N.P., 1982. No.: DI 259.
1005

Traction machine winding and magnet design for electric vehicles

Niu, Xin January 2017 (has links)
This research addresses several design aspects of traction machines. The effect of multiphase design on permanent magnet (PM) machines was investigated. The electromagnetic characteristics of both 3-phase and 9-phase machines, along with different magnet designs, were simulated and analyzed using a program developed during the project; the software used was FEMM and MATLAB. Iron loss for the different designs was calculated from the analytical flux density obtained by a 2-D stepping FEA method. Harmonics of the flux waveform and the rotating field were also considered for different areas of the machine models. The predictions were compared with open-circuit experimental data. The simulation results showed a minimum 4% torque gain and noticeably lower torque ripple for the 9-phase machine compared with the 3-phase one at the same excitation phase current. For the embedded-magnet rotor design, it is suggested that the demagnetization of each magnet be monitored closely, since some areas of a magnet can be demagnetized even when its working point is well away from the nonlinear region of its characteristic. The 9-phase model produced about 6% less iron loss than the 3-phase model. The implemented method for calculating iron loss was more accurate below 3500 rpm rotor speed compared with other approaches.
1006

JAVA VIRTUAL MACHINE DESIGN FOR EMBEDDED SYSTEMS: ENERGY, TIME PREDICTABILITY AND PERFORMANCE

Sun, Yu 01 December 2010 (has links)
Embedded systems can be found everywhere in our daily lives. Due to the great variety of embedded devices, the platform-independent Java language provides a good solution for embedded system development. The Java virtual machine (JVM) is the most critical component of all kinds of Java platforms, so it is extremely important to study JVM designs tailored to embedded systems. The key challenges in designing a successful JVM for embedded systems are energy efficiency, time predictability and performance, each of which is investigated in this dissertation. We first study the energy behavior of the JVM on embedded systems. With a cycle-accurate simulator, we study each stage of Java execution separately to test the effects of different configurations in both software and hardware. After that, an alternative Adaptive Optimization System (AOS) model is introduced, which estimates cost/benefit using energy data instead of running time. We tune the parameters of this model to study how to improve dynamic compilation and optimization in Jikes RVM in terms of energy consumption. To further reduce the energy dissipation of the JVM on embedded systems, we study adaptive drowsy cache control for Java applications, where the JVM can be used to make better decisions on drowsy cache control. We explore the impact of different phases of Java applications on the timing behavior of cache usage, then propose several techniques to adaptively control the drowsy cache to reduce energy consumption with minimal impact on performance. We also observe that the traditional Java code generation and instruction fetch paths are not efficient, so we study three hardware-based code caching strategies, which attempt to write and read dynamically generated Java code faster and more energy-efficiently. Time predictability is another key challenge for the JVM on embedded systems, so we exploit multicore computing to reduce the timing unpredictability caused by dynamic compilation and adaptive optimization. Our goal is to retain high performance, comparable to that of traditional dynamic compilation, and at the same time obtain better time predictability for the JVM. We study pre-compilation techniques to utilize another core more efficiently. Furthermore, we develop a Pre-optimization on Another Core (PoAC) scheme to replace AOS in Jikes RVM, which is very sensitive to execution time variation and greatly impacts time predictability. Finally, we propose two new approaches that automatically parallelize Java programs at run time to meet the performance challenge of the JVM on embedded systems. These approaches rely on run-time trace information collected during program execution, and dynamically recompile Java bytecode so that it can be executed in parallel. One approach utilizes trace information to improve traditional loop parallelization, and the other parallelizes traces instead of loop iterations.
1007

Delving deep into fetal neurosonography : an image analysis approach

Huang, Ruobing January 2017 (has links)
Ultrasound screening has been used for decades as the main modality to examine fetal brain development and to diagnose possible anomalies. However, basic clinical ultrasound examination of the fetal head is limited to axial planes of the brain and linear measurements, which may have limited its potential and efficacy. The recent introduction of three-dimensional (3D) ultrasound provides the opportunity to navigate to different anatomical planes and to evaluate structures in 3D within the developing brain. Regardless of the acquisition method, interpreting 2D/3D ultrasound fetal brain images requires considerable skill and time. In this thesis, a series of automatic image analysis algorithms are proposed that exploit the rich sonographic patterns captured by the scans and help to simplify clinical examination. The original contributions include:
1. An original skull detection method for 3D ultrasound images, which achieves a mean accuracy of 2.2 ± 1.6 mm compared with the ground truth (GT). In addition, the algorithm is used for accurate automated measurement of essential biometry in standard examinations: biparietal diameter (mean accuracy: 2.1 ± 1.4 mm) and head circumference (mean accuracy: 4.5 ± 3.7 mm).
2. A plane detection algorithm that automatically extracts the mid-sagittal plane, providing visualization of midline structures that are crucial for assessing central nervous system malformations. The automated planes are in accordance with manual ones (within 3.0 ± 3.5°).
3. A general segmentation framework for delineating fetal brain structures in 2D images. The automatically generated predictions agree with the manual delineations (mean Dice similarity coefficient: 0.79 ± 0.07). As a by-product, the algorithm generates automated biometry. The results might be further utilized for morphological evaluation in future research.
4. An efficient localization model able to pinpoint the 3D locations of five key brain structures examined in a routine clinical examination. The predictions correlate with the ground truth: the average centre deviation is 1.8 ± 1.4 mm, and the size difference between them is 1.9 ± 1.5 mm. This model may greatly reduce the time required for routine examination in clinical practice.
5. A 3D affine registration pipeline. Leveraging the power of convolutional neural networks, the model takes raw 3D brain images as input and geometrically transforms fetal brains into a unified coordinate system (proposed as a Fetal Brain Talairach system).
The integration of these algorithms into computer-assisted analysis tools may greatly reduce the time and effort clinicians need to evaluate 3D fetal neurosonography. Furthermore, they will assist understanding of fetal brain maturation by distilling 2D/3D information directly from the uterus.
1008

Local learning by partitioning

Wang, Joseph 12 March 2016 (has links)
In many machine learning applications, data is assumed to be locally simple, where examples near each other have similar characteristics such as class labels or regression responses. Our goal is to exploit this assumption to construct locally simple yet globally complex systems that improve performance or reduce the cost of common machine learning tasks. To this end, we address three main problems: discovering and separating local non-linear structure in high-dimensional data, learning low-complexity local systems to improve performance of risk-based learning tasks, and exploiting local similarity to reduce the test-time cost of learning algorithms. First, we develop a structure-based similarity metric, where low-dimensional non-linear structure is captured by solving a non-linear, low-rank representation problem. We show that this problem can be kernelized, has a closed-form solution, naturally separates independent manifolds, and is robust to noise. Experimental results indicate that incorporating this structural similarity in well-studied problems such as clustering, anomaly detection, and classification improves performance. Next, we address the problem of local learning, where a partitioning function divides the feature space into regions in which independent functions are applied. We focus on the problem of local linear classification using linear partitioning and local decision functions. Under an alternating minimization scheme, learning the partitioning functions can be reduced to solving a weighted supervised learning problem. We then present a novel reformulation that yields a globally convex surrogate, allowing for efficient, joint training of the partitioning functions and local classifiers. We then examine the problem of learning under test-time budgets, where acquiring sensors (features) for each example at test time has a cost. Our goal is to partition the space into regions, with only a small subset of sensors needed in each region, reducing the average number of sensors required per example. Starting with a cascade structure and expanding to binary trees, we formulate this problem as an empirical risk minimization and construct an upper-bounding surrogate that allows sequential decision functions to be trained jointly by solving a linear program. Finally, we present preliminary work extending the notion of test-time budgets to the problem of adaptive privacy.
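As a rough illustration of the local-learning setup described above, the sketch below (assuming numpy and scikit-learn are available) partitions the feature space and fits an independent linear classifier per region. It uses k-means as a stand-in for the learned partitioning functions; the thesis trains the partition and the local classifiers jointly via an alternating scheme and a convex surrogate, so this is a conceptual sketch, not the thesis's algorithm.

```python
# Local linear classification sketch: partition with k-means (a stand-in
# for learned partitioning functions), then one linear model per region.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fit_local(X, y, n_regions=4, seed=0):
    part = KMeans(n_clusters=n_regions, n_init=10, random_state=seed).fit(X)
    models = {}
    for r in range(n_regions):
        mask = part.labels_ == r
        if mask.any() and len(np.unique(y[mask])) > 1:
            models[r] = LogisticRegression().fit(X[mask], y[mask])
    return part, models

def predict_local(part, models, X, default=0):
    regions = part.predict(X)
    preds = np.full(len(X), default)  # fall back to a default label
    for r in set(regions):
        if r in models:
            idx = regions == r
            preds[idx] = models[r].predict(X[idx])
    return preds

# XOR-like data: not linearly separable globally, but locally linear.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (np.sign(X[:, 0]) == np.sign(X[:, 1])).astype(int)
part, models = fit_local(X, y)
print((predict_local(part, models, X) == y).mean())
```

The XOR-like example is exactly the kind of globally complex, locally simple problem the abstract has in mind: no single linear classifier separates it, but one per quadrant does.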
1009

Crystallization properties of molecular materials : prediction and rule extraction by machine learning

Wicker, Jerome January 2017 (has links)
Crystallization is an increasingly important process in a variety of applications, from drug development to single-crystal X-ray diffraction structure determination. However, while there is a good deal of research into the prediction of molecular crystal structures, the factors that make a molecule crystallizable have so far remained poorly understood. The aim of this project was to answer the seemingly straightforward question: can we predict how easily a molecule will crystallize? The Cambridge Structural Database contains almost a million examples of materials from the scientific literature that have crystallized. Models for predicting the crystallization propensity of organic molecular materials were developed by training machine learning algorithms on carefully curated sets of molecules, extracted from a database of commercially available molecules, which are either observed or not observed to crystallize. The models were validated computationally and experimentally, while feature extraction methods and high-resolution powder diffraction studies were used to understand the molecular and structural features that determine the ease of crystallization. This led to the development of a new molecular descriptor which encodes information about the conformational flexibility of a molecule. The best models gave error rates of less than 5% for both cross-validation data and previously unseen test data, demonstrating that crystallization propensity can be predicted with a high degree of accuracy. Molecular size, flexibility and nitrogen atom environments were found to be the most influential factors in determining the ease of crystallization, while microstructural features determined by powder diffraction showed almost no correlation with the model predictions. Further predictions on co-crystals show scope for extending the methodology to other relevant applications.
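A hedged sketch of this kind of pipeline: compute simple size- and flexibility-related descriptors with RDKit (assumed available) and train an off-the-shelf classifier on molecules labelled by whether they were observed to crystallize. The SMILES strings and labels below are toy placeholders; the thesis's curated CSD-derived datasets and its bespoke conformational-flexibility descriptor are not reproduced here.

```python
# Descriptor-based crystallizability classification sketch.
# Labels below are illustrative placeholders, not real observations.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

def descriptors(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol),               # molecular size
            Descriptors.NumRotatableBonds(mol),   # flexibility proxy
            Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]

# Hypothetical toy labels: 1 = observed to crystallize, 0 = not.
X = [descriptors(s) for s in
     ["c1ccccc1C(=O)O", "CCCCCCCCCCO", "CC(=O)Nc1ccc(O)cc1"]]
y = [1, 0, 1]
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([descriptors("c1ccccc1O")]))
```

The descriptor choice mirrors the abstract's finding that size and flexibility dominate; a production model would use a far richer descriptor set and properly curated positive/negative sets.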
1010

Bayesian matrix factorisation : inference, priors, and data integration

Brouwer, Thomas Alexander January 2017 (has links)
In recent years the amount of biological data has increased exponentially. Most of these data can be represented as matrices relating two different entity types, such as drug-target interactions (relating drugs to protein targets), gene expression profiles (relating drugs or cell lines to genes), and drug sensitivity values (relating drugs to cell lines). Not only is the size of these datasets increasing, but so is the number of different entity types that they relate. Furthermore, not all values in these datasets are typically observed, and some datasets are very sparse. Matrix factorisation is a popular group of methods that can be used to analyse these matrices. The idea is that each matrix can be decomposed into two or more smaller matrices, such that their product approximates the original one. This factorisation of the data reveals patterns in the matrix, and gives us a lower-dimensional representation. Not only can we use this technique to identify clusters and other biological signals, we can also predict the unobserved entries, allowing us to prune biological experiments. In this thesis we introduce and explore several Bayesian matrix factorisation models, focusing on how best to use them for predicting these missing values in biological datasets. Our main hypothesis is that matrix factorisation methods, and in particular Bayesian variants, are an extremely powerful paradigm for predicting values in biological datasets, as well as in other applications, especially for sparse and noisy data. We demonstrate the competitiveness of these approaches compared with other state-of-the-art methods, and explore the conditions under which they perform best. We consider several aspects of the Bayesian approach to matrix factorisation. Firstly, the effect of the inference approach used to find the factorisation on predictive performance. Secondly, we identify different likelihood and Bayesian prior choices that we can use for these models, and explore when they are most appropriate. Finally, we introduce a Bayesian matrix factorisation model that can be used to integrate multiple biological datasets, and hence improve predictions. This model combines different matrix factorisation models and Bayesian priors in a hybrid fashion. Through these models and experiments we support our hypothesis and provide novel insights into the best ways to use Bayesian matrix factorisation methods for predictive purposes.
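The core decomposition idea can be sketched in a few lines of numpy: approximate a partially observed matrix R as the product of two low-rank factors and read predictions for the missing cells off the reconstruction. The sketch below uses point estimates fitted by stochastic gradient descent for brevity; the thesis's Bayesian variants instead place priors on the factors and infer posteriors.

```python
# Matrix factorisation with missing entries: fit U (n x k) and V (m x k)
# so that U @ V.T matches the observed cells of R; NaNs mark missing ones.
import numpy as np

def factorise(R, k=2, lr=0.01, reg=0.1, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.normal(0.0, 0.1, (n, k))
    V = rng.normal(0.0, 0.1, (m, k))
    rows, cols = np.where(~np.isnan(R))   # indices of observed entries
    for _ in range(iters):
        i = rng.integers(len(rows))       # sample one observed entry
        r, c = rows[i], cols[i]
        err = R[r, c] - U[r] @ V[c]
        U[r] += lr * (err * V[c] - reg * U[r])
        V[c] += lr * (err * U[r] - reg * V[c])
    return U, V

# Toy example: a 3x3 matrix with three unobserved cells.
R = np.array([[5.0, np.nan, 1.0],
              [4.0, 1.0, np.nan],
              [np.nan, 1.0, 5.0]])
U, V = factorise(R)
print(np.round(U @ V.T, 2))  # NaN cells are now filled with predictions
```

Summing the squared error only over observed entries is what lets the model "prune experiments": the reconstruction generalises to the cells that were never measured.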
