241

An evolutionary method for training autoencoders for deep learning networks

Lander, Sean 18 November 2016 (has links)
Introduced in 2006, Deep Learning has made large strides in both supervised and unsupervised learning. The abilities of Deep Learning have been shown to beat both generic and highly specialized classification and clustering techniques with little change to the underlying concept of a multi-layer perceptron. Though this has caused a resurgence of interest in neural networks, many of the drawbacks and pitfalls of such systems have yet to be addressed after nearly 30 years: speed of training, local minima and manual tuning of hyper-parameters.

In this thesis we propose using an evolutionary technique in order to work toward solving these issues and to increase the overall quality and abilities of Deep Learning networks. By evolving a population of autoencoders for input reconstruction, we abstract multiple features for each autoencoder in the form of hidden nodes, score the autoencoders on their ability to reconstruct their input, and finally select autoencoders for crossover and mutation with hidden nodes as the chromosome. In this way we are able not only to quickly find optimal abstracted feature sets but also to optimize the structure of the autoencoder to match the features being selected. This also allows us to experiment with different training methods with respect to data partitioning and selection, reducing overall training time drastically for large and complex datasets. The proposed method allows even large datasets to be trained quickly and efficiently with little manual parameter choice required of the user, leading to faster, more accurate creation of Deep Learning networks.
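A minimal sketch of the kind of evolutionary loop the abstract describes. This is an illustration, not the thesis's implementation: the tied-weight autoencoder, truncation selection, per-hidden-node crossover mask, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_error(W, X):
    # Tied-weight autoencoder: encode with W, decode with W.T (an assumption).
    H = np.tanh(X @ W)              # hidden-node activations
    X_hat = H @ W.T                 # linear reconstruction of the input
    return float(np.mean((X - X_hat) ** 2))

def evolve(X, n_hidden=4, pop_size=12, generations=30, mut_sigma=0.1):
    d = X.shape[1]
    # Each individual is an encoder matrix whose columns play the role of
    # the "hidden node" genes described in the abstract.
    pop = [rng.normal(0, 0.5, (d, n_hidden)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda W: reconstruction_error(W, X))
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.choice(len(parents), size=2, replace=False)
            mask = rng.random(n_hidden) < 0.5     # crossover per hidden node
            child = np.where(mask, parents[a], parents[b])
            child = child + rng.normal(0, mut_sigma, child.shape)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda W: reconstruction_error(W, X))

# Toy data: the last four columns duplicate the first four, so a good
# 4-hidden-node autoencoder can reconstruct all eight.
X = rng.normal(size=(200, 8))
X[:, 4:] = X[:, :4]
best = evolve(X)
```

Scoring by reconstruction error and treating hidden nodes as the unit of crossover mirrors the abstract's description; a real system would evolve full non-linear autoencoders trained on partitioned data.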
242

Cement and Artificial Stone Sculpture of Mexico

Bowling, Henry E. 06 1900 (has links)
The intention of this study is not to present the technique as a new one in the realm of sculpture, but rather to investigate the various ways in which cement is being employed in the sculptural form and to point out its prominent use as well as the reasons for its popularity in Mexico.
243

Reverse engineering an active eye

Schmidt-Cornelius, Hanson January 2002 (has links)
No description available.
244

An ontology model supporting multiple ontologies for knowledge sharing

Tamma, Valentina A. M. January 2001 (has links)
No description available.
245

The use of a monoclonal antibody to pregnant mare serum gonadotrophin in superovulation of cattle

Al-Furaiji, Mansour M. A. January 1989 (has links)
Embryo transfer plays a very important role in the cattle industry and its application requires a consistent supply of viable embryos for use in such programmes. One way of achieving this is the development of reliable superovulation regimens yielding a large number of high-quality embryos. Superovulation with pregnant mare serum gonadotrophin (PMSG) induces a second wave of follicles after ovulation because PMSG is eliminated slowly from the peripheral blood, causing high concentrations of oestradiol. This oestradiol may have an adverse effect on fertilization and early embryonic development. Administering PMSG antiserum after ovulation may improve embryo quality by neutralizing PMSG activity. The object of this study was to examine the role of monoclonal anti-PMSG (Neutra-PMSG; Intervet UK) in a superovulatory regime for cattle based on PMSG, with the aim of increasing the number of viable embryos produced. Two experiments were conducted. In experiment 1, cows were superovulated with 2500 iu of PMSG (Folligon; Intervet UK) im on day 10 ± 1 of their oestrous cycle, whereas in experiment 2, heifers were superovulated with 1000, 2000, 3000 or 4000 iu of PMSG (PMSG1, 2, 3 or 4, respectively) im on day 10 ± 1 of their oestrous cycle. In both experiments, animals were given 2 ml of PG (PG3) im 48 h after PMSG injection and oestrus was observed in the same manner as described above. When data were evaluated with respect to the PMSG/APMSG dose level, there were no significant differences (P>0.05) in the numbers of CL and usable embryos between APMSG treatment and the appropriate control. Treatment with APMSG in the 3000 iu PMSG dose group reduced (P<0.05) the numbers of LF compared to control (1.3 v 5.5).
When data were analysed based on the dose levels of PMSG, the total number of ova/embryos and the number of usable embryos were higher in heifers which received 2000 iu of PMSG compared to those which received 1000, 3000 or 4000 iu (7.1 v 2.1, 6.6 or 5.6 and 6.3 v 2.1, 4.8 or 3.2, respectively). In conclusion, the results indicate that 2000 iu is the favoured PMSG dose for superovulating heifers. The administration of Neutra-PMSG 36, 48, 60, 72, 84 or 96 h after PG3 injection, despite reducing LF numbers and preventing the rise in oestradiol after ovulation, had no significant effect on the number of usable embryos recovered.
246

Automated Feature Engineering for Deep Neural Networks with Genetic Programming

Heaton, Jeff 19 April 2017 (has links)
Feature engineering is a process that augments the feature vector of a machine learning model with calculated values designed to enhance the accuracy of the model's predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefits from feature engineering. Engineered features are usually created from expressions that combine one or more of the original features. The choice of the exact structure of an engineered feature depends on the type of machine learning model in use. Previous research demonstrated that different model families benefit from different types of engineered features. Random forests, gradient-boosting machines, or other tree-based models might not see the same accuracy gain that an engineered feature allowed neural networks, generalized linear models, or other dot-product based models to achieve on the same data set.

This dissertation presents a genetic programming-based algorithm that automatically engineers features which increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. The algorithm presented here faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy. Finally, a sixth experiment empirically demonstrated the degree to which the algorithm improved the accuracy of neural networks on data sets augmented by its engineered features.
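The core loop of genetic-programming feature engineering (generate candidate expressions over the original features, score each candidate, evolve the best) can be sketched as follows. This is a deliberate simplification, not the dissertation's AFE algorithm: genomes are single two-argument expressions, fitness is correlation with the target rather than a trained neural network's accuracy, and all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
OPS = {"add": np.add, "sub": np.subtract, "mul": np.multiply}

def eval_feature(feat, X):
    op, i, j = feat                 # candidate engineered feature: x_i OP x_j
    return OPS[op](X[:, i], X[:, j])

def fitness(feat, X, y):
    # Stand-in fitness: |correlation| of the candidate with the target.
    f = eval_feature(feat, X)
    sd = f.std()
    if sd < 1e-8:
        return 0.0                  # degenerate feature, e.g. x_i - x_i
    f = (f - f.mean()) / sd
    return abs(float(np.corrcoef(f, y)[0, 1]))

def random_feature(d):
    return (str(rng.choice(list(OPS))), int(rng.integers(d)), int(rng.integers(d)))

def mutate(feat, d):
    op, i, j = feat
    k = int(rng.integers(3))        # mutate one gene: the operator or an operand
    if k == 0:
        op = str(rng.choice(list(OPS)))
    elif k == 1:
        i = int(rng.integers(d))
    else:
        j = int(rng.integers(d))
    return (op, i, j)

def evolve_feature(X, y, pop_size=40, generations=40):
    d = X.shape[1]
    pop = [random_feature(d) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda f: -fitness(f, X, y))
        elite = pop[: pop_size // 4]          # keep the best quarter
        pop = elite + [mutate(elite[int(rng.integers(len(elite)))], d)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda f: fitness(f, X, y))

# Toy data set whose target depends on a product of two raw features,
# the kind of interaction a dot-product model cannot represent directly.
X = rng.normal(size=(400, 3))
y = X[:, 0] * X[:, 1]
best = evolve_feature(X, y)
```

In the dissertation's setting the fitness evaluation is the expensive step, which is why efficient search-space prioritization matters.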
247

Creating emotionally aware performance environments : a phenomenological exploration of inferred and invisible data space

Povall, Richard Mark January 2003 (has links)
The practical research undertaken for this thesis - the building of interactive and non-interactive environments for performance - posits a radical recasting of the performing body in physical and digital space. The choreographic and thematic context of the performance work has forced us, as makers, to ask questions about the nature of digital interactivity, which in turn feeds the work theoretically, technically and thematically. A computer views (and attempts to interpret) motion information through a video camera and, by way of a scripting language, converts that information into MIDI data. As the research has developed, our company has been able to design environments which respond sensitively to particular artistic/performance demands. I propose to show in this research that it is possible to design an interactive system that is part of a phenomenological performance space, a mechanical system with an ontological heart. This represents a significant shift in thinking from existing systems; it is at the heart of the research developments and is what I consider to be one of the primary outcomes of this research - outcomes that are original and contribute to the body of knowledge in this area. The phenomenal system allows me to use technology in a poetic way, where the poetic aesthetic is dominant - it responds to the phenomenal dancer rather than merely to the 'physico-chemical' (Merleau-Ponty 1964, pp. 10-11) dancer. Other artists whose work attempts phenomenological approaches to working with technology and the human body are referenced throughout the writing.
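The camera-to-MIDI pipeline the abstract describes can be sketched in outline. This is an illustrative assumption, not the company's actual scripting setup: the frame-differencing rule, the change threshold, and the controller number are all invented for the example.

```python
import numpy as np

def motion_amount(prev_frame, frame, threshold=30):
    # Frame differencing: the fraction of pixels whose brightness changed
    # by more than `threshold` between consecutive greyscale frames.
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return float(np.mean(diff > threshold))

def motion_to_midi_cc(amount, controller=1, channel=0):
    # Scale motion in [0, 1] to a 7-bit value and build the raw 3-byte
    # MIDI Control Change message (status byte 0xB0 | channel).
    value = int(round(min(max(amount, 0.0), 1.0) * 127))
    status = 0xB0 | (channel & 0x0F)
    return bytes([status, controller & 0x7F, value])

# Example: a frame in which every pixel changed maps to the maximum value.
prev = np.zeros((4, 4), dtype=np.uint8)
frame = np.full((4, 4), 255, dtype=np.uint8)
message = motion_to_midi_cc(motion_amount(prev, frame))
```

A real performance system would read camera frames continuously and map motion to many controllers at once; the point here is only the shape of the motion-data-to-MIDI conversion.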
248

Various considerations on performance measures for a classification of ordinal data

Nyongesa, Denis Barasa 13 August 2016 (has links)
Technological advancement and the escalating interest in personalized medicine have resulted in an increasing number of ordinal classification problems. The most commonly used performance metrics for evaluating the effectiveness of a multi-class ordinal classifier include: predictive accuracy, Kendall's tau-b rank correlation, and the average mean absolute error (AMAE). These metrics are beneficial in the quest to classify multi-class ordinal data, but no single performance metric incorporates the misclassification cost. Recently, distance, which finds the optimal trade-off between predictive accuracy and misclassification cost, was proposed as a cost-sensitive performance metric for ordinal data. This thesis proposes criteria for variable selection and methods that account for minimum distance and improved accuracy, thereby providing a platform for a more comprehensive and comparative analysis of multiple ordinal classifiers. The strengths of our methodology are demonstrated through real data analysis of a colon cancer data set.
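A sketch of how two of the named metrics might be computed, assuming the standard definitions of accuracy and AMAE. The `distance_metric` shown is only one plausible reading of the accuracy/cost trade-off the abstract mentions (distance from the ideal point), not necessarily the thesis's definition.

```python
import numpy as np

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def amae(y_true, y_pred, n_classes):
    # Average mean absolute error: the per-class MAEs are averaged, so
    # minority classes count as much as majority ones.
    maes = [np.mean(np.abs(y_pred[y_true == c] - c))
            for c in range(n_classes) if np.any(y_true == c)]
    return float(np.mean(maes))

def distance_metric(y_true, y_pred, n_classes):
    # One plausible form of the trade-off: Euclidean distance from the
    # ideal point (accuracy = 1, normalised ordinal cost = 0).
    cost = amae(y_true, y_pred, n_classes) / (n_classes - 1)
    return float(np.hypot(1.0 - accuracy(y_true, y_pred), cost))

# Small 3-class ordinal example: class labels are ordered 0 < 1 < 2,
# so predicting 2 for a true 1 costs less than predicting 2 for a true 0.
y_true = np.array([0, 1, 2, 2])
y_pred = np.array([0, 2, 2, 1])
```

Because AMAE weights classes equally, it behaves very differently from plain accuracy on the imbalanced data sets typical of personalized medicine.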
249

An analysis of learning in weightless neural systems

Bradshaw, Nicholas P. January 1997 (has links)
This thesis brings together two strands of neural networks research - weightless systems and statistical learning theory - in an attempt to better understand the learning and generalisation abilities of a class of pattern classifying machines. The machines under consideration are n-tuple classifiers. While their analysis falls outside the domain of more widespread neural network methods, the method has found considerable application since its first publication in 1959. The larger class of learning systems to which the n-tuple classifier belongs is known as the set of weightless or RAM-based systems, because they store all their modifiable information in the nodes rather than as weights on the connections. The analytical tools used are those of statistical learning theory. Learning methods and machines are considered in terms of a formal learning problem which allows the precise definition of terms such as learning and generalisation (in this context). Results are derived relating the generalisation error to the empirical error of the machine on the training set, the number of training examples, and the complexity of the machine (as measured by the Vapnik-Chervonenkis dimension). In the thesis this theoretical framework is applied for the first time to weightless systems in general and to n-tuple classifiers in particular. Novel theoretical results are used to inspire the design of related learning machines, and empirical tests are used to assess the power of these new machines. Data-independent theoretical results are also compared with data-dependent results to explain the apparent anomalies in the n-tuple classifier's behaviour. The thesis takes an original approach to the study of weightless networks, one which gives new insights into their strengths as learning machines. It also allows a new family of learning machines to be introduced and a method for improving generalisation to be applied.
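The classic n-tuple classifier the abstract analyses can be sketched roughly as follows: each tuple samples a fixed random set of input bit positions, those bits address a per-class RAM, and a class's score is the number of its RAMs that recognise the input. Tuple size, tuple count, and the toy data here are illustrative assumptions, not drawn from the thesis.

```python
import numpy as np

class NTupleClassifier:
    """Weightless (RAM-based) n-tuple classifier over binary inputs."""

    def __init__(self, n_bits, n=4, n_tuples=20, n_classes=2, seed=0):
        gen = np.random.default_rng(seed)
        # Each tuple is a fixed random sample of n input bit positions.
        self.tuples = [gen.choice(n_bits, size=n, replace=False)
                       for _ in range(n_tuples)]
        # One RAM (here a set of seen addresses) per tuple, per class:
        # all modifiable state lives in the nodes, not in weights.
        self.rams = [[set() for _ in self.tuples] for _ in range(n_classes)]

    def _address(self, x, positions):
        # The n sampled bits form the address looked up in the tuple's RAM.
        return tuple(int(b) for b in x[positions])

    def fit(self, X, y):
        for x, c in zip(X, y):
            for t, pos in enumerate(self.tuples):
                self.rams[c][t].add(self._address(x, pos))
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # Score each class by how many of its RAMs recognise the input.
            scores = [sum(self._address(x, pos) in class_rams[t]
                          for t, pos in enumerate(self.tuples))
                      for class_rams in self.rams]
            preds.append(int(np.argmax(scores)))
        return np.array(preds)

# Toy problem: class 0 patterns are mostly 0-bits, class 1 mostly 1-bits.
rng = np.random.default_rng(1)
n_bits = 16
X = np.vstack([(rng.random((50, n_bits)) < 0.1).astype(int),
               (rng.random((50, n_bits)) < 0.9).astype(int)])
y = np.array([0] * 50 + [1] * 50)
clf = NTupleClassifier(n_bits).fit(X, y)
train_acc = float(np.mean(clf.predict(X) == y))
```

Training is a single memorisation pass with no gradient descent, which is why the usual weight-based analyses do not apply and a VC-dimension treatment is needed instead.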
250

Development of X-ray diamond simulants

Danoczi, Elizabeth Jane 07 April 2008 (has links)
X-ray machines designed to recover diamonds from an ore body are used extensively on diamond mines. These machines are extremely expensive and at present there are no reliable methods, outside the De Beers Group, of determining whether the equipment is performing correctly. The object of this research was to manufacture X-ray diamond simulants, translucent to X-rays, with known fluorescent signals ranging from bright to dim. These simulants can then be used to evaluate the recovery efficiency of any X-ray machine on any diamond mine. The research successfully accomplished the following: 1) the design and building of the optical equipment needed to measure the fluorescent signals produced by diamonds and the diamond simulants; 2) the setting up of the equipment needed to manufacture the diamond simulants; 3) the determination of the ingredients needed to make a diamond simulant; and 4) the determination of the recipe for diamond simulants with different fluorescent signals, for diamonds of different sizes.
