  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Development of Conceptual and Process Models of Growing Pains: A Mixed-Method Research Design

Visram, Faizah 06 August 2009 (has links)
Despite being a common childhood complaint, there is little research on growing pains. Existing research is inconsistent with regard to sample selection and prevalence rates. There are only two English-language intervention studies, and with the exception of associations noted in prevalence research, there has been no systematic research on the potential impact of growing pains on daily activities. The lack of a universal definition of growing pains poses difficulty for both diagnosis and research. The purposes of the current investigation were to propose a definition of growing pains grounded in the literature and clinical practice, to develop a conceptual model of growing pains, and to understand children's experiences with growing pains. A mixed-method research program involved four phases. In phase I, a survey of physicians indicated the following definition of growing pains: intermittent pain of unknown etiology, occurring nocturnally in the lower limbs. Features that may occur in some cases, but are not part of the definition, include arm pain and daytime pain. In phase II, non-parametric statistical analyses of child, familial, and environmental variables in a rheumatology clinic database were conducted to determine potential risk factors for growing pains. Logistic regression modeling indicated an association between growing pains and maternal illness or rash during the pregnancy, maternal smoking during the pregnancy, delayed pull to standing (i.e., greater than age 10 months), and family histories of back pain and arthritis. Potential mechanisms for these empirical associations are explored. In phase III, qualitative interviews with children were conducted to develop a grounded theory of how children process their experiences. Children engaged in a process of evaluating their current and past experiences of growing pains to determine how to manage specific pain episodes. Their evaluation was influenced by how they understood their pain, which in turn was influenced by their intrapersonal and interpersonal experiences. Phase IV integrated the results and existing literature to develop a conceptual model of growing pains which outlines characteristic features, predisposing factors, triggers, alleviating actions, and associated psychosocial features. Implications of the process theory and the conceptual model of growing pains with regard to clinical practice and future research are discussed.
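The phase II risk-factor analysis above reports logistic regression results as associations. As a reader's aid, the technique can be sketched on synthetic data (the cohort, predictors, and effect sizes below are invented for illustration and are not the study's data): fitted coefficients are exponentiated into odds ratios, the quantity such analyses report.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort: two hypothetical binary predictors (stand-ins for
# factors like "maternal smoking" or "family history of back pain") and
# a binary outcome generated from known log-odds, so the fit can be checked.
n = 2000
X = rng.integers(0, 2, (n, 2)).astype(float)
true_beta = np.array([0.9, 0.6])   # known log-odds ratios (illustrative)
true_b0 = -1.0
p = 1.0 / (1.0 + np.exp(-(X @ true_beta + true_b0)))
y = (rng.random(n) < p).astype(float)

# Plain gradient ascent on the (mean) log-likelihood
beta, b0 = np.zeros(2), 0.0
for _ in range(5000):
    q = 1.0 / (1.0 + np.exp(-(X @ beta + b0)))
    beta += 0.5 * (X.T @ (y - q)) / n
    b0 += 0.5 * float(np.mean(y - q))

odds_ratios = np.exp(beta)  # >1 means the factor raises the odds of the outcome
```

With enough data the recovered odds ratios sit close to the generating values, which is the sense in which such a model "indicates an association."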
12

Immunophenotypic Characteristics of Equine Monocytes and Alveolar Macrophages

Odemuyiwa, Solomon Olawole 14 May 2012 (has links)
Hematopoietic cells of the myelomonocytic lineage play a central role in orchestrating both innate and adaptive immunity. They are important in the control of infectious agents and in the pathogenesis of diseases characterized by a dysregulated immune response. Like allergic asthma in human patients, recurrent airway obstruction (RAO) of horses is a disease exemplified by chronic airway inflammation in the absence of infectious agents. However, unlike allergic asthma, RAO is marked by a preponderance of neutrophils rather than eosinophils in the airways. Attempts to understand the immunological basis of RAO by studying lymphocytes have produced equivocal results. This thesis examined the possible role of alveolar macrophages (AM) recovered from bronchoalveolar lavage fluid (BALF) in RAO. Since macrophages are predominantly derived from circulating monocytes, the thesis investigated first the phenotypic characteristics of circulating monocytes, second those of macrophages derived in vitro from monocytes, and finally attributes of AM derived in vivo. Flow cytometric analysis following antibody staining of monocytes from 61 horses showed that the clustering pattern of human leukocytes may not always be extrapolated to horses when using this technique, since clusters of granulocytes often spill over into the monocyte population. The study showed that DH24A, a monoclonal antibody directed against CD90, which recognizes T cells in other species, specifically recognizes granulocytes in horses and was therefore used to separate neutrophils from monocytes during analysis. In addition, investigation of circulating monocytes showed that expression of the hemoglobin-haptoglobin receptor CD163 on circulating monocytes is significantly increased in horses with systemic inflammation when compared with healthy horses.
Evaluation of cytokine and chemokine production by macrophages demonstrated that CD163+ macrophages preferentially expressed IL10, while CD163- macrophages showed predominant expression of CCL17. It was therefore concluded that CD163+ IL10-producing macrophages of horses are homologues of the alternatively activated anti-inflammatory macrophage subset of humans. Finally, probing of alveolar macrophages for CD163 and CD206 expression showed a significant reduction in the proportion of CD163+ macrophages in horses with RAO. These findings suggest that RAO is associated with a reduction in anti-inflammatory macrophages, an observation that may in part explain the chronic airway inflammation associated with this disease.
13

Training Recurrent Neural Networks

Sutskever, Ilya 13 August 2013 (has links)
Recurrent Neural Networks (RNNs) are powerful sequence models that were believed to be difficult to train, and as a result they were rarely used in machine learning applications. This thesis presents methods that overcome the difficulty of training RNNs, and applications of RNNs to challenging problems. We first describe a new probabilistic sequence model that combines Restricted Boltzmann Machines and RNNs. The new model is more powerful than similar models while being less difficult to train. Next, we present a new variant of the Hessian-free (HF) optimizer and show that it can train RNNs on tasks that have extreme long-range temporal dependencies, which were previously considered to be impossibly hard. We then apply HF to character-level language modelling and get excellent results. We also apply HF to optimal control and obtain RNN control laws that can successfully operate under conditions of delayed feedback and unknown disturbances. Finally, we describe a random parameter initialization scheme that allows gradient descent with momentum to train RNNs on problems with long-term dependencies. This directly contradicts widespread beliefs about the inability of first-order methods to do so, and suggests that previous attempts at training RNNs failed partly due to flaws in the random initialization.
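The abstract's closing claim, that a first-order method with momentum can train an RNN if the random initialization is chosen carefully, can be illustrated with a toy sketch. This is not the thesis's code or its Hessian-free optimizer; the task, sizes, and learning rates below are invented, and the spectral-radius rescaling stands in for the careful initialization the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: read T scalar inputs, then output their sum.
T, H, lr, mu = 8, 16, 0.003, 0.9
Wxh = rng.normal(0.0, 0.5, (H, 1))
Whh = rng.normal(0.0, 1.0, (H, H))
Whh *= 1.0 / np.max(np.abs(np.linalg.eigvals(Whh)))  # spectral radius ~1
Why = rng.normal(0.0, 0.5, (1, H))
params = [Wxh, Whh, Why]
vel = [np.zeros_like(p) for p in params]  # momentum buffers

def forward_backward(x, y):
    hs = [np.zeros((H, 1))]
    for t in range(T):                     # unroll the recurrence
        hs.append(np.tanh(Wxh * x[t] + Whh @ hs[-1]))
    err = (Why @ hs[-1])[0, 0] - y
    dWxh, dWhh, dWhy = (np.zeros_like(p) for p in params)
    dWhy += err * hs[-1].T
    dh = err * Why.T
    for t in reversed(range(T)):           # backpropagation through time
        da = dh * (1.0 - hs[t + 1] ** 2)
        dWxh += da * x[t]
        dWhh += da @ hs[t].T
        dh = Whh.T @ da
    return 0.5 * err ** 2, [dWxh, dWhh, dWhy]

def mean_loss(data):
    return float(np.mean([forward_backward(x, float(x.sum()))[0] for x in data]))

data = rng.normal(0.0, 1.0, (200, T))
before = mean_loss(data)
for _ in range(5):                         # a few epochs of momentum SGD
    for x in data:
        _, grads = forward_backward(x, float(x.sum()))
        for p, v, g in zip(params, vel, grads):
            v *= mu
            v -= lr * g
            p += v                         # in-place update keeps aliases valid
after = mean_loss(data)
```

The loss drops under plain momentum SGD on this toy problem; the thesis's point is that this kind of first-order training scales to genuinely long-range dependencies once the initialization is right.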
15

Extensions in the theory of Lucas and Lehmer pseudoprimes

Loveless, Andrew David, January 2005 (has links) (PDF)
Thesis (Ph.D.)--Washington State University. / Includes bibliographical references.
16

A Hilbert space approach to multiple recurrence in ergodic theory

Beyers, Frederik J. C. January 2004 (has links)
Thesis (M.Sc.)(Mathematics)--University of Pretoria, 2004. / Title from opening screen (viewed March 27, 2006). Includes summary. Includes bibliographical references.
17

Study of Indicators of Recurrent Congestion on Urban Roadway Network Based on Bus Probes

Chen, Cheng January 2010 (has links)
No description available.
18

Comparison of Airway Response in Recurrent Airway Obstruction-Affected Horses Fed Steamed Versus Non-steamed Hay

Blumerich, Celeste Ann 24 July 2012 (has links)
Recurrent Airway Obstruction (RAO)-affected horses experience bronchoconstriction and airway inflammation in response to inhalation of irritants including hay molds. Steaming hay reduces fungal content, but the effect on the antigenic potential has not been investigated. We tested the hypothesis that RAO-affected horses develop less severe clinical disease when fed steamed versus non-steamed hay and that this reduction coincides with decreased hay fungal content. Six RAO-affected horses in clinical remission were divided into two groups and fed steamed or non-steamed hay for 10 days using a two-way cross-over design. Hay was steamed using a commercial hay-steamer. Clinical assessment was performed daily. Full assessment, including airway endoscopy, tracheal mucous scores, and maximal change in pleural pressure, was performed on days 1, 5, and 10. Bronchial fluid sampling and cytology were performed on days 1 and 10. Hay core samples were collected pre- and post-steaming and cultured to determine fungal and bacterial concentrations. Statistical analysis was based on data distribution and quantity and was performed using SAS®. A P-value <0.05 was considered significant. Steaming significantly decreased the number of bacterial and fungal colony-forming units in hay. Horses fed non-steamed hay experienced a significant increase in clinical score and a trend towards airway neutrophilia, while these parameters were unchanged in horses fed steamed hay. Only horses fed non-steamed hay experienced a significant increase in tracheal mucous score. Horses fed steamed hay gained significantly more weight compared to horses fed non-steamed hay, even though the amount of hay consumed was not greater on a dry matter basis. These results indicate that steaming reduces the RAO-affected horse's response to hay, which coincides with a reduction in viable fungal content of hay. / Master of Science
19

Contextual Recurrent Level Set Networks and Recurrent Residual Networks for Semantic Labeling

Le, Ngan Thi Hoang 01 May 2018 (has links)
Semantic labeling is becoming more and more popular among researchers in computer vision and machine learning. Many applications, such as autonomous driving, tracking, indoor navigation, augmented reality systems, semantic searching, and medical imaging, are on the rise, requiring more accurate and efficient segmentation mechanisms. In recent years, deep learning approaches based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have rapidly emerged as the dominant paradigm for solving many problems in computer vision and machine learning. The main focus of this thesis is to investigate robust approaches that can tackle challenging semantic labeling tasks, including semantic instance segmentation and scene understanding. In the first approach, we convert the classic variational Level Set method to a learnable deep framework by proposing a novel definition of contour evolution named Recurrent Level Set (RLS). The proposed RLS employs Gated Recurrent Units to solve the energy minimization of a variational Level Set functional. The curve deformation process in RLS is formulated as a hidden-state evolution procedure and is updated by minimizing an energy functional composed of fitting forces and contour length. We show that by sharing the convolutional features in a fully end-to-end trainable framework, RLS can be extended to Contextual Recurrent Level Set (CRLS) Networks to address the problem of semantic segmentation in the wild. The experimental results show that our proposed RLS improves both computational time and segmentation accuracy over classic variational Level Set-based methods, whereas the fully end-to-end system CRLS achieves competitive performance compared to state-of-the-art semantic segmentation approaches on the PASCAL VOC 2012 and MS COCO 2014 databases.
The second proposed approach, Contextual Recurrent Residual Networks (CRRN), inherits the merits of both sequence learning and residual learning in order to simultaneously model long-range contextual information and learn a powerful visual representation within a single deep network. Our proposed CRRN deep network consists of three parts corresponding to sequential input data, sequential output data, and a hidden state, as in a recurrent network. Each unit in the hidden state is designed as a combination of two components: a context-based component via sequence learning and a visual-based component via residual learning. That is, each hidden unit in our proposed CRRN simultaneously (1) learns long-range contextual dependencies via the context-based component, in which the relationship between the current unit and the previous units is modeled as sequential information under an undirected cyclic graph (UCG), and (2) provides a powerful encoded visual representation via the residual component, which contains blocks of convolution and/or batch normalization layers equipped with an identity skip connection. Furthermore, unlike previous scene labeling approaches [1, 2, 3], our method is not only able to exploit long-range context and visual representation but is also formed as a fully end-to-end trainable system that effectively leads to the optimal model. In contrast to other existing deep learning networks, which are based on pretrained models, our fully end-to-end CRRN is trained completely from scratch. The experiments are conducted on four challenging scene labeling datasets, i.e. SiftFlow, CamVid, Stanford Background, and SUN, and compared against various state-of-the-art scene labeling methods.
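The gated recurrent update that RLS reuses for its hidden-state (curve) evolution is the standard GRU step. A minimal NumPy sketch of that step follows; these are the textbook GRU equations (bias-free for brevity), not the authors' code, and all shapes and weights below are illustrative.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wc, Uc):
    """One bias-free GRU step: gated blend of the old state and a candidate."""
    z = sigmoid(Wz @ x + Uz @ h)            # update gate: how much to overwrite
    r = sigmoid(Wr @ x + Ur @ h)            # reset gate: how much history to use
    c = np.tanh(Wc @ x + Uc @ (r * h))      # candidate state
    return (1.0 - z) * h + z * c            # convex combination of old and new

rng = np.random.default_rng(0)
D, H = 4, 8                                  # illustrative input/state sizes
weights = [rng.normal(0.0, 0.1, (H, D)) if i % 2 == 0
           else rng.normal(0.0, 0.1, (H, H))
           for i in range(6)]

h = np.zeros(H)
for _ in range(5):                           # iterate, as RLS iterates curve evolution
    h = gru_step(rng.normal(0.0, 1.0, D), h, *weights)
```

Because the update is a convex blend of a bounded candidate and the previous state, the state stays in (-1, 1), which is what makes iterating it many times (as an evolution procedure) stable.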
20

Comparing LSTM and GRU for Multiclass Sentiment Analysis of Movie Reviews.

Sarika, Pawan Kumar January 2020 (has links)
Today, we are living in a data-driven world. Due to a surge in data generation, there is a need for efficient and accurate techniques to analyze data. One such kind of data that needs to be analyzed is text reviews given for movies. Rather than classifying the reviews as positive or negative, we classify the sentiment of the reviews on a scale of one to ten. In doing so, we compare two recurrent neural network architectures: Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). The main objective of this study is to compare the accuracies of LSTM and GRU models. For training the models, we collected data from two different sources. For filtering the data, we used Porter stemming and stop-word removal. We coupled LSTM and GRU with convolutional neural networks to increase performance. After conducting experiments, we observed that LSTM performed better in predicting boundary values, whereas GRU predicted every class equally. Overall, GRU was able to predict the multiclass movie-review data slightly better than LSTM. GRU was computationally expensive when compared to LSTM.
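The structural difference behind the LSTM/GRU comparison is that a GRU has three gate/candidate blocks where an LSTM has four, so a same-sized GRU layer carries about 25% fewer parameters. A back-of-the-envelope count, ignoring implementation-specific extras (peephole connections, or the second bias vector some libraries add to GRUs):

```python
def lstm_param_count(input_dim, hidden):
    # 4 blocks (input, forget, output gates + cell candidate), each with an
    # input matrix (hidden x input_dim), a recurrent matrix (hidden x hidden),
    # and a bias vector (hidden)
    return 4 * (hidden * input_dim + hidden * hidden + hidden)

def gru_param_count(input_dim, hidden):
    # 3 blocks (update gate, reset gate, candidate state): one fewer than LSTM
    return 3 * (hidden * input_dim + hidden * hidden + hidden)

# Illustrative sizes, e.g. a 100-dim embedding feeding a 128-unit layer
lstm_n = lstm_param_count(100, 128)
gru_n = gru_param_count(100, 128)
```

The ratio is exactly 3:4, which is why a GRU of equal width is usually the cheaper layer; any observed runtime difference in a given experiment also depends on the framework and hardware.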
