171

Effect of haptic guidance and error amplification robotic training interventions on the immediate improvement of timing among individuals that had a stroke / Effet de l’entrainement robotisé par réduction de l’erreur et augmentation de l’erreur sur le timing du mouvement chez la personne ayant eu un accident vasculaire cérébral

Bouchard, Amy January 2016 (has links)
Abstract: Many individuals who have had a stroke have motor impairments, such as timing deficits, that hinder their ability to complete daily activities like getting dressed. Robotic rehabilitation is an increasingly popular therapeutic avenue for improving motor recovery in this population. Yet most studies have focused on improving the spatial aspect of movement (e.g. reaching) rather than the temporal one (e.g. timing). Hence, the main aim of this study was to compare two types of robotic training on the immediate improvement of timing accuracy: haptic guidance (HG), which consists of guiding the person to make the correct movement and thus decreasing his or her movement errors, and error amplification (EA), which consists of increasing the person's movement errors. The secondary objective was to explore whether the side of the stroke lesion had an effect on timing accuracy following HG and EA training. Thirty-four persons who had had a stroke (average age 67 ± 7 years) participated in a single training session of a timing-based task (a simulated pinball-like task), in which they had to activate a robot at the correct moment to successfully hit targets presented at random on a computer screen. Participants were randomly divided into two groups, receiving either HG or EA. Within the same session, a baseline phase and a retention phase were administered before and after training, and these phases were compared in order to evaluate the immediate impact of HG and EA on movement timing accuracy. The results showed that HG improved immediate timing accuracy (p = 0.03), whereas EA did not (p = 0.45). A direct comparison of the two trainings showed HG to be superior to EA at improving timing (p = 0.04). Furthermore, a significant correlation was found between the side of the stroke lesion and the change in timing accuracy following EA (r_pb = 0.70, p = 0.001), but not HG (r_pb = 0.18, p = 0.24). In other words, timing accuracy deteriorated for participants with a left-hemisphere lesion who trained with EA, whereas participants with a right-sided lesion improved following EA. In sum, HG appears to improve immediate timing accuracy for individuals who have had a stroke; still, the side of the stroke lesion seems to play a part in participants' response to training. This remains to be explored further, along with the impact of providing more training sessions in order to assess any long-term benefits of HG or EA.
/ Résumé: Following a stroke, several impairments, such as a timing deficit, are noted even at the chronic stage, which hinders the completion of daily tasks such as getting dressed. Robotic training is increasingly advocated to improve motor recovery after stroke. However, most studies have examined its effects on the spatial aspect of movement (e.g. movement direction) rather than the temporal aspect (e.g. timing). The main objective of this project was therefore to evaluate and compare the impact of two robotic training interventions on the immediate improvement of timing: error reduction (haptic guidance), which guides the person toward the desired movement, and error amplification, which interferes with the person's movement. The secondary objective was to explore whether there was a relationship between the side of the brain lesion and the change in timing errors following error-reduction and error-amplification training. Thirty-four persons with chronic-stage stroke (mean age 67 ± 7 years) participated in this study, in which they played a simulated pinball game: they had to activate a robotic hand at the right moment to hit targets presented at random on a computer screen. Participants received either error reduction or error amplification. A baseline and a retention phase were administered before and after training and were used to evaluate and compare the immediate effect of the two interventions on timing. The results showed that error reduction improved timing errors (p = 0.03), but error amplification did not (p = 0.45). Moreover, the comparison between the two interventions showed error reduction to be superior at improving timing (p = 0.04). A significant correlation was also noted between the side of the brain lesion and the change in timing errors following error amplification (r_pb = 0.70, p = 0.001), but not error reduction (r_pb = 0.18, p = 0.24). In other words, performance on the timing task deteriorated for participants with a left-sided lesion, whereas those with a right-sided lesion benefited from error-amplification training. In short, error-reduction training can improve timing errors for chronic-stage stroke survivors; however, the side of the brain lesion seems to play an important role in the response to error-amplification training. This remains to be explored, along with the impact of longer error-reduction and error-amplification training to determine their long-term effects.
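The lesion-side analysis above relies on a point-biserial correlation (r_pb) between a dichotomous variable (side of lesion) and a continuous one (change in timing accuracy). A minimal sketch of how such a coefficient can be computed, using invented values rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical data: 1 = right-hemisphere lesion, 0 = left-hemisphere lesion,
# and the change in timing error (ms) after error-amplification training.
lesion_side = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
timing_change = np.array([-12.0, 8.5, -9.0, -15.2, 6.1, 10.3, -7.4, 4.0, -11.1, 9.9])

# pointbiserialr is mathematically equivalent to Pearson's r when one
# of the two variables is dichotomous.
r_pb, p_value = stats.pointbiserialr(lesion_side, timing_change)
print(f"r_pb = {r_pb:.2f}, p = {p_value:.3f}")
```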
172

Nonparametric Mixture Modeling on Constrained Spaces

Putu Ayu G Sudyanti (7038110) 16 August 2019 (has links)
Mixture modeling is a classical unsupervised learning method with applications to clustering and density estimation. This dissertation studies two challenges in modeling data with mixture models. The first part addresses problems that arise when modeling observations lying on constrained spaces, such as the boundaries of a city or a landmass. It is often desirable to model such data through the use of mixture models, especially nonparametric mixture models. Specifying the component distributions and evaluating normalization constants raise modeling and computational challenges. In particular, the likelihood is an intractable quantity, and Bayesian inference over the parameters of these models results in posterior distributions that are doubly intractable. We address this problem via a model based on rejection sampling and an algorithm based on data augmentation. Our approach is to specify such models as restrictions of standard, unconstrained distributions to the constraint set, with measurements from the model simulated by a rejection sampling algorithm. Posterior inference proceeds by Markov chain Monte Carlo, first imputing the rejected samples given the mixture parameters and then resampling the parameters given all samples. We study two modeling approaches: mixtures of truncated Gaussians and truncated mixtures of Gaussians, along with Markov chain Monte Carlo sampling algorithms for both. We also discuss variations of the models, as well as approximations to improve mixing, reduce computational cost, and lower variance.

The second part of this dissertation explores the application of mixture models to estimating contamination rates in matched tumor and normal samples. Bulk sequencing of tumor samples is prone to contamination from normal cells, which leads to difficulties and inaccuracies in determining the mutational landscape of the cancer genome. In such instances, a matched normal sample from the same patient can be used as a control for germline mutations. Probabilistic models are popular in this context due to their flexibility. We propose a hierarchical Bayesian model to denoise the contamination in such data and detect somatic mutations in tumor cell populations. We explore the use of a Dirichlet prior on the contamination level and extend this to a framework of Dirichlet processes. We discuss MCMC schemes to sample from the joint posterior distribution and evaluate performance on both synthetic experiments and publicly available data.
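The rejection-sampling construction described above, in which a standard mixture is restricted to a constraint set and the rejected draws are later imputed during MCMC, can be sketched as follows; the constraint (a unit disc) and the mixture settings are invented for illustration and are not the dissertation's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def in_constraint(x):
    # Hypothetical constraint set: the unit disc.
    return np.sum(x**2) <= 1.0

# A simple 2-component Gaussian mixture as the unconstrained base model.
weights = np.array([0.6, 0.4])
means = np.array([[0.3, 0.0], [-0.4, 0.2]])
cov = 0.2 * np.eye(2)

def sample_constrained_mixture(n):
    """Draw from the mixture restricted to the constraint set by rejection."""
    accepted, rejected = [], []
    while len(accepted) < n:
        k = rng.choice(2, p=weights)
        x = rng.multivariate_normal(means[k], cov)
        if in_constraint(x):
            accepted.append(x)
        else:
            # In the data-augmentation MCMC, draws like these are imputed
            # given the current parameters before the parameters are resampled.
            rejected.append(x)
    return np.array(accepted), np.array(rejected)

obs, rej = sample_constrained_mixture(500)
print(obs.shape, rej.shape)
```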
173

Piezoelectric Adaptive Mirrors for Ground-based and Space Telescopes

Wang, Kainan 17 January 2019 (has links) (PDF)
This thesis investigates various active control aspects of large-aperture telescopes; both Earth-based and space telescopes are considered.

The first part proposes a concept of piezoelectric adaptive thin-shell reflector for future space telescopes; it exhibits excellent areal density and stowability and thus paves the way to future large-aperture space telescopes. Controlling the surface figure of a spherical or parabolic shell with in-plane stresses induced by a piezoelectric layer raises two problems: (i) doubly curved shells are significantly stiffer than flat plates (especially for the optical modes associated with hoop strains), and (ii) when segmented electrodes are driven with different voltages, the surface figure is subject to edge fluctuations with a characteristic length that depends on the reflector curvature R_c and thickness t according to sqrt(R_c t). Accurate surface figure corrections require that the electrode size D_e satisfy D_e < sqrt(R_c t). This results in a very large number of electrodes, leading to ill-conditioning of the Jacobian matrix of the system; to solve this, a hierarchical approach to inverting the Jacobian, based on Saint-Venant's principle, is proposed. This chapter also proposes a petal configuration which aims at reducing the hoop stiffness and improving the foldability of the reflector. A small-scale technology demonstrator has been manufactured in the framework of the ESA-GSTP project Multilayer Adaptive Thin Shell Reflectors for Future Space Telescopes (MATS). The demonstrator includes a polymer substrate (PEEK) and a spin-coated PVDF-TrFE piezoelectric layer activated by independent electrodes; it is used to validate the manufacturing process and the independent actuation of the electrodes.

The second part deals with control-structure interaction in flat deformable mirrors for Adaptive Optics. The problem arises because of the increasing size of AO mirrors, which lowers the resonance frequencies, combined with the control bandwidth required to achieve good wavefront error compensation. This chapter studies the conditions for spillover instability and highlights the main parameters controlling the phenomenon: the ratio between the control bandwidth and the resonance frequency, and the modal damping. Two methods for damping augmentation are discussed, one passive, using inductive shunting of piezoelectric elements, and the other active, using the wavefront sensor and the array of control actuators as a modal filter.

The third part focuses on the field stabilization control of the tip/tilt mirror of the E-ELT under wind disturbances (a distinctive feature of the E-ELT as compared to other, smaller telescopes is that it will be a wind-limited rather than a seeing-limited telescope). A literature survey of the spectral content of wind disturbances on large telescopes is conducted, with special attention to the high-frequency decay rate. The analysis confirms the adequacy of a decoupled design of the field stabilization (M5) control loop. However, the reaction torques necessary to control the tip/tilt mirror M5 are found to depend critically on the asymptotic decay rate of the wind tilt disturbance. These torques act as a disturbance on the telescope structure and, if the wind disturbance does not decay fast enough with frequency (a > -3), they may generate significant wavefront errors in the primary mirror M1 in a frequency range (30-100 Hz) that may be difficult to eliminate with Adaptive Optics.
/ Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
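The electrode-size condition D_e < sqrt(R_c t) from the first part translates directly into an electrode count. A back-of-the-envelope check with assumed reflector dimensions (not the thesis's hardware):

```python
import math

# Assumed reflector parameters (illustrative only).
R_c = 2.0        # radius of curvature, m
t = 200e-6       # shell thickness, m (0.2 mm)

char_length = math.sqrt(R_c * t)      # characteristic edge-fluctuation length
print(f"sqrt(R_c * t) = {char_length * 1e3:.1f} mm")

# Electrodes must be smaller than this length; for a 1 m aperture that
# implies roughly (D_aperture / D_e)^2 electrodes.
D_aperture = 1.0
D_e = 0.5 * char_length               # electrodes at half the limiting size
n_electrodes = (D_aperture / D_e) ** 2
print(f"electrode size ~ {D_e * 1e3:.1f} mm -> ~{n_electrodes:.0f} electrodes")
```

With these assumed numbers the characteristic length is about 20 mm, so covering the aperture already requires on the order of ten thousand electrodes, which is why the Jacobian becomes ill-conditioned and a hierarchical inversion is needed.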
174

Machine learning and augmented data for automated treatment planning in complex external beam radiation therapy

Lempart, Michael January 2019 (has links)
External beam radiation therapy is currently one of the most commonly used modalities for treating cancer. With the rise of new technologies and increasing computational power, machine learning, deep learning and artificial intelligence applications for classification and regression problems have begun to find their way into the field of radiation oncology. One such application is the automated generation of radiotherapy treatment plans, which must be optimized for every single patient. The department of radiation physics in Lund, Sweden, has developed autoplanning software which, in combination with a commercially available treatment planning system (TPS), can be used to create clinical treatment plans automatically. The parameters of a multivariable cost function are changed iteratively, making it possible to generate a large number of different treatment plans for a single patient. The output ranges from optimal and near-optimal to clinically acceptable or even non-acceptable treatment plans. This thesis evaluates the possibility of using machine and deep learning to minimize the number of treatment plans generated by the autoplanning software, as well as the possibility of finding cost-function parameters that lead to clinically acceptable, optimal or near-optimal plans. Data augmentation is used to create matrices of optimal treatment-plan parameters, which are stored in a training database. Patient-specific training features are extracted from the TPS, as well as from the bottleneck layer of a trained deep neural network autoencoder. The training features are then matched against the same features extracted for test patients using a k-nearest neighbor algorithm. Finally, treatment plans for a new patient are generated using the output plan-parameter matrices of its nearest neighbors. This reduces computation time and helps identify suitable cost-function parameters for a new patient.
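The nearest-neighbor matching step described above could look roughly like the following sketch; the feature files, array shapes and helper function are assumptions made for illustration, not the Lund implementation:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical training database: one feature row per patient (e.g. target
# volume, organ-at-risk distances, autoencoder bottleneck values) and one
# matrix of optimal cost-function parameters per patient.
train_features = np.load("train_features.npy")        # shape (n_patients, n_features)
train_plan_params = np.load("train_plan_params.npy")  # shape (n_patients, n_params)

knn = NearestNeighbors(n_neighbors=3).fit(train_features)

def suggest_plan_parameters(new_patient_features):
    """Return the plan-parameter matrices of the k most similar patients."""
    dist, idx = knn.kneighbors(new_patient_features.reshape(1, -1))
    return train_plan_params[idx[0]]   # candidate parameter sets to optimize from

candidates = suggest_plan_parameters(np.load("new_patient.npy"))
print(candidates.shape)
```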
175

Ajuste do modelo matemático de uma aeronave com sistema de aumento de estabilidade com base em ensaios em túnel de vento / Adjustment of an aircraft mathematical model with stability augmentation system based on wind tunnel analysis

Mattos, Wellington da Silva 03 August 2007 (has links)
The present work describes the application of a model-updating method, based on experimental wind tunnel data, to an aircraft with a longitudinal stability augmentation system (LSAS). The study includes a review of model-updating methods, the development of the aircraft mathematical model and a description of previously conducted wind tunnel tests of the aircraft with the LSAS. The LSAS comprises (1) a data acquisition system, which processes the sensor signal and sends the control command to the actuator; (2) a potentiometer, used as a pitch-angle sensor; and (3) a servo motor, used to actuate canard deflection. The aircraft model is based on the Grumman X-29, which has a canard and a forward-swept wing. Its static stability margin can be adjusted by changing the position of the center of rotation, which is made to coincide with the aircraft's center of gravity through weight balancing. The updating of the airplane's mathematical model is carried out in the Matlab/Simulink environment by adjusting the model parameters for the aircraft stability derivatives, the digital filter, and the sensor and servo dynamics. The objective is to obtain an optimal correlation between numerical and experimental results. The parametric sensitivity analysis method is chosen for model updating. In the first phase of the study, the comparison between theoretical and experimental results is based on the frequencies and damping ratios of the aircraft pitch-angle response to an impulse canard-deflection input. In the second phase, the comparison is based directly on the experimental and numerical pitch-angle time responses to the same impulse canard-deflection input. Three center-of-gravity positions are analyzed, one for which the aircraft is statically stable and two for which it is unstable. The results show large variations among the adjusted parameters, indicating the need for improvements in the implementation of the adopted methodology.
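The parametric updating loop, in which model parameters are adjusted until the simulated pitch response matches the measured one, can be framed as a least-squares fit. The reduced second-order pitch model, the data file and the starting values below are illustrative assumptions, not the thesis's X-29-based model:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import impulse

t_exp = np.linspace(0, 5, 200)
theta_exp = np.load("pitch_impulse_response.npy")   # measured pitch angle (hypothetical file)

def simulated_response(params, t):
    # Reduced short-period approximation: a second-order system whose natural
    # frequency, damping and gain stand in for the stability-derivative dependence.
    wn, zeta, gain = params
    system = ([gain * wn**2], [1.0, 2.0 * zeta * wn, wn**2])
    _, theta = impulse(system, T=t)
    return theta

def residuals(params):
    return simulated_response(params, t_exp) - theta_exp

fit = least_squares(residuals, x0=[3.0, 0.3, 1.0])
print("updated parameters (wn, zeta, gain):", fit.x)
```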
176

Space Use, Resource Selection, and Survival of Reintroduced Bighorn Sheep

Robinson, Rusty Wade 01 August 2017 (has links)
Successful management of bighorn sheep depends on understanding the mechanisms responsible for population growth or decline, habitat selection, and utilization distribution after translocations. We studied a declining population of desert bighorn sheep in the North San Rafael Swell, Utah, to determine birthdates of neonates, demographics, limiting factors, population size, probable causes of death, production, and survival. We documented 19 mortalities attributed to a variety of causes, including cougar predation (n = 10, 53%), bluetongue virus (n = 2, 11%), reproductive complications (n = 2, 11%), hunter harvest (n = 1, 5%), and unknown (n = 4, 21%). Annual survival of females was 73% (95% CI = 0.55–0.86) in 2012 and 73% (95% CI = 0.55–0.86) in 2013. Adult male survival was 75% (95% CI = 0.38–0.94) in 2012 and 88% (95% CI = 0.50–0.98) in 2013. Disease testing revealed the presence of pneumonia-related pathogens. The population increased from an estimated 127 in 2012 to 139 in 2013 (λ = 1.09). Lamb:ewe ratios were 47:100 in 2012 and 31:100 in 2013. Mean birthing dates were 21 May in 2012 and 20 May in 2013. Spatial separation from domestic sheep and goats, and aggressive harvest of cougars, may have aided the recovery of this population after disease events. Second, we investigated the timing of parturition and nursery habitat of desert bighorn sheep in the North San Rafael Swell to determine the influence of vegetation, topography, and anthropogenic features on resource selection. We monitored 38 radio-tagged ewes to establish birthing dates and documented the birthdates of 45 lambs. We used collar-generated GPS locations to perform logistic regression within a model-selection framework to differentiate between nursery and random locations (n = 750 for each) based on a suite of covariates. The top model included elevation, slope, ruggedness, aspect, vegetation type, distance to trails, and distance to roads. We used these variables to create a GIS model of nursery habitat for the North San Rafael (desert bighorns) and the Green River Corridor (Rocky Mountain bighorns). Ewes showed a preference for steep, north-facing slopes, rugged terrain, and lower elevations, and avoided roads. Our model provides managers with a map of high-probability nursery areas for desert and Rocky Mountain bighorns to aid in conservation planning and to mitigate potential conflicts with industry and domestic livestock. Finally, we monitored 127 reintroduced female bighorn sheep in three adjacent restored populations to investigate whether the size and overlap of habitat use by augmented bighorns differed from those of resident bighorns. The seasonal ranges of residents were generally larger than those of augmented females. However, there was a shift in utilization distribution in all three populations after augmentation. Overlap indices between resident and augmented sheep varied by source herd. These data will help managers understand the dynamics of home-range expansion and the overlap between provenance groups following augmentations.
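The nursery-habitat model above contrasts used (nursery) locations with random ones through logistic regression. A minimal sketch of that resource-selection step, with an assumed input table and column names rather than the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per location, 'used' = 1 for nursery sites and
# 0 for random sites, plus terrain and anthropogenic covariates.
locations = pd.read_csv("nursery_vs_random.csv")

rsf = smf.logit(
    "used ~ elevation + slope + ruggedness + aspect + C(vegetation)"
    " + dist_to_roads + dist_to_trails",
    data=locations,
).fit()

print(rsf.summary())
# A positive coefficient on dist_to_roads would indicate selection for sites
# farther from roads, i.e. road avoidance; negative coefficients on elevation
# would indicate selection for lower elevations, mirroring the preferences
# reported above.
```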
177

End-to-End Full-Page Handwriting Recognition

Wigington, Curtis Michael 01 May 2018 (has links)
Despite decades of research, offline handwriting recognition (HWR) of historical documents remains a challenging problem which, if solved, could greatly improve the searchability of online cultural heritage archives. Historical documents are plagued with noise, degradation, ink bleed-through, overlapping strokes, variation in the slope and slant of the writing, and inconsistent layouts. Often the documents in a collection have been written by thousands of authors, all of whom have significantly different writing styles. In order to better capture the variations in writing styles, we introduce a novel data augmentation technique. This method achieves state-of-the-art results on modern datasets written in English and French and on a historical dataset written in German. HWR models are often limited by the accuracy of the preceding steps of text detection and segmentation. Motivated by this, we present a deep learning model that jointly learns text detection, segmentation, and recognition using mostly images without detection or segmentation annotations. Our Start, Follow, Read (SFR) model is composed of a Region Proposal Network that finds the start position of handwriting lines, a novel line-follower network that incrementally follows and preprocesses lines of (perhaps curved) handwriting into dewarped images, and a CNN-LSTM network that reads the characters. SFR exceeds the performance of the winner of the ICDAR 2017 handwriting recognition competition, even when not using the provided competition region annotations.
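A data-augmentation step of the general kind described above, where small random warps are applied to line images to mimic variation in writing style, might look like the following sketch; the affine jitter shown here is a generic illustration, not the specific warping technique introduced in the dissertation:

```python
import numpy as np
import cv2

def random_warp(line_image, max_shift=2.0, rng=None):
    """Apply a mild random affine jitter to a handwriting line image."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = line_image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [0, h]])
    dst = (src + rng.uniform(-max_shift, max_shift, src.shape)).astype(np.float32)
    M = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(line_image, M, (w, h), borderValue=255)  # white border fill

# Hypothetical input: a grayscale image of one handwriting line.
img = cv2.imread("line.png", cv2.IMREAD_GRAYSCALE)
augmented = [random_warp(img) for _ in range(8)]
```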
178

Quantal Mechanisms Underlying Stimulation-induced Augmentation and Potentiation

Cheng, Hong 01 May 1998 (has links)
Repetitive stimulation of motor nerves causes an increase in the number of packets of transmitter ("quanta") that can be released in the ensuing period. This represents a type of conditioning, in which synaptic transmission may be enhanced by prior activity. Despite many studies of this phenomenon, there have been no investigations of the quantal mechanisms underlying these events, due to the rapid changes in transmitter output and the short time periods involved. To examine this problem, a method was developed in which estimates of the quantal release parameters could be obtained over very brief periods (3 s). Conventional microelectrode techniques were used to record miniature endplate potentials (MEPPs) from isolated frog (Rana pipiens) cutaneous pectoris muscles, before and after repetitive (40 s at 80 Hz) nerve stimulation. Estimates were obtained of m (number of quanta released), n (number of functional release sites), p (mean probability of release) and var_s p (spatial variance in p) using a method that employs counts of MEPPs per unit time. Fluctuations in the estimates were reduced using a moving-bin technique (bin size = 3 s, Δbin = 1 s). Muscle contraction was prevented using low-Ca²⁺, high-Mg²⁺ Ringer or normal Ringer to which μ-conotoxin GIIIA was added. These studies showed that: (1) the post-stimulation increase in transmitter release was dependent on stimulation frequency and not on the total number of stimulus impulses; when the total number of pulses was kept constant, the high-frequency pattern produced a higher level of transmitter release than did the lower-frequency patterns; (2) augmentation and potentiation were present in both low-Ca²⁺, high-Mg²⁺ and normal Ringer solutions, but potentiation, m, n, p and var_s p were greater in normal Ringer solution than in low-Ca²⁺, high-Mg²⁺ solution; in low-Ca²⁺, high-Mg²⁺ solution, there was a larger decrease in n compared to p; (3) hypertonicity (addition of 100 mM sucrose) produced a marked increase in both basal and stimulation-induced values of m, n, and p; by contrast, there was a marked increase in the stimulation-induced but not the basal values of var_s p; (4) hypertonicity produced a decrease in augmentation but had no effect on potentiation; (5) augmentation and potentiation appeared to involve mitochondrial uptake and efflux of cytoplasmic Ca²⁺; tetraphenylphosphonium (which blocks mitochondrial Ca²⁺ efflux and uptake) decreased augmentation and potentiation in low-Ca²⁺, high-Mg²⁺ solutions but increased potentiation in the same solution made hypertonic with 100 mM sucrose; (6) the overall findings suggest that this new method may be useful for investigating the subcellular dynamics of transmitter release following nerve stimulation.
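The quantal parameters m, n and p can be illustrated with a method-of-moments estimate from counts of MEPPs per unit time, evaluated over moving bins as described above. The simulated counts and the estimator below are an illustrative reconstruction, not the paper's exact procedure (which also yields the spatial variance in p):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated MEPP counts in 0.1-s sub-intervals over two minutes, as if n
# release sites each released a quantum with probability p per sub-interval
# (values are illustrative, not recorded data).
true_n, true_p, dt = 40, 0.2, 0.1
counts = rng.binomial(true_n, true_p, size=int(120 / dt))

def quantal_estimates(c):
    """Method-of-moments estimates assuming binomial release statistics."""
    m = c.mean()                          # mean quanta per sub-interval
    p = 1.0 - c.var(ddof=1) / m           # binomial: variance = m * (1 - p)
    n = m / p if p > 0 else float("nan")  # number of functional release sites
    return m, n, p

# Moving bin: 3-s windows (30 sub-intervals) advanced in 1-s steps; only the
# first few windows are printed here, but in practice the bin slides over the
# whole record to track changes after stimulation.
win, step = int(3 / dt), int(1 / dt)
for start in range(0, 5 * step, step):
    m_hat, n_hat, p_hat = quantal_estimates(counts[start:start + win])
    print(f"t = {start * dt:4.1f} s  m = {m_hat:.2f}  n = {n_hat:5.1f}  p = {p_hat:.2f}")
```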
179

A COMPUTATIONAL STUDY OF PATCH IMPLANTATION AND MITRAL VALVE MECHANICS

Singh, Dara 01 January 2019 (has links)
Myocardial infarction (i.e., a heart attack) is the most common heart disease in the United States. Mitral valve regurgitation, or the backflow of blood from the left ventricle into the atrium, is one of the complications associated with myocardial infarction. In this dissertation, a validated model of a sheep heart that has suffered myocardial infarction is employed to study mitral valve regurgitation. The model was rebuilt using knowledge of the geometrical changes captured with MRI and is assigned anisotropic, inhomogeneous, nearly incompressible and highly nonlinear material properties. Patch augmentation was performed on its anterior leaflet, using a simplified approach, and on its posterior leaflet, using a more realistic approach. In the latter finite element simulation, we virtually installed an elliptical patch within the central portion of the posterior leaflet; to the best of the author's knowledge, this type of simulation has not been performed previously. In another simulation, the effect of a patch within the anterior leaflet was evaluated. The results from the two surgical simulations show that patch implantation helps the free edges of the leaflets come closer to one another, which leads to improved coaptation. Additionally, the changes in chordal force distributions are reported. Finally, this study answers several questions regarding mitral valve patch augmentation surgery and emphasizes the importance of further investigation of the influence of patch positioning and material properties on key outcomes. The ultimate goal is to use the proposed techniques to assess patient-specific human models.
180

Text Augmentation: Inserting markup into natural language text with PPM Models

Yeates, Stuart Andrew January 2006 (has links)
This thesis describes a new optimisation and new heuristics for automatically marking up XML documents using PPM models, together with CEM, a Java implementation. CEM is significantly more general than previous systems, marking up large numbers of hierarchical tags, using n-gram models for large n, and supporting a variety of escape methods. Four corpora are discussed, including a bibliography corpus of 14,682 bibliographies laid out in seven standard styles using the BibTeX system and marked up in XML with every field from the original BibTeX. The other corpora are the ROCLING Chinese text segmentation corpus, the Computists' Communique corpus and the Reuters corpus. A detailed examination is presented of the methods of evaluating markup algorithms, including computational complexity measures and correctness measures from the fields of information retrieval, string processing, machine learning and information theory. A new taxonomy of markup complexities is established, and the properties of each taxon are examined in relation to the complexity of marked-up documents. The performance of the new heuristics and the optimisation is examined using the four corpora.
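The tagging idea, choosing for a span of text the tag whose character-level model predicts it best, can be illustrated with a much simpler stand-in for PPM: a fixed-order character model with add-one smoothing in place of PPM's escape mechanism. The field names and training strings below are invented, and this sketches only the general approach, not CEM:

```python
import math
from collections import defaultdict

ORDER = 2  # context length of the character model

def train(texts):
    """Count characters following each length-2 context for one tag."""
    counts = defaultdict(lambda: defaultdict(int))
    for text in texts:
        padded = "~" * ORDER + text
        for i in range(ORDER, len(padded)):
            counts[padded[i - ORDER:i]][padded[i]] += 1
    return counts

def log_prob(model, text, alphabet_size=96):
    """Log-probability of text under the model with add-one smoothing,
    a crude stand-in for PPM's escape methods."""
    padded = "~" * ORDER + text
    lp = 0.0
    for i in range(ORDER, len(padded)):
        ctx, ch = padded[i - ORDER:i], padded[i]
        total = sum(model[ctx].values())
        lp += math.log((model[ctx][ch] + 1) / (total + alphabet_size))
    return lp

# Hypothetical training data: strings already marked up with a known field.
models = {
    "author": train(["Knuth, D. E.", "Lamport, L."]),
    "year": train(["1986", "1994"]),
}

span = "1998"
best = max(models, key=lambda tag: log_prob(models[tag], span))
print(best)  # the tag whose character model explains the span best
```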
