291

Human visual-vestibular interactions during postural responses to brief falls

Wicke, Roger William January 1980
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980. Microfiche copy available in Archives and Engineering. Bibliography: leaves 263-276. By Roger William Wicke. Ph.D.
292

Quantification des processus responsables de l’accélération des glaciers émissaires par méthodes inverses / Quantifying the processes at the root of the observed acceleration of icestreams from inverse methods

Mosbeux, Cyrille 05 December 2016
Current global warming has a direct consequence on ice-sheet mass loss. Reproducing the mechanisms responsible for this loss and forecasting the ice sheets' contribution to 21st-century sea-level rise is one of the major challenges in ice-sheet and ice-flow modelling. Ice-flow models are now routinely used for such forecasts, but these short-term simulations are very sensitive to the model's initial state, which is usually built from field observations. Some parameters, however, such as the basal friction between the ice sheet and the bedrock and the basal topography, remain poorly known because of the lack of direct observations or the large uncertainties attached to the measurements. Improving the knowledge of these two parameters for both Greenland and Antarctica is therefore a prerequisite for reliable projections; data assimilation and inverse methods have been developed to overcome this problem.
This thesis presents two data assimilation algorithms that simultaneously constrain the basal friction and the bedrock elevation from surface observations. The first algorithm is entirely based on the adjoint method, while the second is a cyclic method coupling the inversion of basal friction with the adjoint method and the inversion of bedrock topography with a nudging (Newtonian relaxation) method. Both algorithms have been implemented in the finite-element ice-flow model Elmer/Ice and tested in a twin experiment, showing a clear improvement in the knowledge of both parameters. Applying the two algorithms to the Wilkes Land region of Antarctica reduces the uncertainty on basal conditions, for instance by providing more detail on the bedrock geometry than the usual digital elevation models. Moreover, the simultaneous reconstruction of bedrock elevation and basal friction significantly decreases the ice-flux divergence anomalies usually obtained when only friction is inverted. Finally, we study the impact of the inverted basal conditions on prognostic simulations in order to compare how well the two algorithms constrain the future contribution of the ice sheets to sea-level rise.
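The friction inversion and the twin-experiment validation can be pictured with a much smaller toy problem. The sketch below (Python) assumes a hypothetical local sliding law u = tau_d / beta instead of the full-Stokes equations solved by Elmer/Ice, and all variable names and values are illustrative: it generates synthetic surface velocities from a known friction field and then recovers that field by gradient descent on the observation misfit, which is the role the adjoint model plays in the thesis.

    # Toy twin experiment: invert a basal friction coefficient from surface velocities.
    # Assumes a hypothetical local sliding law u = tau_d / beta, NOT the Elmer/Ice model.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x = np.linspace(0.0, 100e3, n)                        # along-flow coordinate (m)
    tau_d = 1.0e5 * (1.0 + 0.3 * np.sin(2e-4 * x))        # prescribed driving stress (Pa)
    beta_true = 5.0e3 * (1.0 + 0.5 * np.cos(1e-4 * x))    # "true" friction coefficient

    # Step 1: generate synthetic, noisy surface-velocity observations.
    u_obs = tau_d / beta_true + rng.normal(0.0, 0.5, n)

    # Step 2: recover beta by gradient descent on J = 0.5 * sum((u - u_obs)^2);
    # the gradient comes from the chain rule, which is what the adjoint model provides
    # for the full-Stokes problem. Inverting log(beta) keeps the friction positive.
    log_beta = np.full(n, np.log(4.0e3))                  # first guess
    lr = 5e-4
    for _ in range(2000):
        u = tau_d * np.exp(-log_beta)                     # forward model
        grad = (u - u_obs) * (-u)                         # dJ/dlog(beta)
        log_beta -= lr * grad

    beta_inv = np.exp(log_beta)
    print("relative error of recovered friction:",
          np.linalg.norm(beta_inv - beta_true) / np.linalg.norm(beta_true))
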
293

Transporte, escape de partículas e propriedades dinâmicas de mapeamentos não lineares / Transport, escape of particles and dynamical properties for non-linear mappings

Diogo Ricardo da Costa 28 February 2014
We investigate dynamical and transport properties for sets of non-interacting classical particles in several physical systems. Most of the systems described here present a mixed structure in phase space, in the sense that invariant spanning curves, chaotic seas and periodic islands coexist. The dynamics of each model is described by a non-linear discrete mapping. We show in detail how the mappings are constructed and discuss some of their dynamical properties, including the stability of fixed points. Lyapunov exponents are used to characterize the chaotic regions of phase space. Scaling hypotheses are used to show that certain observables, such as the average energy along the chaotic sea, are scale invariant. We also consider that when a particle, or an ensemble of particles, reaches a certain region of phase space it may escape. When studying the escape, we find that the histogram of the number of particles reaching a given height (or energy) h in phase space at iteration n, which we observe to be scale invariant, grows quickly until reaching a maximum and then tends to zero for large n. When the height h is varied proportionally to the position of the first invariant spanning curve, the scale invariance of the histogram is confirmed; the same holds for the survival probability of a particle in the chaotic dynamics.
In this context we address the following problems: (1) a sinusoidally corrugated waveguide; (2) a family of two-dimensional Hamiltonian mappings that recovers several models and scaling exponents; (3) particles confined in a box with infinite potential walls containing a periodically time-dependent potential well; (4) an oval billiard with time dependence introduced through rotation, which under certain conditions does not show unbounded energy growth (Fermi acceleration) and is therefore a possible counterexample to the LRA conjecture. This thesis summarizes eight papers published in international journals.
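To make the quantities above concrete, the sketch below uses the Chirikov standard map as a stand-in for the family of two-dimensional mappings studied in the thesis; the map, the parameter K, the escape height h and the initial conditions are illustrative assumptions, not the thesis models. It estimates the largest Lyapunov exponent by iterating the tangent map, and the survival probability P(n) of an ensemble that escapes once its action exceeds h.

    # Sketch: largest Lyapunov exponent and survival probability for a 2D mapping.
    # Uses the Chirikov standard map as an assumed stand-in for the thesis mappings.
    import numpy as np

    K = 1.5  # nonlinearity parameter; a large chaotic sea is present

    def step(I, theta):
        I_new = I + K * np.sin(theta)
        theta_new = (theta + I_new) % (2.0 * np.pi)
        return I_new, theta_new

    # --- Largest Lyapunov exponent via tangent-vector iteration ---
    I, theta = 0.0, 0.1          # initial condition in the chaotic layer
    v = np.array([1.0, 0.0])
    lyap_sum, n_iter = 0.0, 100_000
    for _ in range(n_iter):
        # Jacobian of the map at the current point, applied to the tangent vector
        J = np.array([[1.0, K * np.cos(theta)],
                      [1.0, 1.0 + K * np.cos(theta)]])
        v = J @ v
        norm = np.linalg.norm(v)
        lyap_sum += np.log(norm)
        v /= norm
        I, theta = step(I, theta)
    print("largest Lyapunov exponent ~", lyap_sum / n_iter)  # positive => chaos

    # --- Survival probability: a particle escapes once |I| exceeds the height h ---
    h = 3.0
    n_particles, n_max = 10_000, 10_000
    rng = np.random.default_rng(1)
    I = np.zeros(n_particles)
    theta = rng.uniform(-0.5, 0.5, n_particles)   # ensemble started in the chaotic sea
    alive = np.ones(n_particles, dtype=bool)
    survivors = np.empty(n_max)
    for n in range(n_max):
        I[alive], theta[alive] = step(I[alive], theta[alive])
        alive &= np.abs(I) < h
        survivors[n] = alive.mean()               # survival probability P(n)
    print("survival probability after", n_max, "iterations:", survivors[-1])
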
294

Multi-Sensor Based Activity Recognition: Development and Validation in Real-Life Context / Développement d'un dispositif de reconnaissance de quantification de l'activité physique et d'estimation de la dépense énergétique à partir de la fusion de capteurs.

Abdul Rahman, Hala 20 December 2017
No abstract provided.
295

Algorithm and Hardware Design for Efficient Deep Learning Inference

January 2018
abstract: Deep learning (DL) has proved itself to be one of the most important developments to date, with far-reaching impacts in numerous fields such as robotics, computer vision, surveillance, speech processing, machine translation, and finance. Deep networks are now widely used for countless applications because of their ability to generalize from real-world data, their robustness to noise in previously unseen data, and their high inference accuracy. With the ability to learn useful features from raw sensor data, deep learning algorithms have outperformed traditional AI algorithms and pushed the boundaries of what can be achieved with AI. In this work, we demonstrate the power of deep learning by developing a neural network that automatically detects cough instances from audio recorded in unconstrained environments. For this, 24-hour-long recordings from 9 different patients were collected and carefully labeled by medical personnel. A pre-processing algorithm is proposed to convert the event-based cough dataset into a more informative dataset with the start and end of each cough, and data augmentation is introduced to regularize the training procedure. The proposed neural network achieves 92.3% leave-one-out accuracy on data captured in the real world.

Deep neural networks are composed of multiple layers that are compute- and memory-intensive, which makes it difficult to execute these algorithms in real time at low power on existing general-purpose computers. In this work, we propose hardware accelerators for a traditional AI algorithm based on random forests and for two representative deep convolutional neural networks (AlexNet and VGG). With the proposed acceleration techniques, ~30x performance improvement was achieved over a CPU for random forests. For deep CNNs, we demonstrate that much higher performance can be achieved through architecture space exploration, using an optimization algorithm that takes system-level performance and area models of the hardware primitives as inputs, with the goal of minimizing latency under given resource constraints. With this method, ~30 GOPS performance was achieved on Stratix V FPGA boards.

Hardware acceleration of DL algorithms alone is not always the most efficient way, nor sufficient, to achieve the desired performance. There is a large headroom for performance improvement when the algorithms are designed with hardware limitations and bottlenecks in mind. This work achieves hardware-software co-optimization for the Non-Maximal Suppression (NMS) algorithm using the proposed algorithmic changes and hardware architecture.

With CMOS scaling coming to an end and memory bandwidth becoming an increasing bottleneck, CMOS-based systems may not scale well enough to accommodate the requirements of deeper and more complex neural networks in the future. In this work, we explore RRAM crossbars and arrays as a compact, high-performance and energy-efficient alternative to CMOS accelerators for deep learning training and inference. We propose and implement RRAM peripheral read and write circuits and achieve ~3000x performance improvement in online dictionary learning compared to a CPU. This work also examines realistic RRAM devices and their non-idealities, with an in-depth study of their effects on inference accuracy when a pretrained model is mapped to RRAM-based accelerators. To mitigate this issue, we propose Random Sparse Adaptation (RSA), a novel scheme that tunes the model to compensate for the faults of the RRAM array onto which it is mapped. The proposed method achieves much higher inference accuracy than the traditional Read-Verify-Write (R-V-W) method and recovers lost inference accuracy 100x to 1000x faster than R-V-W. Using 32-bit high-precision RSA cells, we achieved ~10% higher accuracy on faulty RRAM arrays than can be achieved by mapping a deep network to a 32-level RRAM array with no variations. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2018
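For reference, a minimal, generic greedy Non-Maximal Suppression routine is sketched below (a textbook formulation in Python; the dissertation's hardware-software co-optimized variant is not described here, so this is only a baseline illustration, and the box values are made up).

    # Minimal greedy Non-Maximal Suppression (NMS) over axis-aligned boxes.
    # Standard formulation; the dissertation's hardware-oriented variant may differ.
    import numpy as np

    def nms(boxes, scores, iou_threshold=0.5):
        """boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices of kept boxes."""
        x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
        areas = (x2 - x1) * (y2 - y1)
        order = scores.argsort()[::-1]          # highest score first
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(int(i))
            # Intersection of the selected box with the remaining boxes
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
            iou = inter / (areas[i] + areas[order[1:]] - inter)
            # Keep only boxes whose overlap with the selected box is below threshold
            order = order[1:][iou <= iou_threshold]
        return keep

    boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 150, 150]], dtype=float)
    scores = np.array([0.9, 0.8, 0.7])
    print(nms(boxes, scores))   # -> [0, 2]; the near-duplicate box 1 is suppressed
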
296

Programas de aceleração de startups: análise comparativa dos mecanismos de aceleração dos programas Start-Up Brasil e Start-Up Chile / Startup acceleration programs: a comparative analysis of the acceleration mechanisms of the Start-Up Brasil and Start-Up Chile programs

Zortea, Carla Giovana Ceron 31 May 2016
Startups have increased interest in entrepreneurship, being seen as an alternative for achieving success, money and prestige through creative ventures with great potential for profitability but also high risk. There is growing recognition of the importance of fostering the creation of technology-based companies as an essential element for developing countries, stimulating competitiveness and creativity in markets through innovation. According to the OECD (2015b), this ecosystem includes a number of factors such as regulation, access to investment, appropriate market conditions, entrepreneurial culture, and the production and dissemination of knowledge. This study aims to analyze and compare the acceleration mechanisms of two government programs for encouraging startups, Start-Up Brasil and Start-Up Chile, to contribute to their evolution and, as far as possible, to studies on fostering entrepreneurship in emerging markets. The research strategy was a multiple-case study, with the programs analyzed as individual cases.
Data were collected through documentary research, interviews with entrepreneurs who went through each program's acceleration process, and interviews with members of the programs' acceleration teams. The case narratives triangulate the data and show the parties' different understandings of the mechanisms used by the programs. A cross-case analysis was then performed, in which the programs' contributions were compared and suggestions for their evolution were listed. The results show that the acceleration mechanisms adopted by the programs support startup performance and that Start-Up Brasil and Start-Up Chile generally played important roles in the trajectories of these companies. After analyzing the programs' modes of operation and the improvements suggested by the interviewees, suggestions were made for each of their mechanisms.
297

Slow Harmonic Acceleration: Normal Values & the Effect of Age

Akin, Faith W., Riska, Kristal M., Murnane, Owen D. 21 November 2014
The purpose of this session is to present normative data for slow harmonic acceleration (SHA) on rotary chair in healthy individuals ranging in age from 18 to 72 years. Eye movement was recorded using video-oculography during SHA at frequencies of 0.01, 0.04, 0.016, and 0.64 Hz.
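Gain and phase are the standard outcome measures of SHA rotary chair testing. The sketch below illustrates, on synthetic signals, one common way to estimate them at a single stimulus frequency by least-squares sinusoid fitting; the signals, sampling rate, and fitting routine are assumptions for illustration, not the authors' analysis pipeline.

    # Sketch: estimating VOR gain and phase at one SHA frequency by least-squares
    # sinusoid fitting. Synthetic signals stand in for recorded chair and eye velocity.
    import numpy as np

    fs, f, dur = 100.0, 0.04, 100.0            # sample rate (Hz), stimulus freq (Hz), duration (s)
    t = np.arange(0.0, dur, 1.0 / fs)

    # Assumed synthetic data: chair velocity and compensatory slow-phase eye velocity.
    chair_vel = 50.0 * np.sin(2 * np.pi * f * t)                       # deg/s
    eye_vel = -30.0 * np.sin(2 * np.pi * f * t + np.deg2rad(5.0))      # 5 deg phase lead
    eye_vel += np.random.default_rng(0).normal(0.0, 2.0, t.size)       # measurement noise

    def fit_sinusoid(sig, t, f):
        """Least-squares fit of a*sin + b*cos at frequency f; returns amplitude, phase (rad)."""
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        (a, b, _), *_ = np.linalg.lstsq(A, sig, rcond=None)
        return np.hypot(a, b), np.arctan2(b, a)

    amp_c, ph_c = fit_sinusoid(chair_vel, t, f)
    amp_e, ph_e = fit_sinusoid(-eye_vel, t, f)       # invert eye velocity: VOR is compensatory
    gain = amp_e / amp_c
    phase_deg = np.rad2deg((ph_e - ph_c + np.pi) % (2 * np.pi) - np.pi)
    print(f"gain = {gain:.2f}, phase = {phase_deg:.1f} deg")   # ~0.60 gain, ~5 deg lead
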
298

Normative Data and Reliability of Slow Harmonic Acceleration and Velocity Step Rotary Chair Protocols

Barnard, Emily, Riska, Kristal M., Akin, Faith W., Murnane, Owen D. 21 November 2014
The purpose of this poster session is to present normative data and test-retest reliability for slow harmonic acceleration (SHA) and velocity step testing (VST) on rotary chair in healthy individuals.
299

The Clinical Utility of Slow Harmonic Acceleration as Compared to Caloric Irrigations

Riska, Kristal M., Akin, Faith W., Murnane, Owen D. 21 November 2014
The purpose of this session is to examine the clinical utility of slow harmonic acceleration (SHA) during rotary chair testing by comparing the response characteristics of SHA to caloric irrigations in 800 patients with dizziness.
300

Advanced optimization and sampling techniques for biomolecules using a polarizable force field

Litman, Jacob Mordechai 01 May 2019
Biophysical simulation can be an excellent complement to experimental techniques, but there are unresolved practical constraints to simulation. While computers have continued to improve, the scale of the systems we wish to study has continued to increase. This has driven the use of approximate energy functions (force fields), compensating for relatively short simulations via careful structure preparation and accelerated sampling techniques. To address structure preparation, we developed the many-body dead-end elimination (MB-DEE) optimizer. We first validated the MB-DEE algorithm on a set of PCNA crystal structures, and accelerated it on GPUs to optimize 472 homology models of proteins implicated in inherited deafness. Advanced physics has been clearly demonstrated to help optimize structures, and with GPU acceleration this becomes practical for large numbers of structures. We also present the novel “simultaneous bookending” algorithm, a new approach to indirect free energy (IFE) methods. These methods first perform simulations under a cheaper “reference” potential, then correct the thermodynamics to a more sophisticated “target” potential, combining the speed of the reference potential with the accuracy of the target potential. Simultaneous bookending is shown to be a valid IFE approach, and methods to realize speedups over the direct path are discussed. Finally, we are developing the Monte Carlo Orthogonal Space Random Walk (MC-OSRW) algorithm for high-performance alchemical free energy simulations, bypassing some of the difficulty of OSRW methods. This work helps prevent inaccuracies caused by simpler electrostatic models by making advanced polarizable force fields more accessible for routine simulation.
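The reference-to-target correction that underlies indirect free energy (IFE) methods can be illustrated with a one-dimensional toy system: sample a cheap "reference" potential with Metropolis Monte Carlo, then estimate the free energy difference to a "target" potential with the Zwanzig exponential average, dA = -kT ln<exp(-dU/kT)> taken over the reference ensemble. The sketch below uses assumed harmonic and quartic potentials and is a conceptual illustration only, not the simultaneous bookending implementation or the polarizable force fields used in the thesis.

    # Toy indirect free energy correction: sample a cheap reference potential,
    # then reweight to a target potential with the Zwanzig (exponential averaging) formula.
    # One-dimensional, assumed potentials; a sketch of the idea, not the thesis method.
    import numpy as np

    kT = 1.0

    def U_ref(x):                      # cheap "reference" potential (harmonic)
        return 0.5 * x**2

    def U_tgt(x):                      # more expensive "target" potential (anharmonic)
        return 0.5 * x**2 + 0.3 * x**4

    # --- Metropolis Monte Carlo sampling under the reference potential ---
    rng = np.random.default_rng(0)
    x, samples = 0.0, []
    for step in range(100_000):
        x_new = x + rng.normal(0.0, 0.5)
        if rng.random() < np.exp(-(U_ref(x_new) - U_ref(x)) / kT):
            x = x_new
        if step >= 10_000:             # discard burn-in
            samples.append(x)
    samples = np.array(samples)

    # --- Zwanzig free energy difference: reference -> target ---
    dU = U_tgt(samples) - U_ref(samples)
    dA_est = -kT * np.log(np.mean(np.exp(-dU / kT)))

    # Numerically exact answer from the 1-D partition functions, for comparison
    grid = np.linspace(-10.0, 10.0, 20_001)
    dx = grid[1] - grid[0]
    Z_ref = np.sum(np.exp(-U_ref(grid) / kT)) * dx
    Z_tgt = np.sum(np.exp(-U_tgt(grid) / kT)) * dx
    dA_exact = -kT * np.log(Z_tgt / Z_ref)

    print(f"Zwanzig estimate: {dA_est:.3f}   exact: {dA_exact:.3f}")
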
