1

Merging of Diverse Encrypted PCM Streams

Duffy, Harold A.
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / The emergence of encrypted PCM as a standard within DOD makes it possible to correct time skews between diverse data sources. Time alignment of the data streams can be performed before decryption and is therefore independent of the specific data format. Data quality assessment for best-source selection remains problematic, but workable.
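The format-independence claim rests on a simple observation: two receivers of the same encrypted PCM stream carry byte-identical ciphertext, so their relative skew can be found by exact-match correlation without decrypting anything. A minimal sketch of that idea follows; the function name and the brute-force lag search are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def estimate_skew(a, b, max_lag):
    """Estimate the lag of stream b relative to stream a, in samples.

    Slides b against a over [-max_lag, max_lag] and scores each lag by
    the fraction of byte-exact matches; encrypted data is effectively
    random, so only the true alignment scores near 1.0.
    """
    best_lag, best_score = 0, -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[: len(a) - lag]
        else:
            x, y = a[:lag], b[-lag:]
        n = min(len(x), len(y))
        score = float(np.mean(x[:n] == y[:n]))
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag
```

Because scoring relies only on ciphertext equality, the same routine works for any payload format, which is the property the abstract highlights.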
2

Statistical Models and Algorithms for Studying Hand and Finger Kinematics and their Neural Mechanisms

Castellanos, Lucia, 01 August 2013
The primate hand, a biomechanical structure with over twenty kinematic degrees of freedom, has an elaborate anatomical architecture. Although the hand requires complex, coordinated neural control, it endows its owner with an astonishing range of dexterous finger movements. Despite a century of research, however, the neural mechanisms that enable finger and grasping movements in primates are largely unknown. In this thesis, we investigate statistical models of finger movement that can provide insights into the mechanics of the hand, and that can have applications in neural-motor prostheses, enabling people with limb loss to regain natural function of the hands. There are many challenges associated with (1) the understanding and modeling of the kinematics of fingers, and (2) the mapping of intracortical neural recordings into motor commands that can be used to control a Brain-Machine Interface. These challenges include: potential nonlinearities; confounded sources of variation in experimental datasets; and dealing with high degrees of kinematic freedom. In this work we analyze kinematic and neural datasets from repeated-trial experiments of hand motion, with the following contributions:
  • Static, nonlinear, low-dimensional representations of grasping finger motion, with accompanying evidence that these nonlinear representations are better than linear representations at predicting the type of object being grasped over the course of a reach-to-grasp movement. In addition, we show evidence of better encoding of these nonlinear (versus linear) representations in the firing of some neurons recorded from the primary motor cortex of rhesus monkeys.
  • A functional alignment of grasping trajectories, based on total kinetic energy, as a strategy to account for temporal variation and to exploit the repeated-trial experiment structure.
  • An interpretable model for extracting dynamic synergies of finger motion, based on Gaussian Processes, that decomposes and reduces the dimensionality of variance in the dataset. We derive efficient algorithms for parameter estimation, show accurate reconstruction of grasping trajectories, and illustrate the interpretation of the model parameters.
  • Sound evidence of single-neuron decoding of interpretable grasping events, plus insights into the amount of grasping information extractable from a single neuron.
  • The Laplace Gaussian Filter (LGF), a deterministic approximation to the posterior mean that is more accurate than Monte Carlo approximations at the same computational cost, and that in an off-line decoding task is more accurate than the standard Population Vector Algorithm.
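The core idea behind the LGF — replacing Monte Carlo sampling with a deterministic Laplace approximation, in which the posterior mean is approximated by the posterior mode found via optimization — can be illustrated on a toy one-dimensional model. The Poisson-likelihood/Gaussian-prior setup below is invented for illustration and is much simpler than the thesis's filtering problem.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_post(x, y, prior_var):
    # Negative log-posterior (up to a constant) for a toy model:
    # spike count y ~ Poisson(exp(x)), latent x ~ Normal(0, prior_var).
    return np.exp(x) - y * x + 0.5 * x**2 / prior_var

def laplace_posterior_mean(y, prior_var=1.0):
    # Laplace approximation: approximate the posterior mean by the MAP
    # estimate, obtained by deterministic optimization rather than by
    # Monte Carlo sampling.
    res = minimize(lambda x: neg_log_post(x[0], y, prior_var), x0=[0.0])
    return float(res.x[0])
```

The appeal in a decoding loop is that this optimization runs in constant, predictable time per step, whereas a sampling approximation of comparable accuracy needs many draws.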
3

Unsupervised web data extraction using a format-independent approach

Porto, André Luiz Lopes, 17 November 2015
In this thesis we propose a new method for extracting data from data-rich Web pages that uses only the textual content of those pages. Our method, called FIEX (Format Independent Web Data Extraction), is based on information extraction techniques for text segmentation, and can extract data from Web pages where state-of-the-art methods based on data alignment fail due to inconsistency between the logical structure of the pages and the conceptual structure of the data represented in them. Unlike methods previously proposed in the literature, FIEX can extract data using only the textual content of a Web page in challenging scenarios such as severe cases of compound textual elements, in which several values of interest for extraction are represented by a single HTML element.
To perform the extraction, FIEX relies on techniques for noise elimination based on information redundancy, and on an information extraction method for text segmentation known in the literature as ONDUX (On-Demand Unsupervised Learning for Information Extraction). In our experiments, we used several collections of Web pages from different product domains and e-commerce stores, with the goal of extracting data from product descriptions. We chose this type of Web page because a large share of its data occurs in severe cases of compound textual elements. The results obtained across these product domains and e-commerce stores validate the hypothesis that extraction based only on textual features is possible and effective.
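The text-segmentation idea that FIEX builds on — labeling pieces of raw text by matching them against vocabularies of known attribute values, as in ONDUX-style matching — can be sketched in a few lines. The attribute names and vocabularies below are invented for illustration and are not FIEX's actual knowledge base.

```python
# Toy text segmentation: label each token of a product description by
# matching it against per-attribute vocabularies of known values.
# A compound element like "Samsung Galaxy 128GB Black" arrives as one
# text blob, yet individual values can still be recovered token-wise.
VOCAB = {
    "brand": {"samsung", "sony", "lg"},
    "storage": {"64gb", "128gb", "256gb"},
    "color": {"black", "white", "blue"},
}

def segment(text):
    labels = []
    for token in text.lower().split():
        label = next(
            (attr for attr, values in VOCAB.items() if token in values),
            "other",
        )
        labels.append((token, label))
    return labels
```

Real systems refine these initial matches with positional and sequential models; the point of the sketch is only that segmentation needs no HTML structure at all.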
4

Development of bio-analytical methods for the quantitative and qualitative analysis of labelled peptides and proteins via hyphenation of chromatography and mass spectrometry

Holste, Angela Sarah, 24 February 2014
This PhD thesis was a Cotutelle between the Université de Pau et des Pays de l'Adour (UPPA) in Pau, France, and the Christian-Albrechts University (CAU) in Kiel, Germany. In the course of this international collaboration, bio-analytical methods for the quantitative and qualitative analysis of labelled peptides and proteins were developed, based on the hyphenation of chromatography with mass spectrometry. Peptides and protein digests were lanthanide-labelled using DOTA-based compounds according to an optimised protocol. Separation at the peptide level was performed using IP-RP-nanoHPLC. Complementary data sets were acquired using MALDI-MS for identification and ICP-MS for quantification. In this context, an online pre-cleaning step was developed and implemented in the nanoHPLC separation routine, which allowed effective removal of excess reagents. This led to lower metal backgrounds during ICP-MS measurements and thus better data interpretability, while keeping peptide recovery at a maximum. An alternative offline purification using solid-phase extraction (SPE) resulted in substantial peptide losses and can be considered unsuitable for quantitative analysis. Additives to the nanoHPLC eluents, such as HFBA and EDTA, were tested and not deemed beneficial for the analysis of normal peptide samples; HFBA may be reconsidered for special application to very hydrophilic peptide species.
A set of labelled peptides was developed which, applied in known quantities, could be employed for quick and simple quantification of a low-complexity digest sample. In addition, this peptide set allowed reliable superposition of chromatograms, enabling sample comparability, especially between complementary ICP-MS and MALDI-MS data. Experiments applying fsLA-ICP-MS to MALDI-MS target plates were conducted and showed very promising results: samples already identified by MALDI-MS were remeasured using fsLA-ICP-MS. First quantification attempts on the modified steel target plate were successful and within the expected range. Adjusted MALDI-MS parameters allowed proper peptide identification.
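The quantification principle behind spiking known amounts of labelled peptide can be reduced to a signal-ratio calculation: if the spike and the analyte carry the same elemental label, the analyte amount scales the spike amount by the ratio of their ICP-MS signals. The sketch below assumes an equal per-label elemental response for both peptides; the function and the numbers are illustrative, not the thesis's actual calibration procedure.

```python
def quantify(analyte_signal, spike_signal, spike_amount_fmol):
    """Toy internal-standard quantification for label-based ICP-MS.

    A known amount of labelled spike peptide is co-injected; assuming
    equal elemental response per mole of label, the analyte amount is
    the spike amount scaled by the signal ratio.
    """
    return spike_amount_fmol * analyte_signal / spike_signal
```

In practice, differences in labelling efficiency and recovery between spike and analyte would have to be calibrated out, which is part of what the optimised protocol addresses.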
