321

Parallel Distributed Processing of Realtime Telemetry Data

Murphy, Donald P. October 1987
International Telemetering Conference Proceedings / October 26-29, 1987 / Town and Country Hotel, San Diego, California / An architecture is described for processing multiple digital PCM telemetry streams. This architecture is implemented using a collection of Motorola mono-board microprocessor units (MPUs) in a single chassis called an Intermediate Processing Unit (IPU). Multiple IPUs can be integrated using a common input data bus. Each IPU is capable of processing a single PCM digital telemetry stream. Processing, in this context, includes conversion of raw sample count data to engineering units; computation of derived quantities from measurement sample data; calculation of minimum, maximum, average and cyclic [(maximum - minimum)/2] values for both measurement and derived data over a preselected time interval; out-of-limit, dropout and wildpoint detection; strip-chart recording of selected data; transmission of both measurement and derived data to a high-speed, large-capacity disk storage subsystem; and transmission of compressed data to the host computer for realtime processing and display. All processing is done in realtime with at most two PCM major frames of time latency.
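The per-interval statistics described above (minimum, maximum, average, and the cyclic value [(maximum - minimum)/2]) and the conversion of raw sample counts to engineering units can be sketched as follows. This is a minimal illustration only: the linear calibration form and the gain/offset values are assumptions, not taken from the paper.

```python
import numpy as np

def to_engineering_units(raw_counts, gain, offset):
    """Convert raw sample counts to engineering units.

    Assumes a simple linear calibration EU = gain * counts + offset;
    the actual IPU calibration scheme is not specified here.
    """
    return gain * np.asarray(raw_counts, dtype=float) + offset

def interval_stats(samples):
    """Min, max, average and cyclic [(max - min)/2] over one interval."""
    samples = np.asarray(samples, dtype=float)
    lo, hi = samples.min(), samples.max()
    return {"min": lo, "max": hi,
            "avg": samples.mean(),
            "cyclic": (hi - lo) / 2.0}

# Illustrative values: five raw counts from one preselected interval.
counts = [512, 530, 498, 541, 505]
eu = to_engineering_units(counts, gain=0.01, offset=-2.0)
stats = interval_stats(eu)
```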
322

The Randomized Kaczmarz Method with Application on Making Macroeconomic Predictions

Wan, Dejun 01 January 2016
This paper will demonstrate the principles and important facts of the randomized Kaczmarz algorithm as well as its extended version proposed by Zouzias and Ferris. Through the analysis made by Strohmer and Vershynin as well as Needell, it can be shown that the randomized Kaczmarz method is theoretically applicable in solving over-determined linear systems with or without noise. The extension of the randomized Kaczmarz algorithm further applies to the linear systems with non-unique solutions. In the experiment section of this paper, we compare the accuracies of the algorithms discussed in the paper in terms of making real-world macroeconomic analyses and predictions. The extended randomized Kaczmarz method outperforms both the randomized Kaczmarz method and the randomized Gauss-Seidel method on our data sets.
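As a rough sketch of the randomized Kaczmarz iteration analyzed by Strohmer and Vershynin — sample a row with probability proportional to its squared norm, then project the current iterate onto that row's hyperplane — the following minimal implementation is illustrative; the test system and parameter choices are assumptions, not the paper's data.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve a consistent overdetermined system Ax = b.

    Each step picks row i with probability ||a_i||^2 / ||A||_F^2
    and projects the iterate onto the hyperplane a_i . x = b_i.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)   # squared row norms
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Illustrative consistent system: 200 equations, 10 unknowns.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
```

For noisy or inconsistent systems the plain iteration stalls at an error floor, which is what motivates the extended (Zouzias-Freris) variant discussed in the paper.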
323

BACKWARD PROPAGATION BASED ALGORITHMS FOR HIGH-PERFORMANCE IMAGE FORMATION

Lee, Hua, Lockwood, Stephanie, Tandon, James, Brown, Andrew October 2000
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / In this paper, we present the recent results of theoretical development and software implementation of a complete collection of high-performance image reconstruction algorithms designed for high-resolution imaging for various data acquisition configurations.
324

Mathematics teachers' choice of mental arithmetic strategies for addition: An interview study of six teachers in Sweden and England

Lübking, Amanda January 2016
The study is based on a qualitative method, and its aim is to investigate the mental arithmetic strategies for addition taught by three Swedish and three English teachers. The empirical material consists of interviews that were transcribed and analyzed. The study is inspired by a hermeneutic perspective combined with Grounded Theory: all data in the study have been interpreted and analyzed by the researcher, and, following Grounded Theory, a common method for analyzing qualitative data, categories were created from the results obtained. The results show that the Swedish teachers' most common strategies are partitioning and algorithms, while the English teachers' most common strategies are number bonds, the addition table, and algorithms. The results also illustrate the differences between the Swedish and the English teachers' choices of strategies.
325

Simulations of subsurface multiphase flow including polymer flooding in oil reservoirs and infiltration in vadose zone

Yuan, Changli 31 August 2010
With the depletion of oil reserves and the increase in oil price, enhanced oil recovery methods such as polymer flooding, which increase oil production from water-flooded fields, are becoming more attractive. Effective design of these processes is challenging because the polymer chemistry has a strong effect on reaction and fluid rheology, which in turn has a strong effect on fluid transport. We have implemented a well-established polymer model within the Implicit Parallel Accurate Reservoir Simulator (IPARS), which enables parallel simulation of non-Newtonian fluid flow through porous media. The following properties of polymer solutions are modeled in this work: 1) polymer adsorption; 2) polymer viscosity as a function of salinity, hardness, polymer concentration, and shear rate; 3) permeability reduction; 4) inaccessible pore volume. IPARS enables field-scale polymer flooding simulation with its parallel computation capability. In this thesis, several numerical examples are presented. The results of the polymer module are verified against UTCHEM, a three-dimensional chemical flood simulator developed at the University of Texas at Austin. The parallel capability is also tested. The influence of different shear rate calculations is investigated in homogeneous and heterogeneous reservoirs. We observed that using the wellbore velocity rather than the Darcy velocity in the shear rate calculation reduces the grid effect for coarse meshes. We noted that the injection bottom-hole pressure is very sensitive to the shear rate calculation; however, cumulative oil recovery and overall oil saturation appear not to be sensitive to the grid or the shear rate calculation for the same reservoir. There are two models for groundwater infiltration in the vadose zone: the Richards equation (RE) model and the two-phase flow model.
In this work, we compare the two-phase model with an RE model to ascertain, under common scenarios such as infiltration or injection of water into initially dry soils, the similarities and differences in solution behavior, the ability of each model to simulate such infiltration processes under realistic scenarios, and the numerical efficiencies and difficulties that arise in these models. Six different data sets were assembled as benchmark infiltration problems in the unsaturated zone. The comparison shows that the two-phase model holds for general porous media and is not limited by the several assumptions that must be made for the RE formulation, while RE is applicable only to shallow vadose regions several meters in depth, where a fully saturated bottom boundary condition must be assumed.
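The thesis models polymer viscosity as a function of salinity, hardness, concentration, and shear rate. As an illustrative stand-in for the shear dependence alone, a generic Carreau shear-thinning closure might look like the following; the choice of model and all parameter values are assumptions for illustration, not IPARS or UTCHEM code.

```python
import numpy as np

def carreau_viscosity(shear_rate, mu0, mu_inf, lam, n):
    """Carreau shear-thinning model (illustrative assumption):

        mu(g) = mu_inf + (mu0 - mu_inf) * (1 + (lam*g)^2)^((n-1)/2)

    mu0    -- zero-shear viscosity
    mu_inf -- infinite-shear viscosity
    lam    -- relaxation time; n < 1 gives shear-thinning behavior
    """
    g = np.asarray(shear_rate, dtype=float)
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * g) ** 2) ** ((n - 1.0) / 2.0)

# At zero shear the model returns the zero-shear viscosity mu0;
# at high shear rates the viscosity thins toward mu_inf.
mu_low = float(carreau_viscosity(0.0, mu0=100.0, mu_inf=1.0, lam=1.0, n=0.5))
mu_high = float(carreau_viscosity(1000.0, mu0=100.0, mu_inf=1.0, lam=1.0, n=0.5))
```

In a polymer-flood simulator this closure would be evaluated per grid cell, which is why the abstract's observed sensitivity of injection pressure to the shear rate calculation (wellbore versus Darcy velocity) matters.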
326

Theoretical and Experimental Aspects of Quantum Cryptographic Protocols

Lamoureux, Louis-Philippe 20 June 2006
Quantum mechanics is without doubt the best-verified theory that has ever existed. Looking back, we see that a century of quantum theory has not only changed our perception of the universe we live in, but is also responsible for several technological concepts with the potential to revolutionize our world. This dissertation aims to highlight that potential, both theoretically and experimentally. More precisely, we first study quantum communication protocols and demonstrate that they offer security advantages with no equal in classical communication. We then study three specific problems in quantum cloning, where each solution could, in its own way, be exploited in a quantum communication problem. We begin by describing theoretically the first quantum communication protocol, whose purpose is the distribution of a secret key between two distant parties. This chapter allows us to introduce several concepts and theoretical tools that will be needed in subsequent chapters. The following chapter also serves as an introduction, this time oriented toward the experimental side. We present an elegant technique that allows quantum communication protocols to be implemented in a simple way. We then describe original quantum communication experiments based on this technique. More precisely, we introduce the concept of error filtration and use this technique to implement a noisy quantum key distribution that could not be secured without it. We then demonstrate experiments implementing quantum coin tossing and quantum identification.
In the second part, we study quantum cloning problems based on the formalism introduced in the introductory chapter. Since it will not always be possible to prove the optimality of our solutions, we introduce a numerical technique that allows us to validate our results.
327

Towards securing pervasive computing systems by design: a language approach

Jakob, Henner 27 June 2011
In many domains, a growing number of applications that interact with communicating entities are appearing in the environment to ease everyday activities (e.g., home automation and telemedicine). Their impact on users' daily lives makes these applications critical: their failure can endanger people and their property. Although the impact of such failures can be major, security is often treated as a secondary concern in the development process and addressed with ad hoc approaches. This thesis proposes to integrate security aspects into the development cycle of pervasive computing systems. Security is specified at design time through dedicated, high-level declarations. These declarations are used to generate programming support that eases the implementation of security mechanisms while keeping security concerns separate from the application logic. Our approach focuses on access control to entities and on privacy protection. Our work has been implemented and leverages an existing tool suite covering the software development cycle.
328

Neutral and Adaptive Processes Shaping Genetic Variation in Spruce Species

Stocks, Michael January 2013
Population genetic analyses can provide information about both neutral and selective evolutionary processes shaping genetic variation. In this thesis, extensive population genetic methods were used to make inferences about genetic drift and selection in spruce species. In paper I we studied four species from the Qinghai-Tibetan Plateau (QTP): Picea likiangensis, P. purpurea, P. wilsonii and P. schrenkiana. Large differences in estimates of genetic diversity and Ne were observed between the more restricted species, P. schrenkiana, and the other, more widely distributed species. Furthermore, P. purpurea appears to be a hybrid between P. likiangensis and P. wilsonii. In paper II we used Approximate Bayesian Computation (ABC) to show that the data support a drastic reduction of Ne in Taiwan spruce around 300-500 kya, in line with evidence from the pollen records. The split from P. wilsonii was dated to 4-8 mya, around the time that Taiwan was formed. These analyses relied on a small sample size, so in paper III we investigated the impact of small datasets on the power to distinguish between models in ABC. We found that when genetic diversity is low there is little power to distinguish between simple coalescent models, and this can determine the number of samples and loci required. In paper IV we studied the relative importance of genetic drift and selection in four spruce species with differing Ne: P. abies, P. glauca, P. jezoensis and P. breweriana. P. breweriana, which has a low Ne, exhibits a low fraction of adaptive substitutions, while P. abies has a high Ne and a high fraction of adaptive substitutions. The other two spruce species, however, do not fit this pattern, suggesting that other factors are more important. In paper V we find that several SNPs correlate with both a key adaptive trait (budset) and latitude. The expression of one in particular (PoFTL2) correlates with budset and was previously identified in P. abies.
These studies have helped characterise the importance of different population genetic processes in shaping genetic variation in spruce species and have laid solid groundwork for future studies of spruce.
329

Entanglement detection and fractional quantum Hall effect in optical lattices

Palmer, Rebecca Natalie January 2008
We consider the purity-based entanglement detection scheme introduced in [C. Moura Alves and D. Jaksch, Phys. Rev. Lett. 93, 110501 (2004)]. We describe how it could be implemented in an optical lattice using two-atom loss, and prove that in this form it detects all pure entangled states even without any spatial resolution. We then prove that correcting for certain reasonable types of experimental error is possible, and practical for error rates up to the order of one over the number of lattice sites considered. Limited spatial resolution similarly becomes a significant improvement over no spatial resolution only at nearly the single-site level. We also show how to use this process for state parameter estimation and collapse-revival evidence of entanglement, for which it remains useful even when the error rate is too high to permit unambiguous entanglement detection. We also consider an optical lattice bosonic analogue of the fractional quantum Hall (FQH) effect. This system can reach high "magnetic fields" that are very difficult to attain in the solid-state FQH system, where the discrete nature of the lattice becomes important. Near simple rational numbers l/n of flux quanta per lattice cell, we find that the single-particle states become nearly periodic with period n lattice sites, and have an n-fold degeneracy which leads to FQH states resembling those of n-internal-state particles. Standard time-of-flight expansion would reveal this periodicity and be able to distinguish FQH states from vortex-lattice or Mott-insulator states. Shot-noise correlation would provide further information on the nature of the FQH states.
330

Emergent rhythmic structures as cultural phenomena driven by social pressure in a society of artificial agents

Magalhaes Martins, Joao Pedro January 2012
This thesis studies rhythm from an evolutionary computation perspective. Rhythm is the most fundamental dimension of music and can be used as a ground to describe the evolution of music. More specifically, the main goal of the thesis is to investigate how complex rhythmic structures evolve, subject to the cultural transmission between individuals in a society. The study is developed by means of computer modelling and simulations informed by evolutionary computation and artificial life (A-Life). In this process, self-organisation plays a fundamental role. The evolutionary process is steered by the evaluation of rhythmic complexity and by the exposure to rhythmic material. In this thesis, composers and musicologists will find the description of a system named A-Rhythm, which explores the emerged behaviours in a community of artificial autonomous agents that interact in a virtual environment. The interaction between the agents takes the form of imitation games. A set of necessary criteria was established for the construction of a compositional system in which cultural transmission is observed. These criteria allowed the comparison with related work in the field of evolutionary computation and music. In the development of the system, rhythmic representation is discussed. The proposed representation enabled the development of complexity and similarity based measures, and the recombination of rhythms in a creative manner. A-Rhythm produced results in the form of simulation data which were evaluated in terms of the coherence of repertoires of the agents. The data shows how rhythmic sequences are changed and sustained in the population, displaying synchronic and diachronic diversity. Finally, this tool was used as a generative mechanism for composition and several examples are presented.
