311

Calculating Distribution Function and Characteristic Function using Mathematica

Chen, Cheng-yu 07 July 2010 (has links)
This paper deals with applications of the symbolic computation capabilities of Mathematica 7.0 (Wolfram, 2008) in distribution theory. The purpose of this study is twofold. First, we implement functions that extend Mathematica's ability to handle symbolic computation of the characteristic function of a linear combination of independent univariate random variables. These functions use pattern-matching code that enhances Mathematica's ability to simplify expressions involving products and sums of algebraic terms. Second, via Mathematica's pattern-matching feature, characteristic functions can be classified into commonly used distributions, including six discrete and seven continuous distributions. Finally, several examples are presented, including calculating the limit of the characteristic function of a linear combination of independent random variables, and applying the coded functions to illustrate the central limit theorem, the law of large numbers and properties of some distributions.
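The core identity the thesis automates can be checked outside Mathematica as well. The sketch below uses Python's SymPy (an assumption of this summary, not the thesis's code) to verify the factorization property of characteristic functions, phi_{aX+bY}(t) = phi_X(at) * phi_Y(bt), for independent normal variables.

```python
# A minimal sketch (Python/SymPy, not the thesis's Mathematica code) of the
# characteristic-function factorization for independent random variables.
from sympy import I, exp, simplify, symbols
from sympy.stats import E, Normal

t = symbols('t', real=True)
X = Normal('X', 0, 1)   # independent standard normals
Y = Normal('Y', 0, 1)

# Characteristic function of 2X + 3Y, computed directly ...
cf_direct = simplify(E(exp(I * t * (2 * X + 3 * Y))))

# ... and via the factorization phi_{2X+3Y}(t) = phi_X(2t) * phi_Y(3t),
# which holds because X and Y are independent.
cf_factored = simplify(E(exp(I * (2 * t) * X)) * E(exp(I * (3 * t) * Y)))

print(cf_direct)                          # exp(-13*t**2/2)
print(simplify(cf_direct - cf_factored))  # 0
```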
312

Supervised and unsupervised investigation of the temporal dynamics of normal and pathological behaviour in the mouse and rat

Carreno-Muñoz, Maria Isabel 22 September 2017 (has links)
Modern neuroscience highlights the need for sophisticated behavioural readouts of internal cognitive states. From a thorough analysis of classical behavioural tests, my results support the hypothesis that sensory hypersensitivity may be the cause of other behavioural deficits, and confirm the potassium channel BKCa as a potentially relevant molecular target for the development of medication against Fragile X Syndrome and Autism Spectrum Disorders. I have also used an innovative device, based on pressure sensors that non-invasively detect the slightest animal movement with unprecedented sensitivity and time resolution during spontaneous behaviour. Analysing this signal with sophisticated computational tools, I demonstrate the outstanding potential of this methodology for behavioural phenotyping in general, and more specifically for the investigation of pain, fear and locomotion in normal mice and in models of neurodevelopmental and neurodegenerative disorders.
313

Human computation applied to algorithmic trading

Vincent, Arnaud 14 November 2013 (has links)
Algorithmic trading designed for speculative purposes really took off in the early 2000s, first optimizing market orders based on human decisions and then executing trading strategies in real time. In this systematic trading approach, human intervention is limited to system design, supervision and maintenance. The field keeps growing even though the Efficient Market Hypothesis (EMH) says that in an efficient market, speculation is futile.

Human Computation (HC) is an unusual concept: it considers human brains as parts of a much larger machine, with the power to tackle problems that are too big for today's computers. The concept is at the crossroads of two older ideas, collective intelligence and crowdsourcing, which involve humans (whether paid or not, aware of it or not) in solving a problem or achieving a complex task. The Fold-it project in biochemistry proved the ability of a human community to set up an efficient collective intelligence system based on a serious online game.

Algorithmic trading is of a level of complexity comparable to the problem tackled by Fold-it's creators, where the "human CPU" really helped in solving 3D puzzles. The question is whether Human Computation can be used in algorithmic trading, and how to measure its effectiveness, even though there are no 3D structures or user-friendly puzzles to deal with.

The first experiment in this thesis is based on the idea that information flows in social media can provide input to an algorithmic trading system built on Human Computation principles. Twitter, the micro-blogging platform, was chosen in order to track (1) words that may have an impact on financial markets and (2) unexpected events, such as the 2010 eruption of the Icelandic volcano. We demonstrate that a significant increase in P&L can be achieved in the second case by treating the unexpected events as alerts.

The second experiment with Human Computation in algorithmic trading aims to get a community of Internet users to optimize the parameters of trading strategies, in the way the Fold-it game did. In this online game, called Krabott, solutions are presented as friendly virtual bots, each containing in its DNA a specific set of parameters for a particular trading strategy. Human players interact in the selection and reproduction steps for each new Krabott. The Krabotts "bred" by players outperformed those resulting from a computer optimization process: we tested two different versions of Krabott during 2012 and 2013, and in both cases the population bred by the players outperformed the "computer-only" one. This suggests that it may be possible to set up a whole hybrid human-computer system based on Human Computation, where each player acts as a single CPU within a global trading system.

The thesis concludes by discussing the competitive advantages that structures based on Human Computation offer, both for acquiring data to feed a trading system and for optimizing the parameters of existing trading strategies. Going further, we expect that in the years to come Human Computation will be able to set up and update algorithmic trading strategies whose complexity exceeds what an individual person could comprehend.
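To make the Krabott design concrete, here is a minimal, hypothetical sketch of a genetic loop over strategy "DNA" in which the selection step is a pluggable function: in Krabott that step is performed by human players, while the baseline shown plugs in a machine backtest. All names and the toy fitness are illustrative, not from the thesis.

```python
# Hypothetical sketch of the Krabott idea: trading-strategy parameter sets are
# "DNA"; the selection step is pluggable, so a crowd of human players (as in
# Krabott) or a machine backtest can drive the same evolutionary loop.
import random

def random_dna():
    # Illustrative parameters for a moving-average crossover strategy.
    return {"fast": random.randint(2, 20), "slow": random.randint(21, 100)}

def crossover(p1, p2):
    return {k: random.choice([p1[k], p2[k]]) for k in p1}

def mutate(dna, rate=0.2):
    out = dict(dna)
    if random.random() < rate:
        out["fast"] = max(2, out["fast"] + random.randint(-3, 3))
    if random.random() < rate:
        out["slow"] = max(out["fast"] + 1, out["slow"] + random.randint(-5, 5))
    return out

def evolve(select, generations=50, size=30):
    """`select(pop)` returns the individuals kept for breeding; in a Human
    Computation setting it would aggregate players' choices instead."""
    pop = [random_dna() for _ in range(size)]
    for _ in range(generations):
        parents = select(pop)
        pop = [mutate(crossover(*random.sample(parents, 2)))
               for _ in range(size)]
    return pop

# Machine-only baseline: keep the top half ranked by a stub backtest score.
backtest = lambda dna: -abs(dna["slow"] - 3 * dna["fast"])   # toy fitness
top_half = lambda pop: sorted(pop, key=backtest, reverse=True)[:len(pop) // 2]
final_population = evolve(top_half)
```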
314

Achieving privacy-preserving distributed statistical computation

Liu, Meng-Chang January 2012 (has links)
The growth of the Internet has opened up tremendous opportunities for cooperative computations whose results depend on the private data inputs of distributed participating parties. In most cases, such computations are performed by multiple mutually untrusting parties, which has led the research community to study methods for performing computation across the Internet securely and efficiently. This thesis investigates security methods in the search for an optimal solution to privacy-preserving distributed statistical computation problems. For this purpose, the nonparametric sign test algorithm is chosen as a case study to demonstrate our research methodology. Two privacy-preserving protocol suites using data perturbation techniques and cryptographic primitives are designed. The first protocol suite, P22NSTP, is based on five novel data perturbation building blocks: the random probability density function generation protocol (RpdfGP), the data obscuring protocol (DOP), the secure two-party comparison protocol (STCP), the data extraction protocol (DEP) and the permutation reverse protocol (PRP). This protocol suite enables two parties to perform the sign test computation efficiently and securely without the use of a third party. The second protocol suite, P22NSTC, uses an additively homomorphic encryption scheme and two novel building blocks: the data separation protocol (DSP) and the data randomization protocol (DRP). With some assistance from an online semi-trusted third party (STTP), this protocol suite provides an alternative solution for two parties to achieve a secure privacy-preserving nonparametric sign test computation. Both protocol suites have been implemented in MATLAB. Their implementations are evaluated and compared against the sign test computation algorithm on an ideal trusted third party model (TTP-NST) in terms of security, computation and communication overheads, and protocol execution times. By managing the amount of added noise data items, P22NSTP can achieve specific levels of privacy protection to fit particular computation scenarios. Alternatively, P22NSTC provides a more secure solution than P22NSTP by employing an online STTP; its level of privacy protection relies on the additively homomorphic encryption scheme, DSP and DRP. A four-phase privacy-preserving transformation methodology has also been demonstrated; it comprises data privacy definition, statistical algorithm decomposition, solution design and solution implementation.
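For intuition about the cryptographic half of the design, the sketch below (Python with the `phe` Paillier library; a toy, not the thesis's P22NSTC protocol) shows how additively homomorphic encryption lets an aggregator tally sign-test indicators without ever seeing any party's data.

```python
# Toy sketch of additively homomorphic aggregation for a sign test: each party
# encrypts 0/1 indicators of positive paired differences; ciphertexts are
# summed under encryption, and only the total count of positives is revealed.
# Requires the `phe` (python-paillier) package.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each party's private paired differences (simulated locally here).
party_a_diffs = [1.2, -0.4, 0.7, 2.1]
party_b_diffs = [-0.3, 0.9, 1.5]

def encrypted_sign_indicators(diffs):
    return [public_key.encrypt(1 if d > 0 else 0) for d in diffs]

# An aggregator adds ciphertexts; Paillier addition happens under encryption.
total = None
for c in (encrypted_sign_indicators(party_a_diffs)
          + encrypted_sign_indicators(party_b_diffs)):
    total = c if total is None else total + c

print(private_key.decrypt(total))  # 5 positive differences out of 7
```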
315

Population divergence and demographic expansion of Dendrocolaptes platyrostris (Aves: Dendrocolaptidae) in the late Quaternary

Ricardo Fernandes Campos Junior 29 October 2012 (has links)
Dendrocolaptes platyrostris é uma espécie de ave florestal associada às matas de galeria do corredor de vegetação aberta da América do sul (D. p. intermedius) e à Floresta Atlântica (D. p. platyrostris). Em um trabalho anterior, foi observada estrutura genética populacional associada às subespécies, além de dois clados dentro da Floresta Atlântica e evidências de expansão na população do sul, o que é compatível com o modelo Carnaval-Moritz. Utilizando approximate Bayesian computation, o presente trabalho avaliou a diversidade genética de dois marcadores nucleares e um marcador mitocondrial dessa espécie com o objetivo de comparar os resultados obtidos anteriormente com os obtidos utilizando uma estratégia multi-locus e considerando variação coalescente. Os resultados obtidos sugerem uma relação de politomia entre as populações que se separaram durante o último período interglacial, mas expandiram após o último máximo glacial. Este resultado é consistente com o modelo de Carnaval-Moritz, o qual sugere que as populações sofreram alterações demográficas devido às alterações climáticas ocorridas nestes períodos. Trabalhos futuros incluindo outros marcadores e modelos que incluam estabilidade em algumas populações e expansão em outras são necessários para avaliar o presente resultado / Dendrocolaptes platyrostris is a forest specialist bird associated to gallery forests of the open vegetation corridor of South America (D. p. intermedius) and to the Atlantic forest (D. p. platyrostris). A previous study showed a population genetic structure associated with the subspecies, two clades within the Atlantic forest, and evidence of population expansion in the south, which is compatible with Carnaval- Moritz\'s model. The present study evaluated the genetic diversity of two nuclear and one mitochondrial markers of this species using approximate Bayesian computation, in order to compare the results previously obtained with those based on a multi-locus strategy and considering the coalescent variation. The results suggest a polytomic relationship among the populations that split during the last interglacial period and expanded after the last glacial maximum. This result is consistent with the model of Carnaval-Moritz, which suggests that populations have undergone demographic changes due to climatic changes that occurred in these periods. Future studies including other markers and models that include stability in some populations and expansion in others are needed to evaluate the present result
316

Molecular organisation of liquid water at phase interfaces with non-polar fluids

Tsoneva, Yana 08 November 2016 (has links)
The structuring of water molecules at the water/vapour interface has been an object of scientific interest for decades. Most existing theoretical studies focus on bulk water, but more detailed research on surface water is still needed. In addition, interfaces with alkanes are instructive from both biological and industrial perspectives. Since in both biological and industrial applications water/air and water/oil interfaces are mediated by amphiphiles, the role of a surfactant monolayer in surface water structuring also deserves more attention.

In this Ph.D. thesis, several atomistic water models were chosen and classical molecular dynamics simulations were carried out on bulk water, water/vapour and water/alkane (pentane to nonane) systems, as well as on water/DLPC/vapour and water/DLPC/octane models, DLPC being dilauroyl phosphatidylcholine. In all cases the temperature was kept at 298 K. Several structural properties of the bulk and surface water layers were examined by means of radial distribution functions and Voronoi diagrams; dipole moments, surface tension and hydrogen bonding were examined as well. The objective was to estimate the impact of accounting for polarisability on the water properties of interest, to select a cost-efficient water model for describing them, and to add new data to the existing knowledge of interfacial water structuring.

The study addresses water structuring in the bulk and at the surface, at the interface with vapour or with alkanes of different chain lengths. One aim of the work was to assess how well experimental data are reproduced by an assortment of polarisable and non-polarisable water models, and to check for which properties the use of polarisable models is critical. Simple polarisable models based on Drude oscillators were tested in order to keep computational costs low. For bulk water and water/vapour systems the TIP4P, SWM4-NDP and COS/G2 models performed best. Since the TIP4P model produced results commensurate with the polarisable ones, it was used further on to simulate the water/alkane (C5-C9) interfaces and to quantify the structural parameters of water obtained from the RDF and Voronoi analyses. The molecules in the surface layer are organised in a more compact and less ordered manner; the loss of order is owed mainly to hydrogen bonding, which is twice as extensive in the bulk as at the surface. The Voronoi analysis showed that the tetrahedral coordination is blurred at the surface and more complex polyhedra are formed. The surface layer was found to consist of two sublayers, inner and outer, with oppositely oriented, unequal polarity, defining areas of residual charge at the interface.

In addition to the systems with direct contact between water and non-polar fluids, interfaces mediated by lipid monolayers were modelled, the monolayer seaming the two phases together. The compactness of the surface water, already enhanced by the presence of alkanes, was tightened further by the lipid introduction. However, the water orientation was changed and the surface polarity was inverted, balanced by the lipid heads instead of the diffuse outer sublayer.

The main contributions of the Ph.D. thesis are as follows:
1. It is shown that a polarisable water model is not necessary for the correct evaluation of a number of properties, but is critical for characteristics such as dipole moments and surface tension.
2. For the first time, a structural analysis using Voronoi diagrams and an assortment of water models demonstrates the difference between bulk and surface characteristics of liquid water.
3. Given the limited experimental data available, the study of a solid-condensed DLPC monolayer at the water/vapour and water/octane interfaces using different water models is an original contribution.
317

Exploring and extending eigensolvers for Toeplitz(-like) matrices: A study of numerical eigenvalue and eigenvector computations combined with matrix-less methods

Knebel, Martin, Cers, Fredrik, Groth, Oliver January 2022 (has links)
We implement an eigenvalue-solving algorithm proposed by Ng and Trench, specialized for Toeplitz(-like) matrices, which combines root finding with an iteratively calculated version of the characteristic polynomial. The solver also yields the corresponding eigenvectors as a free by-product. We combine the algorithm with matrix-less methods to obtain eigenvector approximations, and examine its behaviour with respect to both run time and computational demands. The algorithm is fully parallelizable, and although solving for all eigenvalues of the bi-Laplacian discretization matrix (our model matrix) is not faster than standard methods, we see promising results when using it as an eigenvector solver seeded with eigenvector approximations from standard solvers or a matrix-less method. An advantage of the examined algorithm is that it can compute individual, specific eigenvalues (and the corresponding eigenvectors) anywhere in the spectrum, whereas standard solvers often have to compute all eigenvalues; this could be a useful feature. We conclude that, while the algorithm shows promising results, more experiments are needed, and we propose a number of topics for further study, e.g. other matrices (Toeplitz-like, full) and even larger matrices.
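The underlying idea, evaluating the characteristic polynomial by a recurrence and extracting a single eigenvalue by root finding, can be sketched for a symmetric tridiagonal Toeplitz matrix, where a closed form exists to check against. This is an illustrative simplification, not the Ng and Trench algorithm itself.

```python
# Sketch of the idea behind such solvers: for a symmetric tridiagonal Toeplitz
# matrix with diagonal a and off-diagonal b, the characteristic polynomial
# obeys a three-term recurrence, so it can be evaluated cheaply and a single
# eigenvalue located by root finding, without computing the whole spectrum.
import math
from scipy.optimize import brentq

a, b, n = 2.0, -1.0, 100     # the 1-D Laplacian stencil as a test matrix

def char_poly(lam):
    """det(T - lam*I) via p_k = (a - lam)*p_{k-1} - b^2 * p_{k-2}."""
    p_prev, p = 1.0, a - lam
    for _ in range(2, n + 1):
        p_prev, p = p, (a - lam) * p - b * b * p_prev
    return p

# Extract one specific eigenvalue (the smallest), using the known closed form
# lam_k = a + 2*|b|*cos(k*pi/(n+1)) only to seed the bracket and verify.
exact = a + 2 * abs(b) * math.cos(n * math.pi / (n + 1))
root = brentq(char_poly, exact - 1e-3, exact + 1e-3)
print(root, exact)
```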
318

Maximal Entropy Formalism for Quantum State Tomography and Applications

Rishabh Gupta (19452091) 23 August 2024 (has links)
This thesis advances the methodologies of quantum state tomography (QST) to validate and optimize quantum processing on Noisy Intermediate-Scale Quantum (NISQ) devices, crucial for the transition to practical quantum systems. Inspired by recent advancements in the field, we propose a novel QST method based on the maximal entropy formalism, specifically addressing scenarios with incomplete measurement sets, to provide a robust framework for state reconstruction. We extend this formalism to an informationally complete (IC) set of observables and introduce a variational approach for quantum state preparation that is easily implementable on near-term quantum devices. The developed maximal-entropy-based QST protocol is applied to ultrafast molecular dynamics, specifically to the photoexcited ammonia molecule, enabling direct measurement and manipulation of electronic quantum coherences and exploring entanglement effects in molecular systems. Through this approach we achieve a groundbreaking milestone: for the first time, we construct the entanglement entropy of the electronic subsystem, an otherwise inaccessible metric. In doing so, we also provide the first physical interpretation of the maximal entropy parameters in an experimental setting and highlight the potential for feedback between time-resolved quantum dynamics and quantum information science. Furthermore, building on our advancements in state tomography, we propose a variational quantum algorithm for Hamiltonian learning that leverages the time dynamics of observables. We also reverse-engineer the maximal entropy approach and demonstrate the use of entropy to refine the traditional geometric Brownian motion (GBM) method, addressing its log-normality restrictions so that it better captures real system complexities, which opens new avenues for quantum sampling techniques. Through these contributions, this thesis showcases the maximal entropy formalism's efficacy in QST and sets the stage for future innovations and applications in cutting-edge quantum research.
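For a single qubit the maximal-entropy construction can be sketched directly: with an incomplete observable set, the least-biased state takes a Gibbs form, and the Lagrange multipliers are fixed by matching the measured expectations. The sketch below assumes NumPy/SciPy, and the measured values are hypothetical.

```python
# Single-qubit sketch of the maximal-entropy idea behind the QST protocol:
# given only <X> and <Z> (an incomplete set), the maximal-entropy state is
# rho = exp(sum_k lam_k O_k) / Z, with multipliers fit to the measurements.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import fsolve

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

measured = np.array([0.6, 0.3])   # hypothetical <X>, <Z> from experiment

def rho_of(lams):
    M = expm(lams[0] * X + lams[1] * Z)
    return M / np.trace(M)        # normalized Gibbs-form density matrix

def mismatch(lams):
    r = rho_of(lams)
    return [np.real(np.trace(r @ X)) - measured[0],
            np.real(np.trace(r @ Z)) - measured[1]]

lams = fsolve(mismatch, x0=[0.0, 0.0])
print(np.round(rho_of(lams), 4))  # maximal-entropy estimate of the state
```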
319

Towards Secure Outsourced Data Services in the Public Cloud

Sun, Wenhai 25 July 2018 (has links)
The past few years have witnessed a dramatic shift in IT infrastructure from a self-sustained model to a centralized, multi-tenant elastic computing paradigm, Cloud Computing, which significantly reshapes the landscape of existing data utilization services. Public cloud service providers (CSPs), e.g. Google and Amazon, offer unprecedented benefits such as ubiquitous and flexible access, considerable capital expenditure savings and on-demand resource allocation. The cloud has become a virtual "brain" supporting and propelling many important applications and system designs, for example artificial intelligence and the Internet of Things. On the flip side, security and privacy are among the primary concerns with the adoption of cloud-based data services, in that the user loses control of her/his outsourced data. Encrypting the sensitive user information certainly ensures confidentiality; however, encryption adds a layer of ambiguity, and its direct use may be at odds with practical requirements and defeat the purpose of cloud computing technology. We believe that security should not contravene the cloud outsourcing model; rather, it should complement current achievements to further fuel the wide adoption of public cloud services. This, in turn, requires that the two not be decoupled from the very beginning of the system design. Drawing on successes and failures from both academia and industry, we attempt to answer the challenges of realizing efficient and useful secure data services in the public cloud. In particular, we address security and privacy in two essential functions of the cloud "brain": data storage and processing. Our first work centers on secure chunk-based deduplication of encrypted data for cloud backup; it achieves performance comparable to plaintext cloud storage deduplication while effectively mitigating information leakage from low-entropy chunks. We then comprehensively study the promising yet challenging issue of search over encrypted data in the cloud, which allows a user to delegate her/his search task to a CSP server hosting a collection of encrypted files while still guaranteeing some measure of query privacy. To accomplish this vision, we explore both software-based secure computation research, which often relies on cryptography and concentrates on algorithmic design and theoretical proof, and trusted execution solutions, which depend on hardware-based isolation and trusted computing. Hopefully, through the lens of our efforts, insights can be furnished into future research in the related areas. / Ph. D.
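As background for the deduplication result, the toy below sketches convergent ("message-locked") encryption, a standard building block for encrypted deduplication: keys derive from chunk content, so equal chunks encrypt identically and can be deduplicated server-side. This is a generic sketch, not the thesis's scheme, and it deliberately exhibits the equality leakage that the thesis's protocol mitigates for low-entropy chunks.

```python
# Toy convergent encryption: the key is derived from the chunk itself, so
# equal plaintext chunks yield equal ciphertexts, letting the server
# deduplicate without decrypting. Deterministic by design, hence it leaks
# chunk equality -- the leakage the thesis addresses for low-entropy chunks.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(chunk: bytes) -> tuple[bytes, str]:
    key = hashlib.sha256(chunk).digest()        # key derived from content
    nonce = hashlib.sha256(key).digest()[:12]   # deterministic nonce
    ct = AESGCM(key).encrypt(nonce, chunk, None)
    tag = hashlib.sha256(ct).hexdigest()        # dedup index for the server
    return ct, tag

store = {}
for chunk in [b"block A", b"block B", b"block A"]:  # one duplicate chunk
    ct, tag = convergent_encrypt(chunk)
    store.setdefault(tag, ct)                   # server stores each chunk once

print(len(store))  # 2 -- the duplicate was deduplicated under encryption
```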
320

Scalable, Memory-Intensive Scientific Computing on Field Programmable Gate Arrays

Mirza, Salma 01 January 2010 (has links) (PDF)
Cache-based, general-purpose CPUs achieve only a small fraction of their peak floating-point performance when executing memory-intensive simulations, such as those required for many scientific computing problems. This is due to the memory bottleneck encountered with large arrays that must be stored in dynamic RAM. A system of FPGAs with sufficient memory bandwidth, clocked at only hundreds of MHz, can outperform a CPU clocked at several GHz in floating-point performance. An FPGA core designed for a target performance that does not unnecessarily exceed the memory-imposed bottleneck can then be replicated, along with multiple memory interfaces, into a scalable architecture that overcomes the bandwidth limitation of a single interface. Interconnected cores can work together on a scientific computing problem and exploit a bandwidth that is the sum of the bandwidths of all their connected memory interfaces. The implementation demonstrates this concept of scalability with two memory interfaces, using available FPGA prototyping platforms. Even though the FPGAs operate at 133 MHz, twenty-one times slower than an AMD Phenom X4 processor at 2.8 GHz, the system of two FPGAs performs only eight times slower than the processor for the example problem of sparse matrix-vector multiplication (SMVM) in heat transfer. Moreover, the system is demonstrated to be scalable, with a run time that decreases linearly with the available memory bandwidth. The floating-point performance of a single-board implementation is 12 GFlops, which doubles to 24 GFlops for a two-board implementation, for gather or scatter operations on matrices of varying sizes.
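The memory-bound reasoning can be made concrete with a roofline-style estimate: in CSR-format SMVM each nonzero costs about two flops but moves roughly twelve bytes (an 8-byte value plus a 4-byte column index), so attainable throughput scales with bandwidth rather than clock rate. The bandwidth figures below are illustrative assumptions, not measurements from the thesis.

```python
# Roofline-style estimate for bandwidth-bound SpMV: attainable flop rate is
# bandwidth divided by bytes moved per flop (~12 bytes per 2 flops for CSR).
def attainable_gflops(bandwidth_gb_s, bytes_per_flop=6.0):
    return bandwidth_gb_s / bytes_per_flop

# Illustrative (assumed) bandwidth figures; doubling the memory interfaces
# doubles the attainable rate, matching the linear scaling reported above.
for name, bw in [("one board, single memory interface", 8.0),
                 ("two boards, two memory interfaces", 16.0)]:
    print(f"{name}: ~{attainable_gflops(bw):.1f} GFlop/s peak for SpMV")
```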
