
A CTD Biotag for Mid-sized Marine Predators

Broadbent, Heather, 01 January 2012
Biologging tools for studying fine-scale linkages between animal behavior and the physical microstructure of the marine habitat are technically limited by large size, high cost, or low sensor resolution. However, recent advances in electronic technologies and processing techniques present attractive alternatives to current tag designs. Motivated by the need for a low-cost, compact CTD biotag for medium-sized marine animals, the University of South Florida Center for Ocean Technology developed a multi-sensor biotag for quantitative measurements of ocean salinity. This dissertation describes the development and performance of a novel CTD biotag used for animal-borne measurements of the physical microstructure of marine ecosystems. Printed circuit board processes were used to fabricate a liquid crystal polymer-based conductivity, temperature, and depth sensor board. Laboratory tests showed good sensor repeatability between the measured and predicted variables, indicating that the initial design and fabrication process is suitable for constructing a CTD sensor board. The conductivity cells showed good sensor integrity over the entire conductivity range (0-70 mS/cm), demonstrating the potential for a highly resolved salinity system. The CTD sensor board was integrated into two initial multi-sensor biologging systems consisting of reconfigurable modular circuit boards. The design and initial performance of a 4-electrode conductivity cell circuit are discussed; preliminary tests showed a sensor accuracy of 0.0161 mS/cm. A potential packaging material was analyzed for use on the temperature and pressure sensors, and initial tests showed good sensor sensitivities (-2.294 °C/kohm and 1.9192 mV/dbar, respectively). Underwater packaging of the biotag is presented along with three different field observations.
Vertical profiles of conductivity, temperature, and depth in the Gulf of Mexico were obtained and compared against a commercial instrument. On the West Florida shelf, conductivity, temperature, depth, and salinity data were obtained from loggerhead turtle deployments. The data showed that the tagged turtle encountered a highly variable salinity range (30.6-35.3) while at depth (20 m); this trend was in agreement with shelf characteristics (tidal fluxes and water mass features) and moored instruments. Finally, observations undertaken in Bayboro Harbor showed no biofouling of the conductivity electrodes during a 14-day deployment. This biotag is the first to use a PCB-based, low-cost CTD to collect animal-borne salinity measurements.
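The sensitivities quoted in the abstract (-2.294 °C/kohm for temperature, 1.9192 mV/dbar for pressure) can be illustrated with a toy linear conversion from raw sensor readings. This is a sketch only: the reference offsets (r0_kohm, t0_c, v0_mv) are invented for illustration and are not the dissertation's calibration values.

```python
# Hypothetical linear calibration for the biotag's temperature and pressure
# channels, using only the sensitivities reported in the abstract.
# The reference offsets below are illustrative placeholders.

TEMP_SENSITIVITY_C_PER_KOHM = -2.294    # from the abstract
PRESS_SENSITIVITY_MV_PER_DBAR = 1.9192  # from the abstract

def temperature_from_resistance(r_kohm, r0_kohm=10.0, t0_c=25.0):
    """Linearized temperature estimate around an assumed reference point."""
    return t0_c + TEMP_SENSITIVITY_C_PER_KOHM * (r_kohm - r0_kohm)

def depth_from_voltage(v_mv, v0_mv=0.0):
    """Pressure in dbar (roughly meters of seawater) from sensor voltage."""
    return (v_mv - v0_mv) / PRESS_SENSITIVITY_MV_PER_DBAR

if __name__ == "__main__":
    # 1 kohm below the reference resistance -> 2.294 °C above the reference
    print(temperature_from_resistance(9.0))  # 27.294
    print(depth_from_voltage(38.384))        # 20.0 dbar
```

In a real deployment the offsets would come from a calibration against a reference instrument, as done in the Gulf of Mexico comparisons described above.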

Visualization of Particle In Cell Simulations

Ljung, Patric, January 2000
A numerical simulation of space plasma and the evolution of instabilities that generate very fast electrons (at approximately half the speed of light) is used as a test bed for scientific visualisation techniques. A visualisation system was developed to provide interactive real-time animation and visualisation of the simulation results. The work focuses on two themes and their integration. The first theme is the storage and management of the large data sets produced. The second deals with how the Visualisation System and Visual Objects are tailored to efficiently visualise the data at hand. Integrating the two themes has resulted in an interactive real-time animation and visualisation system that constitutes a very powerful tool for analysing and understanding plasma physics processes. The visualisations in this work have spawned many possible new research projects and provided insight into plasma physics phenomena that were previously not fully understood.

Constructing a Clinical Research Data Management System

Quintero, Michael C., 04 November 2017
Clinical study data is usually collected without knowing in advance what kind of data will be collected. In addition, the set of all possible data points that can apply to a patient in a given clinical study is almost always a superset of the data points actually recorded for that patient. As a result, clinical data resembles sparse data with an evolving schema. To help researchers at the Moffitt Cancer Center better manage clinical data, a tool called GURU was developed that uses the Entity-Attribute-Value (EAV) model to handle sparse data, allowing users to manage a database entity's attributes without any changes to the database table definition. The EAV model's read performance improves as the data gets sparser, but it was observed to perform many times worse than a wide table when the attribute count is not sufficiently large. Ultimately, the design trades read performance for flexibility in the data schema.
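The Entity-Attribute-Value layout described above can be sketched in a few lines of SQL. This is a minimal illustration of the pattern, not the GURU tool itself; the table and attribute names are invented.

```python
# Minimal Entity-Attribute-Value (EAV) sketch: one row per recorded
# attribute, so sparse patients store only what was measured and new
# attributes need no ALTER TABLE.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eav (entity_id INTEGER, attribute TEXT, value TEXT)")

rows = [
    (1, "tumor_stage", "II"),
    (1, "biopsy_date", "2017-03-04"),
    (2, "smoking_status", "never"),  # patient 2 shares no attributes with 1
]
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", rows)

# Reading one entity back means pivoting rows into a record. That pivot is
# the read cost the abstract notes relative to a plain wide table.
patient_1 = dict(
    conn.execute(
        "SELECT attribute, value FROM eav WHERE entity_id = ?", (1,)
    ).fetchall()
)
print(patient_1)  # {'tumor_stage': 'II', 'biopsy_date': '2017-03-04'}
```

A wide table would instead have one column per possible attribute, mostly NULL for any given patient, and every new data point would require a schema migration.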

Reliability and cost efficiency in coding-based in-network data storage and data retrieval for IoT/WSNs

Souza Oliveira, Camila Helena, 09 December 2015
Wireless Sensor Networks (WSNs) are made up of small devices with limited memory, processing power, and energy. They operate interconnected and autonomously to monitor a region or object of interest. The development of more powerful and less expensive devices (with new capabilities such as energy harvesting and actuation) has made WSNs a crucial element in the emergence of the Internet of Things (IoT). Nonetheless, given the new applications and services offered in the IoT scenario, new issues arise in the data management performed by WSNs. In this new context, WSNs must handle a large amount of data, now consumed on demand, while ensuring a good trade-off between reliability, retrievability, and energy consumption. In this thesis, we are interested in data management in WSNs in the context of the IoT. Specifically, we approach the problem of in-network data storage by posing the following question: how can data be stored for the short term in a WSN so that it is easily retrievable by consumers, while ensuring the best trade-off between data reliability and conservation of energy resources?
First, we propose a reliable data storage scheme based on network coding, assuming a communication model defined by the publish/subscribe paradigm. We validate the efficiency of our proposal with a theoretical analysis corroborated by simulation. The results show that our scheme achieves a reliability of 80% in data delivery, with the best cost-benefit ratio compared to other data storage schemes.
To further improve performance, we then optimize the scheme, modeling it as a Markov Decision Process (MDP), so that it stores data with the optimal trade-off between reliability and communication overhead (in this context, also seen as energy consumption), autonomously and adaptively. To the best of our knowledge, our optimized data storage scheme is the first to ensure data reliability while adapting itself to service requirements and network conditions. In addition, we generalize the mathematical model used in our first contribution and define a system model for integrating WSNs running our scheme into the IoT context for which it was envisaged. Our performance evaluation shows that the optimization allows consumers to retrieve up to 70% more packets than the non-optimized scheme, while increasing the network lifetime by 43%.
Finally, having focused on the best trade-off between reliability and cost, we turn to an auxiliary way of reducing energy consumption in the sensor nodes. As our third contribution, we propose a two-part study measuring how much node activity scheduling can save energy. First, we propose an improvement to the duty-cycle mechanism defined in the IEEE 802.15.4 standard, making it dynamic and adaptive to network traffic; simulations show that this improvement yields considerable energy savings while still handling the generated traffic. Second, we introduce a duty-cycle mechanism into our data storage scheme, aiming to save energy in the storage nodes. Here, the duty cycle did not yield additional savings: because the optimized scheme already saves as much energy as possible while ensuring high reliability, the duty-cycle mechanism cannot improve energy savings without compromising data reliability. This result corroborates that our scheme indeed operates at the optimal trade-off between reliability and communication overhead (energy consumption).
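The adaptive storage optimization above is formulated as a Markov Decision Process. As a loose illustration of what solving such a model involves, here is textbook value iteration on a toy two-state MDP trading redundancy (reliability) against energy; every state, action, probability, and reward below is invented for illustration and is not the thesis's model.

```python
# Toy MDP: choose a redundancy level per energy state to maximize
# discounted long-run reward (reliability minus energy cost).

STATES = ["high_energy", "low_energy"]
ACTIONS = ["high_redundancy", "low_redundancy"]

# P[s][a] -> list of (next_state, probability); R[s][a] -> immediate reward
P = {
    "high_energy": {
        "high_redundancy": [("high_energy", 0.6), ("low_energy", 0.4)],
        "low_redundancy":  [("high_energy", 0.9), ("low_energy", 0.1)],
    },
    "low_energy": {
        "high_redundancy": [("low_energy", 1.0)],
        "low_redundancy":  [("low_energy", 1.0)],
    },
}
R = {
    "high_energy": {"high_redundancy": 10.0, "low_redundancy": 6.0},
    "low_energy":  {"high_redundancy": 1.0,  "low_redundancy": 4.0},
}
GAMMA = 0.9  # discount factor

V = {s: 0.0 for s in STATES}
for _ in range(200):  # value iteration to near-convergence
    V = {
        s: max(
            R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
            for a in ACTIONS
        )
        for s in STATES
    }

# Greedy policy with respect to the converged value function.
policy = {
    s: max(
        ACTIONS,
        key=lambda a: R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a]),
    )
    for s in STATES
}
print(policy)
# {'high_energy': 'high_redundancy', 'low_energy': 'low_redundancy'}
```

The resulting policy spends energy on redundancy only when energy is plentiful, which is the qualitative behavior the adaptive scheme aims for.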

Modelling data storage in nano-island magnetic materials

Kalezhi, Josephat, January 2011
Data storage in current hard disk drives is limited by three factors: the thermal stability of recorded data, the ability to store data, and the ability to read back the stored data. Alleviating one factor can worsen the others, which ultimately limits the recording densities achievable with traditional forms of magnetic data storage. To advance magnetic recording beyond these limits, new approaches are required. One approach is recording on Bit Patterned Media (BPM), where the medium is patterned into nanometer-sized magnetic islands, each storing a binary digit. This thesis presents a statistical model of write errors in BPM composed of single-domain islands. The model includes thermal activation in the write-error calculation without resorting to time-consuming micromagnetic simulations of huge populations of islands, and it incorporates distributions of island position and of magnetic and geometric properties. To study the impact of island geometry variations on the recording performance of BPM systems, the magnetometric demagnetising factors for a truncated elliptic cone, a generalised geometry that reasonably describes most proposed island shapes, were derived analytically. Thermal activation was included via an analytic derivation of the energy barrier for a single-domain island; the energy barrier feeds a calculation of transition rates, which in turn yields error rates. The model has been used to study the write-error performance of BPM systems with distributions of position, geometric, and magnetic property variations. Results showed that island intrinsic anisotropy and position variations have a larger impact on write-error performance than geometric variations. The model was also used to study thermally activated Adjacent Track Erasure (ATE) for a specific write head.
The write head had a rectangular main pole of 13 by 40 nm (cross-track by down-track), with a pole-to-trailing-shield gap of 5 nm and a pole-to-side-shield gap of 10 nm. The distance from the pole to the top surface of the medium was 5 nm, the medium was 10 nm thick, and there was a 2 nm interlayer between the soft underlayer (SUL) and the medium, giving a total SUL-to-pole spacing of 17 nm. The results showed that ATE would be a major problem and that cross-track head field gradients need to be more tightly controlled than down-track gradients. With the write head used, recording at 1 Tb/in² would be possible on single-domain islands.
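The thermal-activation picture behind such write-error models is commonly the Néel-Arrhenius law: an island with energy barrier E_b switches spontaneously at rate f0·exp(-E_b/kT), so the probability of an unwanted reversal within time t is 1 - exp(-rate·t). The sketch below uses that standard law with illustrative parameters; it is not the thesis's full model, which also handles head fields and property distributions.

```python
# Neel-Arrhenius sketch of thermally activated reversal for a
# single-domain island. Parameter values are illustrative.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def reversal_probability(barrier_joules, temp_kelvin, seconds, f0=1e9):
    """Probability that an island flips thermally within `seconds`.

    f0 is the attempt frequency (~1e9-1e12 Hz in the literature).
    """
    rate = f0 * math.exp(-barrier_joules / (K_B * temp_kelvin))
    return 1.0 - math.exp(-rate * seconds)

# A barrier of ~60 kT is the usual rule of thumb for 10-year stability.
TEN_YEARS_S = 10 * 365.25 * 86400
p60 = reversal_probability(60 * K_B * 300.0, 300.0, TEN_YEARS_S)
p40 = reversal_probability(40 * K_B * 300.0, 300.0, TEN_YEARS_S)
print(p60)  # vanishingly small: data is stable
print(p40)  # a 40 kT barrier already gives a substantial loss rate
```

The exponential sensitivity to the barrier is why anisotropy variations dominate write-error performance, as the thesis's results indicate.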

Efficient Storage and Domain-Specific Information Discovery on Semistructured Documents

Farfan, Fernando R., 12 November 2009
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed the large-scale adoption of XML in actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the massive adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge in leveraging semistructured data is to perform effective information discovery on it. Previous work has addressed this problem in a generic (i.e., domain-independent) way, but the process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals. The first was to devise novel techniques to efficiently store and process semistructured documents, with two specific aims: we proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives, and we developed a Double-Lazy Parser for semistructured documents that introduces lazy behavior in both the pre-parsing and progressive parsing phases of the standard Document Object Model parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing information discovery over domain-specific semistructured documents.
This goal also had two aims: we presented a framework that exploits domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies, and we proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
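The lazy-parsing idea the dissertation builds on (avoid materializing a full DOM up front) can be illustrated with the standard library's streaming XML parser. This shows generic lazy parsing, not the Double-Lazy Parser itself; the sample document is invented.

```python
# Streaming/lazy XML processing: elements are yielded as parsed, and
# clearing consumed subtrees keeps memory proportional to one record,
# not the whole document.
import io
import xml.etree.ElementTree as ET

doc = io.StringIO(
    "<library>"
    "<book id='1'><title>A</title></book>"
    "<book id='2'><title>B</title></book>"
    "</library>"
)

titles = []
for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "book":
        titles.append(elem.findtext("title"))
        elem.clear()  # discard the subtree once consumed

print(titles)  # ['A', 'B']
```

A double-lazy design goes further by also deferring work in the pre-parsing phase, so even the element skeleton is built on demand rather than in one pass.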

Electric energy commercialisation support platform

Lanzotti, Carla Regina, 02 March 2006
Advisor: Paulo de Barros Correia. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica (Energy Systems Planning).
Recent changes in the regulation of electric energy commercialisation have driven the search for mechanisms capable of determining the electric energy price and enabling stronger competition. One of the mechanisms used to leverage competition between market players is the auction, which can be set up to meet the needs of its proponent. An electric-sector agent must be well informed to negotiate profitably. With this in mind, the Unicamp electric energy commercialisation group developed tools based on game theory and auctions, applying these models to the electric sector in the context of Brazilian market regulation. This work specifies and develops a computing environment that integrates both the existing tools and those under development, and it incorporates a database storing the main information about the electric energy market, the electric system, and the economy. The resulting electric energy commercialisation support platform is a software system made up of three modules: an auction simulation module, a contracting-strategy support module, and a database module. The platform's specification follows a modular design that allows additional modules to be added and the tools to evolve with market needs. The platform helps electric energy market players verify and define bids, both to bid more effectively in auctions run by the CCEE and to design their own auctions, exploiting the freedom of bilateral contracting.

Green IT: data storage and energy efficiency in the data center of a Brazilian bank

Silva, Newton Rocha da, 04 March 2015
Green IT focuses on the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems efficiently and effectively, with minimal impact on the environment. Its main goal is to improve computing performance while reducing energy consumption and carbon footprint. Green information technology is thus the practice of environmentally sustainable computing, aiming to minimize the negative impact of IT operations on the environment. At the same time, exponential growth of digital data is a reality for most companies, making them increasingly dependent on IT to provide sufficient, real-time information to support the business. This growth forces changes in data center infrastructure, focusing attention on facility capacity issues driven by the energy, space, and cooling demands of IT activities. In this scenario, this research analyzes whether the main data storage solutions, such as consolidation, virtualization, deduplication, and compression, together with solid-state technologies (SSD or flash systems), can contribute to the efficient use of energy in the organization's main data center. The study used a qualitative, exploratory method based on a case study, with data collected through empirical and documentary research and interviews with key IT solution suppliers. The case study took place in the main data center of a large Brazilian bank. The results show that energy efficiency is sensitive to the technological solutions presented.
Environmental concern was evident and revealed a path shared between the partners and the organization studied. Maintaining PUE (Power Usage Effectiveness), the energy-efficiency metric, at a level of excellence reflects the combined implementation of solutions, technologies, and best practices. We conclude that, in addition to reducing energy consumption, data storage solutions and technologies promote efficiency improvements in the data center, freeing power density for the installation of new equipment. Therefore, in the face of growing demand for digital data, it is crucial that solutions, technologies, and strategies be chosen appropriately, not only for the criticality of the information but also for the efficient use of resources, contributing to a clearer understanding of IT's importance and its consequences for the environment.
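The PUE metric cited above has a simple definition: total facility power divided by the power delivered to IT equipment, with 1.0 as the ideal floor. A small sketch with illustrative numbers:

```python
# Power Usage Effectiveness (PUE): total facility power / IT power.
# 1.0 means every watt reaches IT equipment; values above 1.0 measure
# overhead (cooling, power distribution, lighting). Sample numbers are
# illustrative, not from the bank's data center.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE: lower is better; 1.0 is the theoretical minimum."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

print(pue(1500.0, 1000.0))  # 1.5: half a watt of overhead per IT watt
```

Storage efficiency measures such as deduplication and compression improve the ratio indirectly: less hardware doing the same work lowers both the IT load and the cooling overhead it generates.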

Billing and receivables database application

Lukalapu, Sushma, 01 January 2000
The purpose of this project is to design, build, and implement an information retrieval database system for the Accounting Department at CSUSB. The database will focus on the financial details of the student accounts maintained by the accounting personnel. It offers detailed information pertinent to tuition, parking, housing, boarding, etc.

Energy Agile Cluster Communication

Mustafa, Muhammad Zain, 18 March 2015
Computing researchers have long focused on improving energy efficiency (the amount of computation per joule) under the implicit assumption that all energy is created equal. Energy, however, is not created equal: its cost and carbon footprint fluctuate over time due to a variety of factors, and these fluctuations are expected to intensify as renewable penetration increases. In this work I introduce energy-agility, a design concept for a platform's ability to rapidly and efficiently adapt to such power fluctuations. I then introduce a representative application to assess energy-agility for the type of long-running, parallel, data-intensive tasks that are both common in data centers and most amenable to delays from variations in available power. Multiple variants of the application are implemented to illustrate the fundamental tradeoffs in designing energy-agile parallel applications. I find that with inactive power state transition latencies of up to 15 seconds, a design that regularly "blinks" servers outperforms one that minimizes transitions by only changing power states when power varies. While the latter approach has much lower transition overhead, it requires additional I/O, since servers are not always concurrently active. Unfortunately, most server-class platforms today are not energy-agile: they have transition latencies beyond one minute, forcing them to minimize transitions and incur the additional I/O.
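The two designs contrasted above can be sketched by counting power-state transitions under each policy. All numbers below are invented to illustrate the tradeoff, not measurements from the thesis: blinking pays a fixed transition cost per period regardless of the power trace, while the transition-minimizing design transitions only when available power actually changes (but then needs extra I/O to shuttle data between servers that are never all active at once).

```python
# Count power-state transitions for the two energy-agile designs.

def blink_transitions(duration_s: int, blink_period_s: int) -> int:
    """Blinking toggles every period, independent of the power trace."""
    return duration_s // blink_period_s

def minimal_transitions(power_trace: list) -> int:
    """Transition only when the available power level changes."""
    return sum(1 for a, b in zip(power_trace, power_trace[1:]) if a != b)

# Hypothetical available power (kW) over eight 100 s intervals.
trace = [100, 100, 60, 60, 60, 100, 40, 40]

print(blink_transitions(duration_s=800, blink_period_s=100))  # 8
print(minimal_transitions(trace))                             # 3
```

With cheap transitions (up to ~15 s, per the abstract), the blinking design's higher transition count is affordable and it avoids the minimal design's extra I/O; once latencies exceed a minute, only the transition-minimizing design remains viable.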
