  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Calibração de modelos hidráulicos de redes de abastecimento de água de sistemas reais admitindo vazamentos / Calibration analysis considering leakage applied to existing water supply systems

Fernando Colombo 09 February 2007
Considering the need for more effective control of water supply distribution systems, calibration constitutes a fundamental step towards guaranteeing that system behavior can be reproduced under the most diverse operational conditions. Despite the importance of calibration and the variety of models built for this purpose in academia, the practice has not been widely adopted by water companies, which show a certain reluctance to use mathematical models. It is therefore desirable that existing models be tested intensively on real systems, so that guidelines for their use, and greater confidence in them, can be established. The present research studied the application of a model built especially for the calibration of water supply systems, through which it is possible to identify field variables such as pipe roughness, diameters, and leakage model parameters. The model is comprehensive enough to incorporate leakage and pressure-driven demands. Two case studies were carried out with it, and the reasons for the discrepancies detected between simulated and observed values are discussed, despite the consistency of the responses produced by simulation. Some recommendations are also made towards making studies of this kind practical, so that they can yield tools directly usable by water supply utilities.
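The pressure-driven leakage behaviour described above is commonly modelled with an emitter-style power law, where leak flow grows with nodal pressure. As a minimal sketch (the coefficient and exponent below are hypothetical illustration values, not the thesis's calibrated parameters):

```python
# Illustrative sketch of a pressure-driven leakage model: leak flow at a node
# follows q = C * p^beta. C and beta here are made-up values; in a calibration
# study they would be estimated from field pressure and flow measurements.

def leakage_flow(pressure_m, c_emitter, beta):
    """Leakage flow at a node as a function of pressure head (m)."""
    if pressure_m <= 0:
        return 0.0                      # no leakage without positive pressure
    return c_emitter * pressure_m ** beta

# Example: total leakage over a few nodes at different pressure heads
pressures = [12.0, 25.0, 40.0]          # metres of head (hypothetical)
c, beta = 0.05, 1.18                    # exponent often cited for plastic pipes
total = sum(leakage_flow(p, c, beta) for p in pressures)
print(round(total, 2))
```

Because beta exceeds 1 here, reducing pressure reduces leakage more than proportionally, which is one reason pressure management is a standard loss-control tool.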

A study of natural CO₂ reservoirs: mechanisms and pathways for leakage and implications for geologically stored CO₂

Miocic, Johannes Marijan January 2016
Carbon Capture and Storage (CCS) is a suite of technologies available to directly reduce carbon dioxide (CO2) emissions to the atmosphere from fossil-fuelled power plants and large industrial point sources. For the safe deployment of CCS it is important that CO2 injected into deep geological formations does not migrate out of the storage site. Characterising and understanding possible migration mechanisms, and the pathways along which migration may occur, is therefore crucial to ensure secure engineered storage of anthropogenic CO2. In this thesis, naturally occurring subsurface CO2 accumulations are studied as analogues for engineered storage sites, with respect to CO2 migration pathways and the mechanisms that ensure retention of CO2 in the subsurface. Geological data on natural CO2 reservoirs world-wide have been compiled from the published literature and analysed. The results show that faults are the main pathways for migration of CO2 from subsurface reservoirs to the surface, and that the state and density of the CO2, the reservoir pressure, and the thickness of the caprock influence the successful retention of CO2. Gaseous, low-density CO2, overpressured reservoirs, and thin caprocks are characteristics of insecure storage sites. Two natural CO2 reservoirs have been studied in detail with respect to their fault seal properties, including the first study of how fault rock seals behave in CO2 reservoirs. It is shown that the bounding fault of the Fizzy Field reservoir in the southern North Sea can withhold the amount of CO2 currently trapped in the reservoir. An initially higher gas column would have led to across-fault migration of CO2, as the fault rock seals would not have been able to withhold the higher pressures. Depending on the present-day stress regime, the fault could be close to failure. At the natural CO2 reservoir of St. Johns Dome, Arizona, migration of CO2 to the surface has been occurring for at least the last 500 ka.
Fault seal analysis shows that this migration is related to the fault rock composition and to the orientation of the bounding fault in the present-day stress field. Using the U-Th disequilibrium method, the ages of travertine deposits in the St. Johns Dome area have been determined. The results illustrate that CO2 migration along one fault took place for at least 480 ka, and that individual travertine mounds have had long lifespans of up to ~350 ka. Age and uranium isotope trends along the fault are interpreted as signs of a shrinking CO2 reservoir. The amount of CO2 calculated to have migrated out of the St. Johns Dome is up to 113 Gt, with calculated rates spanning 5 t/yr to 30,000 t/yr, indicating that in the worst case large amounts of CO2 can migrate rapidly from a subsurface reservoir along faults to the surface. This thesis highlights the importance of faults as pathways for vertical migration of CO2. It also shows that faults can act as baffles to CO2 migration, and that whether a fault acts as a pathway or a baffle can be predicted using fault seal analysis. However, further work is needed to minimise the uncertainties of fault seal analysis for CO2 reservoirs.
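The link the abstract draws between CO2 density and storage security can be illustrated with the standard fault-seal relation: the retainable CO2 column height scales with the seal's capillary entry pressure divided by the buoyancy contrast. The numbers below are illustrative, not values from the thesis:

```python
# Back-of-the-envelope sketch of the fault-seal principle: a larger density
# contrast between formation water and CO2 means more buoyancy pressure per
# metre of column, so the same seal holds a shorter CO2 column. All inputs
# are hypothetical round numbers.

G = 9.81  # gravitational acceleration, m/s^2

def max_co2_column(entry_pressure_pa, rho_water, rho_co2):
    """Maximum retainable CO2 column height (m) below a sealing fault rock."""
    return entry_pressure_pa / ((rho_water - rho_co2) * G)

seal_pc = 0.5e6                                           # 0.5 MPa entry pressure
print(round(max_co2_column(seal_pc, 1050.0, 700.0), 1))   # dense, supercritical CO2
print(round(max_co2_column(seal_pc, 1050.0, 150.0), 1))   # gaseous, low-density CO2
```

The gaseous case supports a far shorter column under the same seal, consistent with the finding that low-density CO2 characterises insecure sites.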

Cryptographic techniques for hardware security

Tselekounis, Ioannis January 2018
Traditionally, cryptographic algorithms are designed under the so-called black-box model, which considers adversaries that receive black-box access to the hardware implementation. Although a "black-box" treatment covers a wide range of attacks, it fails to capture reality adequately, as real-world adversaries can exploit physical properties of the implementation, mounting attacks that enable unexpected, non-black-box access to the components of the cryptographic system. This type of attack is widely known as a physical attack, and physical attacks have proven to be a significant threat to the real-world security of cryptographic systems. The present dissertation deals, in part, with the problem of protecting cryptographic memory against physical attacks via non-malleable codes, a notion introduced in preceding work that aims to provide privacy of the encoded data in the presence of adversarial faults. In this thesis we improve the current state of the art on non-malleable codes and provide practical solutions for protecting real-world cryptographic implementations against physical attacks. Our study focuses primarily on the following adversarial models: (i) the extensively studied split-state model, which assumes that the private memory splits into two parts and that the adversary tampers with each part independently, and (ii) the model of partial functions, introduced by the current thesis, which models adversaries that access arbitrary subsets of codeword locations of bounded cardinality. Our study is comprehensive, covering both one-time and continuous attacks, and for the case of partial functions we achieve a stronger notion of security, which we call non-malleability with manipulation detection, that guarantees integrity of the private data in addition to privacy.
It should be noted that our techniques are also useful for the problem of establishing private, keyless communication over adversarial communication channels. Besides physical attacks, another important concern related to cryptographic hardware security is that the hardware fabrication process is assumed to be trusted. In reality, though, when aiming to minimize production costs, or whenever access to leading-edge manufacturing facilities is required, the fabrication process involves several, potentially malicious, facilities. Consequently, cryptographic hardware is susceptible to so-called hardware Trojans: components maliciously implanted into the original circuitry with the purpose of altering the device's functionality while remaining undetected. Part of the present dissertation deals with the problem of protecting cryptographic hardware against Trojan injection attacks by (i) proposing a formal model for assessing the security of cryptographic hardware whose production has been partially outsourced to a set of untrusted, and possibly malicious, manufacturers, and (ii) proposing a compiler that transforms any cryptographic circuit into another that can be securely outsourced.
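The split-state setting above can be made concrete with a toy example. Plain XOR secret sharing (below) hides the encoded data, yet it is trivially malleable: flipping one bit of one share flips the corresponding bit of the decoded message. This is exactly the kind of related-message tampering that non-malleable codes rule out; the sketch is illustrative and not a construction from the thesis.

```python
# Toy split-state illustration: the codeword is two shares kept in separate
# memory halves, and the adversary tampers with each share independently.
# XOR sharing gives privacy (each share alone is uniform) but NOT
# non-malleability, as demonstrated at the bottom.
import secrets

def encode(msg: int, nbits: int = 8):
    left = secrets.randbits(nbits)      # uniformly random share
    right = left ^ msg                  # second share fixes the XOR
    return left, right

def decode(left: int, right: int) -> int:
    return left ^ right

l, r = encode(0b10110010)
assert decode(l, r) == 0b10110010       # correctness

# Independent tampering: flip the lowest bit of the left share only.
tampered = decode(l ^ 0b1, r)
print(tampered == 0b10110010 ^ 0b1)     # True: decoded message is related
```

A non-malleable code guarantees instead that any such tampering yields either the original message or something unrelated to it.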

Impactos de mudanças nos ventos de oeste do Hemisfério Sul no vazamento das Agulhas / Impacts of changes in the Southern Hemisphere westerlies in the Agulhas leakage

Gonçalves, Rafael Carvalho 02 March 2012
South of Africa, the southwestward-flowing Agulhas Current retroflects abruptly, shedding rings of saltier and warmer Indian Ocean water into the relatively colder and fresher southeastern portion of the South Atlantic. This transfer of Indian Ocean water into the Atlantic through rings and filaments in the retroflection region is referred to in the literature as the Agulhas leakage. The leakage connects the subtropical gyres of the South Atlantic and Indian Oceans and is partly responsible for the high salinity of the Atlantic Ocean. The connection between the two gyres in the Agulhas retroflection area is limited to the south by the Subtropical Front, whose position is largely controlled by the location of the zero wind stress curl. Since the late 1960s, the Southern Hemisphere westerly winds have shifted poleward, reflecting the positive trend of the southern annular mode (SAM) index. To assess the impact of these changes in the atmospheric circulation on the Agulhas leakage, an implementation of the HYCOM model was run, forced with monthly means of NCEP reanalysis products from 1948 to 2010. The results show an increase in Agulhas leakage of 1.1 Sv per decade between 1960 and 2010. This inter-basin transport increase is related to a southward shift of the Subtropical Front, forced by the poleward migration of the westerlies. The results also show positive trends in sea surface height and temperature in the Agulhas region as a consequence of the poleward shift of the Subtropical Front. The positive trends of these fields and the displacement of the Subtropical Front follow the positive trend of the SAM index, with higher values during the austral summer months.
As the SAM index trend has been attributed to ozone depletion and to the increase in greenhouse gas concentrations, the results presented here highlight the consequences of anthropogenic climate change for the distribution of heat and salt within the oceans.
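A decadal trend figure like the 1.1 Sv per decade above is typically obtained by an ordinary least-squares fit of annual-mean transport against time. A minimal sketch, using a synthetic series rather than the model output:

```python
# Sketch of trend estimation for a transport time series: ordinary
# least-squares slope of annual-mean Agulhas leakage transport (Sv) vs. year.
# The series below is synthetic, built to carry a 0.11 Sv/yr trend.

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

years = list(range(1960, 2011))
transport = [10.0 + 0.11 * (y - 1960) for y in years]   # synthetic leakage, Sv
print(round(ols_slope(years, transport) * 10, 2))       # Sv per decade
```

A real analysis would of course add noise handling and significance testing (e.g. accounting for serial correlation in the residuals).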

Leakage and Value Chain in Relation to Cruise Industry

Sun, Huimin January 2019
The Cruise Lines Association draws an optimistic scenario for cruising and points out that the fastest-growing market is in Asia, where Chinese passengers are the main force. Cruise travel, as a new economic engine, is developing rapidly in China. However, among all the cruise terminals in mainland China, WSICT is the sole profitable port, and serious leakage from the cruise industry is considered the cause. What causes this leakage? In this thesis, a cruise value chain is proposed, covering the main stages from planning to shore visits. By tracking the cash flow through the value chain, potential sources of leakage are identified and then verified in a case study of Shanghai, where two representative companies, WSICT and Ctrip, are analyzed in depth. The investigation reveals some Chinese-style characteristics, such as retailer charter selling, packaged cruise products, and Chinese passengers' preferences, each of which affects the value chain differently. The results indicate severe leakage in cruise planning, shore visiting, and shopping. The preliminary success of WSICT can be attributed to its early entry into the cruise business and its geographical advantages. For a travel agency like Ctrip, the typical retailer charter selling gives it the right to design products, so it adds extra services such as insurance and shore excursions to enhance profits. Overall, apart from port companies and travel intermediaries, local communities participate in the cruise industry only to a limited extent. Hence, more supportive policies are essential to motivate local community participation.
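The cash-flow-tracking idea can be sketched as a simple accounting exercise: for each value-chain stage, compare total tourist spend with the share captured locally. Stage names and all figures below are hypothetical, purely to show the computation:

```python
# Minimal sketch of value-chain leakage accounting: leakage rate is the share
# of total spend NOT retained by local actors. Stages and amounts are made up.

stages = {                      # stage -> (total spend, locally retained), CNY
    "planning/booking": (3000, 300),
    "onboard spending": (2000, 0),
    "shore excursions": (800, 200),
    "shopping":         (1200, 400),
}

total = sum(spend for spend, _ in stages.values())
local = sum(kept for _, kept in stages.values())
leakage_rate = 1 - local / total
print(f"{leakage_rate:.0%}")
```

Attribution of each cash flow to "local" versus "non-local" actors is the hard empirical part, which is what the Shanghai case study addresses.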

Information leakage and sharing in decentralized systems

LUO, Huajiang 01 January 2018
This thesis presents two essays that explore firms' incentives to share information in a multi-period decentralized supply chain and between competing firms. In the first essay, we consider a two-period supply chain in which one manufacturer supplies a retailer. The retailer possesses private information about the uncertain demand and decides whether to share it with the manufacturer. If an information sharing agreement is reached, the retailer shares the observed demand information truthfully with the manufacturer. Then the two-period selling season starts. In each period, the manufacturer decides on a wholesale price, which the retailer considers when setting the retail price. The manufacturer can observe the retailer's period-1 decision and the realized period-1 demand, and use this information when making the period-2 wholesale price decision. Thus, without information sharing, the two firms play a two-period signaling game. We find that voluntary information sharing is not possible, because it benefits the manufacturer but hurts the retailer. However, unlike in the one-period model, in which no information sharing can be achieved even with a side payment, the manufacturer can make a side payment to the retailer to induce information sharing when the demand range is small. Both firms benefit from more accurate information regardless of whether the retailer shares it. We also extend the two-period model to three-period and infinite-period models and find that the above results are robust. The second essay studies the incentives for information sharing between two competing firms with different production timing strategies. Each firm plans to produce a new (upgraded) product. One firm adopts routine timing, whereby her production time is fixed according to her tradition for similar or previous models of the product.
The other firm uses strategic timing, whereby his production time can be strategically chosen: before, simultaneously with, or after the routine firm. The two firms simultaneously choose whether or not to disclose their private demand information, make their quantity decisions based on any demand information available, and then compete in the market. We find that when demand uncertainty is not high, both firms sharing information is the unique equilibrium outcome. Exactly one firm (the routine firm) sharing information can arise in equilibrium when demand uncertainty is intermediate. These results stand in stark contrast to the extant literature, which has shown that, for Cournot competitors with substitutable goods, no firm is willing to share demand information. Production timing is thus identified as a key driving force for horizontal information sharing, one that may have been overlooked before. Surprisingly, when competition becomes more intense, firms are more willing to share information. It is the information asymmetry that fundamentally changes the strategic firm's timing. We highlight the impact of an early-production firm's demand-information signaling on the timing strategies under different information sharing arrangements.
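The market game underlying the second essay is Cournot quantity competition. As a self-contained baseline sketch (not the thesis model, which adds demand signals and timing choices), the code below iterates best responses for two symmetric firms facing inverse demand P = a - b(q1 + q2) with unit cost c, converging to the textbook equilibrium q* = (a - c) / (3b):

```python
# Cournot duopoly via best-response iteration. Parameters a, b, c are
# arbitrary illustrative values.

def best_response(q_rival, a=100.0, b=1.0, c=10.0):
    """Profit-maximising quantity given the rival's quantity:
    argmax_q (a - b*(q + q_rival) - c) * q  =>  (a - c - b*q_rival) / (2b)."""
    return max(0.0, (a - c - b * q_rival) / (2 * b))

q1 = q2 = 0.0
for _ in range(100):                       # iterate to the fixed point
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 4), round(q2, 4))          # both approach (100 - 10) / 3 = 30
```

In the essay's setting, each firm's quantity rule additionally conditions on its demand signal (and, for the late mover, on the early mover's observed action), which is what makes timing and disclosure interact.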

IMPROVING PERFORMANCE AND ENERGY EFFICIENCY FOR THE INTEGRATED CPU-GPU HETEROGENEOUS SYSTEMS

Wen, Hao 01 January 2018
Current heterogeneous CPU-GPU architectures integrate general-purpose CPUs and highly thread-level-parallel GPUs (Graphics Processing Units) on the same die. This dissertation focuses on improving the energy efficiency and performance of such heterogeneous CPU-GPU systems. Leakage energy has become an increasingly large fraction of total energy consumption, making its reduction important for overall energy efficiency. Caches occupy a large on-chip area, which makes them good targets for leakage energy reduction. For the CPU cache, we study how to reduce cache leakage energy efficiently in a hybrid SPM (Scratch-Pad Memory) and cache architecture. The GPU cache has a different access pattern from the CPU cache, usually with little locality and a high miss rate; in addition, the GPU can hide memory latency more effectively through multi-threading. For these reasons, we find it possible to place the lines of GPU data caches into low-power mode more aggressively than traditional leakage management does for CPU caches, reducing more leakage energy without significant performance degradation. Contention for resources shared between the CPU and GPU, such as the last-level cache (LLC), the interconnection network, and DRAM, may degrade both CPU and GPU performance. We propose a simple yet effective probability-based method to control the LLC replacement policy, reducing the CPU's inter-core conflict misses caused by the GPU without significantly impacting GPU performance. In addition, we develop two strategies that combine the probability-based method for the LLC with an existing technique, virtual channel partitioning (VCP), for the interconnection network to further improve CPU performance.
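One way to read the probability-based LLC idea is as a probabilistic insertion policy: on a GPU-triggered miss, with some probability p the new line is inserted at low (near-LRU) priority, so streaming GPU traffic tends to evict its own lines rather than the CPU's. The class below is an illustrative sketch under that reading, not the dissertation's exact mechanism; the name and parameter are ours.

```python
# Sketch of a probability-based insertion policy for one LLC set.
# lines[0] is the LRU position, lines[-1] the MRU position.
import random

class ProbabilisticLLCSet:
    def __init__(self, ways=8, gpu_low_priority_prob=0.9):
        self.ways = ways
        self.lines = []
        self.p = gpu_low_priority_prob  # chance a GPU fill gets LRU priority

    def access(self, tag, from_gpu=False):
        """Return True on hit, False on miss (with fill)."""
        if tag in self.lines:           # hit: promote to MRU
            self.lines.remove(tag)
            self.lines.append(tag)
            return True
        if len(self.lines) >= self.ways:
            self.lines.pop(0)           # evict the LRU line
        if from_gpu and random.random() < self.p:
            self.lines.insert(0, tag)   # low priority: next eviction victim
        else:
            self.lines.append(tag)      # normal MRU insertion
        return False
```

With p near 1, a burst of GPU fills cycles through the LRU slot while CPU-resident lines keep their positions, which is the intended protection against GPU-induced conflict misses.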
Breadth-first search (BFS), the basis of graph search and a core building block for many higher-level graph analysis applications, is a typical example of parallel computation that is inefficient on GPU architectures. In a graph, a small portion of the nodes may have a large number of neighbors, which leads to irregular tasks on GPUs; these irregularities limit the parallelism of BFS executing on GPUs. Unlike previous work focusing on fine-grained task management to address the irregularity, we propose Virtual-BFS (VBFS) to virtually change the graph itself. By adding virtual vertices, the high-degree nodes in the graph are divided into groups with an equal number of neighbors, which increases the parallelism so that more GPU threads can work concurrently. This approach preserves correctness and can significantly improve both performance and energy efficiency on GPUs.
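The virtual-vertex transformation can be sketched on a plain adjacency structure: any node whose degree exceeds a cap is split into virtual copies with at most `cap` neighbors each, so each GPU thread gets a bounded amount of work. The function name and representation below are illustrative, not the dissertation's implementation (which must also map virtual copies back to their original vertex during traversal):

```python
# Sketch of the virtual-vertex idea behind VBFS: cap the degree of every node
# by splitting high-degree nodes into virtual copies.

def split_high_degree(adj, cap):
    """Return a new adjacency dict in which every node has degree <= cap."""
    out = {}
    for node, nbrs in adj.items():
        if len(nbrs) <= cap:
            out[node] = list(nbrs)
        else:
            # one virtual copy per chunk of `cap` neighbors
            for i in range(0, len(nbrs), cap):
                out[f"{node}#v{i // cap}"] = nbrs[i:i + cap]
    return out

g = {"hub": [1, 2, 3, 4, 5], "leaf": [1]}
print(split_high_degree(g, 2))
# {'hub#v0': [1, 2], 'hub#v1': [3, 4], 'hub#v2': [5], 'leaf': [1]}
```

After the split, the work per (virtual) vertex is bounded by `cap`, so a warp of threads processing one frontier sees far less load imbalance.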

High Level VHDL Modeling of a Low-Power ASIC for a Tour Guide

Kailasam, Umadevi 29 March 2004
We present the high-level (VHDL) modeling and high-level synthesis of an ASIC (the TOUR NAVIGATOR) for a portable handheld device, a tour guide. The tour guide is based on location-aware mobile computing, which gives the user information about the current location. The TOUR NAVIGATOR designed in this work is interfaced with off-the-shelf components to realise the tour guide system; the current location is provided by an on-board GPS receiver chip. The TOUR NAVIGATOR is a search-and-play module that interfaces with the flash memory, the GPS receiver, and the audio codec. Its function is to search the flash memory for audio data corresponding to the current GPS co-ordinate, which is an input to the TOUR NAVIGATOR. A look-up table containing GPS co-ordinates and the corresponding audio files is loaded into the flash memory, wherein each entry is indexed by its co-ordinates and associated with an audio file containing information about the location. When there is a match, the audio file is streamed to the codec. The functionality of the TOUR NAVIGATOR's interface with the memory module is verified at the RTL using Cadence NCLaunch. The layout implementation of the TOUR NAVIGATOR is done using an automatic place-and-route tool (Silicon Ensemble), which uses standard cells for the entire design. Leakage power is reduced by introducing sleep transistors in the standard cells: the TOUR NAVIGATOR is put into a "sleep" mode when the tour guide is idle, giving significant power savings.
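The search-and-play behaviour can be paraphrased in software terms: match the current GPS fix against the look-up table and, on a hit, stream the associated audio file to the codec. The sketch below is purely illustrative (the actual design is RTL hardware, and the table contents, tolerance, and file names here are made up):

```python
# Software paraphrase of the TOUR NAVIGATOR's look-up: find the audio file
# whose stored co-ordinate matches the current GPS fix within a tolerance.

LOOKUP = {                              # (lat, lon) -> audio file (hypothetical)
    (27.9506, -82.4572): "downtown.pcm",
    (27.9420, -82.4620): "museum.pcm",
}
TOL = 0.0005                            # matching tolerance in degrees

def find_audio(lat, lon):
    for (t_lat, t_lon), audio in LOOKUP.items():
        if abs(lat - t_lat) <= TOL and abs(lon - t_lon) <= TOL:
            return audio                # match: stream this file to the codec
    return None                         # no match: remain silent

print(find_audio(27.9507, -82.4570))    # near the first entry
```

In the hardware, the "no match" path is also where the sleep-transistor logic can power the module down until the next fix arrives.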

A Complete Probabilistic Framework for Learning Input Models for Power and Crosstalk Estimation in VLSI Circuits

Ramalingam, Nirmal Munuswamy 06 October 2004
Power dissipation is a growing concern in VLSI circuits. In this work we model the data dependence of power dissipation by learning an input model, which we use to estimate both switching activity and crosstalk for every node in the circuit. We use Bayesian networks to model the spatio-temporal dependence in the inputs effectively, and we use the probabilistic graphical model to learn the structure of the input dependencies; the learned structure is representative of the input model. Since we learn a causal model, we can exploit a larger number of independencies, which guarantees a minimal structure. The Bayesian network is converted into a moral graph, which is then triangulated, and the junction tree is formed with its nodes representing the cliques. We then apply logic sampling to the junction tree, and the number of samples required is quite low. Experimental results with the ISCAS '85 benchmark circuits show a very high compaction ratio with an average error of less than 2%. Since HSPICE was used, the results are highly accurate with respect to delays. The results can further be used to predict the crosstalk between neighboring nodes, and this prediction helps in designing the circuit to avoid crosstalk problems.
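The estimation target above, switching activity under spatio-temporally correlated inputs, can be illustrated with a tiny Monte Carlo sketch. A first-order Markov bit stream stands in for the learned Bayesian-network input model (the real work samples a junction tree, not this toy):

```python
# Toy estimate of switching activity at the output of a 2-input AND gate,
# with temporally correlated inputs. The Markov input model and parameters
# are illustrative stand-ins for the learned input model.
import random

def sample_stream(n, p_stay=0.9):
    """0/1 stream where each bit repeats the previous one with prob p_stay."""
    bits = [random.randint(0, 1)]
    for _ in range(n - 1):
        bits.append(bits[-1] if random.random() < p_stay else 1 - bits[-1])
    return bits

def switching_activity(n=100_000):
    a, b = sample_stream(n), sample_stream(n)
    out = [x & y for x, y in zip(a, b)]              # the gate under study
    toggles = sum(o1 != o2 for o1, o2 in zip(out, out[1:]))
    return toggles / (n - 1)                          # toggles per cycle

random.seed(0)
print(round(switching_activity(), 3))
```

With temporally correlated inputs the activity is well below the 0.375 one would get from independent uniform inputs, which is precisely why capturing input dependence matters for power estimates.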

Penetration through the staple punctures on five N95 respirator models

Medina, Daniel E. 03 November 2014
Certain N95 FFR (filtering facepiece respirator) models that staple the head straps directly onto the filtering material are commercially available. This method of assembly can tear, or reduce the fiber density of, the material immediately surrounding the staple punctures. Five N95 FFR models were evaluated to determine whether staple punctures in the filter medium reduce the protection offered by the respirators. Total penetrations were measured with the staple punctures intact and with the head straps stretched a distance equivalent to a 95th-percentile male head circumference. Filter penetrations were measured by sealing the staple punctures. Aerosols of 200, 500, and 1000 nm were used to challenge the respirators at flow rates of 28 and 85 L/min. Staple punctures were inspected visually by macrophotography with a light source on the opposite side of the punctures. Three FFR models had greater mean leakages than the remaining two; however, only two models had statistically significantly greater total penetrations than filter penetrations. Pulling the head straps increased total penetration, but the increase was not statistically significant. Filter penetrations were greatest at 85 L/min and 200 nm, while leakages were greatest at 28 L/min and 1000 nm. Leakage through the staple punctures contributed more to total penetration than filter penetration did, allowing a greater percentage of 1000 nm particles into the breathing zone. Leakage depended on tearing of the filter material or reduced fiber density near the puncture, regardless of filter efficiency. Ratios of total penetration to filter penetration showed that leakage exceeded filter penetration fifteen-fold for 1000 nm particles, a value similar to what has been reported for face-seal leaks on human subjects. Protection factors were reduced from ~930 to ~60 when the staple punctures created a tear. N95 FFRs with stapled head straps that puncture the filter medium should be avoided, because they can reduce the protection offered to the user.
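The quantities reported above follow from simple arithmetic: total penetration is the sum of penetration through the filter and through the puncture leaks, and the protection factor is its reciprocal. The penetration fractions below are illustrative values chosen to mirror the ~930 to ~60 drop described, not the study's measured data:

```python
# Arithmetic sketch: protection factor (PF) as the reciprocal of total
# penetration, with penetrations expressed as fractions (not percent).

def protection_factor(filter_pen, leak_pen):
    """PF = 1 / (filter penetration + leak penetration)."""
    return 1.0 / (filter_pen + leak_pen)

intact = protection_factor(0.00108, 0.0)       # filter only, punctures sealed
torn = protection_factor(0.00108, 0.0156)      # staple-puncture tear added
print(round(intact), round(torn))
```

Note how a leak path roughly fifteen times the filter penetration dominates the total, which is why puncture leakage rather than filter efficiency drives the loss of protection.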
