721 |
Low-Cost, Environmentally Friendly Electric Double-Layer Capacitors : Concept, Materials and Production / Andres, Britta January 2017 (has links)
Today's society is transitioning away from fossil-fuel energy sources. The change to sustainable alternatives requires inexpensive and environmentally friendly energy storage devices. However, most current devices contain expensive, rare or toxic materials. These materials must be replaced by low-cost, abundant, nontoxic components. In this thesis, I suggest the production of paper-based electric double-layer capacitors (EDLCs) to meet the demand for low-cost energy storage devices that provide high power density. To fulfill the requirements of sustainable and environmentally friendly devices, production of EDLCs that consist of paper, graphite and saltwater is proposed. Paper can be used as a separator between the electrodes and as a substrate for the electrodes. Graphite is suited for use as an active material in the electrodes, and saltwater can be employed as an electrolyte. We studied and developed different methods for the production of nanographite and graphene from graphite. Composites containing these materials and similar advanced carbon materials have been tested as electrode materials in EDLCs. I suggest the use of cellulose nanofibers (CNFs) or microfibrillated cellulose (MFC) as a binder in the electrodes. In addition to improved mechanical stability, the nanocellulose improved the stability of graphite dispersions and the electrical performance of the electrodes. The influence of the cellulose quality on the electrical properties of the electrodes and EDLCs was investigated. The results showed that the finest nanocellulose quality is not the best choice for EDLC electrodes; MFC is recommended for this application instead. The results also demonstrated that the capacitance of EDLCs can be increased if the electrode masses are adjusted according to the size of the electrolyte ions. Moreover, we investigated the issue of high contact resistance at the interface between porous carbon electrodes and metal current collectors. To reduce the contact resistance, graphite foil can be used as a current collector instead of metal foils. Using the suggested low-cost materials, production methods and conceptual improvements, it is possible to reduce the material costs by more than 90% in comparison with commercial units. This confirms that paper-based EDLCs are a promising alternative to conventional EDLCs. Our findings and additional research can be expected to substantially support the design and commercialization of sustainable EDLCs and other green energy technologies. / At the time of the doctoral defence the following papers were unpublished: paper 6 submitted.
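Where the abstract notes that adjusting electrode masses to the electrolyte-ion sizes raises capacitance, the underlying arithmetic is just two capacitors in series; the sketch below, with assumed (not thesis-measured) specific capacitances, shows why an unequal mass split wins when the two electrodes respond differently to anions and cations.

```python
import numpy as np

# Minimal sketch (illustrative numbers, not thesis data): an EDLC cell is two
# electrode capacitances in series, C = C+ * C- / (C+ + C-). If anion and
# cation sizes give the electrodes different specific capacitances (F/g),
# the best split of a fixed active mass is no longer 50/50.
c_pos, c_neg = 80.0, 100.0   # assumed specific capacitances, F/g
M = 2.0                      # total active mass, g

m_pos = np.linspace(0.01, M - 0.01, 2000)
C_pos, C_neg = c_pos * m_pos, c_neg * (M - m_pos)
C_cell = C_pos * C_neg / (C_pos + C_neg)

best = np.argmax(C_cell)
print(f"equal split: C = {C_cell[np.argmin(abs(m_pos - M / 2))]:.2f} F")
print(f"best split:  C = {C_cell[best]:.2f} F at m+ = {m_pos[best]:.3f} g")
# The optimum lands at m+/m- = sqrt(c_neg/c_pos) rather than at equal masses.
```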
|
722 |
Clustering studies of radio-selected galaxies / Passmoor, Sean Stuart January 2011 (has links)
Philosophiae Doctor - PhD / We investigate the clustering of HI-selected galaxies in the ALFALFA survey and compare results with those obtained for HIPASS. Measurements of the angular correlation function and the inferred 3D clustering are compared with results from direct spatial-correlation measurements. We are able to measure clustering on smaller angular scales and for galaxies with lower HI masses than was previously possible. We calculate the expected clustering of dark matter using the redshift distributions of HIPASS and ALFALFA and show that the ALFALFA sample is somewhat more anti-biased with respect to dark matter than the HIPASS sample. We are able to confirm the validity of the dark matter correlation predictions by performing simulations of non-linear structure formation. Further, we examine how the bias evolves with redshift for radio galaxies detected in the FIRST survey. / South Africa
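Angular clustering measurements of this kind typically rely on a pair-count estimator such as Landy-Szalay, w(theta) = (DD - 2DR + RR) / RR. Below is a toy sketch of the procedure; the catalogue here is uniform random points (so w should scatter about zero), standing in for a real HI-selected sample.

```python
import numpy as np

# Toy sketch of the Landy-Szalay estimator w(theta) = (DD - 2DR + RR) / RR,
# a standard way to measure angular clustering. Positions are random here,
# standing in for a real catalogue, so w should be consistent with zero.
rng = np.random.default_rng(42)

def random_sky(n):
    """n points uniform on the sphere, returned as unit vectors."""
    ra = rng.uniform(0, 2 * np.pi, n)
    dec = np.arcsin(rng.uniform(-1, 1, n))
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

def pair_counts(a, b, bins, auto=False):
    """Histogram of angular separations (radians) between two point sets."""
    theta = np.arccos(np.clip(a @ b.T, -1.0, 1.0))
    if auto:  # keep each pair once, drop self-pairs
        theta = theta[np.triu_indices_from(theta, k=1)]
    return np.histogram(theta.ravel(), bins=bins)[0].astype(float)

data, rand = random_sky(500), random_sky(2000)
bins = np.radians(np.linspace(0.5, 10, 11))

dd = pair_counts(data, data, bins, auto=True) / (500 * 499 / 2)
dr = pair_counts(data, rand, bins) / (500 * 2000)
rr = pair_counts(rand, rand, bins, auto=True) / (2000 * 1999 / 2)
w = (dd - 2 * dr + rr) / rr
print(np.round(w, 3))  # scatters about zero for unclustered data
```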
|
723 |
Redshift-space distortions as a probe of dark energy / Gouws, Liesbeth-Helena January 2014 (has links)
Magister Scientiae - MSc / We begin by finding a system of differential equations for the background and linearly perturbed variables in the standard ΛCDM model, using the Einstein field equations, and then solving these numerically. Later, we extend this to dynamical dark energy models parameterised by an equation of state, w, and a rest-frame speed of sound, c_s. We pay special attention to the large-scale behaviour of Δ_m, the gauge-invariant, comoving matter density, since the approximation Δ_m ≃ δ_m, where δ_m is the longitudinal-gauge matter density, is more commonly used but breaks down on large scales. We show that the background is affected by w only, so measurements of perturbations are required to constrain c_s. We examine how the accelerated expansion of the universe, caused by dark energy, slows down the growth rate of matter. We then show that the matter power spectrum is not in itself useful for constraining dark energy models, but that redshift-space distortions can be used to extract the growth rate from the galaxy power spectrum, and hence that redshift-space power spectra can be used to constrain different dark energy models. We find that on small scales the growth rate is more dependent on w, while on large scales it depends more on c_s.
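The growth-rate extraction rests on the anisotropy that peculiar velocities imprint on the redshift-space power spectrum: on linear scales the Kaiser formula P_s(k, μ) = (b + f μ²)² P_m(k) applies, with f in ΛCDM well approximated by Ω_m(z)^0.55. A minimal numerical sketch with illustrative parameter values:

```python
import numpy as np

# Sketch: growth rate in LCDM via the standard approximation
# f(z) ~ Omega_m(z)^0.55, and the linear Kaiser boost of the redshift-space
# power spectrum, P_s(k, mu) = (b + f * mu^2)^2 * P_m(k).
# Om0 and the bias b are illustrative values, not fitted numbers.
Om0 = 0.3      # present-day matter density (flat LCDM assumed)
b = 1.5        # linear galaxy bias (assumed)

def omega_m(z):
    a = 1.0 / (1.0 + z)
    return Om0 / (Om0 + (1.0 - Om0) * a**3)   # flat LCDM

def growth_rate(z, gamma=0.55):
    return omega_m(z) ** gamma

for z in (0.0, 0.5, 1.0, 2.0):
    f = growth_rate(z)
    boost = (b + f) ** 2 / b**2   # P_s / (b^2 P_m) at mu = 1 (line of sight)
    print(f"z={z:3.1f}  f={f:.3f}  line-of-sight Kaiser boost: {boost:.2f}")
# f -> 1 at high z (matter domination); dark energy suppresses growth later.
```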
|
724 |
Towards a full genome-scale model of yeast metabolism / Stanford, Natalie Jane January 2011 (has links)
Gaining a quantitative understanding of metabolic behaviour has long been a major scientific goal, beginning with crude mass-balance experiments and progressing through enzyme kinetics, single-pathway models and collaborative efforts such as the community-based yeast reconstruction, and onwards to the digital human. The primary goal of this research was to generate a large-scale kinetic model of yeast metabolism. As a community, our ability to produce large-scale dynamic metabolic models has typically been limited by the time and cost involved in obtaining exact measurements of all relevant kinetic parameters. Attempts have been made to bring about a greater understanding by using computational approaches such as flux balance analysis, and also laboratory approaches such as metabolic profiling. Unfortunately, these approaches alone do not go far enough to allow a rich understanding of metabolic behaviour. Methods were developed that allowed known data such as fluxes, equilibrium constants and metabolite concentrations to be used in first-approximation strategies. These made possible the construction of a thermodynamically consistent model reflective of the organism and growth conditions under which the known data were measured. Efforts were made to improve the strategy by developing existing dynamic flux measurement techniques so that they were more reflective of the type of data required for constructing the metabolic model. The model was constructed using data from a specific yeast strain in a continuous culture environment and included 284 reactions. After identification and substitution of just two exact rate laws for reactions that showed high control over the system, the model gave a reasonable reproduction of system behaviour following perturbations of extracellular glucose above and below the operating conditions. The methods developed require little knowledge beyond the stoichiometric matrix in the first instance and, as such, are applicable to any organism that has a reasonably comprehensive network reconstruction available.
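One common first-approximation strategy of the general kind described is lin-log kinetics, where each rate is anchored to a reference flux and expanded in logarithms of the metabolite concentrations, v = J_ref (1 + Σ_i ε_i ln(x_i / x_i,ref)). The toy two-reaction pathway below uses invented elasticities and concentrations, not the thesis model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy sketch of lin-log kinetics, a first-approximation rate law of the
# general kind described: v = J_ref * (1 + sum_i eps_i * ln(x_i / x_ref_i)).
# Elasticities and concentrations are invented; the real model had 284
# reactions, this pathway has two (supply -> S -> P, with P drained).
S_ref, P_ref = 0.5, 0.2        # reference (steady-state) concentrations

def rates(S, P):
    v1 = 1.0 * (1 - 0.5 * np.log(S / S_ref))                 # supply -> S
    v2 = 1.0 * (1 + 1.2 * np.log(S / S_ref)
                  - 0.3 * np.log(P / P_ref))                 # S -> P
    return v1, v2

def ode(t, x):
    v1, v2 = rates(*x)
    return [v1 - v2, v2 - 1.0]   # P drained at the reference flux

# Perturb S above its reference value, analogous to the extracellular
# glucose perturbations used to test the thesis model.
sol = solve_ivp(ode, (0, 30), [0.75, P_ref])
print(np.round(sol.y[:, -1], 3))  # relaxes back toward (0.5, 0.2)
```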
|
725 |
Branch and Price Solution Approach for Order Acceptance and Capacity Planning in Make-to-Order Operations / Mestry, Siddharth D, Centeno, Martha A, Faria, Jose A, Damodaran, Purushothaman, Chin-Sheng, Chen 25 March 2010 (has links)
The increasing emphasis on mass customization, shortened product lifecycles and synchronized supply chains, coupled with advances in information systems, is driving most firms towards make-to-order (MTO) operations. Increasing global competition, lower profit margins and higher customer expectations force MTO firms to plan their capacity by managing the effective demand. The goal of this research was to maximize the operational profits of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity for them at the sales stage. To integrate the two decisions, a Mixed-Integer Linear Program (MILP) was formulated which can aid an operations manager in an MTO environment to select a set of potential customer orders such that all the selected orders are fulfilled by their deadline. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes, the computational time required to determine an optimal solution is prohibitive. The formulation has a block-diagonal structure and can be decomposed into one or more sub-problems (one sub-problem for each customer order) and a master problem by applying Dantzig-Wolfe decomposition principles. To efficiently solve the original MILP, an exact Branch-and-Price algorithm was developed. Various approximation algorithms were developed to further improve the runtime. Experiments conducted unequivocally show the efficiency of these algorithms compared to a commercial optimization solver. The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity and an objective of maximizing profits with a penalty for tardiness. This dissertation solves the order acceptance and capacity planning problem for a job-shop environment with multiple resources, considering both regular and overtime resources. The Branch-and-Price algorithms developed in this dissertation are faster and can be incorporated in a decision support system to be used on a daily basis to help make intelligent decisions in an MTO operation.
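The coupling of acceptance and capacity decisions can be seen in a stripped-down version of such a MILP. The sketch below uses toy data and PuLP as an illustrative solver interface, with aggregate per-period capacity standing in for detailed job-shop scheduling; the Dantzig-Wolfe/branch-and-price machinery is deliberately omitted.

```python
import pulp

# Stripped-down sketch of the accept/reject + capacity idea (toy numbers):
# binary y[j] selects order j; accepted orders may not exceed per-period
# regular capacity plus paid overtime.
orders = {            # order: (profit, hours required, due period)
    "A": (400, 30, 1), "B": (250, 25, 1),
    "C": (320, 40, 2), "D": (180, 15, 2),
}
periods = [1, 2]
regular_cap = 45      # regular hours available per period
ot_cost = 8           # cost per overtime hour

prob = pulp.LpProblem("order_acceptance", pulp.LpMaximize)
y = {j: pulp.LpVariable(f"accept_{j}", cat="Binary") for j in orders}
ot = {t: pulp.LpVariable(f"overtime_{t}", lowBound=0, upBound=15)
      for t in periods}

# Net profit: revenue of accepted orders minus overtime cost.
prob += (pulp.lpSum(orders[j][0] * y[j] for j in orders)
         - pulp.lpSum(ot_cost * ot[t] for t in periods))

for t in periods:     # work for orders due in t must fit capacity in t
    prob += (pulp.lpSum(orders[j][1] * y[j]
                        for j in orders if orders[j][2] == t)
             <= regular_cap + ot[t])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({j: int(y[j].value()) for j in orders},
      {t: ot[t].value() for t in periods})
```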
|
726 |
A Dual Dielectric Approach for Performance Aware Reduction of Gate Leakage in Combinational Circuits / Mukherjee, Valmiki 05 1900 (has links)
Design of systems in the low-end nanometer domain has introduced new dimensions in power consumption and dissipation in CMOS devices. With continued and aggressive scaling using low-thickness SiO2 for the transistor gates, gate leakage due to gate-oxide direct tunneling current has emerged as the major component of leakage in CMOS circuits. Therefore, providing a solution to the issue of gate-oxide leakage has become one of the key concerns in achieving low-power, high-performance CMOS VLSI circuits. In this thesis, a new approach is proposed involving dual dielectrics of dual thicknesses (DKDT) for reducing both ON- and OFF-state gate leakage. It is claimed that the simultaneous utilization of SiON and SiO2, each with multiple thicknesses, is a better approach for gate leakage reduction than the conventional usage of a single gate dielectric (SiO2), possibly with multiple thicknesses. An algorithm is developed for DKDT assignment that minimizes the overall leakage of a circuit without compromising performance. Extensive experiments were carried out on ISCAS'85 benchmarks using 45nm technology, showing that the proposed approach can reduce leakage by as much as 98% (on average 89.5%) without degrading performance.
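A performance-aware assignment of this kind admits a simple greedy sketch: give the slower but less leaky dielectric option to gates with timing slack and keep thin SiO2 on the critical path. The numbers below are invented for illustration, and this is not the thesis's algorithm:

```python
# Toy greedy sketch of performance-aware dielectric assignment (invented
# numbers; not the thesis algorithm). Each gate may switch from thin SiO2 to
# a thicker/SiON option that cuts gate leakage but adds delay; a gate may
# switch only while its timing slack absorbs the extra delay.
gates = {  # gate: (leakage saved if swapped, delay added, slack available)
    "g1": (9.0, 0.4, 0.1),   # near-critical: cannot absorb the extra delay
    "g2": (6.0, 0.3, 1.2),
    "g3": (4.0, 0.5, 0.6),
    "g4": (2.5, 0.2, 0.2),
}

# Greedy: best leakage saving per unit delay first, honoring per-gate slack.
# (Real slack is shared along paths; a full algorithm updates slacks globally.)
order = sorted(gates, key=lambda g: gates[g][0] / gates[g][1], reverse=True)

assignment, saved = {}, 0.0
for g in order:
    dleak, ddelay, slack = gates[g]
    if ddelay <= slack:
        assignment[g] = "thick/SiON"
        saved += dleak
    else:
        assignment[g] = "thin SiO2"

print(assignment, f"leakage saved: {saved:.1f} (arbitrary units)")
```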
|
727 |
VLSI Architecture and FPGA Prototyping of a Secure Digital Camera for Biometric Application / Adamo, Oluwayomi Bamidele 08 1900 (has links)
This thesis presents a secure digital camera (SDC) that inserts biometric data into images found in forms of identification such as the newly proposed electronic passport. However, putting biometric data in passports makes the data vulnerable to theft, causing privacy-related issues. An effective solution to combating unauthorized access such as skimming (obtaining data from the passport's owner who did not willingly submit the data) or eavesdropping (intercepting information as it moves from the chip to the reader) is the judicious use of watermarking and encryption at the source end of the biometric process, in hardware such as a digital camera or scanner. To address such issues, a novel approach and its architecture in the framework of a digital camera, conceptualized as an SDC, is presented. The SDC inserts biometric data into the passport image with the aid of watermarking and encryption processes. The VLSI (very large scale integration) architecture of the functional units of the SDC, such as the watermarking and encryption units, is presented, along with the results of the hardware implementation of the Rijndael advanced encryption standard (AES) and a discrete cosine transform (DCT) based visible and invisible watermarking algorithm. The prototype chip can carry out simultaneous encryption and watermarking, which to our knowledge is the first of its kind. The encryption unit has a throughput of 500 Mbit/s, and the visible and invisible watermarking units have maximum frequencies of 96.31 MHz and 256 MHz, respectively.
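The block-DCT invisible watermarking mentioned above can be sketched as a software model (a sketch of the general technique, not the thesis's VLSI datapath): each payload bit nudges one assumed mid-frequency coefficient of an 8x8 block by an assumed strength, and a non-blind extractor compares marked and original coefficients.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative software model of block-DCT invisible watermarking; the
# embedding strength ALPHA and coefficient position COEFF are assumptions.
ALPHA = 6.0          # embedding strength (quality/robustness trade-off)
COEFF = (3, 2)       # mid-frequency coefficient used for embedding

def embed(image, bits):
    out = image.astype(float)
    blocks = [(r, c) for r in range(0, image.shape[0], 8)
                     for c in range(0, image.shape[1], 8)]
    for (r, c), bit in zip(blocks, bits):
        blk = dctn(out[r:r+8, c:c+8], norm="ortho")
        blk[COEFF] += ALPHA if bit else -ALPHA   # nudge one coefficient
        out[r:r+8, c:c+8] = idctn(blk, norm="ortho")
    return np.clip(out, 0, 255)

def extract(marked, original, nbits):
    bits = []
    blocks = [(r, c) for r in range(0, marked.shape[0], 8)
                     for c in range(0, marked.shape[1], 8)]
    for r, c in blocks[:nbits]:   # non-blind: compare against the original
        d = (dctn(marked[r:r+8, c:c+8], norm="ortho")[COEFF]
             - dctn(original[r:r+8, c:c+8].astype(float), norm="ortho")[COEFF])
        bits.append(1 if d > 0 else 0)
    return bits

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
payload = [1, 0, 1, 1, 0, 0, 1, 0]          # e.g. a slice of biometric data
marked = embed(img, payload)
print(extract(marked, img, len(payload)) == payload)  # True
```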
|
728 |
Comparison and Evaluation of Existing Analog Circuit Simulator using Sigma-Delta Modulator / Ale, Anil Kumar 12 1900 (has links)
In the world of VLSI (very large scale integration) technology, there are many different types of circuit simulators used to design and predict circuit behavior before actual fabrication. In this thesis, I compared and evaluated existing circuit simulators by considering standard benchmark circuits. The circuit simulators that I evaluated and explored are Ngspice, Tclspice, Winspice (open source) and Spectre® (commercial). I tested standard benchmarks using these circuit simulators and compared their outputs. The simulators are evaluated using design metrics in order to quantify their performance and identify efficient circuit simulators. In addition, I designed a sigma-delta modulator and its individual components using the analog behavioral language Verilog-A. Initially, I performed simulations of the individual components of the sigma-delta modulator, and later of the whole system. Finally, CMOS (complementary metal-oxide semiconductor) transistor-level circuits were designed for the differential amplifier, operational amplifier and comparator of the modulator.
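The behavioural content of such a modulator is small enough to sketch directly: a discrete-time integrator, a 1-bit quantizer and a feedback path. The following is a generic first-order model, not the thesis's Verilog-A code:

```python
import numpy as np

# Minimal behavioural model of a first-order sigma-delta modulator:
# integrator + 1-bit quantizer + feedback. A generic sketch, not the
# Verilog-A implementation from the thesis.
def sigma_delta(x):
    integ, fb = 0.0, 0.0
    bits = []
    for sample in x:
        integ += sample - fb               # integrate the error signal
        fb = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer, fed back
        bits.append(fb)
    return np.array(bits)

fs, f_in = 1_000_000, 1_000                # heavily oversampled 1 kHz tone
t = np.arange(8192) / fs
x = 0.5 * np.sin(2 * np.pi * f_in * t)
bits = sigma_delta(x)

# Noise shaping pushes quantisation error to high frequencies, so a simple
# low-pass (here a moving average) approximately recovers the input.
window = 256
recovered = np.convolve(bits, np.ones(window) / window, mode="same")
err = np.abs(recovered - x)[window:-window]   # ignore filter edge effects
print(f"max reconstruction error away from edges: {err.max():.3f}")
```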
|
729 |
Desempenho de sistemas com dados georeplicados com consistência em momento indeterminado e na linha do tempo / Performance of systems with geo-replicated data with eventual consistency and timeline consistency / Mauricio José de Oliveira de Diana 21 March 2013 (has links)
Large-scale web systems are distributed among thousands of servers spread over multiple data centers in geographically different locations, operating over wide-area networks (WANs). Several techniques are employed to achieve the high levels of scalability required by such systems. One of the main techniques is data replication, which aims to reduce latency, increase throughput and/or increase availability. The main drawback of replication in geo-replicated systems is that it is hard to guarantee consistency between replicas without considerably impacting system performance and availability. System performance is affected by WAN latencies, typically of hundreds of milliseconds, while system availability is affected by failures cutting off communication between replicas. The more rigid the consistency model provided by a storage system, the simpler the development of the system using it, but the lower its performance and availability. Eventual consistency is one of the more relaxed and most widespread consistency models among geo-replicated systems. This consistency model guarantees that all replicas converge at some unspecified time after writes have stopped. A model that is more rigid and less widespread is timeline consistency. This consistency model uses a master replica to guarantee that no write conflicts occur. Clients can read the most up-to-date values from the master replica, or they can explicitly choose to read stale values to obtain greater performance or availability. Timeline consistency has lower availability than eventual consistency in particular situations, but there are no data comparing their performance. The main goal of this work was to compare the performance of a geo-replicated storage system using these consistency models. For each consistency model, experiments were conducted to measure system response time under different workloads and network conditions between data centers. The study shows that a system using timeline consistency has performance similar to that of the same system using eventual consistency over a WAN when access locality is high. This comparison may help developers and system administrators with capacity and development planning of geo-replicated systems.
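The read-side contract of timeline consistency reduces to a small interface: writes are serialized by a single master, and each read explicitly chooses between the up-to-date master copy and a possibly stale replica. A toy in-memory sketch (not the storage system measured in the work):

```python
# Toy in-memory sketch of the timeline-consistency read contract described
# above (not the system measured in the thesis): one master orders all
# writes; replicas lag; readers choose freshness vs. locality per read.
class TimelineStore:
    def __init__(self, n_replicas=2):
        self.master = {}
        self.replicas = [{} for _ in range(n_replicas)]
        self.log = []                      # ordered write log (the timeline)

    def write(self, key, value):
        self.master[key] = value           # master serializes all writes,
        self.log.append((key, value))      # so replicas can never conflict

    def sync(self, i):
        self.replicas[i] = dict(self.master)   # replica catches up

    def read(self, key, latest=True, replica=0):
        if latest:                         # up to date, but pays WAN latency
            return self.master.get(key)
        return self.replicas[replica].get(key)  # local, possibly stale

store = TimelineStore()
store.write("user:42", "v1")
store.sync(0)
store.write("user:42", "v2")                # replica 0 now lags the master
print(store.read("user:42"))                # 'v2' (fresh read)
print(store.read("user:42", latest=False))  # 'v1' (stale but fast)
```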
|
730 |
The large scale structures. A window on the dark components of the Universe / La structuration de l'Univers à grande échelle. Une fenêtre sur ses composantes sombres / Ilić, Stéphane 23 October 2013 (has links)
Dark energy is one of the great mysteries of modern cosmology, responsible for the current acceleration of the expansion of our Universe. Its study is a major focus of my thesis: the approach I chose is based on the large-scale structure of the Universe, through a probe called the integrated Sachs-Wolfe effect (iSW). This effect is theoretically detectable in the cosmic microwave background (CMB): before reaching us, this light travelled through large structures underlain by gravitational potentials. The acceleration of the expansion stretches and flattens these potentials while the photons cross them, changing their energy in a way that depends on the properties of the dark energy. The iSW effect has only a weak effect on the CMB, requiring the use of external data to be detectable. A conventional approach is to correlate the CMB with a tracer of the distribution of matter, and therefore of the underlying potentials. This has been attempted numerous times with galaxy surveys, but the measured correlation has yet to give a definitive result on the detection of the iSW effect, mainly because current surveys are not deep enough and/or have too low a sky coverage. A part of my thesis is devoted to the correlation of the CMB with another diffuse background, namely the cosmic infrared background (CIB), which is composed of the integrated emission of unresolved distant galaxies. I was able to show that it is an excellent tracer, free from the shortcomings of current surveys. The levels of significance expected for the CIB-CMB correlation exceed those of current surveys and compete with those predicted for the future generation of very large surveys. My thesis then focused on the individual imprint on the CMB of the largest structures through the iSW effect. My work on the subject first involved revisiting a past study that stacked CMB patches at the locations of structures, using my own protocol together with a variety of statistical tests to check the significance of the results; this significance proved particularly difficult to assess and subject to possible selection bias. I extended this detection method to other available catalogues of structures, much larger and supposedly more sophisticated in their detection algorithms. The results from one of them suggest the presence of a signal at scales and amplitudes consistent with theory, but with moderate significance. The stacking results raise questions regarding the expected signal: this led me to work on a theoretical prediction of the iSW effect produced by structures, through simulations based on the Lemaître-Tolman-Bondi metric. This allowed me to predict the exact theoretical iSW effect of existing structures: the central amplitude of the measured signals is consistent with theory, but shows features not reproducible by my predictions.
An extension to the additional catalogues will verify the significance of their signals and their compatibility with theory. Another part of my thesis focuses on a distant epoch in the history of the Universe called reionisation: the transition from a neutral universe to a fully ionised one under the action of the first stars and other ionising sources. This period has a significant influence on the CMB and its statistical properties, in particular the power spectrum of its polarisation fluctuations. In my case, I focused on using temperature measurements of the intergalactic medium during reionisation in order to investigate the possible contribution of the decay and annihilation of hypothetical dark matter. Starting from a theoretical work based on several models of dark matter, I computed and compared predictions to actual measurements of the IGM temperature, which allowed me to extract new and interesting constraints on the critical parameters of the dark matter and on crucial features of the reionisation itself.
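For reference, the temperature shift that the stacking measurements above target is the standard linear-theory line-of-sight integral over the decaying potentials:

```latex
% iSW temperature shift along direction \hat{n}: CMB photons gain net energy
% when the accelerated expansion makes the potentials \Phi decay while the
% photons cross them.
\frac{\Delta T^{\mathrm{iSW}}(\hat{n})}{T_{\mathrm{CMB}}}
  = \frac{2}{c^{2}} \int_{t_{*}}^{t_{0}}
    \frac{\partial \Phi(\hat{n}, t)}{\partial t}\, dt
```

In a matter-dominated (Einstein-de Sitter) universe the potentials are constant and the integral vanishes, which is why a detection of this signal probes the dark components.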
|