61
Multi-Core Memory System Design: Developing and using Analytical Models for Performance Evaluation and Enhancements
Dwarakanath, Nagendra Gulur. January 2015 (has links) (PDF)
Memory system design is increasingly influencing modern multi-core architectures from both performance and power perspectives. Both main memory latency and bandwidth have improved at a rate that is slower than the increase in processor core count and speed. Off-chip memory, primarily built from DRAM, has received significant attention in terms of architecture and design for higher performance. These performance improvement techniques include sophisticated memory access scheduling, use of multiple memory controllers, mitigating the impact of DRAM refresh cycles, and so on. At the same time, new non-volatile memory technologies have become increasingly viable in terms of performance and energy. These alternative technologies offer different performance characteristics as compared to traditional DRAM.
With the advent of 3D stacking, on-chip memory in the form of 3D stacked DRAM has opened up avenues for addressing the bandwidth and latency limitations of off-chip memory. Stacked DRAM is expected to offer abundant capacity — 100s of MBs to a few GBs — at higher bandwidth and lower latency. Researchers have proposed to use this capacity as an extension to main memory, or as a large last-level DRAM cache. When leveraged as a cache, stacked DRAM provides opportunities and challenges for improving cache hit rate, access latency, and off-chip bandwidth.
Thus, designing off-chip and on-chip memory systems for multi-core architectures is complex, compounded by the myriad architectural, design and technological choices, combined with the characteristics of application workloads. Applications have inherent spatial locality and access parallelism that influence the memory system response in terms of latency and bandwidth.
In this thesis, we construct an analytical model of the off-chip main memory system to comprehend this diverse space and to study the impact of memory system parameters and workload characteristics from latency and bandwidth perspectives. Our model, called ANATOMY, uses a queuing network formulation of the memory system, parameterized with workload characteristics, to obtain a closed form solution for the average miss penalty experienced by the last-level cache. We validate the model across a wide variety of memory configurations on four-core, eight-core and sixteen-core architectures. ANATOMY is able to predict memory latency with average errors of 8.1%, 4.1% and 9.7% over quad-core, eight-core and sixteen-core configurations respectively. Further, ANATOMY identifies better-performing design points accurately, thereby allowing architects and designers to explore the more promising design points in greater detail. We demonstrate the extensibility and applicability of our model by exploring a variety of memory design choices such as the impact of clock speed, the benefit of multiple memory controllers, the role of banks and channel width, and so on. We also demonstrate ANATOMY's ability to capture architectural elements such as memory scheduling mechanisms and the impact of DRAM refresh cycles. In all of these studies, ANATOMY provides insight into sources of memory performance bottlenecks and is able to quantitatively predict the benefit of redressing them.
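As a rough illustration of the kind of queuing reasoning such a model builds on, the sketch below treats each DRAM bank as a simple M/M/1 queue and adds the queuing delay to the bank service time. This is a toy approximation, not ANATOMY's actual queuing network or closed-form solution, and the request rate, bank count and service time are made-up parameters.

```python
# Toy sketch: estimate average memory access latency by treating each bank as
# an M/M/1 queue and adding queuing delay to the bank service time. The real
# ANATOMY model is a queuing network parameterized by workload locality and
# parallelism; the numbers here are illustrative assumptions only.

def mm1_wait(arrival_rate, service_time):
    """Mean queuing delay of an M/M/1 queue."""
    rho = arrival_rate * service_time          # bank utilization
    if rho >= 1.0:
        raise ValueError("bank is saturated (utilization >= 1)")
    return rho * service_time / (1.0 - rho)    # W_q = rho * S / (1 - rho)

def avg_memory_latency(total_request_rate, num_banks, service_time):
    """Average miss penalty = queuing delay at a bank + bank service time."""
    per_bank_rate = total_request_rate / num_banks   # assumes uniform bank spread
    return mm1_wait(per_bank_rate, service_time) + service_time

# Example: 0.02 requests/ns spread over 8 banks, 50 ns per bank access.
print(avg_memory_latency(total_request_rate=0.02, num_banks=8, service_time=50.0))
```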
An insight from the model is that provisioning multiple small row-buffers in each DRAM bank achieves better performance than the traditional design with one large row-buffer per bank. Multiple row-buffers also enable new performance improvement opportunities such as intra-bank parallelism between data transfers and row activations, and smart row-buffer allocation schemes based on workload demand. Our evaluation (using both the analytical model and detailed cycle-accurate simulation) shows that the proposed DRAM re-organization achieves significant speed-up as well as energy reduction.
Next, we examine the role of on-chip stacked DRAM caches in improving performance by reducing the load on off-chip main memory. We extend ANATOMY to cover DRAM caches. ANATOMY-Cache takes into account all the key parameters and design issues governing DRAM cache organization, namely where the cache metadata is stored and accessed, the role of cache block size and set associativity, and the impact of block size on row-buffer hit rate and off-chip bandwidth. Yet the model is kept simple and provides a closed form solution for the average miss penalty experienced by the last-level SRAM cache. ANATOMY-Cache is validated against detailed architecture simulations and shown to have latency estimation errors of 10.7% and 8.8% on average in quad-core and eight-core configurations respectively. An interesting insight from the model suggests that under high load, it is better to bypass the congested DRAM cache and leverage the available idle main memory bandwidth. We use this insight to propose a refresh reduction mechanism that virtually eliminates refresh overhead in DRAM caches. We implement a low-overhead hardware mechanism to record accesses to recent DRAM cache pages and refresh only these pages. Older cache pages are considered invalid and serviced from the (idle) main memory. This technique achieves an average refresh reduction of 90%, with resulting memory energy savings of 9% and an overall performance improvement of 3.7%.
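A minimal sketch of the bookkeeping such a refresh-reduction scheme implies is given below. The structure, capacity and retention window are hypothetical; the thesis implements this as low-overhead hardware, not software.

```python
# Sketch: track recently accessed DRAM-cache pages and refresh only those.
# Pages not touched within the retention window are treated as invalid and
# are serviced from main memory instead. Capacity and window length are
# illustrative assumptions, not the thesis's parameters.

from collections import OrderedDict

class RecentPageTracker:
    def __init__(self, max_tracked_pages=4096):
        self.pages = OrderedDict()          # page -> last access timestamp
        self.max_tracked_pages = max_tracked_pages

    def record_access(self, page, now):
        self.pages[page] = now
        self.pages.move_to_end(page)
        if len(self.pages) > self.max_tracked_pages:
            self.pages.popitem(last=False)  # drop least recently accessed entry

    def pages_to_refresh(self, now, retention_window):
        # Only pages accessed within the retention window are refreshed.
        return [p for p, t in self.pages.items() if now - t <= retention_window]

    def is_valid(self, page, now, retention_window):
        t = self.pages.get(page)
        return t is not None and now - t <= retention_window
```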
Finally, we propose a new DRAM cache organization that achieves higher cache hit rate, lower latency and lower off-chip bandwidth demand. Called the Bi-Modal Cache, our cache organization brings three independent improvements together: (i) it enables parallel tag and data accesses, (ii) it eliminates a large fraction of tag accesses entirely by use of a novel way locator, and (iii) it improves cache space utilization by organizing the cache sets as a combination of some big blocks (512B) and some small blocks (64B). The Bi-Modal Cache reduces hit latency by use of the way locator and parallel tag and data accesses. It improves hit rate by leveraging the cache capacity efficiently: blocks with low spatial reuse are allocated in the cache at 64B granularity, thereby reducing both wasted off-chip bandwidth and cache internal fragmentation. The increased cache hit rate leads to a reduction in off-chip bandwidth demand. Through detailed simulations, we demonstrate that the Bi-Modal Cache achieves overall performance improvements of 10.8%, 13.8% and 14.0% in quad-core, eight-core and sixteen-core workloads respectively over an aggressive baseline.
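The fill decision can be pictured roughly as below. The 512 B and 64 B granularities come from the text, but the reuse predictor and threshold are placeholders, not the thesis's mechanism.

```python
# Sketch of a bi-modal fill decision: blocks with high expected spatial reuse
# are allocated at 512 B granularity, others at 64 B, so that low-reuse data
# does not waste off-chip bandwidth or fragment the cache. The spatial-reuse
# score and threshold below are illustrative assumptions.

BIG_BLOCK = 512   # bytes, from the Bi-Modal Cache description
SMALL_BLOCK = 64  # bytes

def choose_fill_granularity(spatial_reuse_score, threshold=0.5):
    """Return the allocation size for a miss, given a predicted reuse score in [0, 1]."""
    return BIG_BLOCK if spatial_reuse_score >= threshold else SMALL_BLOCK

# Example: a streaming access with little spatial reuse is filled at 64 B.
print(choose_fill_granularity(0.2))   # -> 64
print(choose_fill_granularity(0.9))   # -> 512
```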
62
Projeto, construção e análise de um protótipo vibracional em escala de bancada aplicável ao tratamento de água de produção de petróleo bruto, mediante inovadora operação híbrida de adsorção e auto-flotação / Design, construction and testing of a laboratory vibrating prototype for treatment of oil production water, emulsion or the like, through hybrid operation of adsorption and self-flotation
Lacerda Junior, Jonatas Araujo de. 28 April 2014 (has links)
A vibrational self-flotation prototype with an electromechanical drive for the treatment of oil-production water and similar emulsions is presented and evaluated. Oil production and refining to obtain derivatives are carried out under arrangements technically referred to as onshore and offshore, i.e., on the continent and at sea. In Brazil, 80% of petroleum production takes place at sea, and the deployment area and cost scale are a concern. Associated with this activity is oily production water, an effluent abundant on a large scale, carrying 95% of the activity's polluting potential, whose final destination is the environment, maritime or terrestrial. Although a diverse set of water-treatment techniques and systems is in use or under research, we propose an innovative system that operates sustainably, without chemical additives, for the good of the ecosystem. A labyrinth adsorbent arranged in metal spirals was used, at laboratory flow scale. Equipment and process patents are claimed. Treatments were performed at different flow rates and frequency bands, monitored with control systems, some built and others purchased for this purpose. Measurements of the oil and grease content (OGC) of the treated effluent remained within the legally required range under the test conditions. The adsorbents were weighed before and after treatment to obtain the oil impregnation, the performance target of the vibratory action and of the treatment as a whole. Current treatment technologies are referenced for qualitative and quantitative performance comparison. Energy consumption under vibration was compared with and without conventional flotation, and with self-flotation. The proposed system shows good performance prospects, especially in reducing residence time through capillary action. A dimensionless impregnation parameter was created and compared with established dimensionless parameters in their vibrational versions, such as the Weber number and the quadratic Froude number, referred to as the vibrational criticality. Results suggest limits to the vibration intensity. /
Um protótipo vibrátil autoflotador de acionamento eletromecânico para tratamento de água de produção de petróleo e emulsão congênere é apresentado e avaliado. A produção de petróleo para refinamento e obtenção de derivados é realizada sob modalidades tecnicamente referidas como on-shore e off-shore, isto é, no continente e no mar. No Brasil 80% da produção petrolífera é feita no mar e área de implantação e escala de custo são preocupantes. Associa-se água oleosa de produção, efluente abundante em larga escala, carreadora de 95% do potencial poluidor da atividade cujo destino final é o meio ambiente marítimo ou terrestre. Embora diversificado conjunto de técnicas e sistemas de tratamento d'água encontram-se em uso ou pesquisa, propõe-se um sistema inovador que opera de forma sustentável sem aditivos químicos, para o bem do ecossistema. Utilizou-se labirinto adsorvente, em espirais metálicos, e escala laboratorial de fluxo. Patentes de equipamento e processo são reivindicadas. Realizaram-se tratamentos em vazões e faixas de frequência distintas, monitoradas com sistemas de controle, uns construídos, outros aquistados para tal. Medições do teor de óleo e graxa (TOG) do efluente tratado mantiveram-se dentro do intervalo de enquadramento legal nas condições de ensaio. Pesaram-se os adsorventes antes e após o tratamento para obtenção da impregnação de óleo, meta de desempenho da ação vibratória e tratamento como um todo. Tecnologias atuais de tratamento são referenciadas para comparação de desempenho, qualitativa e quantitativamente. Confrontou-se consumo energético operando-se em vibração, com e sem flotação convencional, e com autoflotação. Vislumbram-se boas perspectivas de rendimento do sistema proposto, sobretudo, na redução do tempo de residência por ação de capilaridade. Criou-se o parâmetro adimensional de impregnação e se lhe confrontou com consagrados parâmetros adimensionais, na versão vibracional, como número de Weber e número de Froude quadrático, referido como criticalidade vibrátil. Resultados sugerem limites à intensidade vibratória.
63
Modeling and Performance Evaluation of Spatially-correlated Cellular Networks / Modélisation et évaluation de la performance de réseaux cellulaires à corrélation spatiale
Wang, Shanshan. 14 March 2019 (has links)
Dans la modélisation et l'évaluation des performances de la communication cellulaire sans fil, la géométrie stochastique est largement appliquée afin de fournir des solutions plus efficaces et plus précises. Le processus ponctuel de Poisson homogène (H-PPP) est le processus ponctuel le plus largement utilisé pour modéliser les emplacements spatiaux des stations de base (BS) en raison de sa facilité de traitement mathématique et de sa simplicité. Pour les fortes corrélations spatiales entre les emplacements des stations de base, seuls les processus ponctuels (PP) avec inhibitions et attractions spatiales peuvent être utiles. Cependant, le temps de simulation long et la faible aptitude mathématique rendent les PP non-Poisson non adaptés à l'évaluation des performances au niveau du système. Par conséquent, pour surmonter les problèmes mentionnés, nous avons les contributions suivantes dans cette thèse: Premièrement, nous introduisons une nouvelle méthodologie de modélisation et d'analyse de réseaux cellulaires de liaison descendante, dans laquelle les stations de base constituent un processus ponctuel invariant par le mouvement qui présente un certain degré d'interaction entre les points. L'approche proposée est basée sur la théorie des PP inhomogènes de Poisson (I-PPP) et est appelée approche à double amincissement non homogène (IDT). L'approche proposée consiste à approximer le PP initial invariant par le mouvement avec un PP équivalent constitué de la superposition de deux I-PPP conditionnellement indépendants. Les inhomogénéités des deux PP sont créées du point de vue de l'utilisateur type « centré sur l'utilisateur ». Des conditions suffisantes sur les paramètres des fonctions d'amincissement qui garantissent une couverture meilleure ou pire par rapport au modèle de PPP homogène de base sont identifiées. La précision de l'approche IDT est justifiée à l'aide de données empiriques sur la distribution spatiale des stations de base. Ensuite, sur la base de l'approche IDT, une nouvelle expression analytique traitable du rapport de brouillage moyen sur signal (MISR) des réseaux cellulaires où les stations de base présentent des corrélations spatiales est introduite. Pour les PP non-Poisson, nous appliquons l'approche IDT proposée pour estimer les performances des PP non-Poisson. En prenant comme exemple le processus de points β-Ginibre (β-GPP), nous proposons de nouvelles fonctions d'approximation pour les paramètres clés dans l'approche IDT afin de modéliser différents degrés d'inhibition spatiale et de prouver que le MISR est constant en densification de réseau. Nous prouvons que la performance MISR dans le cas β-GPP ne dépend que du degré de répulsion spatiale, c'est-à-dire β, quelles que soient les densités de BS. Les nouvelles fonctions d'approximation et les tendances sont validées par des simulations numériques. Troisièmement, nous étudions plus avant la méta-distribution du SIR à l'aide de l'approche IDT. La méta-distribution est la distribution de la probabilité de réussite conditionnelle compte tenu du processus de points. Nous dérivons et comparons l'expression sous forme fermée pour le b-ème moment dans les cas H-PPP et PP non-Poisson. Le calcul direct de la fonction de distribution cumulative complémentaire (CCDF) pour la méta-distribution n'étant pas disponible, nous proposons une méthode numérique simple et précise basée sur l'inversion numérique des transformées de Laplace.
L'approche proposée est plus efficace et stable que l'approche conventionnelle utilisant le théorème de Gil-Pelaez. La valeur asymptotique de la CCDF de la méta-distribution est calculée dans la nouvelle définition de la probabilité de réussite. En outre, la méthode proposée est comparée à certaines autres approximations et bornes, par exemple l'approximation bêta, les bornes de Markov et la borne de Paley-Zygmund. Les autres modèles d'approximation et bornes se révèlent cependant moins précis que notre méthode proposée. / In the modeling and performance evaluation of wireless cellular communication, stochastic geometry is widely applied in order to provide more efficient and accurate solutions. The homogeneous Poisson point process (H-PPP), with independently and identically distributed variables, is the most widely used point process to model the spatial locations of base stations (BSs) due to its mathematical tractability and simplicity. For strong spatial correlations between locations of BSs, only point processes (PPs) with spatial inhibitions and attractions can help. However, the long simulation time and weak mathematical tractability make non-Poisson PPs unsuitable for system-level performance evaluation. Therefore, to overcome the mentioned problems, this thesis makes the following contributions: First, we introduce a new methodology for modeling and analyzing downlink cellular networks, where the base stations constitute a motion-invariant point process that exhibits some degree of interaction among the points. The proposed approach is based on the theory of inhomogeneous Poisson PPs (I-PPPs) and is referred to as the inhomogeneous double thinning (IDT) approach. The proposed approach consists of approximating the original motion-invariant PP with an equivalent PP made of the superposition of two conditionally independent I-PPPs. The inhomogeneities of both PPs are created from the point of view of the typical user. The inhomogeneities are mathematically modeled through two distance-dependent thinning functions, and a tractable expression of the coverage probability is obtained. Sufficient conditions on the parameters of the thinning functions that guarantee better or worse coverage compared with the baseline homogeneous PPP model are identified. The accuracy of the IDT approach is substantiated with the aid of empirical data for the spatial distribution of the BSs. Then, based on the IDT approach, a new tractable analytical expression of the mean interference-to-signal ratio (MISR) of cellular networks where BSs exhibit spatial correlations is introduced. For non-Poisson PPs, we apply the proposed IDT approach to approximate their performance. Taking the β-Ginibre point process (β-GPP) as an example, we propose new approximation functions for the key parameters of the IDT approach to model different degrees of spatial inhibition, and we prove that the MISR for the β-GPP is constant under network densification with our proposed approximation functions. We prove that the MISR performance in the β-GPP case depends only on the degree of spatial repulsion, i.e., β, regardless of the BS density. We also prove that with the increase of β or γ (given fixed γ or β, respectively), the corresponding MISR for the β-GPP decreases. The new approximation functions and the trends are validated by numerical simulations. Third, we further study the meta distribution of the SIR with the help of the IDT approach.
The meta distribution is the distribution of the conditional success probability given the point process. We derive and compare the closed-form expressions for the b-th moment in the H-PPP and non-Poisson PP cases. Since direct computation of the complementary cumulative distribution function (CCDF) of the meta distribution is not available, we propose a simple and accurate numerical method based on numerical inversion of Laplace transforms. The proposed approach is more efficient and stable than the conventional approach using the Gil-Pelaez theorem. The asymptotic value of the CCDF of the meta distribution is computed under a new definition of the success probability. Furthermore, the proposed method is compared with several other approximations and bounds, e.g., the beta approximation, Markov bounds and the Paley-Zygmund bound; these are found to be less accurate than our proposed method.
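For context, the beta approximation mentioned above reconstructs the meta distribution's CCDF from its first two moments by moment matching. The sketch below illustrates the idea with made-up moment values; in the thesis those moments come from closed-form expressions.

```python
# Sketch: approximate the CCDF of the SIR meta distribution from its first two
# moments using a moment-matched beta distribution. The moment values below
# are illustrative placeholders, not results from the thesis.

from scipy.stats import beta

def beta_approx_ccdf(m1, m2, x):
    """P(conditional success probability > x), beta-approximated from moments m1, m2."""
    var = m2 - m1 ** 2
    if var <= 0:
        raise ValueError("need m2 > m1**2 for a valid variance")
    common = m1 * (1 - m1) / var - 1          # equals a + b of the matched Beta(a, b)
    a, b = m1 * common, (1 - m1) * common
    return beta.sf(x, a, b)                    # survival function = 1 - CDF

# Example: first moment 0.7, second moment 0.55, reliability threshold x = 0.9.
print(beta_approx_ccdf(0.7, 0.55, 0.9))
```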
64
DIPBench: An Independent Benchmark for Data-Intensive Integration Processes
Lehner, Wolfgang; Böhm, Matthias; Habich, Dirk; Wloka, Uwe. 12 August 2022 (has links)
The integration of heterogeneous data sources is one of the main challenges within the area of data engineering. Due to the absence of an independent and universal benchmark for data-intensive integration processes, we propose a scalable benchmark, called DIPBench (Data intensive integration Process Benchmark), for evaluating the performance of integration systems. This benchmark could be used for subscription systems, like replication servers, distributed and federated DBMS or message-oriented middleware platforms like Enterprise Application Integration (EAI) servers and Extraction Transformation Loading (ETL) tools. In order to reach the mentioned universal view for integration processes, the benchmark is designed in a conceptual, process-driven way. The benchmark comprises 15 integration process types. We specify the source and target data schemas and provide a toolsuite for the initialization of the external systems, the execution of the benchmark and the monitoring of the integration system's performance. The core benchmark execution may be influenced by three scale factors. Finally, we discuss a metric unit used for evaluating the measured integration system's performance, and we illustrate our reference benchmark implementation for federated DBMS.
65
On performance limitations of large-scale networks with distributed feedback control
Tegling, Emma. January 2016 (has links)
We address the question of performance of large-scale networks with distributed feedback control. We consider networked dynamical systems with single and double integrator dynamics, subject to distributed disturbances. We focus on two types of problems. First, we consider problems modeled over regular lattice structures. Here, we treat consensus and vehicular formation problems and evaluate performance in terms of measures of “global order”, which capture the notion of network coherence. Second, we consider electric power networks, which we treat as dynamical systems modeled over general graphs. Here, we evaluate performance in terms of the resistive power losses that are incurred in maintaining network synchrony. These losses are associated with transient power flows that are a consequence of “local disorder” caused by lack of synchrony. In both cases, we characterize fundamental limitations to performance as networks become large. Previous studies have shown that such limitations hold for coherence in networks with regular lattice structures. These imply that connections in 3 spatial dimensions are necessary to achieve full coherence, when the controller uses static feedback from relative measurements in a local neighborhood. We show that these limitations remain valid also with dynamic feedback, where each controller has an internal memory state. However, if the controller can access certain absolute state information, dynamic feedback can improve performance compared to static feedback, allowing also 1-dimensional formations to be fully coherent. For electric power networks, we show that the transient power losses grow unboundedly with network size. However, in contrast to previous results, performance does not improve with increased network connectivity. We also show that a certain type of distributed dynamic feedback controller can improve performance by reducing losses, but that their scaling with network size remains an important limitation.
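To make the notion of coherence concrete, one standard measure for first-order consensus driven by white noise is the per-node steady-state variance, computable from the graph Laplacian's eigenvalues. The sketch below evaluates it on a 1-D ring, where it grows with network size, which is the flavor of scaling limitation discussed above; the network sizes are arbitrary and the setup is a textbook simplification, not the thesis's exact model.

```python
# Sketch: per-node steady-state variance ("coherence") of first-order consensus
# dx = -L x dt + dW on a ring of N nodes, from the Laplacian spectrum:
#   V(N) = (1 / (2N)) * sum over nonzero eigenvalues of 1 / lambda_i.
# The chosen sizes are arbitrary illustrations.

import numpy as np

def ring_laplacian(n):
    lap = 2 * np.eye(n)
    for i in range(n):
        lap[i, (i - 1) % n] -= 1
        lap[i, (i + 1) % n] -= 1
    return lap

def coherence(n):
    eigvals = np.sort(np.linalg.eigvalsh(ring_laplacian(n)))
    return np.sum(1.0 / eigvals[1:]) / (2 * n)   # skip the zero eigenvalue

for n in (10, 100, 1000):
    print(n, coherence(n))   # grows with n on a 1-D ring
```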
66
TEMPENSURE, A BLOCKCHAIN SYSTEM FOR TEMPERATURE CONTROL IN COLD CHAIN LOGISTICS
Matthew L Schnell (13206366). 05 August 2022 (has links)
Cold chain logistics carry a large portion of transported pharmaceutical medications and raw materials, which must be preserved at specified temperatures to maintain consumer safety and efficacy. An immutable record of the temperatures of transported pharmaceutical goods allows temperature-related issues with such drugs and their raw components to be mitigated. Recording this information on a blockchain creates such an immutable record, which can be readily accessed by any relevant party. This allows any components that have not been kept at the appropriate temperatures to be removed from production. These data can also be used as inputs for smart contracts or for data analytics.
A theoretical framework for such a system, referred to as “TempEnsure”, is described, which provides digital capture of the internal temperature of temperature-controlled shipping containers. The data are recorded in a blockchain system. Real-world testing of this system was not possible due to monetary constraints, but the functional elements of the system, as well as potential improvements to it, are discussed.
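As a rough illustration of the immutable-record idea, the sketch below keeps an append-only, hash-chained log of temperature readings. This is a generic illustration under assumed field names and verification logic, not TempEnsure's actual data format or blockchain implementation.

```python
# Sketch: an append-only, hash-chained log of temperature readings. Each entry
# commits to the previous entry's hash, so silently altering an old reading
# breaks the chain. Field names are illustrative assumptions.

import hashlib
import json

def add_reading(chain, container_id, temperature_c, timestamp):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"container_id": container_id, "temperature_c": temperature_c,
             "timestamp": timestamp, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        if i > 0 and entry["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

log = []
add_reading(log, "container-7", 4.8, "2022-08-01T10:00:00Z")
add_reading(log, "container-7", 5.1, "2022-08-01T11:00:00Z")
print(verify(log))   # -> True
```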
67
Towards No-Penalty Control Hazard Handling in RISC architecture microcontrollers
Linknath Surya Balasubramanian (8781929). 03 September 2024 (has links)
<p dir="ltr">Achieving higher throughput is one of the most important requirements of a modern microcontroller. It is therefore not affordable for it to waste a considerable number of clock cycles in branch mispredictions. This paper proposes a hardware mechanism that makes microcontrollers forgo branch predictors, thereby removing branch mispredictions. The scope of this work is limited to low cost microcontroller cores that are applied in embedded systems. The proposed technique is implemented as five different modules which work together to forward required operands, resolve branches without prediction, and calculate the next instruction's address in the first stage of an in-order five stage pipelined micro-architecture. Since the address of successive instruction to a control transfer instruction is calculated in the first stage of pipeline, branch prediction is no longer necessary, thereby eliminating the clock cycle penalties occurred when using a branch predictor. The designed architecture was able to successfully calculate the address of next correct instruction and fetch it without any wastage of clock cycles except in cases where control transfer instructions are in true dependence with their immediate previous instructions. Further, we synthesized the proposed design with 7nm FinFET process and compared its latency with other designs to make sure that the microcontroller's operating frequency is not degraded by using this design. The critical path latency of instruction fetch stage integrated with the proposed architecture is 307 ps excluding the instruction cache access time.</p>
68
Collaborative Mobile System Design, Evaluation, and Applications
Zhang, Jinran. 07 1900 (has links)
This dissertation explores the integration and optimization of advanced communication technologies within the collaborative mobile system (CMS), focusing on the system design, implementation, and evaluation over unmanned aerial vehicles (UAVs). Collectively, this dissertation tackles the key challenges of connectivity and performance within CMS. This work demonstrates practical implementations and sheds light on the challenges and opportunities for CMS. The dissertation emphasizes the importance of adaptability and scalability in network design and implementation, particularly in leveraging the integration of hardware and software to adapt to promising architectures. By providing insights into performance under real-world conditions, this work explores the interplay of innovations in UAVs, mobile communications, network architecture, and system performance, paving the way for future network investigation and development.
69
Development of 3D Printing Multifunctional Materials for Structural Health Monitoring
Cole M Maynard (6622457). 11 August 2022 (has links)
Multifunctional additive manufacturing has immense potential to address present needs within structural health monitoring (SHM) by enabling a new additive manufacturing paradigm that redefines what a sensor is, or what sensors should resemble. To achieve this, the properties of printed components must be precisely tailored to meet structure-specific and application-specific requirements. However, due to the limited number of commercially available multifunctional filaments, this research investigates the in-house creation of adaptable piezoresistive multifunctional filaments and their potential within structural health monitoring applications, based upon their characterized piezoresistive responses. To do so, a rigid polylactic acid-based filament and a flexible thermoplastic polyurethane-based filament were modified to impart piezoresistive properties using carbon nanofibers. The filaments were produced using different mixing techniques, nanoparticle concentrations, and optimally selected manufacturing parameters from a design-of-experiments approach. The resulting filaments exhibited consistent resistivity values, which under specific mixing techniques were less variable than those of commercially available multifunctional filaments. This improved consistency addresses a key factor that has held back currently available piezoresistive filaments from fulfilling needs within structural health monitoring. To demonstrate the ability to meet these needs, the piezoresistive responses of three dog-bone-shaped sensor sizes were measured under monotonic and cyclic loading conditions for the optimally manufactured filaments. The characterized piezoresistive responses demonstrated high strain sensitivities under both tensile and compressive loads. These piezoresistive sensors demonstrated the greatest sensitivity in tension, where all three sensor sizes exhibited gauge factors over 30. Cyclic loading supported these results and further demonstrated the accuracy and reliability of the printed sensors within SHM applications.
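For reference, the gauge factor quoted above relates the fractional resistance change to the applied strain. The small worked example below uses invented resistance and strain readings, not data from this work.

```python
# Sketch: gauge factor GF = (delta_R / R0) / strain. The resistance and strain
# values are made up for illustration; the printed sensors in this work
# reported GF > 30 in tension.

def gauge_factor(r0_ohm, r_ohm, strain):
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

# Example: resistance rises from 10.0 kOhm to 10.9 kOhm at 0.3% tensile strain.
print(gauge_factor(10_000.0, 10_900.0, 0.003))   # -> 30.0
```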
70
Energy Savings Using a Direct Current Distribution Network in a PV and Battery Equipped Residential Building
Ollas, Patrik. January 2020 (has links)
Energy from solar photovoltaics (PV) is generated as direct current (DC), and almost all of today's electrical loads in residential buildings, household appliances and HVAC systems (heating, ventilation and air-conditioning) operate on DC. A conventional alternating current (AC) distribution system therefore requires multiple conversion steps before the final user stage. By switching the distribution system to DC, conversion steps between AC and DC can be avoided and losses thereby reduced. Including a battery storage, the system's losses can be reduced further and the generated PV energy is utilised even better. This thesis investigates and quantifies the energy savings of using a direct current distribution topology in a residential building together with distributed energy generation from solar photovoltaics and a battery storage. Measured load and PV generation data for a single-family house situated in Borås, Sweden, are used as a case study for the analysis. Detailed, dynamic models based on laboratory measurements of the power electronic converters and the battery are used to more accurately reflect the system's dynamic performance. This study presents a dynamic representation of the battery's losses, based on laboratory measurements of the resistance and its current dependency for a single lithium-ion cell based on lithium iron phosphate (LFP). A comparative study is made with two other commonly used loss representations, evaluated with regard to the complete system's performance using the PV and load data from the single-family house. Results show that a detailed battery representation is important for correct loss prediction when modelling the interaction between loads, PV and the battery. Four DC system topologies are also modelled and compared to an equivalent AC topology using the experimental findings from the power electronic converter and battery measurements. Results from the quasi-dynamic modelling show that the annual energy savings potential of the suggested DC topologies ranges between 1.9% and 5.6%. The DC topologies also increase the PV utilisation by up to 10 percentage points by reducing the losses associated with the inverter and the battery conversion. Results also show that the grid-tied converter is the main loss contributor and that, when a constant grid-tied efficiency is used, the energy savings are overestimated.
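A minimal sketch contrasting a constant-resistance battery loss estimate with a current-dependent one, of the kind discussed above, is given below. The resistance values and the linear current dependence are illustrative assumptions, not the measured LFP cell characteristics from this work.

```python
# Sketch: battery conduction losses modeled as P = I^2 * R. A constant-R model
# is compared with a hypothetical current-dependent resistance; the thesis
# finds that a detailed battery representation matters for correct loss
# prediction when modelling the load/PV/battery interaction.

def loss_constant_r(current_a, r_ohm=0.05):
    return current_a ** 2 * r_ohm

def loss_current_dependent(current_a, r0_ohm=0.04, k_ohm_per_a=0.002):
    # Placeholder dependence: resistance grows linearly with current magnitude.
    return current_a ** 2 * (r0_ohm + k_ohm_per_a * abs(current_a))

for i in (2.0, 5.0, 10.0):
    print(i, loss_constant_r(i), loss_current_dependent(i))
```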