491

On Throughput-Reliability-Delay Tradeoffs in Wireless Networks

Nam, Young-Han 19 March 2008 (has links)
No description available.
492

Insamling och användning av data för vidareutveckling av spel : Med fokus på multiplayer datorspel / Data Collection and its Application in Further Development of Online Games

Bunea, Robert, Ivarsson, Kajsa January 2021 (has links)
Digital games have become an increasingly common means of entertainment, and the industry has seen a huge increase in users that is likely to continue in the future. These games have grown in complexity in step with technological developments from the 1990s to the present day, where the Internet has had a major impact and introduced new possibilities. One of these possibilities is the collection of different types of data for different purposes and uses within a game development company. The industry has attracted a lot of research on the processes and methods involved in creating these complex products, but pre-production and the initial creation of digital games account for the vast majority of studies compared to the further development of these games after launch. The aim of this paper is therefore to contribute knowledge to the less explored area of the post-launch development process of online multiplayer games, and to investigate data collection and its integration into the further development process. The first important part of the report is a theory section, created through a literature review that consisted of identifying scientific articles on the topic; these were important for building a theoretical foundation for the study. This was done mainly through scientific databases and search services, using keywords related to key theories. The second important part is an empirical study in the form of semi-structured interviews, based on an interview guide, with respondents from different game development companies: Ghost Ship Games, Jagex, and Bohemia Interactive. These interviews were then analysed and important themes were identified from the respondents' answers. The respondents mentioned different types of data covering different parts of a company, but for further game development three types were said to have the most impact: data points on how the game is used by individuals, quality data on the game's performance, and feedback from users. The methods for collecting these data types consist mainly of surveys, which all respondents said their companies use to gather feedback, bug reports, and suggestions for improvement. In addition, more general information is collected through digital tools or the game client for each game; these tools are either built in-house or managed by a separate company. Ghost Ship Games maintains its own ideal of how the game should be balanced, which developers take into account when deciding how to address a given issue. At Jagex, whole teams decide together on changes or additions to the game; an interesting aspect is that Jagex users take part in decision-making through opinion polls. At Bohemia Interactive, decision-making happens mainly in the quality assurance department, which filters the collected information and delegates identified problems to the appropriate developers within the company. Further development within the companies is done mainly through balancing using the game engine for each game. Ghost Ship Games also focuses on further developing existing features, systems, and tools by reworking some of them. At Bohemia Interactive, Arma 3 is updated through hotfixes or entirely new content via DLC, for example, and Jagex does something similar in the form of hotpatches. Overall, the further development process is similar across the companies, following the steps identified in this report: an iterative process in which a company identifies a problem or a proposal for further development, collects or examines existing data to confirm the problem or support the proposal, implements the solution, and then gathers feedback and acts on it.
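The iterative loop this abstract describes (identify, collect or examine data, implement, gather feedback) rests on client-side collection of the three data types the respondents emphasised. The sketch below is a minimal, hypothetical illustration of such collection in Python; the endpoint, event names, and batching policy are assumptions, not details from the interviewed companies.

```python
# Hypothetical client-side telemetry sketch. The endpoint URL, event names,
# and batching policy are illustrative assumptions only.
import json
import time
import urllib.request

class TelemetryLogger:
    def __init__(self, endpoint, batch_size=50):
        self.endpoint = endpoint      # assumed HTTP collection endpoint
        self.batch_size = batch_size
        self.buffer = []

    def log(self, event_type, **fields):
        # The three impactful data types map naturally onto event_type:
        # "usage" (how individuals play), "quality" (performance metrics),
        # and "feedback" (user-submitted reports and suggestions).
        self.buffer.append({"type": event_type, "ts": time.time(), **fields})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        body = json.dumps(self.buffer).encode()
        req = urllib.request.Request(
            self.endpoint, data=body,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)  # fire-and-forget; no retries in sketch
        self.buffer = []

# e.g. logger.log("quality", fps=58.7, level="mission_02")
```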
493

Development of an exercise machine for enhanced eccentric training of the muscles : A study of sensors and system performance / Utveckling av en träningsmaskin för förbättrad excentrisk muskelträning

Zivanovic, Natalija January 2020 (has links)
Currently, various training machines can support training of the muscles while they are lengthened, known as eccentric training. Training machines widely used for eccentric training utilize a flywheel to generate load for the user. When training eccentrically with such a machine, the goal is often to accomplish eccentric overload, which occurs when the muscles being trained are exposed to a very high load during the eccentric phase. To achieve this, the user normally needs to activate other muscles that are not the focus of the training, or be assisted by another person. In this study, a novel, smart flywheel training machine was developed by adding an electric motor and sensors, which can identify the exercise pattern of the user and help achieve the desired eccentric overload. The study focused on how the system performance of such a training machine, interacting with a human user, was affected by the grade of sensor feedback. Higher sensor resolution and shorter sample times increase the cost of the system, so it was of interest to determine what grade of sensor feedback was actually required. More precisely, this study evaluated how system performance improved as sensor resolution increased, what resolution and sample time were required for the system to perform correctly and safely, and, finally, how noise and disturbances affected the system. The study was conducted in a simulated environment in Matlab and Simulink, and some real tests and experiments were also performed on the existing flywheel training machine. An incremental encoder was implemented in the system, and the resolution of the encoder, as well as the sample time, was varied in the simulation to test different combinations. The results showed that both resolution and sample time affected system performance. A higher resolution resulted in a smaller tracking error up to a point, but beyond a certain value the system became unstable if the sample time was not small enough. Noise and disturbances had only a minor impact on system performance. It was concluded that the best choice of encoder resolution was 0.0314 radians with a sample time of 0.01 ms. Coarser resolutions such as 0.628 rad, 0.126 rad, or 0.0571 rad with a sample time of 0.1 ms could also be allowed and should be considered safe, although the system might not perform as desired with these alternatives, even though they might decrease the cost of the system.
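A rough feel for the resolution/sample-time trade-off reported above can be had from a toy quantisation model: an incremental encoder rounds the true angle to a multiple of its resolution, sampled every dt seconds. The sketch below reuses the thesis's candidate values, but the sinusoidal motion profile is an assumption standing in for the Matlab/Simulink plant model, and it illustrates measurement error only, not closed-loop stability.

```python
# Toy quantisation model of the encoder study; motion profile is assumed.
import numpy as np

def encoder_rms_error(resolution_rad, dt, t_end=2.0):
    t = np.arange(0.0, t_end, dt)
    angle = 2.0 * np.sin(2.0 * np.pi * 1.5 * t)          # assumed motion
    measured = np.round(angle / resolution_rad) * resolution_rad
    return np.sqrt(np.mean((angle - measured) ** 2))     # RMS error [rad]

# thesis candidates: best choice 0.0314 rad @ 0.01 ms; coarser ones @ 0.1 ms
print(encoder_rms_error(0.0314, dt=0.01e-3))
for res in (0.628, 0.126, 0.0571):
    print(res, encoder_rms_error(res, dt=0.1e-3))
```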
494

Segmentação de objetos via transformada imagem-floresta orientada com restrições de conexidade / Object segmentation by oriented image foresting transform with connectivity constraints

Mansilla, Lucy Alsina Choque 10 August 2018 (has links)
Object segmentation is one of the most fundamental and challenging problems in image processing and computer vision. High-level, user-specific knowledge is often required in the segmentation process, due to the presence of heterogeneous backgrounds, objects with poorly defined boundaries, field inhomogeneity, noise, artifacts, partial volume effects, and their joint effects. Global properties of the object of interest, such as connectivity, shape constraints, and boundary polarity, are useful high-level priors for its segmentation, allowing the segmentation to be customized for a given target object. In this work, we introduce a new method called the Connected Oriented Image Foresting Transform (COIFT), which provides globally optimal solutions according to a graph-cut measure, incorporating a connectivity constraint into the Oriented Image Foresting Transform (OIFT) in order to ensure the generation of connected objects while still allowing simultaneous control of the boundary polarity. While adding connectivity constraints in other frameworks, such as the min-cut/max-flow algorithm, leads to an NP-hard problem, COIFT retains the low computational cost of OIFT. Experiments show that COIFT can considerably improve the segmentation of objects with thin and elongated parts, for the same number of seeds in marker-based segmentation.
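For orientation, the sketch below illustrates the optimum-path machinery that OIFT and COIFT build on: a Dijkstra-like Image Foresting Transform that propagates labels from seeds under the f_max path cost. It is a plain IFT only; the orientation and connectivity constraints that define COIFT are not modelled here.

```python
# Plain seeded IFT with f_max path cost; COIFT's constraints are NOT modelled.
import heapq
import numpy as np

def ift_fmax(image, seeds):
    """image: 2D array; seeds: dict {(row, col): label}."""
    h, w = image.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        c0, r, c = heapq.heappop(heap)
        if c0 > cost[r, c]:
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rn, cn = r + dr, c + dc
            if 0 <= rn < h and 0 <= cn < w:
                # arc weight: intensity difference; path cost: max arc so far
                arc = abs(float(image[r, c]) - float(image[rn, cn]))
                new_cost = max(c0, arc)
                if new_cost < cost[rn, cn]:
                    cost[rn, cn] = new_cost
                    label[rn, cn] = label[r, c]
                    heapq.heappush(heap, (new_cost, rn, cn))
    return label

img = np.array([[0, 0, 9, 9], [0, 0, 9, 9]], dtype=float)
print(ift_fmax(img, {(0, 0): 1, (0, 3): 2}))   # two regions split at the edge
```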
495

Algoritmo para a extração incremental de sequências relevantes com janelamento e pós-processamento aplicado a dados hidrográficos / Algorithm for the incremental extraction of relevant sequences with windowing and post-processing, applied to hydrographic data

Silveira Junior, Carlos Roberto 07 June 2013 (has links)
The mining of sequential patterns in data from environmental sensors is a challenging task: the data may contain noise and may also contain sparse patterns that are difficult to detect. The knowledge extracted from environmental sensor data can be used to identify climate change, for example. However, there is a lack of methods that can handle this type of database. In order to reduce this gap, the algorithm Incremental Miner of Stretchy Time Sequences with Post-Processing (IncMSTS-PP) was proposed. IncMSTS-PP performs incremental extraction of sequential patterns, followed by ontology-based post-processing that generalizes the patterns. The post-processing makes the patterns semantically richer: generalized patterns condense the information and make it easier to interpret. IncMSTS-PP implements the Stretchy Time Window (STW) method, which allows stretchy-time patterns (patterns with temporal intervals) to be mined from noisy databases. Compared with the GSP algorithm, IncMSTS-PP can return 2.3 times more patterns, and patterns with 5 times more itemsets. The post-processing module reduces the number of patterns presented to the user by 22.47%, but the returned patterns are semantically richer than the non-generalized ones. Thus, IncMSTS-PP showed good performance and mined relevant patterns, demonstrating that it is effective, efficient, and well suited to the domain of environmental sensor data.
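As a small illustration of the Stretchy Time Window idea, the sketch below matches a pattern of itemsets in a timestamped sequence while tolerating bounded gaps between consecutive elements. It is a conceptual toy, not the IncMSTS-PP implementation; the restart-on-large-gap policy is an assumption.

```python
# Conceptual stretchy-time matching; not the IncMSTS-PP algorithm itself.
def stw_match(sequence, pattern, max_gap):
    """sequence: list of (timestamp, itemset); pattern: list of itemsets."""
    i, last_t = 0, None
    for t, itemset in sequence:
        if last_t is not None and t - last_t > max_gap:
            i, last_t = 0, None            # gap too large: restart the match
        if i < len(pattern) and pattern[i] <= itemset:   # subset test
            i, last_t = i + 1, t
            if i == len(pattern):
                return True
    return False

# e.g. noisy sensor stream: {rain} followed by {high_flow} within 2 time units
seq = [(0, {"rain"}), (1, {"noise"}), (2, {"high_flow"})]
print(stw_match(seq, [{"rain"}, {"high_flow"}], max_gap=2))  # True
```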
497

Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates

Idris, Muhammad 10 April 2019 (has links)
Responsive analytics are rapidly taking over from traditional data analytics, which is dominated by the post-fact approaches of classical data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system, to react to updates occurring at high speed and detect patterns, trends, and anomalies. Such solutions find applications in financial systems, industrial control systems, business intelligence, and online machine learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results, or their basic elements, in a query language, and the main task is then to maintain these results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis, where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as a flow (or set of flows) of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation, which is based on the relational incremental view maintenance model and mostly focuses on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they typically process queries featuring comparisons of temporal attributes (e.g., timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded size. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point, we postulate that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just as in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. Existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost: systems that avoid materialization of query (sub)results incur high update latency, while systems that materialize (sub)results incur a high memory footprint. We are interested in building a model that addresses this trade-off. In particular, we overcome it by devising a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, and we present a main-memory data representation that allows query (sub)results to be enumerated without materialization and can be maintained efficiently under updates. We call this representation the Dynamic Constant Delay Linear Representation (DCLR).
We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a subclass of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study DCLRs with the above properties for the class of acyclic conjunctive queries featuring equi-joins with projections, and present the corresponding dynamic evaluation algorithm. Then, we generalize this algorithm to the class of acyclic queries featuring multi-way theta-joins with projections. The working of the dynamic algorithms over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantees the above-described properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. To do this, we extend the classical GYO algorithm from testing acyclicity of conjunctive queries with equalities to testing acyclicity of conjunctive queries featuring multi-way theta-joins with projections, and we further extend it to generate GJTs for queries that are acyclic. We implemented our algorithms in a query compiler that takes SQL queries as input and generates executable Scala code: a trigger program that processes queries and maintains them under updates. We tested our approach against state-of-the-art main-memory BI and CEP systems. Our evaluation shows that the DCLR-based approach is over an order of magnitude more efficient than existing systems in both memory footprint and update processing cost. We also show that enumerating query results without materialization from DCLRs is comparable to (and in some cases more efficient than) enumerating from materialized query results.
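The trade-off this thesis targets can be pictured with a toy version of non-materialised join maintenance: each side of an equi-join keeps only a hash index on the join key, updates cost O(1), and results are enumerated on demand instead of being stored. The sketch below ignores GJTs, projections, and theta-joins; it illustrates the footprint/latency idea, not the DCLR structure itself.

```python
# Toy incremental equi-join: indexes only, no materialised join result.
from collections import defaultdict

class IncrementalEquiJoin:
    def __init__(self):
        self.left = defaultdict(list)    # join key -> left-side tuples
        self.right = defaultdict(list)   # join key -> right-side tuples

    def insert(self, side, key, tup):    # O(1) update processing cost
        (self.left if side == "L" else self.right)[key].append(tup)

    def enumerate(self):                 # results produced on demand
        for key, ltuples in self.left.items():
            for lt in ltuples:
                for rt in self.right.get(key, []):
                    yield (lt, rt)

j = IncrementalEquiJoin()
j.insert("L", 1, ("order", 1))
j.insert("R", 1, ("customer", "ann"))
print(list(j.enumerate()))               # [(('order', 1), ('customer', 'ann'))]
```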
498

Battery Capacity Prediction Using Deep Learning : Estimating battery capacity using cycling data and deep learning methods

Rojas Vazquez, Josefin January 2023 (has links)
The growing urgency of climate change has driven growth in electrification technology, where batteries have come to play an essential role in the renewable energy transition, supporting the implementation of environmentally friendly technologies such as smart grids, energy storage systems, and electric vehicles. Battery cell degradation is a natural consequence of battery usage. Managing lithium-ion battery degradation during operation aids the prediction of future degradation and minimizes the degradation mechanisms that result in power fade and capacity fade. This degree project investigates capacity-based battery degradation prediction using deep learning methods, through analysis of battery degradation and health prediction for lithium-ion cells using non-destructive techniques: electrochemical impedance spectroscopy (EIS), from which an equivalent circuit model (ECM) is obtained, and three different deep learning models using multi-channel data. The AI models were designed and developed using multi-channel data, and their performance was evaluated in MATLAB. The results reveal increased resistance in the EIS measurements as an indicator of ongoing battery aging processes such as loss of active materials, solid-electrolyte interphase thickening, and lithium plating. The AI models demonstrate accurate capacity estimation, with the LSTM model showing the strongest performance in the RMSE-based evaluation. These findings highlight the importance of carefully managing battery charging processes and considering the factors that contribute to degradation. Understanding degradation mechanisms enables the development of strategies to mitigate aging processes and extend battery lifespan, ultimately leading to improved performance.
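A minimal sketch of the kind of LSTM capacity estimator evaluated in this project is shown below, written in PyTorch for illustration (the thesis models were built in MATLAB). The window length, feature channels, and hyperparameters are assumptions.

```python
# Illustrative LSTM capacity estimator; shapes and hyperparameters assumed.
import torch
import torch.nn as nn

class CapacityLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, cycles, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # capacity estimate from last step

model = CapacityLSTM()
x = torch.randn(8, 100, 3)               # 8 cells, 100 cycles, 3 channels
target = torch.rand(8, 1)                # dummy capacities for illustration
pred = model(x)
rmse = torch.sqrt(nn.functional.mse_loss(pred, target))  # thesis metric
```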
499

Incremental Scheme for Open-Shell Systems

Anacker, Tony 22 February 2016 (has links) (PDF)
In this thesis, the implementation of the incremental scheme for open-shell systems with unrestricted Hartree-Fock reference wave functions is described. The implemented scheme is tested for robustness and performance with respect to the accuracy of the energy and the computation times. New approaches are discussed for implementing a fully automated incremental scheme in combination with the domain-specific basis set approximation. Alpha Domain Partitioning and Template Equalization are presented to handle unrestricted wave functions in the local correlation treatment. Both orbital schemes are analyzed with a test set of structures and reactions. As a further goal, the DSBSenv orbital and auxiliary basis sets are optimized for use as the environment basis in the domain-specific basis set approach. Their performance with respect to accuracy and computation times is analyzed with a test set of structures and reactions. In another project, a scheme for the optimization of auxiliary basis sets for uranium is presented and used to optimize MP2Fit auxiliary basis sets for uranium. These auxiliary basis sets enable density fitting in quantum chemical methods and the application of the incremental scheme to systems containing uranium. A further project was the systematic analysis of the binding energies of four water dodecamers, where the incremental scheme in combination with the CCSD(T) and CCSD(T)(F12*) methods was used to calculate benchmark energies for these large clusters.
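The incremental scheme referred to throughout expands the correlation energy in one-body increments plus pairwise (and higher-order) corrections: E_corr = sum_i e(i) + sum_{i<j} [e(ij) - e(i) - e(j)] + .... The sketch below assembles such an expansion generically; corr_energy is a placeholder for a correlated calculation (e.g. CCSD(T)) on a set of domains, not an interface to any actual quantum chemistry code.

```python
# Generic assembly of the many-body incremental expansion; corr_energy is a
# placeholder for a real correlated calculation on the given domains.
from itertools import combinations

def incremental_energy(domains, corr_energy, order=2):
    eps = {}       # increment for each tuple of domain indices
    total = 0.0
    for k in range(1, order + 1):
        for combo in combinations(range(len(domains)), k):
            e = corr_energy([domains[i] for i in combo])
            # subtract every lower-order increment contained in this combo
            lower = sum(eps[sub] for r in range(1, k)
                        for sub in combinations(combo, r))
            eps[combo] = e - lower
            total += eps[combo]
    return total

# toy check: with a strictly additive energy model all pair increments vanish
print(incremental_energy(["A", "B", "C"], lambda ds: -0.1 * len(ds)))  # -0.3
```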
500

A Bayesian learning approach to inconsistency identification in model-based systems engineering

Herzig, Sebastian J. I. 08 June 2015 (has links)
Designing and developing complex engineering systems is a collaborative effort. In Model-Based Systems Engineering (MBSE), this collaboration is supported through the use of formal, computer-interpretable models, allowing stakeholders to address concerns using well-defined modeling languages. However, because concerns cannot be separated completely, implicit relationships and dependencies among the various models describing a system are unavoidable. Given that models are typically co-evolved and only weakly integrated, inconsistencies in the agglomeration of the information and knowledge encoded in the various models are frequently observed. The challenge is to identify such inconsistencies in an automated fashion. In this research, a probabilistic (Bayesian) approach to abductive reasoning about the existence of specific types of inconsistencies, and, in the process, semantic overlaps (relationships and dependencies) in sets of heterogeneous models, is presented. A prior belief about the manifestation of a particular type of inconsistency is updated with evidence, which is collected by extracting specific features from the models by means of pattern matching. Inference results are then used to improve future predictions by means of automated learning. The effectiveness and efficiency of the approach are evaluated through a theoretical complexity analysis of the underlying algorithms and through application to a case study. Insights gained from the experiments, together with the results of a comparison to the state of the art, demonstrate that the proposed method is a significant improvement over the status quo of inconsistency identification in MBSE.
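The core update this abstract describes is a textbook Bayesian one: a prior belief that a given inconsistency type is present is revised as pattern-matching evidence arrives. The sketch below illustrates this with made-up likelihood values; they are assumptions, not numbers from the thesis.

```python
# Single-feature Bayesian update for one inconsistency type; the likelihood
# values are illustrative assumptions only.
def posterior(prior, p_evidence_if_inconsistent, p_evidence_if_consistent):
    evidence = (p_evidence_if_inconsistent * prior
                + p_evidence_if_consistent * (1.0 - prior))
    return p_evidence_if_inconsistent * prior / evidence

belief = 0.10                       # prior: inconsistency present in 10% of cases
for pattern_matched in (True, True, False):
    if pattern_matched:             # feature extracted via pattern matching
        belief = posterior(belief, 0.8, 0.2)
    else:
        belief = posterior(belief, 0.2, 0.8)
print(f"posterior belief after evidence: {belief:.3f}")   # about 0.308
```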
