561 |
IIRC : Incremental Implicitly-Refined Classification. Abdelsalam, Mohamed, 05 1900
Nous introduisons la configuration de la "Classification Incrémentale Implicitement Raffinée / Incremental Implicitly-Refined Classification (IIRC)", une extension de la configuration de l'apprentissage incrémental des classes où les lots de classes entrants possèdent deux niveaux de granularité, c'est-à-dire que chaque échantillon peut avoir une étiquette (label) de haut niveau (brute), comme "ours", et une étiquette de bas niveau (plus fine), comme "ours polaire". Une seule étiquette (label) est fournie à la fois, et le modèle doit trouver l'autre étiquette s'il l'a déjà apprise. Cette configuration est plus conforme aux scénarios de la vie réelle, où un apprenant aura tendance à interagir avec la même famille d'entités plusieurs fois, découvrant ainsi encore plus de granularité à leur sujet, tout en essayant de ne pas oublier les connaissances acquises précédemment. De plus, cette configuration permet d'évaluer les modèles pour certains défis importants liés à l'apprentissage tout au long de la vie (lifelong learning) qui ne peuvent pas être facilement abordés dans les configurations existantes. Ces défis peuvent être motivés par l'exemple suivant: "si un modèle a été entraîné sur la classe ours dans une tâche et sur ours polaire dans une autre tâche; oubliera-t-il le concept d'ours, déduira-t-il à juste titre qu'un ours polaire est également un ours ? et associera-t-il à tort l'étiquette d'ours polaire à d'autres races d'ours ?" Nous développons un benchmark qui permet d'évaluer les modèles sur la configuration de l'IIRC. Nous évaluons plusieurs algorithmes d'apprentissage "tout au long de la vie" (lifelong learning) de l'état de l'art. Par exemple, les méthodes basées sur la distillation sont relativement performantes mais ont tendance à prédire de manière incorrecte un trop grand nombre d'étiquettes par image. Nous espérons que la configuration proposée, ainsi que le benchmark, fourniront un cadre de problème significatif aux praticiens. / We introduce the "Incremental Implicitly-Refined Classification (IIRC)" setup, an extension to the class incremental learning setup where the incoming batches of classes have two granularity levels, i.e., each sample could have a high-level (coarse) label like "bear" and a low-level (fine) label like "polar bear". Only one label is provided at a time, and the model has to figure out the other label if it has already learned it. This setup is more aligned with real-life scenarios, where a learner usually interacts with the same family of entities multiple times, discovering more granularity about them while still trying not to forget previous knowledge. Moreover, this setup enables evaluating models for some important lifelong learning challenges that cannot be easily addressed under the existing setups. These challenges can be motivated by the example: "if a model was trained on the class bear in one task and on polar bear in another task, will it forget the concept of bear? Will it rightfully infer that a polar bear is still a bear, and will it wrongfully associate the label of polar bear with other breeds of bear?" We develop a standardized benchmark that enables evaluating models on the IIRC setup. We evaluate several state-of-the-art lifelong learning algorithms and highlight their strengths and limitations. For example, distillation-based methods perform relatively well but are prone to incorrectly predicting too many labels per image.
We hope that the proposed setup, along with the benchmark, will provide a meaningful problem setting for practitioners.
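As a concrete illustration of the labeling rule described above (hypothetical class names and a simplified interface, not the authors' benchmark code), the following sketch returns the set of labels a model is expected to predict for a sample once the relevant classes have been introduced:

HIERARCHY = {"polar bear": "bear", "black bear": "bear", "husky": "dog"}  # fine label -> coarse label

def expected_labels(true_fine_class, introduced):
    """All labels of a sample that have already been introduced to the learner
    and that the model is therefore expected to predict at evaluation time."""
    coarse = HIERARCHY[true_fine_class]
    return {label for label in (true_fine_class, coarse) if label in introduced}

# After task 1 introduces "bear" and a later task introduces "polar bear",
# a polar-bear image should receive both labels; before that, only the coarse one.
print(expected_labels("polar bear", {"bear", "polar bear"}))  # {'polar bear', 'bear'} (set order may vary)
print(expected_labels("polar bear", {"bear"}))                # {'bear'}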
|
562 |
Beitrag zur Modellierung und Simulation von Zylinderdrückwalzprozessen mit elementaren Methoden. Kleditzsch, Stefan, 29 January 2014
Drückwalzen als inkrementelles Umformverfahren ist aufgrund seiner Verfahrenscharakteristik mit sehr hohen Rechenzeiten bei der Finite-Elemente-Methode (FEM) verbunden. Die Modelle ModIni und FloSim sind zwei analytisch-elementare Ansätze, um dieser Prämisse entgegenzuwirken. Das für ModIni entwickelte Geometriemodell wird in der vorliegenden Arbeit weiterentwickelt, so dass eine werkstoffunabhängige Berechnung der Staugeometrie ermöglicht wird und ein deutlich größeres Anwendungsspektrum der Methode bereitsteht. Die Simulationsmethode FloSim basiert auf dem oberen Schrankenverfahren und ermöglicht somit eine Berechnung von Zylinderdrückwalzprozessen innerhalb weniger Minuten. Für die Optimierung der Methode FloSim wurden in der vorliegenden Arbeit die analytischen Grundlagen für die Berechnung der Bauteillänge sowie der Umformzonentemperatur während des Prozesses erarbeitet. Weiterhin wurde auf Basis von numerisch realisierten Parameteranalysen ein Ansatz für die analytische Berechnung des Vergleichsumformgrades von Drückwalzprozessen entwickelt. Diese drei Ansätze, zu Bauteillänge, Temperatur und Umformgrad, wurden in die Simulationssoftware FloSim integriert und führen zu einer deutlichen Genauigkeitssteigerung der Methode. / Flow Forming, as an incremental forming process, is associated with extremely long computation times in finite element analyses due to its process characteristics. ModIni and FloSim are two analytical/elementary models developed to counteract this situation. The geometry model, which was developed for ModIni, is improved within the presented work. The improvement enables a material-independent computation of the pile-up geometry and permits a considerably wider application scope of ModIni. The simulation method FloSim is based on the upper bound method, which enables the computation of cylindrical Flow Forming processes within minutes. For the optimization of the method FloSim, the foundations for the analytical computation of the workpiece length during the process and of the forming zone temperature were developed within this work. Furthermore, an analytical approach for the computation of the equivalent plastic strain of cylindrical Flow Forming processes was developed based on numerical parameter analyses. These three approaches, for computing the workpiece length, the temperature and the equivalent plastic strain, were integrated into FloSim and lead to a considerable increase in the accuracy of the method.
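As a reminder of the upper bound method on which FloSim is based (a generic textbook formulation, not necessarily the specific functional used in the thesis), the power P actually supplied by the tools is bounded, for every kinematically admissible velocity field, by

  P \le J^{*} = \int_{V} \bar{\sigma}\,\dot{\bar{\varepsilon}}\,\mathrm{d}V + \int_{S_{\Gamma}} \tau_{k}\,|\Delta v|\,\mathrm{d}S + \int_{S_{f}} \tau_{f}\,|\Delta v_{f}|\,\mathrm{d}S ,

where the three terms are the internal plastic dissipation (flow stress times equivalent strain rate), the losses on internal velocity-discontinuity surfaces, and the friction losses on the tool contact surface. Minimizing J* over a parametrized family of admissible velocity fields yields the fast process estimate that makes minute-scale simulation possible.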
|
563 |
Incremental Scheme for Open-Shell Systems. Anacker, Tony, 11 February 2016
In this thesis, the implementation of the incremental scheme for open-shell systems with unrestricted Hartree-Fock reference wave functions is described. The implemented scheme is tested for robustness and performance with respect to the accuracy of the energies and the computation times.
New approaches are discussed to implement a fully automated incremental scheme in combination with the domain-specific basis set approximation. The alpha Domain Partitioning and the Template Equalization are presented to handle unrestricted wave functions in the local correlation treatment. Both orbital schemes are analyzed with a test set of structures and reactions. As a further goal, the DSBSenv orbital basis sets and auxiliary basis sets are optimized to be used as the environmental basis in the domain-specific basis set approach. Their performance with respect to accuracy and computation times is analyzed with a test set of structures and reactions. In another project, a scheme for the optimization of auxiliary basis sets for uranium is presented. This scheme was used to optimize the MP2Fit auxiliary basis sets for uranium. These auxiliary basis sets enable density fitting in quantum chemical methods and the application of the incremental scheme to systems containing uranium. Another project was the systematic analysis of the binding energies of four water dodecamers. The incremental scheme in combination with the CCSD(T) and CCSD(T)(F12*) methods was used to calculate benchmark energies for these large clusters.
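For orientation, the incremental scheme referred to above expands the correlation energy as a many-body series over orbital-domain contributions; the sketch below is the general form of the expansion, not the specific open-shell formulation developed in the thesis:

  E_{\mathrm{corr}} = \sum_{i} \varepsilon_{i} + \sum_{i<j} \Delta\varepsilon_{ij} + \sum_{i<j<k} \Delta\varepsilon_{ijk} + \dots ,
  \Delta\varepsilon_{ij} = \varepsilon_{ij} - \varepsilon_{i} - \varepsilon_{j} ,
  \Delta\varepsilon_{ijk} = \varepsilon_{ijk} - \Delta\varepsilon_{ij} - \Delta\varepsilon_{ik} - \Delta\varepsilon_{jk} - \varepsilon_{i} - \varepsilon_{j} - \varepsilon_{k} ,

where \varepsilon_{X} is the correlation energy obtained by correlating only the orbitals of domain (union) X with the chosen method, here CCSD(T) or CCSD(T)(F12*), and the series is truncated at low order. In the domain-specific basis set approximation, orbitals outside the active domains are described in a smaller environmental basis such as the DSBSenv sets mentioned above.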
|
564 |
Parallel Query Systems : Demand-Driven Incremental Compilers / En arkitektur för parallella och inkrementella kompilatorer. Nolander, Christofer, January 2023
Query systems were recently introduced as an architecture for constructing compilers, and have been shown to enable fast and efficient incremental compilation, where results from previous builds are reused to accelerate future builds. With this architecture, a compiler is composed of several queries, each of which extracts a small piece of information about the source program. For example, one query might determine the type of a variable, and another the list of functions defined in some file. The dependencies of a query, which include other queries or files on disk, are automatically recorded at runtime. With these dependencies, query systems can detect changes in their inputs and incorporate them into the final output, while reusing old results from queries which have not changed. This reduces the amount of work needed to recompile code, which saves both time and energy. We present a new parallel execution model for query systems using work-stealing, which dynamically balances the workload across multiple threads. This is facilitated by various augmentations to existing algorithms to allow concurrent operations. Furthermore, we introduce a novel data structure that accelerates incremental compilation for common use cases. We evaluated the impact of these augmentations by implementing a compiler frontend capable of parsing and type-checking the Go programming language. We demonstrate a 10x reduction in compile times using the parallel execution mode. Finally, under certain common conditions, we show a 5x reduction in incremental compile times compared to the state-of-the-art. / Query-system är en ny arkitektur som har använts för att implementera kompilatorer för programspråk och har ett fokus på att möjliggöra snabb och effektiv inkrementell kompilering. Med denna arkitektur består en kompilator av flera olika mindre funktioner, som var och en svarar på en liten fråga om källprogrammet, såsom typen av en variabel eller listan över funktioner i en fil. Genom att spåra hur dessa funktioner anropar varandra, och den data de läser, kan kompilatorer upptäcka förändringar i sina indata och utföra den minimala mängd arbete som krävs för att sammanställa dessa förändringar i utdata. Detta minskar mängden arbete som behövs för att kompilera om kod, vilket sparar både tid och energi. I denna rapport presenterar vi en ny exekveringsmodell för Query-system som möjliggör parallellism med hjälp av work-stealing. Detta underlättas av flera tillägg till befintliga algoritmer som gör det möjligt att utföra alla operationer parallellt. Utöver detta introducerar vi även en ny datastruktur som gör inkrementell kompilering snabbare för många vanliga användningsområden. Vi utvärderade effekten av dessa förändringar genom att implementera ett kompilatorgränssnitt som kan analysera och verifiera korrekthet av typer i Go-programmeringsspråket. Resultaten visar en 10x reduktion i kompileringstider med hjälp av parallellkörningsläget. Vi demonstrerar även 5 gånger lägre kompileringstider vid inkrementella ändringar än vad som tidigare varit möjligt.
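The dependency-tracking idea described in the abstract can be illustrated with a small, single-threaded sketch (hypothetical API in Python, not the thesis' parallel, work-stealing runtime): each query memoizes its result together with the inputs it read, so a later edit only re-runs queries whose recorded reads have changed.

class QueryDb:
    """Memoizes query results and re-runs a query only if something it read changed."""

    def __init__(self, rules):
        self.rules = rules        # query name -> function(db, *args) -> result
        self.inputs = {}          # input key -> (value, revision at which it last changed)
        self.cache = {}           # (query, args) -> (result, revision when verified, inputs read)
        self.revision = 0

    def set_input(self, key, value):
        """Register or edit an input, e.g. the contents of a source file."""
        self.revision += 1
        self.inputs[key] = (value, self.revision)

    def input(self, key, _reads=None):
        value, _ = self.inputs[key]
        if _reads is not None:
            _reads.add(key)       # record the dependency while a query runs
        return value

    def query(self, name, *args):
        key = (name, args)
        if key in self.cache:
            result, verified, reads = self.cache[key]
            # Reuse the cached result if none of the recorded inputs changed since then.
            if all(self.inputs[k][1] <= verified for k in reads):
                return result
        reads = set()
        result = self.rules[name](_TrackedDb(self, reads), *args)
        self.cache[key] = (result, self.revision, reads)
        return result


class _TrackedDb:
    """View of the database that records which inputs a running query reads."""
    def __init__(self, db, reads):
        self._db, self._reads = db, reads
    def input(self, key):
        return self._db.input(key, self._reads)
    def query(self, name, *args):
        # Nested queries run through the same database; their input reads are
        # attributed to the caller as well, a safe over-approximation.
        result = self._db.query(name, *args)
        self._reads.update(self._db.cache[(name, args)][2])
        return result


# Usage: a toy "compiler" with two queries.
rules = {
    "tokens": lambda db, f: db.input(f).split(),
    "word_count": lambda db, f: len(db.query("tokens", f)),
}
db = QueryDb(rules)
db.set_input("main.go", "package main func main")
print(db.query("word_count", "main.go"))   # 4, computed
print(db.query("word_count", "main.go"))   # 4, reused from the cache
db.set_input("other.go", "package other")  # unrelated edit: cached result stays valid
print(db.query("word_count", "main.go"))   # 4, still reused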
|
565 |
Visualization of Conceptual Data with Methods of Formal Concept Analysis. Kriegel, Francesco, 27 September 2013
Draft and proof of an algorithm computing incremental changes within a labeled layouted concept lattice upon insertion or removal of an attribute column in the underlying formal context. Furthermore, some implementation details and mathematical background knowledge are presented.
1 Introduction
1.1 Acknowledgements
1.2 Supporting University: TU Dresden, Institute for Algebra
1.3 Supporting Corporation: SAP AG, Research Center Dresden
1.4 Research Project: CUBIST
1.5 Task Description and Structure of the Diploma Thesis
I Mathematical Details
2 Fundamentals of Formal Concept Analysis
2.1 Concepts and Concept Lattice
2.2 Visualizations of Concept Lattices
2.2.1 Transitive Closure and Transitive Reduction
2.2.2 Neighborhood Relation
2.2.3 Line Diagram
2.2.4 Concept Diagram
2.2.5 Vertical Hybridization
2.2.6 Omitting the top and bottom concept node
2.2.7 Actions on Concept Diagrams
2.2.8 Metrics on Concept Diagrams
2.2.9 Heatmaps for Concept Diagrams
2.2.10 Biplots of Concept Diagrams
2.2.11 Seeds Selection
2.3 Apposition of Contexts
3 Incremental Updates for Concept Diagrams
3.1 Insertion & Removal of a single Attribute Column
3.1.1 Updating the Concepts
3.1.2 Structural Remarks
3.1.3 Updating the Order
3.1.4 Updating the Neighborhood
3.1.5 Updating the Concept Labels
3.1.6 Updating the Reducibility
3.1.7 Updating the Arrows
3.1.8 Updating the Seed Vectors
3.1.9 Complete IFOX Algorithm
3.1.10 An Example: Stepwise Construction of FCD(3)
3.2 Setting & Deleting a single cross
4 Iterative Exploration of Concept Lattices
4.1 Iceberg Lattices
4.2 Alpha Iceberg Lattices
4.3 Partly selections
4.3.1 Example with EMAGE data
4.4 Overview on Pruning & Interaction Techniques
II Implementation Details
5 Requirement Analysis
5.1 Introduction
5.2 User-Level Requirements for Graphs
5.2.1 Select
5.2.2 Explore
5.2.3 Reconfigure
5.2.4 Encode
5.2.5 Abstract/Elaborate
5.2.6 Filter
5.2.7 Connect
5.2.8 Animate
5.3 Low-Level Requirements for Graphs
5.3.1 Panel
5.3.2 Node and Edge
5.3.3 Interface
5.3.4 Algorithm
5.4 Mapping of Low-Level Requirements to User-Level Requirements
5.5 Specific Visualization Requirements for Lattices
5.5.1 Lattice Zoom/Recursive Lattices/Partly Nested Lattices
5.5.2 Planarity
5.5.3 Labels
5.5.4 Selection of Ideals, Filters and Intervals
5.5.5 Restricted Moving of Elements
5.5.6 Layout Algorithms
5.5.7 Additional Feature: Three Dimensions and Rotation
5.5.8 Additional Feature: Nesting
6 FCAFOX Framework for Formal Concept Analysis in JAVA
6.1 Architecture
A Appendix
A.1 Synonym Lexicon
A.2 Galois Connections & Galois Lattices
A.3 Fault Tolerance Extensions to Formal Concept Analysis / Entwurf und Beweis eines Algorithmus zur Berechnung inkrementeller Änderungen in einem beschrifteten dargestellten Begriffsverband beim Einfügen oder Entfernen einer Merkmalsspalte im zugrundeliegenden formalen Kontext. Weiterhin sind einige Details zur Implementation sowie zum mathematischen Hintergrundwissen dargestellt.
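For readers outside formal concept analysis, the outline above relies on the following standard notions (textbook notation, not specific to this thesis). For a formal context (G, M, I) with object set G, attribute set M and incidence relation I \subseteq G \times M, the derivation operators are

  A' = \{ m \in M \mid \forall g \in A : (g, m) \in I \} \quad (A \subseteq G), \qquad
  B' = \{ g \in G \mid \forall m \in B : (g, m) \in I \} \quad (B \subseteq M),

and a formal concept is a pair (A, B) with A' = B and B' = A. Ordered by (A_1, B_1) \le (A_2, B_2) iff A_1 \subseteq A_2, the concepts form a complete lattice, the concept lattice of (G, M, I). Inserting or removing an attribute column changes M and I, and the IFOX algorithm of Chapter 3 updates the concepts, the order, the neighborhood relation and the diagram incrementally instead of recomputing the lattice from scratch.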
|
566 |
Toward Cuffless Blood Pressure Monitoring: Integrated Microsystems for Implantable Recording of Photoplethysmogram. Marefat, Fatemeh, 07 September 2020
No description available.
|
567 |
Analysis of post-tensioned concrete box-girder bridges : A comparison of Incremental launching and Movable scaffolding system. El Hamad, Hamad; Tanhan, Furkan, January 2018
When designing a bridge, it is of high importance that the geometry of the cross section is optimized for the structure. This is partly due to the influence of the amount of material needed and its impact on the budget and the environment. The importance of choosing the right amount of each material lies in the unit prices of the different materials, which can differ significantly. The Swedish Transport Administration, Trafikverket, has ordered the construction of the Stockholm Bypass, which is one of Sweden's largest infrastructure projects and is valued at 27.6 billion SEK according to the price index of the year 2009. The infrastructure project is divided into multiple projects, one of which is assigned to Implenia and Veidekke through a joint venture (Joint venture Hjulsta, JVH) and is valued at nearly 800 MSEK. The reference bridge that is used in the analysis of this master's thesis is a part of that project. The aim of this master's thesis was to analyze and compare the two construction methods, the movable scaffolding system (MSS) and incremental launching, for the reference bridge with respect to the amount of post-tensioning and the slenderness. Furthermore, an economic comparison between the two construction methods was carried out based on the obtained results. The analysis of the MSS was carried out by modeling the reference bridge structure in finite element software from SOFiSTiK AG. The bridge was modeled with different cross-section heights, i.e. different slenderness values, where the optimal amount of post-tensioning tendons could be determined by iteration until the stress conditions from the Eurocode were fulfilled. For the incremental launching method, a numerical analysis was performed. The optimal amount of required post-tensioning was evaluated in the construction stages and the final stage with different cross-section heights, i.e. different values of slenderness. A cost analysis was also performed, where the aim was to analyze how the total cost of the construction of the bridge would be influenced by the different slenderness values, as a comparison between the two construction methods. This was done by dividing the costs into fixed costs and variable costs. The results showed that the structural rigidity had a large influence on the required amount of prestressing steel for both construction methods. In other words, the smaller the cross section, the more prestressing steel was required. Incremental launching proved to require a much greater amount of post-tensioning (PT) tendons than the MSS, although the cross sections and properties were identical for both methods except for the PT. The prestressing for incremental launching is generally applied as centric prestressing during the construction stages. An intersection point was obtained in the cost analysis for the two construction methods. Incremental launching was the cheaper solution for slenderness values smaller than the intersection point, which lies at a slenderness between 17 and 18. The MSS was cheaper than incremental launching for slenderness values larger than the intersection point. / Vid dimensionering av tvärsektioner i broar är det av stor vikt att optimera geometrin avseende materialåtgång då mängden material har stor påverkan på ett projekts budget samt miljö. Eftersom konstruktioner ofta består av olika byggnadsmaterial gäller det vid optimering att välja byggnadsmaterialen genom optimerad proportionalitet. Förbifart Stockholm, beställt av Trafikverket, är ett av Sveriges största infrastrukturprojekt och värderas till 27,6 miljarder kronor enligt 2009 års prisnivå.
Infrastrukturprojektet är uppdelat i flera mindre entreprenader eller så kallade etapper. Den entreprenad som omfattar trafikplats Hjulsta Södra har blivit tilldelat till Implenia och Veidekke genom ett konsortium (Jointventure Hjulsta, JVH) och värderas till cirka 800 miljoner kronor. Den förspända betongbro som byggs i trafikplats Hjulsta ligger till grund för analysen i detta examensarbete och har använts som referens under vår studie. Syftet med examensarbete var att analysera och jämföra två de två olika produktionsmetoderna, Movable scaffolding system (MSS) och etappvis lansering med hänsyn till erforderlig mängd förspänningskablar och slankhet. Vidare, baserat på erhållna resultat, utfördes en ekonomisk analys och jämförelse mellan produktionsmetoderna. Analysen av MSS utfördes genom att modellera brokonstruktionen i mjukvaruprogrammet SOFiSTiK AG som bygger på finita elementmetoder. Konstruktionen modellerades för olika slankheter, där slankheten definieras som kvoten mellan maximala spannlängden och brons tvärsnittshöjd. Spannlängden hölls konstant medan tvärsnittshöjden varierade för att erhålla olika slankheter. Den optimala slankheten bestämdes genom iterering av mängd förspänningskablar tills spänningsvillkoren var uppfyllda enligt Eurocode. För analysen av etappvis lansering utfördes en numerisk analys vars den optimala mängden förspänningskablar utvärderades i byggskedet (construction stages) samt i slutskedet (final stage). Analysen utfördes på samma sätt för de olika slankheterna. Slutligen genomfördes en konstandsanalys för de olika metoderna. Syftet var att jämföra hur den totala kostnaden för uppförandet av brokonstruktionen skiljde sig för de olika slankheterna. Jämförelsen genomfördes genom att dela upp de olika kostnaderna i fasta kostnader samt rörliga kostnader. Resultaten från analysen visade att den erforderliga mängd förspänningskablar som behövs i en förspänd betongbro är beroende av den strukturella styvheten i tvärsektionen. En högre slankhet, alltså lägre tvärsnittshöjd, ger lägre styvhet och därav mer erforderlig förspänningskablar. Etappvis lansering visade sig vara den metod som krävde mer mängd förspänningskablar. I resultaten för kostnadsanalysen uppmättes en skärningspunkt, för en slankhet mellan 17-18, mellan de två olika metoderna. För förspända betongbroar med slankhet lägre än skärningsupunkten vid 17-18 är etappvis lansering det billigare alternativet. För slankheter högre än 17-18 är MSS det mer ekonomiskt lönsamma alternativet.
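The cost comparison described in the abstract can be summarized by a simple break-even relation; the symbols below are illustrative only and do not reproduce the thesis' cost model. With the slenderness \lambda = L_{span} / h and each method's total cost split into a fixed and a slenderness-dependent variable part,

  C_{IL}(\lambda) = F_{IL} + V_{IL}(\lambda), \qquad C_{MSS}(\lambda) = F_{MSS} + V_{MSS}(\lambda),

the reported intersection point \lambda^{*} is where C_{IL}(\lambda^{*}) = C_{MSS}(\lambda^{*}): incremental launching is the cheaper option for \lambda < \lambda^{*} (found between 17 and 18 for the reference bridge) and the MSS for \lambda > \lambda^{*}.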
|
568 |
Seismic Performance Evaluation of Industrial and Nuclear Reinforced Concrete Shear Walls: Hybrid Simulation Tests and Data-Driven Models. Akl, Ahmed, January 2024
Low-aspect-ratio reinforced concrete (RC) shear walls, characterized by height-to-length ratios of less than two, have been widely used as a seismic force-resisting system (SFRS) in a wide array of structures, ranging from conventional buildings to critical infrastructure systems such as nuclear facilities. Despite their extensive applications, recent research has brought to light the inadequate understanding of their seismic performance, primarily attributed to the intricate nonlinear flexure-shear interaction behaviour unique to these walls. In this respect, the current research dissertation aims to bridge this knowledge gap by conducting a comprehensive evaluation to quantify the seismic performance of low-aspect-ratio RC shear walls when used in different applications.
Chapter 2 focuses on low-aspect-ratio RC shear walls that are employed in residential and industrial structures. Considering their significance, the seismic response modification factors of such walls, as defined in various standards, are thoroughly examined and evaluated utilizing the FEMA P695 methodology. The analysis revealed potential deficiencies in the current code-based recommendations for response modification factors. Consequently, a novel set of response modification factors, capable of mitigating the seismic risk of collapse under the maximum considered earthquake, is proposed. Such proposed values can be integrated into the forthcoming revisions of relevant building codes and design standards.
While the FEMA P695 methodology offers a comprehensive approach to assessing building seismic performance factors, its practical implementation is associated with many challenges for practicing engineers. Specifically, the methodology heavily relies on resource-intensive and time-consuming incremental dynamic analyses, making it less feasible for routine engineering practices. To enhance its practicality, a data-driven framework is developed in Chapter 3, circumventing the need for such demanding analyses. This framework provides genetic programming-based expressions capable of producing accurate predictions of the median collapse intensities, a key metric in the acceptance criteria of the FEMA P695 methodology, for different structural systems. To demonstrate its use, the developed framework is operationalized for low-aspect-ratio RC shear walls, and the predictive expression is evaluated considering several statistical and structural parameters, which showed its adequacy in predicting the median collapse intensities of such walls. Furthermore, the adaptability of this framework is showcased, highlighting its applicability across various SFRSs.
Chapters 4 and 5 tackle the scarcity of experimental assessments pertaining to the seismic performance of low-aspect-ratio RC walls in nuclear facilities. The seismic hybrid simulation testing technique is employed herein to merge the simplicity of numerical simulations with the efficiency of experimental tests. Hybrid simulation can overcome obstacles related to physical specimen sizes, limited actuator capacities, and space constraints in most laboratories. In these two chapters, the experimental program delves into evaluating the seismic performance of three two-storey low-aspect-ratio nuclear RC walls under different earthquake levels, including operational, design, and beyond-design-level scenarios. Diverse design configurations, including the use of increased thickness boundary elements and different materials (i.e., normal- and high-strength reinforcement), are considered in such walls to provide a comprehensive understanding of several structural parameters and economic metrics. Key structural parameters, such as the force-displacement responses, multi-storey effects, lateral and rotational stiffnesses, ductility capacities, displacement components, rebar strains, crack patterns and damage sequences, are all investigated to provide direct comparisons between the walls in terms of their seismic performances. Additionally, economic metrics, including the total rebar weights, overall construction costs and the expected seismic repair costs, are considered in order to evaluate the seismic performance of the walls considering an economic perspective. The findings of this experimental investigation are expected to inform future nuclear design standards by enhancing the resilience and safety of their structures incorporating low-aspect-ratio RC shear walls. / Thesis / Doctor of Philosophy (PhD)
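The FEMA P695 acceptance check referred to in Chapters 2 and 3 is commonly stated in terms of the collapse margin ratio; the summary below follows the standard formulation and may differ in detail from the thesis:

  CMR = \hat{S}_{CT} / S_{MT}, \qquad ACMR = SSF \cdot CMR,

where \hat{S}_{CT} is the median collapse intensity obtained from incremental dynamic analysis (the quantity predicted by the genetic-programming expressions of Chapter 3), S_{MT} is the maximum-considered-earthquake spectral demand at the fundamental period, and SSF is a spectral shape factor. A trial set of response modification factors is acceptable when the adjusted ratios exceed tabulated limits (ACMR_{20\%} for individual archetypes and ACMR_{10\%} on average for a performance group) derived from the total collapse uncertainty.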
|
569 |
General dynamic Yannakakis: Conjunctive queries with theta joins under updates. Idris, Muhammad; Ugarte, Martín; Vansummeren, Stijn; Voigt, Hannes; Lehner, Wolfgang, 17 July 2023
The ability to efficiently analyze changing data is a key requirement of many real-time analytics applications. In prior work, we have proposed general dynamic Yannakakis (GDYN), a general framework for dynamically processing acyclic conjunctive queries with θ-joins in the presence of data updates. Whereas traditional approaches face a trade-off between materialization of subresults (to avoid inefficient recomputation) and recomputation of subresults (to avoid the potentially large space overhead of materialization), GDYN is able to avoid this trade-off. It intelligently maintains a succinct data structure that supports efficient maintenance under updates and from which the full query result can quickly be enumerated. In this paper, we consolidate and extend the development of GDYN. First, we give a full formal proof of GDYN's correctness and complexity. Second, we present a novel algorithm for computing GDYN query plans. Finally, we instantiate GDYN to the case where all θ-joins are inequalities and present an extended experimental comparison against state-of-the-art engines. Our approach performs consistently better than the competitor systems, with improvements of multiple orders of magnitude in both time and memory consumption.
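The following sketch is not the GDYN algorithm itself; it only illustrates, for a single inequality join R(a) joined with S(b) on a < b, the general idea of maintaining a result under insertions with small per-update work and enumerating it on demand from sorted indexes rather than materializing or recomputing the full join.

import bisect

class InequalityJoinView:
    def __init__(self):
        self.r = []          # sorted values of R.a
        self.s = []          # sorted values of S.b
        self.count = 0       # |{(a, b) : a < b}|, maintained incrementally

    def insert_r(self, a):
        # delta: new result tuples (a, b) for every b in S with a < b
        self.count += len(self.s) - bisect.bisect_right(self.s, a)
        bisect.insort(self.r, a)

    def insert_s(self, b):
        # delta: new result tuples (a, b) for every a in R with a < b
        self.count += bisect.bisect_left(self.r, b)
        bisect.insort(self.s, b)

    def enumerate(self):
        # enumerate the current result from the sorted indexes, without re-running the join
        for a in self.r:
            start = bisect.bisect_right(self.s, a)
            for b in self.s[start:]:
                yield (a, b)

view = InequalityJoinView()
for a in (1, 5, 3):
    view.insert_r(a)
for b in (2, 4):
    view.insert_s(b)
print(view.count)                 # 3 -> (1, 2), (1, 4), (3, 4)
print(sorted(view.enumerate()))   # [(1, 2), (1, 4), (3, 4)]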
|
570 |
Entwicklung des selbstregelnden Drückwalzens. Laue, Robert, 18 January 2024
Die Verbesserung der Energie- und Ressourceneffizienz stellt eine zentrale Aufgabe für die Produktionstechnik dar. Inkrementelle Verfahren wie das Drückwalzen weisen bereits aufgrund ihres Prozessprinzips ein hohes Potenzial zur Ressourceneffizienz auf. Allerdings besitzen diese Verfahren eine Vielzahl von Einflussfaktoren auf das Prozessergebnis, die zudem in Wechselwirkung zueinander stehen. Die Folge schwankender Prozesseinflussgrößen (z. B. Chargenschwankungen oder variierende Halbzeuggeometrie) ist häufig Bauteilausschuss, der sich aufgrund der meist kleinen bis mittleren Losgrößen stärker auf die Produktivität auswirkt.
Die Weiterentwicklung von gesteuerten zu selbstgeregelten Umformprozessen mit Prozessrückkopplung bietet ein großes Potential zur Verbesserung der Ressourceneffizienz. Im Rahmen dieser Arbeit werden die Grundlagen und Vorgehensweise zur Realisierung des selbstregelnden Drückwalzens erarbeitet. Nach der Analyse und Bewertung von Störungen auf die Prozesseinflussgrößen erfolgt die Definition eines Referenzzustandes und von Störszenarien. Auf Basis experimenteller Untersuchungen wird der Referenzzustand analysiert und ein digitaler Zwilling des Drückwalzprozesses entwickelt. Mit dessen Hilfe erfolgt die Bewertung der Störszenarien. Anschließend wird ein methodisches Vorgehen vorgestellt, mit dem das selbstregelnde Drückwalzen beliebiger Zielgrößen entwickelt werden kann. Im digitalen Zwilling werden zusätzlich ein virtueller Sensor, der Regelalgorithmus und die Aktordynamik integriert und damit die Selbstregelung für eine Prozessgröße und ein Prozessergebnis ausgelegt und untersucht. Mit den gewonnenen Erkenntnissen wurde das selbstregelnde Drückwalzen erstmals erfolgreich experimentell umgesetzt. Die in der Arbeit vorgestellten Ergebnisse zeigen eine signifikante Reduzierung des Einflusses von Prozessstörungen auf das Prozessergebnis durch die Selbstregelung. / Improving energy and resource efficiency is also a key challenge for production technology. Incremental processes such as flow-forming already have a high potential for resource efficiency due to their process principle. Flow-forming has a large number of influencing process parameters that also interact with each other. Fluctuating process parameters (e.g. batch fluctuations or varying semi-finished product geometry) can result in component scrap, which has a major influence on productivity due to the mostly small to medium batch sizes. The further development of controlled to self-controlled forming processes with process feed-back offers great potential for improving resource efficiency. In this thesis, the basics and the procedure for the realization of self-controlled flow-forming are developed. After the analysis and evaluation of disturbances on the process influencing variables, a reference state and disturbance scenarios are defined. The reference state is analyzed on the basis of experimental investigations and a digital twin of the flow-forming process is developed. This is used to evaluate the disturbance scenarios. Subsequently, a methodical procedure is presented to develop self-controlled flow-forming of any process parameter or process result. A virtual sensor, the control algorithm and the actuator dynamics are also integrated into the digital twin to design and investigate the self-control for a process parameter and a process result. Based on the knowledge gained, self-controlled flow-forming was successfully implemented experimentally for the first
time. The results show a significant reduction of the influence of process disturbances on the process results.
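A schematic sketch of the closed-loop principle described above, with hypothetical names, hand-picked gains, and a deliberately trivial process model rather than the thesis' digital twin or controller: a virtual sensor feeds a PI controller that adjusts an actuator set-point so that the process result stays at its target despite a disturbance.

def virtual_sensor(wall_thickness):
    # stand-in for a model-based estimate of the controlled process result
    return wall_thickness

def simulate(target=2.0, steps=120, kp=0.4, ki=0.1):
    feed = 1.0          # actuator set-point (e.g. axial feed), arbitrary units
    thickness = 2.3     # current wall thickness, arbitrary units
    integral = 0.0
    for step in range(steps):
        disturbance = 0.15 if step >= 60 else 0.0   # e.g. a batch-related shift
        # deliberately trivial plant model: the thickness drifts toward a value
        # determined by the feed set-point and the disturbance
        thickness += 0.5 * ((2.5 - 0.5 * feed + disturbance) - thickness)
        error = virtual_sensor(thickness) - target   # feedback from the virtual sensor
        integral += error
        feed += kp * error + ki * integral           # PI update of the set-point
    return thickness

print(round(simulate(), 2))   # close to the 2.0 target in spite of the disturbance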
|