1

Automating security processing of Integration flows: Automating input processing for Attack Simulations using Meta Attack Language and Common Vulnerability and Exposures

Henriksson, Erik, Engberg, Klas January 2022
In our ever-evolving society, security becomes increasingly important as more of our lives move online. Performing security analysis of IT systems is a cumbersome process requiring extensive domain knowledge and tailored analysis per system, and research shows that manual tasks are error-prone. In this thesis we have implemented an automation of the security analysis of integration flows, building on an earlier project between KTH and SAP. To perform the analysis, Common Vulnerabilities and Exposures (CVE) records containing information about vulnerabilities are connected to the relevant parts of the system using the Meta Attack Language (MAL). The vulnerabilities are weighted according to their impact, and attack simulations are then performed in securiCAD. Automating the input for the attack simulations removes a previously manual task. Utilizing coreLang, a generally applicable implementation of MAL, means that the automated process can be used to analyze integration flows in general. Domain knowledge is still needed to configure the automated process. Future work can continue automating further tasks in the process, and more can also be done on visualizing security analysis to make the results accessible to a general audience.
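
The CVE-to-model mapping described above can be pictured with a minimal Python sketch. Everything named here is a simplification of ours: `CveRecord`, `cvss_to_weight`, and the dictionary-based asset model are hypothetical stand-ins, not the thesis code and not the coreLang or securiCAD APIs; the sketch only shows the idea of weighting vulnerabilities by impact and attaching them to the matching model asset.

```python
from dataclasses import dataclass

@dataclass
class CveRecord:
    cve_id: str
    cvss_score: float        # CVSS base score, 0.0-10.0, used as an impact proxy
    affected_component: str  # which asset of the integration flow it hits

def cvss_to_weight(score: float) -> float:
    """Normalize a CVSS base score to a [0, 1] weight for attack steps."""
    return max(0.0, min(score / 10.0, 1.0))

def attach_vulnerabilities(model_assets: dict, cves: list[CveRecord]) -> dict:
    """Attach each CVE, as a weighted entry, to the matching asset in a
    coreLang-style model (here just a dict from asset name to entries)."""
    for cve in cves:
        if cve.affected_component in model_assets:
            model_assets[cve.affected_component].append(
                {"id": cve.cve_id, "weight": cvss_to_weight(cve.cvss_score)}
            )
    return model_assets

# Hypothetical example: two assets of an integration flow, one matching CVE.
assets = {"MessageBroker": [], "Mapper": []}
records = [CveRecord("CVE-2022-0001", 7.5, "MessageBroker")]
print(attach_vulnerabilities(assets, records))
```

In a real pipeline the weighted entries would be translated into attack-step parameters of the generated model before the simulation runs; that translation is tool-specific and omitted here.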
2

On-demand re-optimization of integration flows

Böhm, Matthias, Habich, Dirk, Lehner, Wolfgang 04 July 2023
Integration flows are used to propagate data between heterogeneous operational systems or to consolidate data into data warehouse infrastructures. To meet the increasing need for up-to-date information, many messages are exchanged over time. The efficiency of these integration flows is therefore crucial for handling the high load of messages and reducing message latency. State-of-the-art strategies to address this performance bottleneck are based on incremental statistics maintenance and periodic cost-based re-optimization. This also achieves adaptation to unknown statistics and changing workload characteristics, which is important since integration flows are deployed for long time horizons. However, the major drawbacks of periodic re-optimization are the many unnecessary re-optimization steps and the optimization opportunities missed due to adaptation delays. In this paper, we therefore propose the novel concept of on-demand re-optimization. We exploit optimality conditions from the optimizer in order to (1) monitor the optimality of the current plan, and (2) trigger directed re-optimization only if necessary. Furthermore, we introduce the PlanOptimalityTree as a compact representation of optimality conditions that enables efficient monitoring and exploitation of these conditions. As a result, and in contrast to existing work, re-optimization is triggered immediately, but only if a new plan is certain to be found. Our experiments show that we achieve near-optimal re-optimization overhead and fast workload adaptation.
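
A minimal sketch of the monitor-and-trigger contract follows, with hypothetical names (`OptimalityCondition`, `on_new_statistics`); the PlanOptimalityTree itself is flattened to a plain list of conditions here, so this shows the triggering idea only, not the paper's compact data structure.

```python
import operator

class OptimalityCondition:
    """One condition under which the current plan remains optimal,
    e.g. the selectivity of an operator staying below a bound."""
    def __init__(self, statistic: str, op, bound: float):
        self.statistic, self.op, self.bound = statistic, op, bound

    def holds(self, stats: dict) -> bool:
        return self.op(stats[self.statistic], self.bound)

def on_new_statistics(conditions, stats, reoptimize):
    """Trigger directed re-optimization only when a condition is violated;
    otherwise the current plan is provably still optimal."""
    violated = [c for c in conditions if not c.holds(stats)]
    if violated:
        return reoptimize(violated)  # a better plan is certain to exist
    return None

# Example: the plan stays optimal while sel_op3 <= 0.2; the incrementally
# maintained statistic has drifted to 0.35, so re-optimization fires.
conditions = [OptimalityCondition("sel_op3", operator.le, 0.2)]
stats = {"sel_op3": 0.35}
on_new_statistics(conditions, stats,
                  lambda v: print("re-optimizing due to:", [c.statistic for c in v]))
```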
3

Cost-Based Optimization of Integration Flows

Böhm, Matthias 15 March 2011
Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas, such as real-time ETL and data synchronization between operational systems. Because of increasing data volumes, highly distributed IT infrastructures, and high requirements for data consistency and up-to-date query results, many instances of integration flows are executed over time. Due to this high load and to blocking synchronous source systems, the performance of the central integration platform is crucial for an IT infrastructure. To tackle these high performance requirements, we introduce the concept of cost-based optimization of imperative integration flows, which relies on incremental statistics maintenance and inter-instance plan re-optimization. As a foundation, we introduce the concept of periodical re-optimization, including novel cost-based optimization techniques that are tailor-made for integration flows. Furthermore, we refine periodical re-optimization to on-demand re-optimization in order to overcome the problems of many unnecessary re-optimization steps and of adaptation delays during which we miss optimization opportunities. This approach ensures low optimization overhead and fast workload adaptation.
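
To make the interplay between incremental statistics maintenance and periodical re-optimization concrete, here is a hedged Python sketch; `FlowOptimizer`, the exponential smoothing, and the `optimize` callback are illustrative assumptions, not the thesis's implementation.

```python
import time

class FlowOptimizer:
    """Maintains per-operator statistics incrementally across plan
    instances and re-optimizes the plan on a fixed period."""

    def __init__(self, plan, interval_s: float):
        self.plan = plan
        self.interval_s = interval_s
        self.stats = {}                  # operator -> smoothed cost estimate
        self.last_opt = time.monotonic()

    def record(self, op: str, exec_time: float, alpha: float = 0.2):
        # Incremental maintenance: exponentially smoothed cost per operator,
        # updated after every executed plan instance.
        prev = self.stats.get(op, exec_time)
        self.stats[op] = alpha * exec_time + (1 - alpha) * prev

    def maybe_reoptimize(self, optimize):
        # Periodical (inter-instance) re-optimization: at most once per interval.
        now = time.monotonic()
        if now - self.last_opt >= self.interval_s:
            self.plan = optimize(self.plan, self.stats)
            self.last_opt = now

# Illustration with a zero interval and an identity "optimizer".
opt = FlowOptimizer(plan=["receive", "join", "send"], interval_s=0.0)
opt.record("join", exec_time=0.004)
opt.maybe_reoptimize(lambda plan, stats: plan)
```

The on-demand refinement mentioned in the abstract replaces the fixed interval with optimality conditions, as sketched for the previous record.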
4

Electrical characterization and technological optimization of Conductive Bridge RAM (CBRAM) devices to improve performance, speed and reliability

Barci, Marinela 06 April 2016
Flash technology is approaching its scaling limits, so the demand for novel memory technologies is increasing. Promising replacement candidates are emerging non-volatile technologies such as Conductive Bridge RAM (CBRAM), Oxide-based Resistive RAM (OXRAM), Magnetic Random Access Memory (MRAM) and Phase Change Memory (PCRAM). In particular, CBRAM is based on a simple Metal-Insulator-Metal (MIM) structure and presents several advantages compared to the other technologies. CBRAM is non-volatile, i.e. it keeps the information when the power is off; it is scalable down to the 10 nm technology node; it can easily be integrated into the Back-End-of-Line (BEOL); and it has high operation speed at low voltages and a low cost per bit. Nevertheless, the demands for the industrialization of CBRAM are very stringent, and issues related to device reliability still have to be faced.
In this thesis we analyze two generations of CBRAM technology, each one addressing a specific application market. The first part of the PhD is dedicated to the electrical study of Cu-based/GdOx structures, which present the advantages of very stable data retention, resistance to solder reflow, and good endurance behavior. This CBRAM family mainly addresses high-temperature applications such as automotive. To fulfill the specification requirements, a doped metal oxide as well as bilayers are integrated to decrease the forming voltage and increase the programming window; better endurance performance is also achieved. The second part is dedicated to a new CBRAM technology with a simple MIM structure. In this case, the device shows a fast operation speed of 20 ns at low voltages of 2 V, combined with satisfying endurance and data retention. This technology appears compatible with the growing Internet of Things (IoT) market. In summary, the main objective of this PhD research was to study the reliability of embedded CBRAM devices in terms of forming, endurance and data retention. Test methodologies were developed and the electrical set-up was modified and adapted to specific measurements. Physical models were developed to explain and better fit the experimental results. Based on the obtained results, we demonstrate that CBRAM technology is highly promising for future non-volatile memory (NVM) applications.
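
The endurance methodology mentioned above can be pictured with a toy sketch. Everything here is hypothetical: `apply_pulse` and `read_resistance` stand in for real pulse-generator and source-measure-unit control, and the wear-out model is invented for illustration; only the 2 V / 20 ns operating point is taken from the abstract.

```python
import random

def apply_pulse(cell: dict, voltage_v: float) -> None:
    # Positive pulse SETs the cell (low-resistance state), negative RESETs it.
    cell["state"] = "LRS" if voltage_v > 0 else "HRS"

def read_resistance(cell: dict) -> float:
    if cell["state"] == "LRS":
        base = 5e3
    else:
        # Toy wear-out model: the high-resistance state drifts down with cycling.
        base = 1e6 * max(0.01, 1.0 - cell["cycles"] / 50_000)
    return base * random.uniform(0.9, 1.1)  # read noise

def endurance_test(cell: dict, max_cycles: int, window_min: float = 10.0) -> int:
    """SET/RESET the cell repeatedly and report the cycle count at which the
    resistance window (HRS/LRS ratio) falls below `window_min`."""
    for n in range(1, max_cycles + 1):
        cell["cycles"] = n
        apply_pulse(cell, +2.0)   # SET pulse at the 2 V / 20 ns operating point
        r_lrs = read_resistance(cell)
        apply_pulse(cell, -2.0)   # RESET pulse
        r_hrs = read_resistance(cell)
        if r_hrs / r_lrs < window_min:
            return n              # endurance limit reached
    return max_cycles

print(endurance_test({"state": "HRS", "cycles": 0}, max_cycles=100_000))
```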
5

Multi-flow Optimization via Horizontal Message Queue Partitioning

Boehm, Matthias, Habich, Dirk, Lehner, Wolfgang 19 January 2023
Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas, such as near-real-time ETL and data synchronization between operational systems. Because of increasing data volumes, highly distributed IT infrastructures, and high requirements for the up-to-dateness of analytical query results and for data consistency, many instances of integration flows are executed over time. Due to this high load, the performance of the central integration platform is crucial for an IT infrastructure. With the aim of throughput maximization, we propose the concept of multi-flow optimization (MFO). In this approach, messages are collected during a waiting time and executed in batches to optimize sequences of plan instances of a single integration flow. We introduce a horizontal (value-based) partitioning approach for message batch creation and show how to compute the optimal waiting time. This approach significantly reduces the total execution time of a message sequence and hence maximizes throughput, while accepting moderate message latency.
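
To make the batching idea concrete, here is a minimal sketch of horizontal (value-based) partition formation during a waiting window; `collect_batches` and the region key are illustrative assumptions, and the paper's computation of the optimal waiting time is not reproduced.

```python
from collections import defaultdict
import time

def collect_batches(queue: list, waiting_time_s: float, partition_key):
    """Collect messages for up to `waiting_time_s`, partitioned horizontally
    by an attribute value, and return one batch per partition. A real
    implementation would block for new arrivals until the deadline."""
    partitions = defaultdict(list)
    deadline = time.monotonic() + waiting_time_s
    while time.monotonic() < deadline and queue:
        msg = queue.pop(0)
        partitions[partition_key(msg)].append(msg)
    return partitions

# Example: partition by region so each batch touches one target system
# per plan instance instead of interleaving them message by message.
inbox = [{"region": "EU", "id": 1}, {"region": "US", "id": 2}, {"region": "EU", "id": 3}]
batches = collect_batches(inbox, waiting_time_s=0.05, partition_key=lambda m: m["region"])
for key, msgs in batches.items():
    print(key, [m["id"] for m in msgs])
```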
