1 |
On Optimizing Transactional Memory: Transaction Splitting, Scheduling, Fine-grained Fallback, and NUMA Optimization
Mohamedin, Mohamed Ahmed Mahmoud, 01 September 2015 (has links)
The industrial shift from single-core to multi-core processors introduced many challenges. Among them, a program no longer gets a free performance boost just by upgrading to new hardware, because new chips include more processing units at the same (or a comparable) clock speed as the previous generation. To effectively exploit the new hardware and thus gain performance, a program must maximize parallelism. Unfortunately, parallel programming poses several challenges, especially when synchronization is involved, because parallel threads need to access the same shared data. Locks are the standard synchronization mechanism, but gaining performance with locks is difficult for non-expert programmers without deep knowledge of the application logic. A new, easier synchronization abstraction is therefore required, and Transactional Memory (TM) is a concrete candidate.
TM is a new programming paradigm that simplifies the implementation of synchronization. The programmer just defines atomic sections of the code, and the underlying TM system handles the required synchronization optimistically. In the past decade, TM researchers have worked extensively to improve TM-based systems. Most of the work has been dedicated to Software TM (STM), as it does not require special transactional hardware support. Very recently (in the past two years), such hardware support has become commercially available in commodity processors, so a large number of customers can finally take advantage of it. Hardware TM (HTM) has the potential to deliver the best performance of any TM-based system, but current HTM implementations are best-effort, so transactions are not guaranteed to commit in all cases. In fact, HTM transactions are limited in size and time, and prone to livelock at high contention levels.
Another challenge posed by current multi-core hardware platforms is the internal architecture used for interfacing with main memory. Specifically, when the common computer deployment changed from a single processor to multiple multi-core processors, architects also redesigned the hardware subsystem that manages memory access. It moved from Uniform Memory Access (UMA), where the latency of fetching a memory location is the same regardless of the core on which the thread executes, to the current Non-Uniform Memory Access (NUMA), where that latency differs according to the core used and the memory socket accessed. This switch in technology has implications for the performance of concurrent applications: building blocks commonly used for designing concurrent algorithms under UMA assumptions (e.g., relying on centralized metadata) may not provide the same high performance and scalability when deployed on NUMA-based architectures.
In this dissertation, we tackle the performance and scalability challenges of multi-core architectures by providing three solutions for increasing performance using HTM (i.e., Part-HTM, Octonauts, and Precise-TM), and one solution for the scalability issues posed by NUMA architectures (i.e., Nemo).
• Part-HTM is the first hybrid transactional memory protocol that solves the problem of transactions aborted due to the resource limitations (space/time) of current best-effort HTM. The basic idea of Part-HTM is to partition such transactions into multiple sub-transactions, each of which can likely be committed in hardware. Because HTM is eager by nature, we designed a low-overhead software framework to preserve transaction correctness (with and without opacity) and isolation. Part-HTM is efficient: our evaluation study confirms that its performance is the best in all tested cases, except for those where HTM cannot be outperformed. Even in such workloads, Part-HTM still performs better than all other software and hybrid competitors.
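The splitting idea can be sketched as follows. This is a minimal illustration, not the actual Part-HTM protocol: the class and method names are invented, a lock stands in for the hardware commit of each sub-transaction, and a software undo log provides whole-transaction rollback.

```python
import threading

class PartitionedTx:
    """Toy sketch of Part-HTM-style transaction splitting.

    A transaction too large for best-effort HTM is split into
    sub-transactions that each fit the hardware capacity. Each
    sub-transaction commits under a lock (standing in for HTM),
    while a software undo log lets the whole chain roll back.
    """
    def __init__(self, memory):
        self.memory = memory          # shared store: address -> value
        self.undo_log = []            # (address, old_value) pairs
        self.lock = threading.Lock()  # stands in for the HTM commit

    def run_subtx(self, writes):
        # Each sub-transaction is small enough to commit "in hardware".
        with self.lock:
            for addr, val in writes.items():
                self.undo_log.append((addr, self.memory.get(addr)))
                self.memory[addr] = val

    def abort(self):
        # Roll back all committed sub-transactions via the undo log.
        with self.lock:
            for addr, old in reversed(self.undo_log):
                if old is None:       # address did not exist before
                    self.memory.pop(addr, None)
                else:
                    self.memory[addr] = old
            self.undo_log.clear()
```

In the real protocol the sub-transactions commit in hardware and the software framework additionally isolates their intermediate state from concurrent transactions; the sketch only shows the split-plus-undo structure.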
• Octonauts tackles the livelock problem of HTM at high contention levels, since HTM lacks advanced contention management (CM) policies. Octonauts is an HTM-aware scheduler that orchestrates conflicting transactions. It uses a priori knowledge of transactions' working sets to prevent conflicting transactions from being activated simultaneously. Octonauts also accommodates both HTM and STM with minimal overhead by exploiting adaptivity: based on a transaction's size, duration, and irrevocable calls (e.g., system calls), it selects the best path among HTM, STM, or global locking. Results show a performance improvement of up to 60% when Octonauts is deployed, compared with pure HTM falling back to global locking.
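A working-set-based admission policy of this kind can be sketched as follows. This is a simplified illustration with invented names, not the actual Octonauts scheduler: a transaction declares its working set up front and is admitted only when no in-flight transaction overlaps it.

```python
import threading

class WorkingSetScheduler:
    """Toy sketch of an HTM-aware, working-set-based scheduler.

    Conflicting transactions (overlapping working sets) are never
    admitted simultaneously, which removes the conflicts that would
    otherwise cause repeated HTM aborts and livelock.
    """
    def __init__(self):
        self.in_use = set()                    # items of running txns
        self.cond = threading.Condition()

    def admit(self, working_set):
        with self.cond:
            while self.in_use & working_set:   # conflict: wait our turn
                self.cond.wait()
            self.in_use |= working_set         # reserve the working set

    def release(self, working_set):
        with self.cond:
            self.in_use -= working_set
            self.cond.notify_all()             # wake waiting transactions
```

In a full system the scheduler would additionally pick among HTM, STM, or global locking per transaction; the sketch shows only the conflict-avoidance core.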
• Precise-TM is a unique approach that addresses the granularity of the software fallback path of best-effort HTM. It provides an efficient and precise technique for HTM-STM communication such that HTM transactions are not interfered with by concurrent STM transactions. In addition, the added overhead is marginal in terms of both space and execution time. Precise-TM uses address-embedded locks (pointer bit-stealing) for precise communication between STM and HTM. Results show that this precise fine-grained locking pays off, as it allows more concurrency between hardware and software transactions. Specifically, it gains up to 5x over the default HTM implementation with a single global lock as the fallback path.
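Pointer bit-stealing relies on the fact that word-aligned addresses always have zero low-order bits, so one of those bits can encode a per-address lock flag with no extra space. A minimal sketch, with plain integers standing in for pointers (not the Precise-TM implementation itself):

```python
LOCK_BIT = 0x1  # lowest bit of a word-aligned address is always 0,
                # so it can be "stolen" to flag a software lock

def lock_address(addr):
    """Mark the location as locked by a software (STM) transaction."""
    return addr | LOCK_BIT

def unlock_address(addr):
    """Clear the stolen bit, recovering the original aligned address."""
    return addr & ~LOCK_BIT

def is_locked(addr):
    """A hardware transaction reading this word sees the lock precisely."""
    return bool(addr & LOCK_BIT)
```

Because the flag lives inside the pointer word itself, an HTM transaction that reads the location observes exactly the locks it conflicts with, rather than aborting on a single global lock.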
• Nemo is a new STM algorithm that ensures high and scalable performance for application workloads with data locality. Existing STM algorithms rely on centralized shared metadata (e.g., a global timestamp) to synchronize concurrent accesses, but in such workloads this scheme may hamper scalability, given the high latency NUMA architectures introduce for updating centralized metadata. Nemo overcomes these limitations by requiring inter-socket communication only from transactions that actually conflict with each other. As a result, two non-conflicting transactions never interact through any metadata. This policy does not apply to application threads running in the same socket: they are allowed to share metadata even when executing non-conflicting operations because, as our evaluation study shows, the local processing inside one socket does not interfere with the work done by parallel threads executing on other sockets. Nemo's evaluation study shows improvement over state-of-the-art TM algorithms by as much as 65%. / Ph. D.
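The socket-local metadata idea can be illustrated with a toy model (invented names, not Nemo's actual design): each socket keeps its own commit counter, which is cheap to update, and inter-socket traffic arises only when a committing transaction writes an object last written from a different socket.

```python
class SocketLocalMeta:
    """Toy model of NUMA-aware STM metadata.

    Instead of one global timestamp, each socket keeps a local
    counter; expensive cross-socket synchronization is counted
    only on true conflicts (writes to an object owned elsewhere).
    """
    def __init__(self, sockets):
        self.clock = {s: 0 for s in range(sockets)}  # per-socket clocks
        self.owner = {}  # object -> socket that last wrote it

    def commit(self, socket_id, write_set):
        remote_syncs = 0
        for obj in write_set:
            prev = self.owner.get(obj)
            if prev is not None and prev != socket_id:
                remote_syncs += 1      # real conflict: inter-socket traffic
            self.owner[obj] = socket_id
        self.clock[socket_id] += 1     # socket-local, cheap update
        return remote_syncs
```

The point of the model: transactions on different sockets with disjoint write sets touch disjoint metadata, so they commit without any cross-socket communication.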
|
2 |
Anomaly detection in user behavior of websites using Hierarchical Temporal Memories : Using Machine Learning to detect unusual behavior from users of a web service to quickly detect possible security hazards.
Berger, Victor, January 2017 (has links)
This Master's Thesis focuses on the recent Cortical Learning Algorithm (CLA), designed for temporal anomaly detection. It is here applied to the problem of anomaly detection in user behavior of web services, which is getting more and more important in a network security context. CLA is here compared to more traditional state-of-the-art algorithms of anomaly detection: Hidden Markov Models (HMMs) and t-stide (an N-gram-based anomaly detector), which are among the few algorithms compatible with the online processing constraint of this problem. It is observed that on the synthetic dataset used for this comparison, CLA performs significantly better than the other two algorithms in terms of precision of the detection. The two other algorithms don't seem to be able to handle this task at all. It appears that this anomaly detection problem (outlier detection in short sequences over a large alphabet) is considerably different from what has been extensively studied up to now.
|
3 |
Anomaly Detection for Portfolio Risk Management : An evaluation of econometric and machine learning based approaches to detecting anomalous behaviour in portfolio risk measures / Avvikelsedetektering för Riskhantering av Portföljer : En utvärdering utav ekonometriska och maskininlärningsbaserade tillvägagångssätt för att detektera avvikande beteende hos portföljriskmått
Westerlind, Simon, January 2018 (has links)
Financial institutions manage numerous portfolios whose risk must be managed continuously, and the large amounts of data that have to be processed render this a considerable effort. As such, a system that autonomously detects anomalies in the risk measures of financial portfolios would be of great value. To this end, two econometric models, ARMA-GARCH and EWMA, and two machine learning based algorithms, LSTM and HTM, were evaluated for the task of performing unsupervised anomaly detection on streaming time series of portfolio risk measures. Three datasets of returns and Value-at-Risk series were synthesized, and one dataset of real-world Value-at-Risk series had labels handcrafted for the experiments in this thesis. The results revealed that the LSTM has great potential in this domain, due to its ability to adapt to different types of time series and its effectiveness at finding a wide range of anomalies. The EWMA had the benefit of being faster and more interpretable, but lacked the ability to capture anomalous trends. The ARMA-GARCH was found to have difficulties in finding a good fit to the time series of risk measures, resulting in poor performance, and the HTM was outperformed by the other algorithms in every regard, due to an inability to learn the autoregressive behaviour of the time series. / Finansiella institutioner hanterar otaliga portföljer vars risk måste hanteras kontinuerligt, och den stora mängden data som måste processeras gör detta till en omfattande uppgift. Därför skulle ett system som autonomt kan upptäcka avvikelser i de finansiella portföljernas riskmått vara av stort värde. I detta syfte undersöks två ekonometriska modeller, ARMA-GARCH och EWMA, samt två maskininlärningsmodeller, LSTM och HTM, för ändamålet att kunna utföra så kallad oövervakad avvikelsedetektering på strömmande tidsseriedata av portföljriskmått.
Tre dataset syntetiserades med avkastningar och Value-at-Risk-serier, och ett dataset med verkliga Value-at-Risk-serier fick handgjorda etiketter till experimenten i denna avhandling. Resultaten visade att LSTM har stor potential i denna domän, tack vare sin förmåga att anpassa sig till olika typer av tidsserier och att effektivt finna varierade sorters anomalier. Däremot hade EWMA fördelen av att vara den snabbaste och enklaste att tolka, men den saknade förmågan att finna avvikande trender. ARMA-GARCH hade svårigheter med att modellera tidsserier utav riskmått, vilket resulterade i att den presterade dåligt. HTM blev utpresterad utav de andra algoritmerna i samtliga hänseenden, på grund utav dess oförmåga att lära sig tidsseriernas autoregressiva beteende.
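A minimal EWMA-style detector of the kind evaluated above can be sketched as follows. The mean and variance updates are the standard exponentially weighted recursions; the flagging rule and the parameter values (alpha, k) are illustrative assumptions, not the thesis configuration.

```python
def ewma_detector(series, alpha=0.3, k=3.0):
    """Flag points that deviate more than k EWMA standard deviations.

    Returns a list of booleans, one per input point. The small
    epsilon makes a jump from a perfectly flat history count as
    an anomaly even while the tracked variance is still zero.
    """
    mean, var = series[0], 0.0
    flags = [False]                      # first point has no history
    for x in series[1:]:
        std = var ** 0.5
        flags.append(abs(x - mean) > k * std + 1e-6)
        diff = x - mean
        # exponentially weighted updates of mean and variance
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)
    return flags
```

The appeal noted in the thesis is visible here: the detector is a constant-time streaming update and every flag is directly explainable as "k sigma from the running average", though a slow anomalous trend is simply absorbed into the average.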
|
4 |
Anomaly Detection in Microservice Infrastructures / Anomalitetsdetektering i microservice-infrastrukturer
Ohlsson, Jonathan, January 2018 (links)
Anomaly detection in time series is a broad field with many application areas, and has been researched for many years. In recent years the need for monitoring and DevOps has increased, partly due to the increased usage of microservice infrastructures. Applying time series anomaly detection to the metrics emitted by these microservices can yield new insights into system health and could enable detecting anomalous conditions before they escalate into a full incident. This thesis investigates how two proposed anomaly detectors, one based on the RPCA algorithm and the other on the HTM neural network, perform on metrics emitted by a microservice infrastructure, with the goal of enhancing infrastructure monitoring. The detectors are evaluated against a random sample of metrics from a digital rights management company’s microservice infrastructure, as well as the open source NAB dataset. It is illustrated that both algorithms are able to detect every known incident in the company metrics tested. Their ability to detect anomalies is shown to depend on the defined threshold value for what qualifies as an outlier. The RPCA detector proved to be better at detecting anomalies in the company microservice metrics, whereas the HTM detector performed better on the NAB dataset. The findings also highlight the difficulty of manually annotating anomalies even with domain knowledge, an issue that held both for the dataset created for this project and for the NAB dataset. The thesis concludes that the proposed detectors possess different abilities, each with its respective trade-offs. Although they are similar in detection accuracy and false positive rates, they differ in inherent abilities such as continuous monitoring, and in ease of deployment in an existing monitoring setup. / Anomalitetsdetektering i tidsserier är ett brett område med många användningsområden och har undersökts under många år.
De senaste åren har behovet av övervakning och DevOps ökat, delvis på grund av ökad användning av microservice-infrastrukturer. Att tillämpa tidsserieanomalitetsdetektering på de mätvärden som emitteras av dessa microservices kan ge nya insikter i systemhälsan och kan möjliggöra detektering av avvikande förhållanden innan de eskaleras till en fullständig incident. Denna avhandling undersöker hur två föreslagna anomalitetsdetektorer, en baserad på RPCA-algoritmen och den andra på HTM neurala nätverk, presterar på mätvärden som emitteras av en microservice-infrastruktur, med målet att förbättra infrastrukturövervakningen. Detektorerna utvärderas mot ett slumpmässigt urval av mätvärden från en microservice-infrastruktur på en digital underhållningstjänst, och från det öppet tillgängliga NAB-datasetet. Det illustreras att båda algoritmerna kunde upptäcka alla kända incidenter i de testade underhållningstjänst-mätvärdena. Deras förmåga att upptäcka avvikelser visar sig vara beroende av det definierade tröskelvärdet för vad som kvalificeras som en anomali. RPCA-detektorn visade sig bättre på att upptäcka anomalier i underhållningstjänstens mätvärden, men HTM-detektorn presterade bättre på NAB-datasetet. Fynden markerar också svårigheten med att manuellt annotera avvikelser, även med domänkunskaper, ett problem som visat sig gälla både datasetet skapat för detta projekt och NAB-datasetet. Avhandlingen drar slutsatsen att de föreslagna detektorerna har olika förmågor, vilka båda har sina respektive avvägningar. De har liknande detekteringsnoggrannhet, men olika inneboende förmågor att utföra uppgifter som kontinuerlig övervakning, samt olika grad av enkelhet att installera i en befintlig övervakningsinstallation.
|
5 |
Black liquor gasification : experimental stability studies of smelt components and refractory lining
Råberg, Mathias, January 2007 (has links)
Black liquors are presently combusted in recovery boilers, where the inorganic cooking chemicals are recovered and the energy in the organic material is converted to steam and electricity. A new technology, developed by Chemrec AB, is black liquor gasification (BLG). BLG has more to offer than the recovery boiler process in terms of on-site generation of electric power, liquid fuel, and process chemicals. A prerequisite for both the optimization of existing processes and the commercialization of BLG is a better understanding of the physical and chemical processes involved, including interactions with the refractory lining. The chemistry of the BLG process is very complex, and to avoid the extensive, expensive, and time-consuming studies otherwise required, accurate and reliable model descriptions are needed for a full understanding of most chemical and physical processes as well as for up-scaling of the new BLG processes. When using such calculated model results in practice, however, the errors in state-of-the-art thermochemical data have to be considered. An extensive literature review was therefore performed to update the data needed for unary, binary, and higher-order systems. The results of the review revealed a significant range of uncertainty for several condensed phases and a few gas species. This motivated experimental re-determinations of the binary phase diagrams sodium carbonate-sodium sulfide (Na2CO3-Na2S) and sodium sulfate-sodium sulfide (Na2SO4-Na2S) using High Temperature Microscopy (HTM), High Temperature X-ray Diffraction (HT-XRD), and Differential Thermal Analysis (DTA). For the Na2CO3-Na2S system, measurements were carried out in a dry inert atmosphere at temperatures from 25 to 1200 °C. To examine the influence of a pure CO2 atmosphere on the melting behavior, HTM experiments in the same temperature interval were made.
The results include re-determination of the liquidus curves in the Na2CO3-rich area, melting points of the pure components, and determination of the extent of the solid solution, Na2CO3(ss), area. The thermal stability of Na2SO3 was studied and the binary phase diagram Na2SO4-Na2S was re-determined. The results indicate that Na2SO3 can exist for a short time up to 750 °C before it melts. It was also shown that a solid/solid transformation, not reported earlier, occurs at 675 ± 10 °C. At around 700 °C, Na2SO3 gradually breaks down within a few hours to finally form the solid phases Na2SO4 and Na2S. From HTM measurements, a metastable phase diagram including Na2SO3, as well as an equilibrium phase diagram, have been constructed for the binary system Na2SO4-Na2S. Improved data on Na2S were obtained experimentally using solid-state EMF measurements. The equilibrium constant for Na2S(s) was determined to be log Kf(Na2S(s)) (± 0.05) = 216.28 – 4750(T/K)^–1 – 28.28878 ln (T/K). The Gibbs energy of formation of Na2S(s) was obtained as ΔfG°(Na2S(s))/(kJ mol^–1) (± 1.0) = 90.9 – 4.1407(T/K) + 0.5415849(T/K) ln (T/K). The standard enthalpy of formation of Na2S(s) was evaluated to be ΔfH°(Na2S(s), 298.15 K)/(kJ mol^–1) (± 1.0) = –369.0. The standard entropy was evaluated to be S°(Na2S(s), 298.15 K)/(J mol^–1 K^–1) (± 2.0) = 97.0. Analyses of used refractory material from the Chemrec gasifier were also performed in order to elucidate the stability of the refractory lining. Scanning electron microscopy (SEM) analysis revealed that the chemical attack was limited to 250-300 μm of the surface directly exposed to the gasification atmosphere and the smelt. XRD analysis showed that the phases in this surface layer of the refractory were dominated by sodium aluminosilicates, mainly Na1.55Al1.55Si0.45O4.
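The fitted expressions for log Kf(Na2S(s)) and ΔfG°(Na2S(s)) are not independent: they should be related through ΔG° = −RT ln(10) log10(Kf). A quick numeric sketch, using only the coefficients quoted above, confirms that the two fits agree to well within the stated ± 1.0 kJ mol^–1:

```python
import math

R = 8.3145e-3  # gas constant, kJ mol^-1 K^-1

def log_kf(T):
    # Fitted equilibrium constant for Na2S(s), log10 scale
    return 216.28 - 4750.0 / T - 28.28878 * math.log(T)

def dG_from_kf(T):
    # Delta_f G derived from Kf: -R T ln(10) log10(Kf), kJ mol^-1
    return -R * T * math.log(10) * log_kf(T)

def dG_polynomial(T):
    # Fitted Gibbs energy expression for Na2S(s), kJ mol^-1
    return 90.9 - 4.1407 * T + 0.5415849 * T * math.log(T)
```

Evaluating both routes at, say, 800 K and 1000 K gives values that differ by well under 0.1 kJ mol^–1, so the published coefficient sets are mutually consistent.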
|
7 |
Hierarchical Temporal Memory Cortical Learning Algorithm for Pattern Recognition on Multi-core Architectures
Price, Ryan William, 01 January 2011 (has links)
Strongly inspired by an understanding of mammalian cortical structure and function, the Hierarchical Temporal Memory Cortical Learning Algorithm (HTM CLA) is a promising new approach to problems of recognition and inference in space and time. Only a subset of the theoretical framework of this algorithm has been studied, but it is already clear that more information is needed about the performance of HTM CLA with real data and about the associated computational costs. For the work presented here, a complete implementation of Numenta's current algorithm was done in C++. In validating the implementation, first- and higher-order sequence learning was briefly examined, as was algorithm behavior with noisy data in simple pattern recognition. A pattern recognition task was created using sequences of handwritten digits, and performance analysis of the sequential implementation was performed. The analysis indicates that the resulting rapid increase in computing load may impact algorithm scalability, which may, in turn, be an obstacle to widespread adoption of the algorithm. Two critical hotspots in the sequential code were identified, and a parallelized version was developed using OpenMP multi-threading. Scalability analysis of the parallel implementation was performed on a state-of-the-art multi-core computing platform. Modest speedup was readily achieved with straightforward parallelization. Parallelization on multi-core systems is an attractive choice for moderately sized applications, but significantly larger ones are likely to remain infeasible without more specialized hardware acceleration accompanied by optimizations to the algorithm.
|
8 |
Designing, Modeling, and Optimizing Transactional Data Structures
Hassan, Ahmed Mohamed Elsayed, 25 September 2015
Transactional memory (TM) has emerged as a promising synchronization abstraction for multi-core architectures. Unlike traditional lock-based approaches, TM shifts the burden of implementing thread synchronization from the programmer to an underlying framework using hardware (HTM) and/or software (STM) components.
Although TM can be leveraged to implement transactional data structures (i.e., those where multiple operations are allowed to execute atomically, all-or-nothing, according to the transaction paradigm), its intensive speculation may result in significantly lower performance than optimized concurrent data structures. This poor performance motivates the search for other, more effective alternatives for designing transactional data structures without losing the simple programming abstraction proposed by TM.
To do so, we identified three major challenges that need to be addressed to design efficient transactional data structures. The first challenge is composability, namely allowing an atomic execution of two or more data structure operations in the same way as TM provides, but without its high overheads. The second challenge is integration, which enables the execution of data structure operations within generic transactions that may contain other memory-based operations. The last challenge is modeling, which encompasses the necessity of defining a unified formal methodology to reason about the correctness of transactional data structures.
In this dissertation, we propose different approaches to address the above challenges. First, we address the composability challenge by introducing an optimistic methodology to efficiently convert concurrent data structures into transactional ones. Second, we address the integration challenge by injecting the semantic operations of those transactional data structures into TM frameworks, and by presenting two novel STM algorithms in order to enhance the overall performance of those frameworks. Finally, we address the modeling challenge by presenting two models for concurrent and transactional data structure designs.
• Our first main contribution in this dissertation is Optimistic transactional boosting (OTB), a methodology to design transactional versions of the highly concurrent optimistic (i.e., lazy) data structures. An earlier (pessimistic) boosting proposal added a layer of abstract locks on top of existing concurrent data structures. Instead, we propose an optimistic boosting methodology, which allows greater data structure-specific optimizations, easier integration with TM frameworks, and fewer restrictions on the operations than the original (more pessimistic) boosting methodology.
Based on the proposed OTB methodology, we implement the transactional versions of two list-based data structures (i.e., set and priority queue). Then, we present TxCF-Tree, a balanced tree whose design is optimized to support transactional accesses. The core optimizations of TxCF-Tree's operations are: providing a traversal phase that uses neither locks nor speculation, deferring lock acquisition and physical modification to the transaction's commit phase; isolating the structural operations (such as re-balancing) in an interference-less housekeeping thread; and minimizing the interference between structural operations and the critical path of semantic operations (i.e., additions and removals on the tree).
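The commit-time structure of an OTB-style set can be sketched as follows. This is a heavily simplified illustration with invented names, not the dissertation's implementation: operations traverse without locks and buffer semantic intents, and the single commit step acquires the lock, revalidates each earlier observation, and applies the buffer atomically.

```python
import threading

class OTBSet:
    """Toy sketch of an optimistically boosted transactional set."""
    def __init__(self):
        self.items = set()
        self.lock = threading.Lock()

    def begin(self):
        return []   # per-transaction buffer of (op, key, observation)

    def add(self, tx, key):
        # Unlocked traversal: record whether the add would succeed.
        tx.append(("add", key, key not in self.items))

    def remove(self, tx, key):
        tx.append(("remove", key, key in self.items))

    def commit(self, tx):
        with self.lock:                       # lock only at commit time
            for op, key, ok in tx:            # validation pass
                absent = key not in self.items
                if op == "add" and absent != ok:
                    return False              # state changed: abort
                if op == "remove" and absent == ok:
                    return False
            for op, key, ok in tx:            # apply pass, atomic
                if op == "add" and ok:
                    self.items.add(key)
                elif op == "remove" and ok:
                    self.items.discard(key)
            return True
```

The key property shown: a concurrent change between traversal and commit invalidates only the transactions that actually observed the affected key, which is what lets OTB approach the performance of the underlying lazy set.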
• Our second main contribution is to integrate OTB with both STM and HTM algorithms. For STM, we extend the design of both DEUCE, a Java STM framework, and RSTM, a C++ STM framework, to support the integration with OTB. Using our extension, programmers can include both OTB data structure operations and traditional memory reads/writes in the same transaction. Results show that OTB performance is closer to the optimal lazy (non-transactional) data structures than the original boosting algorithm.
On the HTM side, we introduce a methodology to inject semantic operations into the well-known hybrid transactional memory algorithms (e.g., HTM-GL, HyNOrec, and NOrecRH). In addition, we enhance the proposed semantically-enabled HTM algorithms with a lightweight adaptation mechanism that allows bypassing the HTM paths if the overhead of the semantic operations causes repeated HTM aborts. Experiments on micro- and macro-benchmarks confirm that our proposals outperform the other TM solutions in almost all the tested workloads.
• Our third main contribution is to enhance the performance of TM frameworks in general by introducing two novel STM algorithms. Remote Transaction Commit (RTC) is a mechanism for executing the commit phases of STM transactions on dedicated server cores. RTC shows significant improvements compared to its corresponding validation-based STM algorithm (up to 4x better), as it decreases the overhead of spin locking during commit in terms of cache misses, blocking of lock holders, and CAS operations. Remote Invalidation (RInval) applies the same idea of RTC to invalidation-based STM algorithms. Furthermore, it allows more concurrency by executing commit and invalidation routines concurrently in different servers. RInval performs up to 10x better than its corresponding invalidation-based STM algorithm (InvalSTM), and up to 2x better than its corresponding validation-based algorithm (NOrec).
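The RTC idea of delegating commit phases to a dedicated server core can be sketched with a queue and a server thread. This is an illustrative toy with invented names, not the actual RTC implementation; the point is that commit work is serialized on one thread instead of being contended for with a spin lock.

```python
import queue
import threading

class RemoteCommitServer:
    """Toy sketch of RTC-style remote commit execution.

    Transactions post their commit closures to one dedicated
    server thread, which runs them serially; clients block until
    their own commit has been applied. This replaces spin-locking
    on a shared commit lock (and its cache-line bouncing).
    """
    def __init__(self):
        self.requests = queue.Queue()
        self.worker = threading.Thread(target=self._serve, daemon=True)
        self.worker.start()

    def _serve(self):
        while True:
            commit_fn, done = self.requests.get()
            if commit_fn is None:      # shutdown sentinel
                break
            commit_fn()                # commit phases run serially here
            done.set()

    def commit(self, commit_fn):
        done = threading.Event()
        self.requests.put((commit_fn, done))
        done.wait()                    # block until the commit is applied

    def shutdown(self):
        self.requests.put((None, None))
        self.worker.join()
```

Because all commit closures execute on the same core, their shared metadata stays hot in that core's cache, which is the effect RTC exploits.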
• Our fourth and final main contribution is to provide a theoretical model for concurrent and transactional data structures. We exploit the similarities of the OTB-based data structures and provide a unified model to reason about the correctness of those designs. Specifically, we extend a recent approach that models data structures with concurrent readers and a single writer (called SWMR), and we propose two novel models that additionally allow multiple writers and transactional execution. Those models are more practical because they cover a wider set of data structures than the original SWMR model. / Ph. D.
|
9 |
Everything you wanted to know about the TPA molecule adsorbed on Au(111)
Svensson, Pamela H.W., January 2020 (has links)
The electronic properties of triphenylamine (TPA) in the gas phase and adsorbed on gold(111) have been simulated with Quantum Espresso using Density Functional Theory (DFT). To better understand how the presence of a gold surface affects sunlight absorption in the system, the partial Density Of States (pDOS) and Near Edge X-ray Absorption Fine Structure (NEXAFS) of the system have been calculated. To describe the electronic excitation, three different methods have been used: the No Core Hole (NCH), Full Core Hole (FCH), and Half Core Hole (HCH) approximations. The excitation of the TPA molecule was made in the nitrogen (N) atom and in the four different carbon (C) atoms with different electronic environments: C-ipso, C-ortho, C-meta, and C-para. When using the HCH method, the absorbing atom must be described by a pseudopotential (PP) which includes half of a hole in the 1s orbital. This PP has been generated, and a detailed summary of the process is described. The TPA/gold system relaxes to a position with the central N atom of TPA above a gold (Au) atom in the second layer of the surface, at a distance of 3.66 Å from the first layer. TPA keeps its symmetry, with only small differences in the lengths of atomic bonds when adsorbed. The most striking result of this study is how the band gap of TPA is affected by the gold layer. From the pDOS we can observe that TPA in the gas phase has a clear band gap of 2.2 eV, with C-ortho dominating in the valence region and the four carbons dominating in the first unoccupied states. When the molecule is deposited on the Au(111) surface, the band gap is essentially gone, and a number of states appear between the previous highest occupied and lowest unoccupied molecular orbitals of TPA. These new states align in energy with three clusters of gold states, suggesting an interaction between the molecule and the surface.
In the generated NEXAFS of nitrogen and carbon in gas-phase TPA, one can observe a small pre-peak before the first unoccupied state. This is reinforced when the molecule is adsorbed, which generates a pre-peak approximately 3 eV in width. The pre-peak is connected to the new peaks seen in the pDOS, correlating with experimental results on the same system.
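How a band gap is read off a discrete set of pDOS levels can be shown with a toy function. The level energies in the example below are illustrative placeholders, not the computed TPA spectrum; the second call mimics metal-induced in-gap states shrinking the gap, as described for the adsorbed system.

```python
def band_gap(levels, fermi):
    """Gap between highest occupied and lowest unoccupied level, in eV.

    levels: state energies (eV) relative to some reference;
    fermi: energy separating occupied from unoccupied states.
    """
    occupied = [e for e in levels if e <= fermi]
    unoccupied = [e for e in levels if e > fermi]
    return min(unoccupied) - max(occupied)
```

With a sparse "molecular" spectrum the gap is wide; sprinkling extra states inside the window (as hybridization with a metal surface does) collapses it.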
|
10 |
Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries
Teng, Sin Yong, January 2020 (has links)
As new technologies for energy-intensive industries continue to evolve, existing plants gradually fall behind in efficiency and productivity. Intense market competition and environmental legislation push these traditional plants toward end of operation and shutdown. Process improvement and retrofit projects are therefore essential for sustaining the operational performance of such plants. Current approaches to process improvement are mainly process integration, process optimization, and process intensification. These fields generally rely on mathematical optimization, the practitioner's experience, and operational heuristics, and they form the foundation of process improvement; however, their performance can be further improved with modern computational intelligence. The purpose of this thesis is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The work approaches this problem by simulating industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modeling and optimization of individual units; (ii) application of dimensionality reduction (e.g., principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analyzing and removing problematic parts of a system, together with a proposed extension that handles multi-dimensional problems via a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks, and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimization against several predictive tools for supporting real-time operations management;
(vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); and (vii) an outlook on the future of artificial intelligence and process engineering in biosystems through a commercially oriented multi-omics paradigm.
|