71 |
Teplotní stabilita Mg-slitiny AZ91 připravené pomocí intenzivní plastické deformace / Thermal stability of Mg-alloy AZ91 prepared by severe plastic deformation. Štěpánek, Roman, January 2012
This thesis dealt with the thermal stability of magnesium alloy AZ91 prepared by severe plastic deformation, which leads to a fine-grained structure. Such a structure is inherently unstable, and the thesis determines the critical temperature and the rate of this instability, which manifests itself as grain coarsening.
|
72 |
Studium mikrostruktury ultrajemnozrnných kovových materiálů metodou pozitronové anihilace / Study of the microstructure of ultra-fine-grained metallic materials by positron annihilation. Barnovská, Zuzana, January 2011
In the presented thesis we study the changes in the size distribution of vacancy clusters in metals processed by severe plastic deformation, so-called ultra-fine-grained materials. We use a modern non-destructive method, positron annihilation spectroscopy, which is one of the few methods that allow us to investigate point defects such as vacancies with sizes of a few Å. The obtained spectra of positrons annihilating in the samples enable us to determine changes in vacancy cluster size depending on the temperature or on the severity of the deformation applied to the samples.
|
73 |
Metastabilní slitina Ti-15Mo připravená práškovou metalurgií / Metastable alloy Ti-15Mo prepared by powder metallurgy. Veverková, Anna, January 2019
This diploma thesis focused on the manufacturing and characterization of the Ti-15Mo metastable beta-Ti alloy prepared by cryogenic milling and spark plasma sintering. The initial powder was prepared by gas atomization and subsequently deformed by cryogenic milling (milled powder). Both the initial and the milled powders were compacted by spark plasma sintering (SPS) at temperatures from 750 °C to 850 °C. The dependence of microstructure and mechanical properties on the preparation parameters was studied. During cryo-milling, the powder particles significantly changed shape from ball-shaped to disc-shaped. The particles were not refined by milling, but severely plastically deformed. SEM observations showed that all prepared samples contain a duplex alpha + beta structure. The volume fraction of the alpha phase is significantly higher in the sintered milled powder due to the increased beta-transus temperature caused by oxygen contamination and also due to easier alpha phase precipitation in the refined microstructure. A maximum microhardness of 350 HV was achieved for both types of sintered powders. The high microhardness of the sintered initial powder can be attributed to the formation of the omega phase during cooling, while the sintered milled powder is strengthened by the refined microstructure and small alpha phase precipitates. Cryogenic milling prior to...
|
74 |
Polymermodifizierte Feinbetone - Untersuchungen zum Feuchtetransport / Polymer-modified fine-grained concretes - investigations on moisture transport. Keil, Allessandra; Raupach, Michael, January 2011
Untersuchungen zur Dauerhaftigkeit von AR-Glasbewehrung im Textilbeton haben gezeigt, dass durch die Alkalität des Betons in Verbindung mit Feuchtigkeit eine Glaskorrosion hervorgerufen wird, die im Laufe der Zeit zu Festigkeitsverlusten des Glases führt. Eine Möglichkeit, die durch die Glaskorrosion verursachten Festigkeitsverluste zu reduzieren, stellt die Polymermodifikation des Betons dar. Durch die Polymerzugabe wird die Wasseraufnahme der Feinbetonmatrix reduziert, dadurch sinkt der Gehalt an gelösten Alkalien im Bereich der Bewehrung. Um den Einfluss verschiedener Feinbetonmatrices auf die Dauerhaftigkeit von Textilbeton beurteilen zu können, sind u. a. zeit- und tiefenabhängige Informationen zur Feuchteverteilung erforderlich, die durch den Einsatz der NMR-Technik gewonnen werden. Der nachfolgende Artikel beschreibt den Feuchtetransport in einer speziell für den Textilbeton entwickelten Feinbetonmatrix sowie den Einfluss verschiedener Modifikationsstoffe auf das Wasseraufnahmeverhalten des Betons. / Durability tests of textile reinforced concrete revealed a loss of strength of the AR-glass reinforcement due to glass corrosion caused by the alkalinity of the concrete in combination with moisture. In order to reduce this strength loss of AR-glass in cementitious matrices, polymers can be used for concrete modification. The aim of the polymer addition is to reduce the capillary water absorption of the matrix, which reduces the amount of dissolved alkalis close to the reinforcement. In order to evaluate the effect of the concrete matrix on the durability of TRC, it is necessary to determine the moisture content as a function of time and depth. These data can be obtained using the nuclear magnetic resonance (NMR) technique. This paper deals with the moisture transport in a fine-grained concrete matrix developed especially for use in TRC, as well as the influence of polymer addition on the water absorption properties of the concrete matrix.
|
75 |
Brandverhalten textilbewehrter Bauteile / Fire behavior of textile-reinforced structural members. Kulas, Christian; Hegger, Josef; Raupach, Michael; Antons, Udo, January 2011
Die Einhaltung von Brandschutzanforderungen ist ein wichtiger Aspekt für sichere Baukonstruktionen. Beim innovativen Werkstoff textilbewehrter Beton, der einen Verbundwerkstoff aus einer Feinbetonmatrix und textiler Bewehrung darstellt, ist das Brandverhalten bisher nur unzureichend erforscht worden. Insbesondere das Tragverhalten der einzelnen Komponenten unter hohen Temperaturen stellt noch eine Wissenslücke in der heutigen Forschung dar. Dieser Artikel befasst sich mit den experimentellen Untersuchungen an einer Feinbetonmatrix, die einen Größtkorndurchmesser von 0,6 mm aufweist, sowie an AR-Glas- und Carbongarnen. Basierend auf instationären Versuchen werden das Spannungs- und Dehnungsverhalten unter hohen Temperaturen abgeleitet und Ansätze zur rechnerischen Beschreibung des Hochtemperaturverhaltens vorgeschlagen. / The design of structural members under fire attack is an important aspect of safe constructions. For the innovative material textile reinforced concrete (TRC), which is a composite material made of fine-grained concrete and textile reinforcement, the fire behavior has so far been insufficiently investigated. In particular, the load-bearing behavior of the individual components at high temperatures marks a gap in the current state of science and technology. This article deals with experimental investigations on a fine-grained concrete matrix, which has a maximum grain size of only 0.6 mm, as well as on yarns made of AR-glass and carbon. On the basis of transient tests, the stress and strain behavior at high temperatures is derived. Finally, a calculation approach for the high-temperature behavior is presented.
|
76 |
An Investigation of Low-Rank Decomposition for Increasing Inference Speed in Deep Neural Networks With Limited Training Data. Wikén, Victor, January 2018
In this study, the optimization technique low-rank tensor decomposition was implemented and applied to AlexNet, which had been trained to classify dog breeds, in order to increase the inference speed of convolutional neural networks. Due to a small training set, transfer learning was used in order to be able to classify dog breeds. The purpose of the study is to investigate how effective low-rank tensor decomposition is when the training set is limited. The results obtained from this study, compared to a previous study, indicate that there is a strong relationship between the effects of the tensor decomposition and how much training data is available. A significant speed-up can be obtained in the different convolutional layers using tensor decomposition. However, since the network needs to be retrained after the decomposition, and because of the limited dataset, there is a slight decrease in accuracy. / För att öka inferenshastigheten hos faltningsnätverk har i denna studie optimeringstekniken low-rank tensor decomposition implementerats och applicerats på AlexNet, som har tränats för att klassificera hundraser. På grund av en begränsad mängd träningsdata användes transfer learning för uppgiften. Syftet med studien är att undersöka hur effektiv low-rank tensor decomposition är när träningsdatan är begränsad. Jämfört med resultaten från en tidigare studie visar resultaten från denna studie att det finns ett starkt samband mellan effekterna av low-rank tensor decomposition och hur mycket tillgänglig träningsdata som finns. En signifikant hastighetsökning kan uppnås i de olika faltningslagren med hjälp av low-rank tensor decomposition. Eftersom det finns ett behov av att träna om nätverket efter dekompositionen och på grund av den begränsade mängden data så uppnås hastighetsökningen dock på bekostnad av en viss minskning i precisionen för modellen.
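The core idea in this abstract, replacing a large weight tensor with low-rank factors so that fewer multiply-adds are needed at inference time, can be illustrated with a short, self-contained sketch. This is not code from the thesis (which decomposes AlexNet's convolutional kernels); it is a hypothetical dense-layer example using a plain truncated SVD, with made-up sizes and a made-up target rank, shown only to make the operations-versus-accuracy trade-off concrete.

```python
# Minimal sketch, not code from the thesis: factorize a dense layer's weight matrix
# with a truncated SVD. All sizes and the target rank are made up for illustration.
# Note: a random matrix has a flat spectrum, so the error below overstates the loss
# one would see on real trained weights, which are typically much closer to low rank.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))   # hypothetical layer weights
x = rng.standard_normal(1024)           # one input activation vector
rank = 64                               # assumed target rank

# Factor W ~= A @ B with A: (1024, rank) and B: (rank, 1024).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]
B = Vt[:rank, :]

y_full = W @ x                          # original layer output
y_low = A @ (B @ x)                     # low-rank approximation

full_ops = W.shape[0] * W.shape[1]              # multiply-adds in the original layer
low_ops = rank * (W.shape[0] + W.shape[1])      # multiply-adds after factorization
rel_err = np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full)
print(f"relative output error: {rel_err:.3f}")
print(f"multiply-adds: {full_ops} -> {low_ops} ({low_ops / full_ops:.1%})")
```

As the abstract notes, in practice the factorized network is retrained (fine-tuned) afterwards, which is where the limited training data makes itself felt.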
|
77 |
Supporting Applications Involving Dynamic Data Structures and Irregular Memory Access on Emerging Parallel Platforms. Ren, Bin, 09 September 2014
No description available.
|
78 |
A Heuristic-Based Approach to Real-Time TCP State and Retransmission Analysis. Swaro, James E., January 2015
No description available.
|
79 |
On Optimizing Transactional Memory: Transaction Splitting, Scheduling, Fine-grained Fallback, and NUMA Optimization. Mohamedin, Mohamed Ahmed Mahmoud, 01 September 2015
The industrial shift from single-core processors to multi-core ones introduced many challenges. Among them, a program cannot get a free performance boost simply by upgrading to new hardware, because new chips include more processing units but run at the same (or comparable) clock speed as the previous generation. In order to effectively exploit the new available hardware and thus gain performance, a program should maximize parallelism. Unfortunately, parallel programming poses several challenges, especially when synchronization is involved, because parallel threads need to access the same shared data. Locks are the standard synchronization mechanism, but gaining performance using locks is difficult for non-expert programmers and without deep knowledge of the application logic. A new, easier synchronization abstraction is therefore required, and Transactional Memory (TM) is the concrete candidate.
TM is a new programming paradigm that simplifies the implementation of synchronization. The programmer just defines atomic parts of the code and the underlying TM system handles the required synchronization, optimistically. In the past decade, TM researchers worked extensively to improve TM-based systems. Most of the work has been dedicated to Software TM (or STM), as it does not require special transactional hardware support. Very recently (in the past two years), that hardware support has become commercially available in commodity processors, so a large number of customers can finally take advantage of it. Hardware TM (or HTM) provides the potential to obtain the best performance of any TM-based system, but current HTM systems are best-effort, so transactions are never guaranteed to commit. In fact, HTM transactions are limited in size and time, as well as prone to livelock at high contention levels.
Another challenge posed by current multi-core hardware platforms is the internal architecture used for interfacing with main memory. Specifically, when the common computer deployment changed from a single processor to multiple multi-core processors, architects also redesigned the hardware subsystem that manages memory access: from one providing Uniform Memory Access (UMA), where the latency needed to fetch a memory location is the same regardless of the specific core the thread executes on, to the current one with Non-Uniform Memory Access (NUMA), where that latency differs according to the core used and the memory socket accessed. This switch in technology has implications for the performance of concurrent applications. In fact, the building blocks commonly used for designing concurrent algorithms under the assumption of UMA (e.g., relying on centralized meta-data) may not provide the same high performance and scalability when deployed on NUMA-based architectures.
In this dissertation, we tackle the performance and scalability challenges of multi-core architectures by providing three solutions for increasing performance using HTM (i.e., Part-HTM, Octonauts, and Precise-TM), and one solution for addressing the scalability issues posed by NUMA architectures (i.e., Nemo).
• Part-HTM is the first hybrid transactional memory protocol that solves the problem of transactions aborted due to the resource limitations (space/time) of current best-effort HTM. The basic idea of Part-HTM is to partition those transactions into multiple sub-transactions, which can likely be committed in hardware. Due to the eager nature of HTM, we designed a low-overhead software framework to preserve transactions' correctness (with and without opacity) and isolation. Part-HTM is efficient: our evaluation study confirms that its performance is the best in all tested cases, except for those where HTM cannot be outperformed. However, in such workloads, Part-HTM still performs better than all other software and hybrid competitors.
• Octonauts tackles the livelock problem of HTM at high contention levels, since HTM lacks advanced contention management (CM) policies. Octonauts is an HTM-aware scheduler that orchestrates conflicting transactions. It uses a priori knowledge of transactions' working sets to prevent conflicting transactions from being activated simultaneously (see the sketch after this list). Octonauts also accommodates both HTM and STM with minimal overhead by exploiting adaptivity. Based on a transaction's size, duration, and irrevocable calls (e.g., system calls), Octonauts selects the best path among HTM, STM, or global locking. Results show a performance improvement of up to 60% when Octonauts is deployed, compared with pure HTM falling back to global locking.
• Precise-TM is a unique approach to solving the granularity problem of the software fallback path of best-effort HTM. It provides an efficient and precise technique for HTM-STM communication such that HTM transactions are not interfered with by concurrent STM transactions. In addition, the added overhead is marginal in terms of space and execution time. Precise-TM uses address-embedded locks (pointer bit-stealing) for precise communication between STM and HTM. Results show that our precise fine-grained locking pays off, as it allows more concurrency between hardware and software transactions. Specifically, it gains up to 5x over the default HTM implementation with a single global lock as the fallback path.
• Nemo is a new STM algorithm that ensures high and scalable performance when an application workload with a data-locality property is deployed. Existing STM algorithms rely on centralized shared meta-data (e.g., a global timestamp) to synchronize concurrent accesses, but in such a workload this scheme may hamper scalable performance, given the high latency introduced by NUMA architectures for updating that centralized meta-data. Nemo overcomes these limitations by allowing only those transactions that actually conflict with each other to perform inter-socket communication. As a result, if two transactions are non-conflicting, they cannot interact with each other through any meta-data. Such a policy does not apply to application threads running in the same socket. In fact, they are allowed to share any meta-data even if they execute non-conflicting operations because, as supported by our evaluation study, we found that the local processing happening inside one socket does not interfere with the work done by parallel threads executing on other sockets. Nemo's evaluation study shows improvement over state-of-the-art TM algorithms by as much as 65%. / Ph. D.
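To make the scheduling idea behind Octonauts more concrete, the following is a minimal, hypothetical Python sketch (not taken from the dissertation): each transaction declares its read and write sets up front, and a scheduler admits a transaction only when its declared working set does not overlap with any currently running transaction. In the real system this decision would gate hardware and software transactions and interact with the HTM/STM/global-lock fallback, which this toy version only hints at in comments; all names and structure here are illustrative assumptions.

```python
# Illustrative sketch only: an Octonauts-style admission scheduler that uses a priori
# knowledge of each transaction's working set to avoid activating conflicting
# transactions at the same time.
import threading

class WorkingSetScheduler:
    def __init__(self):
        self._lock = threading.Lock()
        self._cond = threading.Condition(self._lock)
        self._active = []  # (read_set, write_set) of currently running transactions

    def _conflicts(self, reads, writes):
        for r, w in self._active:
            # write/write, write/read, or read/write overlap means a conflict
            if writes & w or writes & r or reads & w:
                return True
        return False

    def admit(self, reads, writes):
        """Block until the declared working set no longer conflicts with running txns."""
        with self._cond:
            while self._conflicts(reads, writes):
                self._cond.wait()
            self._active.append((reads, writes))

    def release(self, reads, writes):
        with self._cond:
            self._active.remove((reads, writes))
            self._cond.notify_all()

# Usage: declare the working set up front, then run the transaction body.
sched = WorkingSetScheduler()

def run_transaction(reads, writes, body):
    sched.admit(reads, writes)
    try:
        body()          # in the real system: attempt HTM, fall back to STM or global lock
    finally:
        sched.release(reads, writes)
```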
|
80 |
Finfördelad Sentimentanalys : Utvärdering av neurala nätverksmodeller och förbehandlingsmetoder med Word2Vec / Fine-grained Sentiment Analysis : Evaluation of Neural Network Models and Preprocessing Methods with Word2Vec. Phanuwat, Phutiwat, January 2024
Sentimentanalys är en teknik som syftar till att automatiskt identifiera den känslomässiga tonen i text. Vanligtvis klassificeras texten som positiv, neutral eller negativ. Nackdelen med denna indelning är att nyanser går förlorade när texten endast klassificeras i tre kategorier. En vidareutveckling av denna klassificering är att inkludera ytterligare två kategorier: mycket positiv och mycket negativ. Utmaningen med denna femklassificering är att det blir svårare att uppnå hög träffsäkerhet på grund av det ökade antalet kategorier. Detta har lett till behovet av att utforska olika metoder för att lösa problemet. Syftet med studien är därför att utvärdera olika klassificerare, såsom MLP, CNN och Bi-GRU i kombination med word2vec för att klassificera sentiment i text i fem kategorier. Studien syftar också till att utforska vilken förbehandling som ger högre träffsäkerhet för word2vec. Utvecklingen av modellerna gjordes med hjälp av SST-datasetet, som är en känd dataset inom finfördelad sentimentanalys. För att avgöra vilken förbehandling som ger högre träffsäkerhet för word2vec, förbehandlades datasetet på fyra olika sätt. Dessa innefattar enkel förbehandling (EF), samt kombinationer av vanliga förbehandlingar som att ta bort stoppord (EF+Utan Stoppord) och lemmatisering (EF+Lemmatisering), samt en kombination av båda (EF+Utan Stoppord/Lemmatisering). Dropout användes för att hjälpa modellerna att generalisera bättre, och träningen reglerades med early stopp-teknik. För att utvärdera vilken klassificerare som ger högre träffsäkerhet, användes förbehandlingsmetoden som hade högst träffsäkerhet som identifierades, och de optimala hyperparametrarna utforskades. Måtten som användes i studien för att utvärdera träffsäkerheten är noggrannhet och F1-score. Resultaten från studien visade att EF-metoden presterade bäst i jämförelse med de andra förbehandlingsmetoderna som utforskades. Den modell som hade högst noggrannhet och F1-score i studien var Bi-GRU. / Sentiment analysis is a technique aimed at automatically identifying the emotional tone in text. Typically, text is classified as positive, neutral, or negative. The downside of this classification is that nuances are lost when text is categorized into only three categories. An advancement of this classification is to include two additional categories: very positive and very negative. The challenge with this five-class classification is that achieving high performance becomes more difficult due to the increased number of categories. This has led to the need to explore different methods to solve the problem. Therefore, the purpose of the study is to evaluate various classifiers, such as MLP, CNN, and Bi-GRU in combination with word2vec, to classify sentiment in text into five categories. The study also aims to explore which preprocessing method yields higher performance for word2vec. The development of the models was done using the SST dataset, which is a well-known dataset in fine-grained sentiment analysis. To determine which preprocessing method yields higher performance for word2vec, the dataset was preprocessed in four different ways. These include simple preprocessing (EF), combinations with common techniques such as stop-word removal (EF+Without Stopwords) and lemmatization (EF+Lemmatization), and a combination of both (EF+Without Stopwords/Lemmatization). Dropout was used to help the models generalize better, and training was regulated with an early stopping technique.
To evaluate which classifier yields higher performance, the preprocessing method with the highest performance was used, and the optimal hyperparameters were explored. The metrics used in the study to evaluate performance are accuracy and F1-score. The results of the study showed that the EF method performed best compared to the other preprocessing methods explored. The model with the highest accuracy and F1-score in the study was Bi-GRU.
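As an illustration of the architecture this abstract describes, here is a minimal, hypothetical PyTorch sketch of a bidirectional GRU classifier on top of pretrained word2vec embeddings, with dropout and a five-way output layer. It is not the thesis implementation: the vocabulary size, embedding dimension, hidden size and dropout rate are placeholders, and the early-stopping logic mentioned in the abstract would live in the (omitted) training loop.

```python
# Hypothetical sketch, not the thesis code: a five-class Bi-GRU sentiment classifier
# over pretrained word2vec vectors, with dropout. Hyperparameters are made up.
import torch
import torch.nn as nn

class BiGRUSentiment(nn.Module):
    def __init__(self, embeddings: torch.Tensor, hidden_size: int = 128,
                 num_classes: int = 5, dropout: float = 0.5):
        super().__init__()
        # embeddings: (vocab_size, embed_dim) matrix, e.g. exported from a word2vec model
        self.embed = nn.Embedding.from_pretrained(embeddings, freeze=True)
        self.gru = nn.GRU(embeddings.size(1), hidden_size,
                          batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        _, h = self.gru(x)                   # h: (2, batch, hidden_size)
        h = torch.cat([h[0], h[1]], dim=1)   # concatenate forward and backward states
        return self.fc(self.dropout(h))      # logits over the five sentiment classes

# Minimal usage with random stand-ins for word2vec vectors and a tokenized batch.
vocab_size, embed_dim = 10_000, 100
model = BiGRUSentiment(torch.randn(vocab_size, embed_dim))
batch = torch.randint(0, vocab_size, (8, 40))   # 8 sentences, 40 tokens each
logits = model(batch)                            # shape: (8, 5)
print(logits.shape)
```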
|