191

Supporting Applications Involving Dynamic Data Structures and Irregular Memory Access on Emerging Parallel Platforms

Ren, Bin 09 September 2014
No description available.
192

Investigation of single molecule and monolayer properties with Monte Carlo simulations of a coarse-grained model for alpha-sexithiophene

Garcia, Claudio J. 07 June 2018
No description available.
193

Protein Primary and Quaternary Structure Elucidation by Mass Spectrometry

Song, Yang 18 September 2015
No description available.
194

A Heuristic-Based Approach to Real-Time TCP State and Retransmission Analysis

Swaro, James E. January 2015
No description available.
195

Computational Studies on Multi-phasic Multi-component Complex Fluids

Boromand, Arman 07 February 2017
No description available.
196

On Optimizing Transactional Memory: Transaction Splitting, Scheduling, Fine-grained Fallback, and NUMA Optimization

Mohamedin, Mohamed Ahmed Mahmoud 01 September 2015
The industrial shift from single-core processors to multi-core ones introduced many challenges. Among them, a program cannot get a free performance boost simply by upgrading to new hardware, because new chips include more processing units but run at the same (or a comparable) clock speed as the previous generation. To effectively exploit the new hardware and thus gain performance, a program should maximize parallelism. Unfortunately, parallel programming poses several challenges, especially when synchronization is involved, because parallel threads need to access the same shared data. Locks are the standard synchronization mechanism, but gaining performance with locks is difficult for non-expert programmers and without deep knowledge of the application logic. A new, easier synchronization abstraction is therefore required, and Transactional Memory (TM) is a concrete candidate. TM is a programming paradigm that simplifies the implementation of synchronization: the programmer just defines the atomic parts of the code, and the underlying TM system optimistically handles the required synchronization. In the past decade, TM researchers have worked extensively to improve TM-based systems. Most of the work has been dedicated to Software TM (STM), as it does not require special transactional hardware support. Very recently (in the past two years), that hardware support has become commercially available in commodity processors, so a large number of customers can finally take advantage of it. Hardware TM (HTM) offers the potential for the best performance of any TM-based system, but current HTM systems are best-effort, so transactions are not guaranteed to commit. In fact, HTM transactions are limited in size and time and are prone to livelock at high contention levels. Another challenge posed by current multi-core hardware platforms is their internal architecture for interfacing with main memory. Specifically, when the common computer deployment changed from a single processor to multiple multi-core processors, architects also redesigned the hardware subsystem that manages memory access: from Uniform Memory Access (UMA), where the latency to fetch a memory location is the same regardless of the core a thread executes on, to Non-Uniform Memory Access (NUMA), where that latency differs according to the core used and the memory socket accessed. This switch has implications for the performance of concurrent applications. In fact, the building blocks commonly used for designing concurrent algorithms under UMA assumptions (e.g., relying on centralized meta-data) may not provide the same high performance and scalability when deployed on NUMA-based architectures. In this dissertation, we tackle the performance and scalability challenges of multi-core architectures by providing three solutions for increasing performance using HTM (Part-HTM, Octonauts, and Precise-TM) and one solution for the scalability issues introduced by NUMA architectures (Nemo).
• Part-HTM is the first hybrid transactional memory protocol that solves the problem of transactions aborted due to the resource limitations (space/time) of current best-effort HTM. The basic idea of Part-HTM is to partition those transactions into multiple sub-transactions, which can likely be committed in hardware. Due to the eager nature of HTM, we designed a low-overhead software framework to preserve transactions' correctness (with and without opacity) and isolation. Part-HTM is efficient: our evaluation study confirms that its performance is the best in all tested cases, except for those where HTM cannot be outperformed; even in those workloads, Part-HTM still performs better than all other software and hybrid competitors.
• Octonauts tackles the livelock problem of HTM at high contention levels. HTM lacks advanced contention management (CM) policies. Octonauts is an HTM-aware scheduler that orchestrates conflicting transactions: it uses a priori knowledge of transactions' working-sets to prevent conflicting transactions from being activated simultaneously (an illustrative sketch of this scheduling idea appears after this abstract). Octonauts also accommodates both HTM and STM with minimal overhead by exploiting adaptivity: based on a transaction's size, duration, and irrevocable calls (e.g., system calls), it selects the best path among HTM, STM, or global locking. Results show a performance improvement of up to 60% when Octonauts is deployed, compared with pure HTM falling back to global locking.
• Precise-TM is a unique approach to solving the granularity problem of the software fallback path of best-effort HTM. It provides an efficient and precise technique for HTM-STM communication such that HTM transactions are not interfered with by concurrent STM transactions. In addition, the added overhead is marginal in terms of space and execution time. Precise-TM uses address-embedded locks (pointer bit-stealing) for precise communication between STM and HTM. Results show that this precise fine-grained locking pays off, as it allows more concurrency between hardware and software transactions; specifically, it gains up to 5x over the default HTM implementation with a single global lock as the fallback path.
• Nemo is a new STM algorithm that ensures high and scalable performance for application workloads with data locality. Existing STM algorithms rely on centralized shared meta-data (e.g., a global timestamp) to synchronize concurrent accesses, but in such workloads this scheme may hamper scalable performance given the high latency NUMA architectures introduce for updating centralized meta-data. Nemo overcomes these limitations by allowing only those transactions that actually conflict with each other to perform inter-socket communication. As a result, if two transactions are non-conflicting, they cannot interact with each other through any meta-data. This policy does not apply to application threads running in the same socket: they are allowed to share meta-data even when executing non-conflicting operations because, as supported by our evaluation study, the local processing inside one socket does not interfere with the work done by parallel threads executing on other sockets. Nemo's evaluation study shows improvement over state-of-the-art TM algorithms by as much as 65%. / Ph. D.
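The Octonauts entry above rests on a simple scheduling rule: a transaction declares the set of objects it will touch, and it is admitted only when no currently running transaction's declared set overlaps with it. The sketch below illustrates that admission rule alone, written in Python for readability; all names are hypothetical, and it deliberately omits the HTM/STM/global-lock path selection and every other detail of the actual Octonauts system.

    import threading

    class WorkingSetScheduler:
        # Illustrative admission control based on declared working-sets:
        # a transaction runs only when no active transaction's set overlaps its own.

        def __init__(self):
            self._lock = threading.Lock()
            self._cv = threading.Condition(self._lock)
            self._active = []  # working-sets of transactions currently running

        def _conflicts(self, ws):
            # A candidate conflicts if its working-set intersects any active one.
            return any(ws & other for other in self._active)

        def run(self, working_set, transaction):
            ws = frozenset(working_set)
            with self._cv:
                while self._conflicts(ws):   # wait until no overlap remains
                    self._cv.wait()
                self._active.append(ws)
            try:
                return transaction()         # execute the transaction body
            finally:
                with self._cv:
                    self._active.remove(ws)
                    self._cv.notify_all()    # wake waiters once objects are released

A real HTM-aware scheduler would additionally choose, per transaction, among HTM, STM, or a global lock based on size, duration, and irrevocable operations, as the abstract states; the sketch leaves that adaptivity out.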
197

Fine-grained Sentiment Analysis: Evaluation of Neural Network Models and Preprocessing Methods with Word2Vec

Phanuwat, Phutiwat January 2024
Sentiment analysis is a technique aimed at automatically identifying the emotional tone in text. Typically, text is classified as positive, neutral, or negative. The downside of this classification is that nuances are lost when text is categorized into only three classes. An advancement of this classification is to include two additional categories: very positive and very negative. The challenge with this five-class classification is that achieving high performance becomes more difficult due to the increased number of categories, which has led to the need to explore different methods to solve the problem. The purpose of the study is therefore to evaluate various classifiers, such as MLP, CNN, and Bi-GRU in combination with word2vec, for classifying sentiment in text into five categories. The study also aims to explore which preprocessing method yields higher performance for word2vec.

The models were developed using the SST dataset, a well-known dataset in fine-grained sentiment analysis. To determine which preprocessing method yields higher performance for word2vec, the dataset was preprocessed in four different ways: simple preprocessing (EF), removal of stop words (EF+Without Stopwords), lemmatization (EF+Lemmatization), and a combination of both (EF+Without Stopwords/Lemmatization). Dropout was used to help the models generalize better, and training was regulated with an early stopping technique. To evaluate which classifier yields higher performance, the preprocessing method with the highest performance was used, and the optimal hyperparameters were explored. The metrics used in the study to evaluate performance are accuracy and F1-score.

The results of the study showed that the EF method performed best compared with the other preprocessing methods explored. The model with the highest accuracy and F1-score in the study was Bi-GRU.
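For context on the pipeline described above, the following is a generic sketch of a word2vec + Bi-GRU five-class sentiment classifier with dropout and early stopping, written with gensim and Keras. The toy corpus, layer sizes, and hyperparameters are placeholders chosen for illustration, not the settings used in the thesis.

    import numpy as np
    import tensorflow as tf
    from gensim.models import Word2Vec

    # Toy corpus standing in for SST sentences; labels are the five classes
    # (0 = very negative ... 4 = very positive).
    sentences = [["a", "genuinely", "moving", "film"], ["flat", "and", "lifeless"]]
    labels = np.array([4, 1])

    # 1) Train word2vec on the (preprocessed) corpus.
    w2v = Word2Vec(sentences, vector_size=50, window=5, min_count=1, workers=1)

    # 2) Map tokens to integer indices and build an embedding matrix from word2vec.
    vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}  # index 0 = padding
    emb = np.zeros((len(vocab) + 1, 50))
    for w, i in vocab.items():
        emb[i] = w2v.wv[w]

    x = tf.keras.preprocessing.sequence.pad_sequences(
        [[vocab[w] for w in s] for s in sentences], maxlen=10)

    # 3) Bi-GRU classifier with dropout and a 5-way softmax output.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(
            len(vocab) + 1, 50,
            embeddings_initializer=tf.keras.initializers.Constant(emb),
            trainable=False),
        tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # 4) Early stopping regulates training, as described in the thesis.
    early_stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=2)
    model.fit(x, labels, epochs=5, callbacks=[early_stop], verbose=0)

Evaluating accuracy and macro F1 on a held-out split, and repeating the run for each preprocessing variant (EF, EF+Without Stopwords, EF+Lemmatization, EF+Without Stopwords/Lemmatization), would mirror the comparison the thesis reports.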
198

The Cracking and Tensile-Load-Bearing Behaviour of Concrete Reinforced with Sanded Carbon Grids

Frenzel, Michael, Baumgärtel, Enrico, Marx, Steffen, Curbach, Manfred 04 October 2024
This article presents the cracking and load-bearing behaviour of carbon-reinforced prismatic concrete tensile specimens. Grids with different geometries and impregnations were used as carbon reinforcement. In addition, the roving surfaces were partially coated with fine sand to improve the bond between concrete and reinforcement. The article shows the influence of the different parameters on the developing cracks with respect to their width and spacing. The material properties and tensile strengths of the carbon concrete, which can be used for calculations, are also presented. A fine-grained, commercially available shotcrete was used for the investigations. Based on the tests and results described in this article, the sanded carbon grids were shown to influence the crack properties (crack widths, crack spacing) in comparison to unsanded carbon grids.
199

Geosynthetic Reinforced Soil: Numerical and Mathematical Analysis of Laboratory Triaxial Compression Tests

Santacruz Reyes, Karla 03 February 2017
Geosynthetic reinforced soil (GRS) is a soil improvement technology in which closely spaced horizontal layers of geosynthetic are embedded in a soil mass to provide lateral support and increase strength. GRS is popular due to a relatively new application in bridge support as well as its long-standing use in mechanically stabilized earth walls. Several different GRS design methods have been used; some are application-specific and not based on fundamental principles of mechanics. Because consensus regarding the fundamental behavior of GRS is lacking, numerical and mathematical analyses were performed on laboratory tests of GRS under triaxial compression in consolidated-drained conditions, obtained from the published literature. A three-dimensional numerical model was developed using FLAC3D. An existing constitutive model for the soil component was modified to incorporate confining-pressure dependency of the friction angle and dilation parameters, while retaining the constitutive model's ability to represent nonlinear stress-strain response and plastic yield. Procedures to obtain the parameter values from drained triaxial compression tests on soil specimens were developed. A method to estimate the parameter values from particle size distribution and relative compaction was also developed. The geosynthetic reinforcement was represented by two-dimensional orthotropic elements with soil-geosynthetic interfaces on each side. Comparisons between the numerical analyses and laboratory tests exhibited good agreement for strains from zero to 3% for tests with 1 to 3 layers of reinforcement. As failure is approached at larger strains, agreement was good for specimens that had 1 or 2 layers of reinforcement and a soil friction angle less than 40 degrees. For other conditions, the numerical model experienced convergence problems that could not be overcome by mesh refinement or by reducing the applied loading rate; however, it appears that, if the convergence problems can be solved, the numerical model may provide a mechanics-based representation of GRS behavior, at least for triaxial test conditions. Three mathematical theories of GRS failure available in the published literature were applied to the laboratory triaxial tests. Comparisons between the theories and the test results demonstrated that all three theories have important limitations. These numerical and mathematical evaluations of laboratory GRS tests provided a basis for recommending further research. / Ph. D. / Sometimes soils in nature do not possess the strength characteristics necessary for a specific engineering application, and soil improvement technologies are necessary. Geosynthetic reinforced soil (GRS) is a soil improvement technology in which closely spaced horizontal layers of geosynthetic material are placed in a soil mass to provide lateral support and increase the strength of the reinforced mass. The geosynthetic materials used in GRS are flexible sheets of polymeric materials produced in the form of woven fabrics or openwork grids. This technology is widely used to improve the strength of granular soil to form walls and bridge abutments. Current design methods for GRS applications are case-specific, some of these methods do not rely on fundamental principles of physics, and consensus regarding the fundamental behavior of GRS is lacking.
To improve understanding of GRS response independent of application, the three-dimensional response of GRS specimens to axisymmetric loading was investigated using numerical and mathematical analysis. A numerical model using the finite difference method, in which the domain is discretized into small zones, was developed; this model can capture the response of GRS laboratory specimens under axisymmetric loading with reasonably good accuracy at working strains (up to 3% strain). The numerical model includes a robust constitutive model for the soil that is capable of representing the most important stiffness and strength characteristics of the soil. For large strains approaching failure loading, the numerical model encountered convergence difficulties when the soil strength was high or when more than two layers of reinforcement were used. As an alternative to discretized numerical analysis, three mathematical theories available in the published literature were applied to the collected GRS laboratory test data. These evaluations demonstrated that all three theories have important limitations in their ability to represent failure of GRS laboratory test specimens. This study is important because it proposed a 3D numerical model to represent GRS behavior under working strains, and it identified several limitations of mathematical theories that attempt to represent the ultimate strength of GRS. Based on these findings, recommendations for further research were developed.
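The abstract above notes that the existing constitutive model was modified to make the friction angle and dilation parameters depend on confining pressure. One common way to express such a dependency for the friction angle, used for example in Duncan-type hyperbolic soil models, is the logarithmic reduction below; whether the thesis adopts exactly this form is an assumption made here purely for illustration.

    % Secant friction angle phi as a function of confining pressure sigma_3
    % phi_0: friction angle at sigma_3 = p_a (atmospheric pressure)
    % Delta phi: reduction in friction angle per log cycle of confining pressure
    \varphi = \varphi_0 - \Delta\varphi \, \log_{10}\!\left(\frac{\sigma_3}{p_a}\right)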
200

Coarse-grained modeling with constant pH of the protein complexation phenomena

Cuevas, Sergio Alejandro Poveda 10 April 2017
Theoretical studies of the molecular mechanisms responsible for the formation and stability of protein complexes have gained importance due to their practical applications in understanding the molecular basis of several diseases, in protein engineering, and in biotechnology. The objective of this project is to critically analyze and refine a coarse-grained force field for protein-protein interactions based on experimental thermodynamic properties and to apply it to the cancer-related S100A4 protein system. Our ultimate goal is to generate knowledge for a better understanding of the physical mechanisms responsible for the association of particular proteins in different environments. We studied the role of short- and long-range interactions in the complexation of homo-associations. Furthermore, we analyzed the influence of pH and its correlation with the charge regulation mechanism. Via Monte Carlo simulations, we analyzed and refined the adjustable Lennard-Jones parameter of a mesoscopic model based on experimental second virial data for lysozyme, chymotrypsinogen, and ribonuclease A. Based on that, the S100A3 protein was used to test the newly calibrated parameters. Finally, we evaluated the dimerization process of S100A4 proteins, observing the role of the physicochemical variables involved in the thermodynamic stability of different oligomers.
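For reference, the "experimental second virial data" mentioned above is usually the osmotic second virial coefficient B22, which connects the protein-protein potential of mean force w(r) sampled in such coarse-grained simulations to a measurable quantity through the standard statistical-mechanical relation below. Normalization conventions (e.g., per molecular weight) vary between studies and are not specified here.

    % Osmotic second virial coefficient from the potential of mean force w(r)
    % k_B: Boltzmann constant, T: absolute temperature
    B_{22} = -2\pi \int_0^{\infty} \left[ e^{-w(r)/k_{B}T} - 1 \right] r^{2} \, dr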
