421

An Experimental Study of a Liquid Steel Sampling Process

Ericsson, Ola January 2010 (has links)
During the steelmaking process, samples are taken from the liquid steel, mainly to assess its chemical composition. Recently, methods for rapid determination of inclusion characteristics (size and composition) have progressed to the level where they can be implemented in process control. Inclusions in steel can have either beneficial or detrimental effects depending on their characteristics (size, number, composition and morphology). Thus, by determining the inclusion characteristics during the steelmaking process it is possible to steer them in order to increase the quality of the steel. However, to successfully implement these methods it is critical that the samples taken from the liquid steel represent the inclusion characteristics in the liquid steel at the moment of sampling. The purpose of this study is to investigate the changes in inclusion characteristics during the liquid steel sampling process. Experimental studies were carried out at steel plants to measure filling velocity and solidification rate in real industrial samples. The sampling conditions for three sample geometries and two slag protection types were determined. Furthermore, the dispersion of the total oxygen content in the samples was evaluated as a function of sample geometry and type of slag protection. In addition, the effects of cooling rate as well as oxygen and sulfur content on the inclusion characteristics were investigated in laboratory and industrial samples. Possibilities to separate primary inclusions (existing in the liquid steel at the sampling moment) and secondary inclusions (formed during cooling and solidification) based on size and composition were investigated. Finally, in order to evaluate the homogeneity and representativeness of the industrial samples, the dispersion of inclusion characteristics in different zones and layers of the samples was investigated.
It was concluded that the type of slag protection has a significant effect on the filling velocity and the sampling repeatability, and that the thickness of the samples is the main factor controlling the solidification rate. It was shown that top slag can contaminate the samples; therefore, the choice of slag protection type is critical for obtaining representative samples. It was shown that the cooling rate has a significant effect on the number of secondary precipitated inclusions, whereas the number of primary inclusions was almost constant and independent of the cooling rate. In most cases it is possible to roughly separate the secondary and primary oxide inclusions based on the particle size distributions. However, in high-sulfur steels a significant amount of sulfides precipitates heterogeneously during cooling and solidification, which makes separation of secondary and primary inclusions very difficult. Moreover, the secondary sulfides that precipitate heterogeneously significantly change the characteristics (size, composition and morphology) of primary inclusions. The study revealed that both secondary and primary inclusions are heterogeneously dispersed in the industrial samples. In general, the middle zone of the surface layer is recommended for investigation of primary inclusions. / QC 20101112
422

Compact Multipurpose sub-sampling and processing of in-situ cores with PRESS (Pressurized Core Sub-sampling and Extrusion System).

Anders, Erik, Müller, Wolfgang H. 07 1900 (has links)
Understanding the deep biosphere is of great commercial and scientific interest and will contribute to increased knowledge of the environment. If environmentally relevant results are to be obtained, research must be conducted on pristine habitats as close as possible to those encountered in situ. Therefore, benthic conditions of sediment structure and gas hydrates, temperature, pressure and bio-geochemistry have to be maintained during the sequences of sampling, retrieval, transfer, storage and downstream analysis. At the Technische Universität Berlin (TUB) the Pressurized Core Sub-Sampling and Extrusion System (PRESS) was developed in the EU project HYACE/HYACINTH. It enables well-defined sectioning and transfer of drilled pressure cores [obtained by the HYACE Rotary Corer (HRC) and the Fugro Pressure Corer (FPC)] into transportation and investigation chambers. Coupled with DeepIsoBUG (Cardiff University, John Parkes), it allows sub-sampling and incubation of coaxial core sections to examine high-pressure-adapted bacteria or remote biogeochemical processes under defined laboratory research conditions; all sterile, anaerobic and without depressurisation. Appraisals of successful PRESS deployments in the Gulf of Mexico, on IODP Expedition 311 and as part of NGHP Expedition 01 demonstrate the general concept to be feasible and useful. Aided by the Deutsche Forschungsgemeinschaft (DFG), TUB is currently working on concepts to downscale the system in order to reduce logistical and financial expenses and, likewise, to broaden its deployment by requiring less operating space. A redesigned cutting mechanism will simultaneously adapt the system to harder cores (e.g., ICDP). Novel transportation chambers for processed sub-samples are intended to make the system more attractive to a broad spectrum of users and reduce their interdependence.
423

Diel and monthly observations of plant mediated fluxes of methane, carbon dioxide and nitrous oxide from lake Följesjön in Sweden using static chamber method

Radpour, Houtan January 2013 (has links)
Aquatic plants, or macrophytes, are known conduits of methane (CH4), carbon dioxide (CO2) and nitrous oxide (N2O), and contribute to the total greenhouse gas fluxes from lakes. Recent studies have emphasized that knowledge of plant-mediated emissions calls for more systematic and comparative data, especially on spatial and temporal variability. In this study I measured diel (24-hour) and diurnal (daylight hours only) plant-mediated fluxes during four sampling sessions, using the static chamber method, in a Swedish lake in summer 2012. The measurements were conducted on two macrophyte population patterns: mixed plant communities and a species-specific Equisetum fluviatile community. CH4 emissions were higher during the darker hours, and there was no diel correlation between CH4 fluxes and average diel temperature. CH4 fluxes varied between 0.42 mmol m-2 d-1 and 2.3 mmol m-2 d-1. The CO2 fluxes were negative during the day and positive during the night, which is consistent with macrophyte photosynthesis and respiration. Occasional positive daytime fluxes were seen only during rainy hours, and there was no correlation between temperature and diel CO2 fluxes. The total net CO2 exchange was 2.8 mmol m-2 d-1, indicating a net CO2 release in the littoral zone of the lake. N2O fluxes showed no clear diel or monthly pattern and ranged between positive and negative values. The N2O fluxes did not exceed 2 µmol m-2 d-1, with a total average flux of 0.8 µmol m-2 d-1.
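The static chamber method used above infers a flux from the rate of concentration change inside a closed chamber. A minimal sketch of that calculation, with hypothetical function names, units and chamber dimensions (none taken from the thesis):

```python
import numpy as np

def chamber_flux(times_h, conc_umol_m3, volume_m3, area_m2):
    """Estimate a gas flux (umol m^-2 h^-1) from a closed-chamber time series:
    the slope of concentration vs. time, scaled by the chamber's
    volume-to-area ratio."""
    slope, _intercept = np.polyfit(times_h, conc_umol_m3, 1)  # umol m^-3 h^-1
    return slope * volume_m3 / area_m2

# Synthetic example: concentration rising 100 umol m^-3 per hour
t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
c = 1000.0 + 100.0 * t
flux = chamber_flux(t, c, volume_m3=0.03, area_m2=0.1)  # 30.0 umol m^-2 h^-1
```

Field data are noisy, which is why a regression slope over the whole deployment, rather than a two-point difference, is the usual choice.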
424

Monte Carlo Integration Using Importance Sampling and Gibbs Sampling

Hörmann, Wolfgang, Leydold, Josef January 2005 (has links) (PDF)
To evaluate the expectation of a simple function with respect to a complicated multivariate density, Monte Carlo integration has become the main technique. Gibbs sampling and importance sampling are the most popular methods for this task. In this contribution we propose a new, simple, general-purpose importance sampling procedure. In a simulation study we compare the performance of this method with that of Gibbs sampling and of importance sampling using a vector of independent variates. It turns out that the new procedure is much better than independent importance sampling; up to dimension five it is also better than Gibbs sampling. The simulation results indicate that for higher dimensions Gibbs sampling is superior. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
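To make the comparison concrete, here is a minimal importance sampling estimator in the spirit of the paper, though not the authors' proposed procedure: it estimates E[X²] = 1 under a standard normal target by sampling from a wider normal proposal and reweighting. The choice of integrand, proposal, seed and sample size is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: standard normal; integrand f(x) = x**2, so the true value is
# E[X^2] = 1 under the target.
def target_pdf(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

sigma = 2.0  # proposal: a wider normal, easy to sample and heavier-tailed
def proposal_pdf(x):
    return np.exp(-0.5 * (x / sigma)**2) / (sigma * np.sqrt(2.0 * np.pi))

n = 200_000
x = rng.normal(0.0, sigma, size=n)    # draw from the proposal
w = target_pdf(x) / proposal_pdf(x)   # importance weights
estimate = np.mean(w * x**2)          # approximates E[X^2] = 1 under the target
```

Using a heavier-tailed proposal keeps the weights bounded; a proposal narrower than the target would make the weight variance blow up, which is exactly the failure mode that motivates careful general-purpose proposal design.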
425

Interkontextualitetens universella trådar : Strävan efter medvetenhet inför interkontextuella förhållanden, subjektiva normer & sociala mutationer inom nutida remixkulturer / The Universal Threads of Intercontextuality : Towards an Understanding of Intercontextual Relationships, Subjective Norms & Social Mutations within Contemporary Remix Cultures

Lindberg, Tobias, Karlsson, Andreas January 2015 (has links)
I det rådande informationssamhället där utbytet av information och digitalt material har blivit mer framträdande än någonsin tidigare har även upphovsrättsfallen kring dess användande och återanvändande blivit allt mer aktualiserade. I denna uppsats har vi sökt att studera hur normer kring skapande och originalitet har påverkat kreativa aktörers syn på återanvändning och rekontextualisering under det senare 00-talet och tidiga 2010-talet. Genom att utveckla metoden interkontextualitet har vi studerat hur människan och hennes kognitiva processer influerar rättsfall som rör just deriverade verk och/eller rekontextualisering. Den digitala tekniken har fört med sig nya normer kring skapande tillika hur information förmedlas mellan människor, vilket resulterat i förändrade konsumtionsmönster och en remixkultur där individuella verk inte ses som statiska enheter utan som levande, sammanlänkade uttryck. Kognitiva processer kan ha bidragit till denna remixkultur och i sin tur mer generaliserade attityder från mediadistributionsbolag och myndighetsorganisationer. Med bakgrund av detta söker vi att förespråka en mer öppen syn på de interkontextuella samband som binder kreativa verk och deras bakomliggande kreatörer till varandra. Att se kreativa uttryck som en del i den globala meme-pool, där människor tillsammans bygger vidare på vårt kulturella arv, kan vara ett steg i att motverka skadlig egoism kring det egna skapandet. / In the current information society, where the exchange of information and digital material has grown more prominent than ever before, copyright cases concerning the use and reuse of such material have become increasingly common. In this thesis, we have sought to study how norms surrounding creation and originality have affected creative actors' views on reuse and recontextualization during the latter part of the 2000s and the early 2010s.
By developing the method of intercontextuality, we have studied how humans and their cognitive processes influence court cases concerning derivative works and/or recontextualization. Digital technology has brought forth new norms surrounding creation as well as how information is mediated between people, which has resulted in changed consumption patterns and a remix culture where individual works are no longer viewed as static entities but as living, interconnected expressions. Cognitive processes may have contributed to this remix culture and, in turn, to more generalizing attitudes from media distribution companies and government organizations. We proceed to advocate a more open view of the intercontextual connections that bind creative works and their underlying creators together. To view creative expressions as part of the global meme pool, where people together keep building on our cultural heritage, may be a step towards preventing harmful egoism surrounding one's own creations.
426

Understanding Multicore Performance : Efficient Memory System Modeling and Simulation

Sandberg, Andreas January 2014 (has links)
To increase performance, modern processors employ complex techniques such as out-of-order pipelines and deep cache hierarchies. While the increasing complexity has paid off in performance, it has become harder to accurately predict the effects of hardware/software optimizations in such systems. Traditional microarchitectural simulators typically execute code 10 000×–100 000× slower than native execution, which leads to three problems: First, high simulation overhead makes it hard to use microarchitectural simulators for tasks such as software optimization, where rapid turn-around is required. Second, when multiple cores share the memory system, the resulting performance is sensitive to how memory accesses from the different cores interleave. This requires that applications be simulated multiple times with different interleavings to estimate their performance distribution, which is rarely feasible with today's simulators. Third, the high overhead limits the size of the applications that can be studied. This is usually solved by simulating only a relatively small number of instructions near the start of an application, at the risk of reporting unrepresentative results. In this thesis we demonstrate three strategies to accurately model multicore processors without the overhead of traditional simulation. First, we show how microarchitecture-independent memory access profiles can be used to drive automatic cache optimizations and to qualitatively classify an application's last-level cache behavior. Second, we demonstrate how high-level performance profiles that can be measured on existing hardware can be used to model the behavior of a shared cache. Unlike previous models, we predict the effective amount of cache available to each application and the resulting performance distribution due to different interleavings without requiring a processor model. Third, in order to model future systems, we build an efficient sampling simulator.
By using native execution to fast-forward between samples, we reach new samples much faster than a single sample can be simulated. This enables us to simulate multiple samples in parallel, resulting in almost linear scalability and a maximum simulation rate close to native execution. / CoDeR-MP / UPMARC
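The sampling strategy of the final contribution, detailed simulation of short windows with fast-forwarding in between, can be caricatured in a few lines. The toy below is not the thesis's simulator: it uses an invented per-instruction cost function with a slow phase in the middle, "simulates" only periodic windows, and extrapolates the mean cost from those samples.

```python
import random

random.seed(42)

# Hypothetical per-"instruction" cost with a slow phase in the middle;
# a stand-in for detailed microarchitectural simulation of one instruction.
def instruction_cost(i):
    return 3.0 if 400_000 <= i < 600_000 else 1.0

N = 1_000_000    # total instructions in the toy workload
WINDOW = 1_000   # instructions simulated in detail per sample
PERIOD = 50_000  # fast-forward distance between sample starts

sampled_costs = []
start = random.randrange(PERIOD)  # random offset avoids periodic aliasing
while start + WINDOW <= N:
    # "Detailed simulation" of one window; everything between windows is
    # skipped, mimicking native-execution fast-forward.
    window_cost = sum(instruction_cost(i) for i in range(start, start + WINDOW))
    sampled_costs.append(window_cost / WINDOW)
    start += PERIOD

estimate = sum(sampled_costs) / len(sampled_costs)  # estimated mean cost
true_mean = 1.0 + 2.0 * 200_000 / N                 # exact mean: 1.4
```

Only about 2% of the stream is simulated in detail here, yet the estimate lands near the true mean; the real simulator exploits the same effect, and independent sample windows can additionally be simulated in parallel.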
427

Random sampling of lattice configurations using local Markov chains

Greenberg, Sam 01 December 2008 (has links)
Algorithms based on Markov chains are ubiquitous across scientific disciplines, as they provide a method for extracting statistical information about large, complicated systems. Although these algorithms may be applied to arbitrary graphs, many physical applications are more naturally studied under the restriction to regular lattices. We study several local Markov chains on lattices, exploring how small changes to some parameters can greatly influence the efficiency of the algorithms. We begin by examining a natural Markov chain that arises in the context of "monotonic surfaces", where some point on a surface is slightly raised or lowered at each step, but with a greater rate of raising than lowering. We show that this chain is rapidly mixing (converges quickly to equilibrium) using a coupling argument; the novelty of our proof is that it requires defining an exponentially increasing distance function on pairs of surfaces, allowing us to derive near-optimal results in many settings. Next, we present new methods for lower-bounding the time local chains may take to converge to equilibrium. For many models that we study, there seems to be a phase transition as a parameter is changed, so that the chain is rapidly mixing above a critical point and slowly mixing below it. Unfortunately, it is not always possible to make this intuition rigorous. We present the first proofs of slow mixing for three sampling problems motivated by statistical physics and nanotechnology: independent sets on the triangular lattice (the hard-core lattice gas model), weighted even orientations of the two-dimensional Cartesian lattice (the 8-vertex model), and non-saturated Ising (tile-based self-assembly). Previous proofs of slow mixing for other models have been based on contour arguments that allow us to prove that a bottleneck in the state space constricts the mixing.
The standard contour arguments do not seem to apply to these problems, so we modify this approach by introducing the notion of "fat contours" that can have nontrivial area. We use these to prove that the local chains defined for these models are slow mixing. Finally, we study another important issue that arises in the context of phase transitions in physical systems, namely how the boundary of a lattice can affect the efficiency of the Markov chain. We examine a local chain on the perfect and near-perfect matchings of the square-octagon lattice, and show for one boundary condition the chain will mix in polynomial time, while for another it will mix exponentially slowly. Strikingly, the two boundary conditions only differ at four vertices. These are the first rigorous proofs of such a phenomenon on lattice graphs.
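A minimal example of a biased local chain on monotone configurations, in the spirit of the monotonic-surface chains above (the state space, bias parameter and move set are simplified inventions, not the thesis's exact model): heights 0 ≤ h[0] ≤ … ≤ h[n-1] ≤ H are sampled with stationary weight λ^Σh via Metropolis moves that raise or lower a single site.

```python
import random

random.seed(1)

def step(h, H, lam):
    """One Metropolis move: pick a site, try to raise or lower it by 1,
    reject moves that break monotonicity, accept downward moves w.p. 1/lam."""
    i = random.randrange(len(h))
    d = random.choice((-1, 1))
    new = h[i] + d
    lo = h[i - 1] if i > 0 else 0
    hi = h[i + 1] if i < len(h) - 1 else H
    if lo <= new <= hi:                            # monotonicity + bounds
        if d == 1 or random.random() < 1.0 / lam:  # Metropolis ratio lam**d
            h[i] = new

# Tiny exact check: n = 2, H = 1, lam = 2. Valid states (0,0), (0,1), (1,1)
# have weights lam**sum(h) = 1, 2, 4, so Pr[(1,1)] should approach 4/7.
h = [0, 0]
hits, steps = 0, 200_000
for _ in range(steps):
    step(h, H=1, lam=2.0)
    hits += (h == [1, 1])
freq = hits / steps  # close to 4/7 ~ 0.571
```

The bias λ > 1 plays the role of the greater raising rate: the chain drifts toward taller surfaces while remaining reversible with respect to the weighted distribution.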
428

Polypop : Polyrytmik i modern populärmusik / Polypop : Polyrhythms in contemporary pop music

Klint, Ludvig January 2018 (has links)
I detta självständiga arbete undersöks hur polyrytmik kan användas i en populärmusikalisk kontext. Syftet med den här studien är att undersöka hur polyrytmik kan användas som grund för att skapa poplåtar. I inledningen presenteras ett antal exempel på hur polyrytmik förekommer i olika genrer. Metal och Jazz tas upp som exempel på musikaliskt avancerade genrer där polyrytmik är vanligt förekommande. Vidare problematiseras hur polyrytmen 4:3 fått ett stort genomslag i populärmusiken, medan andra polyrytmer ignoreras. I metodavsnittet redogörs för hur samplingar och programmerad musik användes i arbetet för att behålla fokus på det rytmiska. Resultatet av denna studie är den klingande delen av det här arbetet som består av sex stycken/låtar av popkaraktär som var och en bygger på en unik polyrytm. Två låtar innehåller sång och text, resterande är instrumentala stycken. Reflektioner som framkommer av resultatet är bland annat hur tempon påverkar om en polyrytm upplevs som musikalisk eller inte. / In this study, polyrhythms are examined as a basis for writing pop songs. The purpose of the study is to investigate how polyrhythms can be used to create music productions in the context of popular music. As an introduction, a few examples are given of how polyrhythms appear in different genres of music. Metal and jazz are both examples of musically advanced genres known to use polyrhythms. The polyrhythm 4:3 is presented as the most common polyrhythm in pop music, while other polyrhythms are left unused. The method chapter describes the use of sampling and music programming in this study, and why this was preferred over live musicians and acoustic recordings. The result of this study is the sounding material, i.e. the pieces of music created as part of this thesis. A total of six songs were written; two include vocals and lyrics, while the remaining four are instrumental tracks.
Important reflections include the aspect of tempo and how, in my opinion, it affects whether a polyrhythm is perceived as musical and comprehensible.
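An a:b polyrhythm places a evenly spaced hits against b within the same bar, so the two voices coincide only where their subdivision grids align. A small sketch (not part of the thesis's method) computing that onset grid exactly with rational arithmetic:

```python
from fractions import Fraction

def polyrhythm_onsets(a, b, bar=Fraction(1)):
    """Onset times, as fractions of one bar, for an a:b polyrhythm:
    a evenly spaced hits in one voice against b in the other."""
    voice_a = [bar * i / a for i in range(a)]
    voice_b = [bar * j / b for j in range(b)]
    return voice_a, voice_b

va, vb = polyrhythm_onsets(4, 3)
# 4:3 needs an lcm(4, 3) = 12-step grid; the voices coincide only on beat 1.
shared = sorted(set(va) & set(vb))  # [Fraction(0, 1)]
```

Tempo enters by scaling the bar length: the same rational grid played slower gives the listener more time to parse the two voices, which connects to the reflection above about tempo and perceived musicality.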
429

Monitoring populations of the ham mite, Tyrophagus putrescentiae (Schrank) (Acari: Acaridae): research on traps, orientation behavior, and sampling techniques

Amoah, Barbara Amoh January 1900 (has links)
Doctor of Philosophy / Department of Entomology / Thomas W. Phillips / The phase-out of methyl bromide production, the most effective fumigant for the control of the ham mite, Tyrophagus putrescentiae (Schrank) (Acari: Acaridae), on dry-cured ham has necessitated the search for other management methods. The foundation of a successful management program is an effective monitoring program that provides information on pest presence and abundance over time and space to help in making management decisions. Using a standard trap made from disposable Petri dishes and a dog-food-based bait, mite activity was monitored weekly in five dry-cured ham aging rooms at three commercial processing facilities from June 2012 to September 2013. Results indicated that mite numbers in traps typically showed a sharp decline after fumigation, followed by a steady increase until the next fumigation. Average trap captures varied with trap location, indicating that traps could be used to identify locations where mite infestation of hams is more likely to occur. Experiments were also conducted in 6 m x 3 m climate-controlled rooms to determine the effects of physical factors on trap capture. Factors such as trap design, trap location, trap distance, duration of trapping, and light conditions had significant effects on mite capture. Mites also responded differently to light-emitting diodes of different wavelengths, either as a component of the standard trap or as a stand-alone orientation stimulus. To determine the relationship between trap capture and mite density, experiments were carried out in the climate-controlled rooms. Mite density was varied while trap number remained constant across all densities. There was a strong positive correlation between trap capture and mite density.
In simulated ham aging rooms, the distribution of mites on hams was determined and different sampling techniques such as vacuum sampling, trapping, rack sampling, ham sampling and absolute mite counts from whole hams were compared and correlated. Results showed weak or moderate correlations between sampling techniques in pairwise comparisons. Two sampling plans were developed to determine the number of samples required to estimate mite density on ham with respect to fixed precision levels or to an action threshold for making pest management decisions. Findings reported here can help in the optimization of trapping and sampling of ham mite populations to help in the development of efficient, cost-effective tools for pest management decisions incorporated with alternatives to methyl bromide.
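Fixed-precision sampling plans like the one mentioned above are commonly based on the relationship n = (s / (D·m))², the number of samples needed so that the standard error of the mean count is a fraction D of the mean. The sketch below applies this generic formula to invented pilot counts; it is not the thesis's specific plan, which would typically also model how variance scales with density.

```python
import statistics

def samples_for_precision(counts, precision):
    """Samples needed so that SE(mean) = precision * mean, via the classic
    n = (s / (D * m))**2 rule, with s and m estimated from pilot data."""
    m = statistics.mean(counts)
    s = statistics.stdev(counts)
    return (s / (precision * m)) ** 2

# Hypothetical pilot counts of mites per sample
pilot = [4, 7, 2, 9, 5, 6, 3, 8]
n_needed = samples_for_precision(pilot, precision=0.15)  # about 8.8 samples
```

Tightening the precision from 15% to 7.5% of the mean quadruples the required sample size, which is why monitoring programs trade precision against sampling cost.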
430

Sur quelques applications du codage parcimonieux et sa mise en oeuvre / On compressed sampling applications and its implementation

Coppa, Bertrand 08 March 2013 (has links)
Le codage parcimonieux permet la reconstruction d'un signal à partir de quelques projections linéaires de celui-ci, sous l'hypothèse que le signal se décompose de manière parcimonieuse, c'est-à-dire avec peu de coefficients, sur un dictionnaire connu. Le codage est simple, et la complexité est déportée sur la reconstruction. Après une explication détaillée du fonctionnement du codage parcimonieux, une présentation de quelques résultats théoriques et quelques simulations pour cerner les performances envisageables, nous nous intéressons à trois problèmes : d'abord, l'étude de conception d'un système permettant le codage d'un signal par une matrice binaire, et des avantages apportés par une telle implémentation. Ensuite, nous nous intéressons à la détermination du dictionnaire de représentation parcimonieuse du signal par des méthodes d'apprentissage. Enfin, nous discutons la possibilité d'effectuer des opérations comme la classification sur le signal sans le reconstruire. / Compressed sensing allows a signal to be reconstructed from a few linear projections, under the assumption that the signal has a sparse representation, that is, one with only a few coefficients, on a known dictionary. Coding is very simple, and all the complexity is shifted to the reconstruction. After a more detailed explanation of the principle of compressed sensing, some theoretical results from the literature, and a few simulations giving an idea of the expected performance, we focus on three problems: first, the design of a system using compressed sensing with a binary measurement matrix, and the benefits obtained. Then we look at the construction, via learning methods, of a dictionary for sparse representations of the signal. Lastly, we discuss the possibility of processing the signal without reconstruction, with an example in classification.
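A minimal sparse-recovery sketch in the spirit of the first problem, measurement by a random binary matrix, though using greedy orthogonal matching pursuit rather than whatever reconstruction the thesis employs; the dimensions, seed and sparsity level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# A k-sparse signal measured with a random {0,1} matrix, then reconstructed
# by orthogonal matching pursuit (OMP).
n, m, k = 128, 64, 4
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(0.0, 1.0, size=k)

Phi = rng.integers(0, 2, size=(m, n)).astype(float)  # binary sensing matrix
Phi /= np.linalg.norm(Phi, axis=0)                   # normalize columns
y = Phi @ x                                          # m measurements, m << n

def omp(Phi, y, k):
    """Greedy recovery: repeatedly pick the column most correlated with the
    residual, then re-fit all selected columns by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)  # small when recovery succeeds
```

A binary (rather than Gaussian) measurement matrix is attractive in hardware because each measurement is just a sum of selected samples, which is the implementation benefit the thesis studies.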