  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Impact of residential wood combustion on urban air quality

Krecl, Patricia January 2008 (has links)
Wood combustion is mainly used in cold regions as a primary or supplemental space heating source in residential areas. In several industrialized countries, there is a renewed interest in residential wood combustion (RWC) as an alternative to fossil fuel and nuclear power consumption. The main objective of this thesis was to investigate the impact of RWC on the air quality in urban areas. To this end, a field campaign was conducted in Northern Sweden during wintertime to characterize atmospheric aerosol particles and polycyclic aromatic hydrocarbons (PAH) and to determine their source apportionment.

A large day-to-day and hour-to-hour variability in aerosol concentrations was observed during the intensive field campaign. On average, total carbon contributed a substantial fraction of PM10 mass concentrations (46%), and aerosol particles were mostly in the fine fraction (PM1 accounted for 76% of PM10). Evening aerosol concentrations were significantly higher on weekends than on weekdays, which could be associated with the use of wood burning for recreational purposes or a higher space heating demand when inhabitants spend more time at home. Continuous aerosol particle number size distribution measurements successfully provided source apportionment of atmospheric aerosol with high temporal resolution. The first compound-specific radiocarbon analysis (CSRA) of atmospheric PAH demonstrated its potential to provide quantitative information on the RWC contribution to individual PAH. RWC accounted for a large fraction of particle number concentrations in the size range 25-606 nm (44-57%), as well as of PM10 (36-82%), PM1 (31-83%), light-absorbing carbon (40-76%), and individual PAH (71-87%) mass concentrations.

These studies have demonstrated that the impact of RWC on air quality in an urban location can be substantial and can largely exceed the contribution of vehicle emissions during winter, particularly under very stable atmospheric conditions.
262

Recursive Blocked Algorithms, Data Structures, and High-Performance Software for Solving Linear Systems and Matrix Equations

Jonsson, Isak January 2003 (has links)
This thesis deals with the development of efficient and reliable algorithms and library software for factorizing matrices and solving matrix equations on high-performance computer systems. The architectures of today's computers consist of multiple processors, each with multiple functional units. The memory systems are hierarchical, with several levels of different speed and size. The practical peak performance of a system is reached only by considering all of these characteristics. One portable method for achieving good system utilization is to express a linear algebra problem in terms of level 3 BLAS (Basic Linear Algebra Subprograms) operations. The most important operation is GEMM (GEneral Matrix Multiply), which typically defines the practical peak performance of a computer system. Efficient GEMM implementations are available for almost any platform, so an algorithm built on this operation is highly portable.

The dissertation focuses on how recursion can be applied to solve linear algebra problems. Recursive linear algebra algorithms have the potential to automatically match the sizes of subproblems to the different levels of the memory hierarchy, leading to much better utilization of the memory system. Furthermore, recursive algorithms expose level 3 BLAS operations and reveal task parallelism. The first paper handles the Cholesky factorization for matrices stored in packed format. Our algorithm uses a recursive packed matrix data layout that, in contrast to the standard packed format, enables the use of high-performance matrix-matrix multiplication. The resulting library routine requires half the memory of full storage, yet its performance is better than that of full-storage routines.

Papers two and three introduce recursive blocked algorithms for solving triangular Sylvester-type matrix equations. For these problems, recursion together with superscalar kernels produces new algorithms that give 10-fold speedups compared to existing routines in the SLICOT and LAPACK libraries. We show that our recursive algorithms also have a significant impact on the execution time of solving unreduced problems and when used in condition estimation. By recursively splitting several problem dimensions simultaneously, parallel algorithms for shared memory systems are obtained. The fourth paper introduces a library, RECSY, consisting of a set of routines implemented in Fortran 90 using the ideas presented in papers two and three. Using performance monitoring tools, the last paper evaluates the possible gain of using different matrix blocking layouts and the impact of superscalar kernels in the RECSY library.
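The recursive splitting applied to triangular Sylvester-type equations can be sketched in a few lines. The following Python/NumPy version is an illustration under our own assumptions (upper triangular coefficients, scalar base case), not the RECSY implementation:

```python
import numpy as np

def rtrsyl(A, B, C):
    """Solve A X + X B = C for X, with A (m x m) and B (n x n) upper
    triangular, by recursive blocking. Illustrative sketch only."""
    m, n = C.shape
    if m == 1 and n == 1:
        # Scalar base case: a*x + x*b = c.
        return C / (A[0, 0] + B[0, 0])
    if m >= n:
        k = m // 2
        A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
        # The bottom row block depends only on itself: solve it first.
        X2 = rtrsyl(A22, B, C[k:])
        # GEMM update exposes level 3 BLAS work, then recurse on the top.
        X1 = rtrsyl(A11, B, C[:k] - A12 @ X2)
        return np.vstack([X1, X2])
    else:
        k = n // 2
        B11, B12, B22 = B[:k, :k], B[:k, k:], B[k:, k:]
        # The left column block depends only on itself: solve it first.
        X1 = rtrsyl(A, B11, C[:, :k])
        X2 = rtrsyl(A, B22, C[:, k:] - X1 @ B12)
        return np.hstack([X1, X2])
```

Each recursion level halves the larger dimension, so the updates `A12 @ X2` and `X1 @ B12` dominate the work and map directly onto GEMM.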
263

Mathematics textbooks for teaching : An analysis of content knowledge and pedagogical content knowledge concerning algebra in Swedish upper secondary education

Sönnerhed, Wang Wei January 2011 (has links)
In school algebra, using different methods, including factorization, to solve quadratic equations is a common teaching and learning topic at the upper secondary school level. This study analyzes the algebra content related to solving quadratic equations, and the method of factorization in particular, as presented in Swedish mathematics textbooks, using subject matter content knowledge (CK) and pedagogical content knowledge (PCK) as analytical tools. Mathematics textbooks, as educational resources and artefacts, are widely used in classroom teaching and learning. What is presented in a textbook is often taught by teachers in the classroom; similarly, what is missing from the textbook may not be presented by the teacher. The study is based on the assumption that pedagogical content knowledge is embedded in the subject content presented in textbooks: textbooks contain both subject content knowledge and pedagogical content knowledge. The primary aim of the study is to explore what pedagogical content knowledge regarding solving quadratic equations is embedded in mathematics textbooks. The secondary aim is to analyze the algebra content related to solving quadratic equations from the perspective of mathematics as a discipline, in relation to the history of algebra. The focus is on what one can find in the textbook rather than on how the textbook is used in the classroom. The study takes a teaching perspective and is intended to contribute to the understanding of the conditions of teaching the solving of quadratic equations. The theoretical framework is based on Shulman's concept of pedagogical content knowledge and Mishra and Koehler's concept of content knowledge. The general theoretical perspective is based on Wartofsky's artifact theory. The empirical material comprises twelve mathematics textbooks for the mathematics B course at Swedish upper secondary schools. The study contains four rounds of analyses.
The results of the first three rounds set up the basis for a deep analysis of one selected textbook. The results show that the analyzed Swedish mathematics textbooks reflect the Swedish mathematics syllabus for algebra. The algebra content related to solving quadratic equations is similar in every investigated textbook. There is a cumulative relationship among all the algebra content, with the final goal of presenting how to solve quadratic equations by the quadratic formula, which implies that classroom teaching may focus on the quadratic formula. The factorization method is presented for solving simple quadratic equations but not for quadratic equations in general form. The study finds that the presentation of the algebra content related to quadratic equations in the selected textbook is organized around four geometrical models that can be traced back to the history of algebra. These four geometrical models are applied to illustrate algebra rules and to construct an overall embedded teaching trajectory with five sub-trajectories. The historically related pedagogy and the application of mathematics in both real-world and pure mathematics contexts constitute the pedagogical content knowledge related to quadratic equations.
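The contrast between the factorization method and the quadratic formula can be illustrated with a standard example (ours, not taken from the analyzed textbooks):

```latex
% Factorization, applicable when the roots are easy to spot:
x^2 - 5x + 6 = 0 \;\Longrightarrow\; (x-2)(x-3) = 0 \;\Longrightarrow\; x = 2 \text{ or } x = 3.
% The quadratic formula, applicable to the general form $ax^2 + bx + c = 0$:
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{5 \pm \sqrt{25 - 24}}{2} = \frac{5 \pm 1}{2} \;\Longrightarrow\; x = 3 \text{ or } x = 2.
```

A textbook that factors only such simple equations while reserving the formula for the general case exemplifies the cumulative structure the study describes.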
266

Simultaneous control of coupled actuators using singular value decomposition and semi-nonnegative matrix factorization

Winck, Ryder Christian 08 November 2012 (has links)
This thesis considers the application of singular value decomposition (SVD) and semi-nonnegative matrix factorization (SNMF) within feedback control systems, called the SVD System and the SNMF System, to control numerous subsystems with a reduced number of control inputs. The subsystems are coupled using a row-column structure that allows mn subsystems to be controlled using m+n inputs. Past techniques for controlling systems in this row-column structure have focused on scheduling procedures that offer limited performance. The SVD and SNMF Systems permit simultaneous control of every subsystem, which increases the convergence rate by an order of magnitude compared with previous methods. In addition to closed-loop control, open-loop procedures using the SVD and SNMF are compared with previous scheduling procedures, demonstrating significant performance improvements. This thesis presents theoretical results for the controllability of systems using the row-column structure and for the stability and performance of the SVD and SNMF Systems. Practical challenges to the implementation of the SVD and SNMF Systems are also examined. Numerous simulation examples are provided, in particular a dynamic simulation of a pin-array device called Digital Clay, and two physical demonstrations are used to assess the feasibility of the SVD and SNMF Systems for specific applications.
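The open-loop SVD idea can be sketched as follows, assuming (as the row-column structure suggests) that subsystem (i, j) receives the product of row input i and column input j; the function name and this simplified coupling model are ours, not the thesis's:

```python
import numpy as np

def rank1_inputs(D):
    """Given a desired m x n command matrix D for an array coupled so
    that subsystem (i, j) sees r[i] * c[j], return the row/column input
    pair whose outer product is the best rank-1 approximation of D,
    taken from the leading singular triple. Illustrative sketch only."""
    U, s, Vt = np.linalg.svd(D)
    r = np.sqrt(s[0]) * U[:, 0]   # m row inputs
    c = np.sqrt(s[0]) * Vt[0, :]  # n column inputs
    return r, c
```

With m + n inputs driving mn subsystems simultaneously, the residual D - outer(r, c) is what a feedback loop (or further SNMF factors) must then work down.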
267

Chemical identification under a Poisson model for Raman spectroscopy

Palkki, Ryan D. 14 November 2011 (has links)
Raman spectroscopy provides a powerful means of chemical identification in a variety of fields, partly because of its non-contact nature and the speed at which measurements can be taken. The development of powerful, inexpensive lasers and sensitive charge-coupled device (CCD) detectors has led to widespread use of commercial and scientific Raman systems. However, relatively little work has been done developing physics-based probabilistic models for Raman measurement systems and crafting inference algorithms within the framework of statistical estimation and detection theory. The objective of this thesis is to develop algorithms and performance bounds for the identification of chemicals from their Raman spectra. First, a Poisson measurement model based on the physics of a dispersive Raman device is presented. The problem is then expressed as one of deterministic parameter estimation, and several methods are analyzed for computing the maximum-likelihood (ML) estimates of the mixing coefficients under our data model. The performance of these algorithms is compared against the Cramér-Rao lower bound (CRLB). Next, the Raman detection problem is formulated as one of multiple hypothesis detection (MHD), and an approximation to the optimal decision rule is presented. The resulting approximations are related to the minimum description length (MDL) approach to inference. In our simulations, this method is seen to outperform two common general detection approaches, the spectral unmixing approach and the generalized likelihood ratio test (GLRT). The MHD framework applies naturally both to the detection of individual target chemicals and to the detection of chemicals from a given class. The common, yet vexing, scenario is then considered in which chemicals are present that are not in the known reference library. A novel variation of nonnegative matrix factorization (NMF) is developed to address this problem.
Our simulations indicate that this algorithm gives better estimation performance than the standard two-stage NMF approach and the fully supervised approach when there are chemicals present that are not in the library. Finally, estimation algorithms are developed that take into account errors that may be present in the reference library. In particular, an algorithm is presented for ML estimation under a Poisson errors-in-variables (EIV) model. It is shown that this same basic approach can also be applied to the nonnegative total least squares (NNTLS) problem. Most of the techniques developed in this thesis are applicable to other problems in which an object is to be identified by comparing some measurement of it to a library of known constituent signatures.
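One standard way to compute ML mixing coefficients under a Poisson measurement model y ~ Poisson(S @ alpha), with a nonnegative library S, is the multiplicative EM (Richardson-Lucy style) update. This is a generic sketch of that estimator under our own assumptions, not necessarily one of the thesis's algorithms:

```python
import numpy as np

def poisson_ml_mixing(y, S, iters=500):
    """ML estimate of alpha >= 0 in y ~ Poisson(S @ alpha), S >= 0,
    via multiplicative EM updates. The update leaves the Poisson
    log-likelihood non-decreasing at every step. Illustrative sketch."""
    m, k = S.shape
    alpha = np.ones(k)          # positive starting point
    col = S.sum(axis=0)         # column sums, the EM normalizer
    for _ in range(iters):
        rate = S @ alpha        # current mean photon counts per bin
        alpha *= (S.T @ (y / rate)) / col
    return alpha
```

Because the update is multiplicative, nonnegativity of the coefficients is preserved automatically, with no explicit projection step.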
268

Chemical Composition Of Atmospheric Particles In The Aegean Region

Munzur, Basak 01 February 2008 (has links) (PDF)
Daily aerosol samples were collected at Çandarlı, which is located on the Aegean coast of Turkey. A rural site was selected to monitor atmospheric pollution due to long-range transport. Sampling was performed in both summer and winter seasons, and in total 151 samples were obtained. Concentrations of elements in the samples were measured in order to identify sources and possible source locations of pollutants. Measured concentrations of trace elements at the Çandarlı station were compared with those measured at various sites around the world, and also in Turkey. This comparison showed the level of pollution in the Aegean Region to be lower than in the Mediterranean Region and the Black Sea Region. Air flow climatology at Çandarlı was investigated in order to determine potential source regions for pollutants. The frequency of air flows from Russia and Western Europe is higher, suggesting that emissions from these industrial regions affect the chemical composition of particulate matter. In addition, it was concluded that contributions from Central and Eastern European countries are significantly high because of frequent air mass transport. Concentrations of elements measured at the Çandarlı station were found to show short-term and seasonal variations. Such variations in concentrations are explained by variations in source strengths and transport patterns. Positive matrix factorization (PMF) was applied to determine the sources of elements and the contribution of each source to each element. This analysis revealed five sources: two local anthropogenic emission factors, one soil factor, one sea salt factor, and one long-range transport factor. The distribution of Potential Source Contribution Function (PSCF) values showed that the main sources of SO₄²⁻ are located in Bulgaria, Romania, Poland, Ukraine, and the central part of the Aegean region.
269

The Hilbert Space Of Probability Mass Functions And Applications On Probabilistic Inference

Bayramoglu, Muhammet Fatih 01 September 2011 (has links) (PDF)
The Hilbert space of probability mass functions (pmfs) is introduced in this thesis. A factorization method for multivariate pmfs is proposed using the tools provided by the Hilbert space of pmfs. The resulting factorization is special for two reasons. First, it reveals the algebraic relations between the involved random variables. Second, it determines the conditional independence relations between the random variables. Due to the first property of the resulting factorization, it can be shown that channel decoders can be employed in the solution of probabilistic inference problems other than decoding. This approach might lead to new probabilistic inference algorithms and new hardware options for the implementation of these algorithms. An example of a new inference algorithm inspired by the idea of using channel decoders for other inference tasks is a multiple-input multiple-output (MIMO) detection algorithm whose complexity is the square root of that of the optimum MIMO detection algorithm. Keywords: the Hilbert space of pmfs, factorization of pmfs, probabilistic inference, MIMO detection, Markov random fields.
270

Algorithm/architecture codesign of low power and high performance linear algebra compute fabrics

Pedram, Ardavan 27 September 2013 (has links)
In the past, we could rely on technology scaling and new micro-architectural techniques to improve the performance of processors. Nowadays, both of these methods are reaching their limits. The primary concern in future architectures with billions of transistors on a chip and limited power budgets is power/energy efficiency. Full-custom design of application-specific cores can yield up to two orders of magnitude better power efficiency over conventional general-purpose cores. However, a tremendous design effort is required in integrating a new accelerator for each new application. In this dissertation, we present the design of specialized compute fabrics that maintain the efficiency of full custom hardware while providing enough flexibility to execute a whole class of coarse-grain operations. The broad vision is to develop integrated and specialized hardware/software solutions that are co-optimized and co-designed across all layers ranging from the basic hardware foundations all the way to the application programming support through standard linear algebra libraries. We try to address these issues specifically in the context of dense linear algebra applications. In the process, we pursue the main questions that architects will face while designing such accelerators. How broad is this class of applications that the accelerator can support? What are the limiting factors that prevent utilization of these accelerators on the chip? What is the maximum achievable performance/efficiency? Answering these questions requires expertise and careful codesign of the algorithms and the architecture to select the best possible components, datapaths, and data movement patterns resulting in a more efficient hardware-software codesign. In some cases, codesign reduces complexities that are imposed on the algorithm side due to the initial limitations in the architectures. 
We design a specialized Linear Algebra Processor (LAP) architecture and discuss the details of mapping matrix-matrix multiplication onto it. We further verify the flexibility of our design for computing a broad class of linear algebra kernels. We conclude that this architecture can perform a broad range of matrix-matrix operations as complex as matrix factorizations, and even Fast Fourier Transforms (FFTs), while maintaining its ASIC-level efficiency. We present a power-performance model that compares state-of-the-art CPUs and GPUs with our design. Our power-performance model reveals sources of inefficiency in CPUs and GPUs, and we demonstrate how to overcome them in the process of designing our LAP. As we progress through this dissertation, we introduce modifications of the original matrix-matrix multiplication engine to facilitate the mapping of more complex operations, and we observe the resulting performance and efficiency of the modified engine using our power estimation methodology. When compared to other conventional architectures for linear algebra applications and FFT, our LAP is over an order of magnitude better in terms of power efficiency. Based on our estimates, up to 55 GFLOPS/W single-precision and 25 GFLOPS/W double-precision efficiency are achievable on a single chip in standard 45 nm technology.
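The kind of blocked matrix-matrix multiplication such a design maps onto local stores can be sketched as a plain tiled loop nest. This is an illustrative tiling in Python/NumPy, not the LAP datapath itself:

```python
import numpy as np

def blocked_gemm(A, B, bs=2):
    """C = A @ B computed tile by tile. Each (i, j) output tile is
    accumulated from bs-wide panels of A and B, mimicking how an
    accelerator streams panels through on-chip memory. Illustrative
    sketch only; NumPy slicing handles ragged edge tiles."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    for i in range(0, m, bs):
        for j in range(0, n, bs):
            for p in range(0, k, bs):
                # rank-bs update of the (i, j) tile
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C
```

Choosing the tile size bs to match the local store capacity is exactly the kind of algorithm/architecture codesign decision the dissertation examines.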
