101

Litter decomposition and trace metal cycling studies in habitats variously influenced by coal strip-mining /

Lawrey, James Donald January 1977 (has links)
No description available.
102

Optimization Strategies for the Synthesis / Design of Highly Coupled, Highly Dynamic Energy Systems

Munoz Guevara, Jules Ricardo 13 October 2000 (has links)
In this work several decomposition strategies for the synthesis / design optimization of highly coupled, highly dynamic energy systems are formally presented and their implementation illustrated. The methods are based on the autonomous optimization of the individual units (components, sub-systems, or disciplines) while maintaining the energy and cost links between all units that make up the overall system. All of the approaches are designed to enhance current engineering synthesis / design practices in that they support the analysis and optimization of systems in a modular way, the results at every step are feasible and constitute an improvement over the initial design state, the groups in charge of the different unit designs are allowed to work concurrently, and any level of complexity in the modeling and optimization of the units is permitted. All of the decomposition methods use the Optimum Response Surface (ORS) of the problem as a basis for analysis. The ORS is a representation of the optimum objective function for various values of the functions that couple the system units. The complete ORS, or an approximation thereof, can be used in ways that lead to different methods. The first decomposition method, called the Local Global Optimization (LGO) method, requires the creation of the entire ORS by carrying out multiple unit optimizations for various combinations of values of the coupling functions. The creation of the ORS is followed by a system-level optimization in which the best combination of values for the coupling functions is sought. The second decomposition method is called the Iterative Local Global Optimization (ILGO) scheme. In the ILGO method an initial point on the ORS is found, i.e. the unit optimizations are performed for initial arbitrary values of the coupling functions.
A linear approximation of the ORS about that initial point is then used to guide the selection of new values for the coupling functions that guarantee an improvement upon the initial design. The process is repeated until no further improvement is achieved. The mathematical properties of the methods depend on the convexity of the ORS, which in turn is affected by the choice of thermodynamic properties used to characterize the couplings. Examples from the aircraft industry are used to illustrate the application and properties of the methods. / Ph. D.
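The ILGO iteration described above can be sketched in miniature: the code below builds a one-dimensional ORS from two hypothetical unit cost functions and repeatedly uses a local linear approximation of the ORS to pick the next coupling value. The cost functions and step size are illustrative assumptions, not the thesis's aircraft examples.

```python
# Toy ILGO-style iteration on a one-dimensional Optimum Response
# Surface (ORS). Each "unit optimization" here is just a closed-form
# cost for a fixed coupling value y; the ORS is their sum.

def unit1_opt(y):
    # Unit 1's optimal cost for coupling value y (hypothetical)
    return (y - 3.0) ** 2 + 1.0

def unit2_opt(y):
    # Unit 2's optimal cost for the same coupling value (hypothetical)
    return 0.5 * (y - 1.0) ** 2

def ors(y):
    # One point on the ORS: the system optimum for fixed coupling y
    return unit1_opt(y) + unit2_opt(y)

def ilgo(y0, step=0.2, h=1e-4, tol=1e-8, max_iter=200):
    y = y0
    for _ in range(max_iter):
        # Local linear approximation of the ORS around the current point
        slope = (ors(y + h) - ors(y - h)) / (2.0 * h)
        y_new = y - step * slope          # move downhill on the ORS
        if abs(ors(y_new) - ors(y)) < tol:
            break                         # no further improvement
        y = y_new
    return y

y_star = ilgo(y0=0.0)
```

Each pass improves on the previous design, mirroring the property that ILGO results at every step are feasible and better than the starting point.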
103

Thermophysical Properties and Microstructural Changes of Composite Materials at Elevated Temperature

Goodrich, Thomas William 22 December 2009 (has links)
Experimental methods were developed and used to quantify the behavior of composite materials during heating to support development of heat and mass transfer pyrolysis models. Methods were developed to measure specific heat capacity, kinetic parameters, microstructure changes, porosity, and permeability. Specific heat and gravimetric data for kinetic parameters were measured with a simultaneous differential scanning calorimeter (DSC) / thermogravimetric analyzer (TGA). Experimental techniques were developed for quantitative specific heat measurement based on ASTM standards with modifications for accurate measurements of decomposing materials. An environmental scanning electron microscope (ESEM) was used in conjunction with a heating platform to record real-time video of microstructural changes of materials during decomposition and cooling following decomposition. A gas infusion technique was devised to measure porosity, in which nitrogen was infused into the pores of permeable material samples and used to determine the open-pore porosity of the material. Permeability was measured using a standard pressure differential gas flow technique with improvements over past sealing techniques and modifications to allow for potential high temperature use. Experimental techniques were used to measure the properties of composite construction materials commonly used in naval applications: E-glass vinyl ester laminates and end-grain balsa wood core. The simultaneous DSC/TGA was used to measure the apparent specific heat required to heat the decomposing sample. ESEM experiments captured microstructural changes during decomposition for both E-glass vinyl ester laminate and balsa wood samples. Permeability and porosity changes during decomposition appeared to depend on microstructural changes in addition to mass fraction. / Master of Science
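The quantitative specific heat measurement mentioned above is based on ASTM standards; a minimal sketch of the standard DSC ratio method (ASTM E1269 style) is below. All signal values, masses, and the sapphire reference value are illustrative assumptions, not the thesis's data.

```python
# Sketch of the DSC ratio method for specific heat: the sample's
# baseline-corrected heat-flow signal is compared with that of a
# reference material of known specific heat (commonly sapphire).
# cp_sample = cp_ref * (m_ref * dh_sample) / (m_sample * dh_ref)

def specific_heat(h_sample, h_ref, h_baseline, m_sample, m_ref, cp_ref):
    """Apparent specific heat of the sample at one temperature point."""
    dh_s = h_sample - h_baseline   # mW, baseline-corrected sample signal
    dh_r = h_ref - h_baseline      # mW, baseline-corrected reference signal
    return cp_ref * (m_ref * dh_s) / (m_sample * dh_r)

# Illustrative numbers: 10 mg sample, 12 mg sapphire reference with
# cp about 0.92 J/(g*K); heat-flow signals in mW.
cp = specific_heat(h_sample=4.2, h_ref=3.5, h_baseline=0.5,
                   m_sample=10.0, m_ref=12.0, cp_ref=0.92)
```

For a decomposing material this yields an *apparent* specific heat, since the signal also contains decomposition enthalpy, which is why the abstract phrases it as the heat required to heat the decomposing sample.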
104

Decompositions of the Complete Mixed Graph by Mixed Stars

Culver, Chance 01 August 2020 (has links)
In the study of mixed graphs, a common question is: what are the necessary and sufficient conditions for the existence of a decomposition of the complete mixed graph into isomorphic copies of a given mixed graph? Since the complete mixed graph has twice as many arcs as edges, an obvious necessary condition is that the isomorphic copies have twice as many arcs as edges. We prove necessary and sufficient conditions for the existence of a decomposition of the complete mixed graph into mixed stars with two edges and four arcs. We also consider some special cases of decompositions of the complete mixed graph into partially oriented stars with twice as many arcs as edges. We employ difference methods in most of our constructions when showing sufficiency.
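The obvious necessary condition can be checked mechanically. The sketch below builds the complete mixed graph on n vertices (one edge plus two opposite arcs per vertex pair) and tests whether the edge and arc counts admit a decomposition into copies of a mixed star with two edges and four arcs. It checks the counting condition only, not an actual decomposition.

```python
from itertools import combinations

def complete_mixed_graph(n):
    """Edge and arc sets of the complete mixed graph on n vertices:
    each vertex pair carries one edge and two oppositely directed arcs."""
    edges = list(combinations(range(n), 2))
    arcs = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    return edges, arcs

def counting_condition(n, star_edges=2, star_arcs=4):
    """Necessary divisibility condition: every copy uses star_edges
    edges and star_arcs arcs, and both totals must give the same
    number of copies."""
    edges, arcs = complete_mixed_graph(n)
    if len(edges) % star_edges or len(arcs) % star_arcs:
        return False
    return len(edges) // star_edges == len(arcs) // star_arcs

# For n = 4: 6 edges and 12 arcs, i.e. 3 copies of a (2-edge, 4-arc) star
```

Because the star has exactly twice as many arcs as edges, the two divisibility requirements always agree whenever the edge count is divisible by two, matching the condition stated in the abstract.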
105

Perturbation Based Decomposition of sEMG Signals

Huettinger, Rachel 01 March 2019 (has links)
Surface electromyography records the motor unit action potential signals in the vicinity of the electrode to reveal information on muscle activation. Decomposition of sEMG signals for characterization of constituent motor unit action potentials in terms of amplitude and firing times is useful for clinical research as well as diagnosis of neurological disorders. Successful decomposition of sEMG signals would allow for pertinent motor unit action potential information to be acquired without discomfort to the subject or the need for a well-trained operator (compared with intramuscular EMG). To determine amplitudes and firing times for motor unit action potentials in an sEMG recording, Szlavik's perturbation based decomposition may be applied. The decomposition was initially applied to synthetic sEMG signals and then to experimental data collected from the biceps brachii. Szlavik's decomposition estimator yields satisfactory results for synthetic and experimental sEMG signals with reasonable complexity.
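As a generic illustration of the decomposition task (recovering amplitudes and firing times of a known action-potential template from a composite recording), the sketch below uses simple matched filtering on a synthetic signal. It is not Szlavik's perturbation-based estimator; the template shape, signal lengths, and threshold are assumptions.

```python
import math

def muap_template(width=21):
    # Simple biphasic action-potential shape: t * exp(-t^2) on [-2, 2]
    ts = [-2.0 + 4.0 * k / (width - 1) for k in range(width)]
    return [t * math.exp(-t * t) for t in ts]

def synthesize(firings, amps, length=200):
    """Composite signal: scaled template copies at the firing times."""
    tpl = muap_template()
    sig = [0.0] * length
    for f, a in zip(firings, amps):
        for k, v in enumerate(tpl):
            sig[f + k] += a * v
    return sig

def detect(sig, threshold=0.5):
    """Normalized cross-correlation with the template; local maxima
    above threshold give candidate (firing time, amplitude) pairs."""
    tpl = muap_template()
    energy = sum(v * v for v in tpl)
    n = len(sig) - len(tpl) + 1
    corr = [sum(sig[i + k] * tpl[k] for k in range(len(tpl))) / energy
            for i in range(n)]
    return [(i, corr[i]) for i in range(1, n - 1)
            if corr[i] > threshold
            and corr[i] >= corr[i - 1] and corr[i] >= corr[i + 1]]

sig = synthesize(firings=[30, 120], amps=[1.0, 1.5])
```

On this non-overlapping synthetic signal the correlation peaks recover the firing times and amplitudes exactly; overlapping motor unit action potentials are what make real sEMG decomposition hard.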
106

Concept Learning By Example Decomposition

Joshi, Sameer 01 January 2009 (has links)
For efficient understanding and prediction in natural systems, even in artificially closed ones, we usually need to consider a number of factors that may combine in simple or complex ways. Additionally, many modern scientific disciplines face increasingly large datasets from which to extract knowledge (for example, genomics). Thus to learn all but the most trivial regularities in the natural world, we rely on different ways of simplifying the learning problem. One simplifying technique that is highly pervasive in nature is to break down a large learning problem into smaller ones; to learn the smaller, more manageable problems; and then to recombine them to obtain the larger picture. It is widely accepted in machine learning that it is easier to learn several smaller decomposed concepts than a single large one. Though many machine learning methods exploit it, the process of decomposition of a learning problem has not been studied adequately from a theoretical perspective. Typically such decomposition of concepts is achieved in highly constrained environments, or aided by human experts. In this work, we investigate concept learning by example decomposition in a general probably approximately correct (PAC) setting for Boolean learning. We develop sample complexity bounds for the different steps involved in the process. We formally show that if the cost of example partitioning is kept low then it is highly advantageous to learn by example decomposition. To demonstrate the efficacy of this framework, we interpret the theory in the context of feature extraction. We discover that many vague concepts in feature extraction, starting with what exactly a feature is, can be formalized unambiguously by this new theory of feature extraction. We analyze some existing feature learning algorithms in light of this theory, and finally demonstrate its constructive nature by generating a new learning algorithm from theoretical results.
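A toy instance of learning by example decomposition can make the idea concrete (a generic sketch, not the thesis's PAC constructions): positive Boolean examples are partitioned into blocks, a most-specific conjunction is learned per block, and the sub-concepts are recombined by disjunction.

```python
def learn_monomial(examples):
    """Most specific conjunction consistent with a block of positive
    Boolean examples: keep each bit position on which they all agree."""
    fixed = dict(enumerate(examples[0]))
    for x in examples[1:]:
        for i in list(fixed):
            if x[i] != fixed[i]:
                del fixed[i]
    return fixed                      # {index: required bit value}

def predict(monomials, x):
    # Recombination step: disjunction of the learned sub-concepts
    return any(all(x[i] == b for i, b in m.items()) for m in monomials)

# Hypothetical target: (x0 AND x1) OR (x2 AND x3), with the positive
# examples already partitioned into one block per disjunct.
block1 = [(1, 1, 0, 0), (1, 1, 0, 1), (1, 1, 1, 0)]
block2 = [(0, 0, 1, 1), (0, 1, 1, 1), (1, 0, 1, 1)]
concept = [learn_monomial(block1), learn_monomial(block2)]
```

Each block's learning problem (a single conjunction) is far easier than learning the disjunction directly, which is the advantage the sample complexity bounds in the work quantify, provided the partitioning itself is cheap.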
107

The measurement and decomposition of achievement equity - an introduction to its concepts and methods including a multiyear empirical study of sixth grade reading scores

Rogers, Francis H., III 29 September 2004 (has links)
No description available.
108

Structural production layer decomposition: a new method to measure differences between MRIO databases for footprint assessments

Wieland, Hanspeter, Giljum, Stefan, Bruckner, Martin January 2018 (has links) (PDF)
Recent empirical assessments revealed that footprint indicators calculated with various multi-regional input-output (MRIO) databases deliver deviating results. In this paper, we propose a new method, called structural production layer decomposition (SPLD), which complements existing structural decomposition approaches. SPLD differentiates between effects stemming from specific parts of the technology matrix, e.g. trade blocks vs. domestic blocks, while still allowing the various effects to be linked to the total regional footprint. Using the carbon footprint of the EU-28 in 2011 as an example, we analyse the differences between EXIOBASE, Eora, GTAP and WIOD. Identical environmental data are used across all MRIO databases. In all model comparisons, variations in domestic blocks have a more significant impact on the carbon footprint than variations in trade blocks. The results provide a wealth of information for MRIO developers and are relevant for policy makers designing climate policy measures targeted at specific stages along the product supply chain.
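The block structure SPLD operates on can be illustrated with a toy two-region, one-sector Leontief model: the technology matrix has domestic blocks on the diagonal and trade blocks off the diagonal, and the footprint follows from emission intensities applied to the Leontief inverse. All numbers below are invented; none come from the MRIO databases compared in the paper.

```python
# Minimal two-region, one-sector footprint calculation. In the 2x2
# technology matrix A, the diagonal entries are the domestic blocks
# and the off-diagonal entries are the trade blocks.

def leontief_inverse_2x2(a):
    """L = (I - A)^-1 for a 2x2 technology matrix."""
    (a11, a12), (a21, a22) = a
    m11, m12, m21, m22 = 1.0 - a11, -a12, -a21, 1.0 - a22
    det = m11 * m22 - m12 * m21
    return [[m22 / det, -m12 / det], [-m21 / det, m11 / det]]

def footprint(a, intensity, final_demand):
    """Emissions embodied in final_demand: intensity . (I - A)^-1 y."""
    L = leontief_inverse_2x2(a)
    x = [sum(L[i][j] * final_demand[j] for j in range(2)) for i in range(2)]
    return sum(intensity[i] * x[i] for i in range(2))

A = [[0.20, 0.10],   # region 1: domestic block a11, trade block a12
     [0.05, 0.30]]   # region 2: trade block a21, domestic block a22
e = [0.5, 0.8]       # emission intensities per unit output (invented)
y = [100.0, 50.0]    # final demand of the footprint region (invented)

base = footprint(A, e, y)
```

Perturbing a diagonal entry of A versus an off-diagonal entry and recomputing `footprint` mimics, in miniature, the domestic-block versus trade-block comparison the paper performs across databases.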
109

Subgradient-based Decomposition Methods for Stochastic Mixed-integer Programs with Special Structures

Beier, Eric December 2011 (has links)
The focus of this dissertation is solution strategies for stochastic mixed-integer programs with special structures. Motivation for the methods comes from the relatively sparse number of algorithms for solving stochastic mixed-integer programs. Two stage models with finite support are assumed throughout. The first contribution introduces the nodal decision framework under private information restrictions. Each node in the framework has control of an optimization model which may include stochastic parameters, and the nodes must coordinate toward a single objective in which a single optimal or close-to-optimal solution is desired. However, because of competitive issues, confidentiality requirements, incompatible database issues, or other complicating factors, no global view of the system is possible. An iterative methodology called the nodal decomposition-coordination algorithm (NDC) is formally developed in which each entity in the cooperation forms its own nodal deterministic or stochastic program. Lagrangian relaxation and subgradient optimization techniques are used to facilitate negotiation between the nodal decisions in the system without any one entity gaining access to the private information from other nodes. A computational study on NDC using supply chain inventory coordination problem instances demonstrates that the new methodology can obtain good solution values without violating private information restrictions. The results also show that the stochastic solutions outperform the corresponding expected value solutions. The next contribution presents a new algorithm called scenario Fenchel decomposition (SFD) for solving two-stage stochastic mixed 0-1 integer programs with special structure based on scenario decomposition of the problem and Fenchel cutting planes. 
The algorithm uses progressive hedging to restore nonanticipativity of the first-stage solution and generates Fenchel cutting planes for the LP relaxations of the subproblems to recover integer solutions. A computational study of SFD using instances with multiple knapsack constraint structure is given. Multiple knapsack constrained problems are chosen due to the advantages they provide when generating Fenchel cutting planes. The computational results are promising and show that SFD is able to find optimal solutions for some problem instances in a short amount of time, and that overall, SFD outperforms the brute-force method of solving the deterministic equivalent problem (DEP).
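The Lagrangian relaxation / subgradient coordination underlying NDC can be sketched on a hypothetical two-node problem (not the dissertation's supply-chain instances): a coupling constraint is dualized, each node then optimizes privately given only the multiplier, and the multiplier is updated along the constraint violation, the only information the nodes share.

```python
# Two nodes each choose an integer x in {0,...,5}, coupled by the
# shared-resource constraint x1 + x2 = 6. Dualizing it with
# multiplier lam lets each node minimize its own cost plus lam * x
# without seeing the other node's private cost function.

def node_best(cost, lam):
    # Private nodal optimization for a given multiplier (price)
    return min(range(6), key=lambda x: cost(x) + lam * x)

def coordinate(iters=100, step=0.5):
    c1 = lambda x: (x - 5) ** 2     # node 1 privately prefers x1 = 5
    c2 = lambda x: (x - 4) ** 2     # node 2 privately prefers x2 = 4
    lam = 0.0
    for k in range(1, iters + 1):
        x1 = node_best(c1, lam)
        x2 = node_best(c2, lam)
        g = x1 + x2 - 6             # subgradient: coupling violation
        lam += (step / k) * g       # diminishing step size
    return x1, x2, lam

x1, x2, lam = coordinate()
```

The multiplier settles near the dual optimum (here 3 for the continuous relaxation), while the primal iterates hover around the coupling constraint, the typical behavior of subgradient methods on nonsmooth duals.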
110

CORDIC-based Givens QR decomposition for MIMO detectors

Ren, Minzhen 13 January 2014 (has links)
The objective of the thesis research is to realize a complex-valued QR decomposition (QRD) algorithm on FPGAs for MIMO communication systems. The challenge is to implement a QRD processor that efficiently utilizes hardware resources to meet the throughput requirements of MIMO systems. By studying the basic QRD algorithm using Givens rotations and the CORDIC algorithm, the thesis develops a master-slave structure that implements CORDIC-based Givens rotations more efficiently than traditional methods. Based on the master-slave structure, a processing-element array architecture is proposed to further improve result precision and to achieve near-theoretical latency with parallelized normalization and rotations. The proposed architecture also demonstrates flexible scalability through implementations for different sizes of QRDs. The QRD implementations can process 7.41, 1.90, and 0.209 million matrices per second for two-by-two, four-by-four, and eight-by-eight QRDs, respectively. This study has built the foundation for developing QRD processors that can fulfill high throughput requirements for MIMO systems.
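The CORDIC-based Givens kernel at the heart of such architectures can be modeled in software as follows (real-valued for brevity, whereas the thesis targets complex-valued QRD in hardware): vectoring mode annihilates a subdiagonal entry while recording the micro-rotation directions, and rotation mode replays those directions on the remaining columns.

```python
import math

N_ITERS = 24
K = 1.0
for i in range(N_ITERS):
    # Cumulative CORDIC gain correction: product of 1/sqrt(1 + 2^-2i)
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_vector(x, y):
    """Vectoring mode: rotate (x, y) onto the x-axis using only
    shift-and-add micro-rotations, recording each direction."""
    sigmas = []
    for i in range(N_ITERS):
        s = -1 if y > 0 else 1
        x, y = x - s * y * 2.0 ** -i, y + s * x * 2.0 ** -i
        sigmas.append(s)
    return K * x, sigmas              # K*x approximates the vector norm

def cordic_rotate(x, y, sigmas):
    """Rotation mode: replay the recorded micro-rotations on another
    pair of values (the same two rows, a different column)."""
    for i, s in enumerate(sigmas):
        x, y = x - s * y * 2.0 ** -i, y + s * x * 2.0 ** -i
    return K * x, K * y

# Givens step of a 2x2 QR: vector on the first column to zero the
# subdiagonal, then rotate the second column by the same angle.
a = [[3.0, 1.0],
     [4.0, 2.0]]
r11, sig = cordic_vector(a[0][0], a[1][0])
r12, r22 = cordic_rotate(a[0][1], a[1][1], sig)
```

Sharing one vectoring result across many rotation units is what a master-slave arrangement exploits: the master computes the rotation once, and slaves replay the micro-rotation directions in parallel on the other columns.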
