  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Lithium availability and future production outlooks

Vikström, Hanna, Davidsson, Simon, Höök, Mikael January 2013 (has links)
Lithium is a highly interesting metal, in part due to the increasing interest in lithium-ion batteries. Several recent studies have used different methods to estimate whether lithium production can meet an increasing demand, especially from the transport sector, where lithium-ion batteries are the most likely technology for electric cars. Reserve and resource estimates of lithium vary greatly between studies, and the question of whether annual lithium production rates can meet a growing demand is seldom adequately explained. This study presents a review and compilation of recent estimates of the quantities of lithium available for exploitation and discusses the uncertainty in, and differences between, these estimates. Mathematical curve-fitting models are also used to estimate possible future annual production rates. These estimated production rates are compared to the potential increase in lithium demand if the International Energy Agency's Blue Map Scenarios regarding electrification of the car fleet are fulfilled. We find that the availability of lithium could in fact be a problem for fulfilling this scenario if lithium-ion batteries are to be used. This indicates that other battery technologies might have to be implemented to enable an electrification of road transport.
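The curve-fitting approach mentioned in the abstract can be illustrated with a minimal sketch: fitting a logistic cumulative-production curve (whose derivative is a Hubbert-style bell-shaped annual production curve) with scipy. All numbers are invented for illustration; they are not actual lithium statistics or the authors' model.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_cumulative(t, urr, k, t0):
    """Logistic growth of cumulative production toward an ultimate
    recoverable resource (URR), with growth rate k and midpoint year t0."""
    return urr / (1.0 + np.exp(-k * (t - t0)))

def hubbert_annual(t, urr, k, t0):
    """Annual production: the derivative of the logistic cumulative curve."""
    q = logistic_cumulative(t, urr, k, t0)
    return k * q * (1.0 - q / urr)

# Synthetic cumulative-production data (illustrative numbers only):
# URR = 30 Mt, production peaking around 2050.
years = np.arange(1990, 2101)
true_params = (30.0, 0.08, 2050.0)
data = logistic_cumulative(years, *true_params)

fit, _ = curve_fit(logistic_cumulative, years, data,
                   p0=(20.0, 0.05, 2040.0), maxfev=10000)
print(f"fitted URR = {fit[0]:.1f} Mt, peak year = {fit[2]:.0f}")
```

On noiseless synthetic data the fit recovers the generating parameters; on real production series, the sensitivity of the fitted URR to the data window is exactly the kind of uncertainty the study discusses.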
132

A survey of recent methods for solving project scheduling problems

Rehm, Markus, Thiede, Josefine 05 December 2012 (has links) (PDF)
This paper analyses the current state of research on solution methods for resource-constrained project scheduling problems. The intention is to present a concentrated survey and brief scientific overview of models, their decision variables and constraints, as well as current solution methods in the field of project scheduling. Allocating scarce resources among multiple projects in order to achieve an optimal schedule that meets all of the projects' (usually differing) objectives is a highly difficult problem. Such projects, e.g. the assembly of complex machinery and goods, consume many renewable (e.g. workforce/staff) and non-renewable (e.g. project budget) resources. Each single process within these projects can often be performed in different ways; so-called execution modes can help to make a schedule feasible, but they also dramatically increase the number of potential solutions. Additional constraints, e.g. min/max time lags, preemption or specific precedence relations of activities, lead to highly complex problems which are NP-hard in the strong sense.
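A minimal sketch of the kind of problem surveyed here: a serial schedule generation scheme (a standard RCPSP building block) on a toy single-resource instance. The instance data are invented; practical methods layer priority rules, execution modes, and metaheuristics on top of this core.

```python
# Assumed toy instance (not from the paper): activity durations,
# demands on one renewable resource, and precedence constraints.
durations = {1: 3, 2: 2, 3: 4, 4: 2}
demand    = {1: 2, 2: 1, 3: 2, 4: 1}
preds     = {1: [], 2: [1], 3: [1], 4: [2, 3]}
CAPACITY  = 3

def serial_sgs(durations, demand, preds, capacity):
    """Serial schedule generation scheme: take activities in a
    precedence-feasible order and start each at the earliest time where
    all predecessors have finished and resource usage stays within
    capacity for the activity's whole duration."""
    horizon = sum(durations.values())
    usage = [0] * (horizon + 1)          # resource profile per period
    start = {}
    for act in sorted(durations):        # IDs are topologically ordered here
        earliest = max((start[p] + durations[p] for p in preds[act]),
                       default=0)
        t = earliest
        while any(usage[tau] + demand[act] > capacity
                  for tau in range(t, t + durations[act])):
            t += 1                       # shift right until feasible
        for tau in range(t, t + durations[act]):
            usage[tau] += demand[act]
        start[act] = t
    return start

schedule = serial_sgs(durations, demand, preds, CAPACITY)
makespan = max(schedule[a] + durations[a] for a in schedule)
print(schedule, "makespan:", makespan)   # makespan 9 for this instance
```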
133

On Efficient Semidefinite Relaxations for Quadratically Constrained Quadratic Programming

Ding, Yichuan 17 May 2007 (has links)
Two important topics in the study of Quadratically Constrained Quadratic Programming (QCQP) are how to exactly solve a QCQP with few constraints in polynomial time and how to find an inexpensive and strong relaxation bound for a QCQP with many constraints. In this thesis, we first review some important results on QCQP, such as the S-procedure and the strength of the Lagrangian and semidefinite relaxations. We then focus on two special classes of QCQP, whose objective and constraint functions take the form trace(X^TQX + 2C^T X) + β, and trace(X^TQX + XPX^T + 2C^T X) + β respectively, where X is an n by r real matrix. For each class of problems, we propose different semidefinite relaxation formulations and compare their strength. The theoretical results obtained in this thesis have found interesting applications, e.g., solving the Quadratic Assignment Problem.
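For intuition on relaxation strength, here is a sketch (standard material, not the thesis's formulations) of the classical single-constraint case: for the trust-region-type QCQP max x^TQx subject to x^Tx ≤ 1, the semidefinite relaxation max trace(QY) s.t. trace(Y) ≤ 1, Y ⪰ 0 is tight, an S-procedure consequence, and reduces to an eigenvalue computation.

```python
import numpy as np

# Single-constraint QCQP:  max x^T Q x  s.t.  x^T x <= 1.
# Its SDP relaxation is tight: the optimum equals lambda_max(Q),
# attained at the rank-one matrix Y = v v^T, where v is the top
# eigenvector -- and x = v is already feasible for the original QCQP,
# so there is no relaxation gap.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = (A + A.T) / 2                     # symmetric objective matrix

eigvals, eigvecs = np.linalg.eigh(Q)  # ascending eigenvalues
v = eigvecs[:, -1]                    # top eigenvector, ||v|| = 1
sdp_value = eigvals[-1]               # relaxation optimum = lambda_max(Q)

Y = np.outer(v, v)                    # rank-one optimal SDP solution
qcqp_value = v @ Q @ v                # objective at the feasible point x = v

print(f"SDP bound {sdp_value:.6f} == QCQP value {qcqp_value:.6f}")
```

With many constraints this exactness generally fails, which is why comparing the strength of different SDP formulations, as the thesis does, matters.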
134

Optimal Vibration Control in Structures using Level set Technique

Ansari, Masoud 24 September 2013 (has links)
Vibration control is inevitable in many fields, including mechanical and civil engineering. This matter becomes more crucial for lightweight systems, like those made of magnesium. One of the most commonly practiced methods in vibration control is to apply constrained layer damping (CLD) patches to the surface of a structure. In order to consider the weight efficiency of the structure, the best shape and locations of the patches should be determined to achieve the optimum vibration suppression with the lowest amount of damping patch. In most research work done so far, the shape of the patches is assumed to be known and only their optimum locations are found. However, the shape of the patches plays an important role in vibration suppression and should be included in the overall optimization procedure. In this research, a novel topology optimization approach is proposed. This approach is capable of finding the optimum shape and locations of the patches simultaneously for a given surface area. In other words, the damping optimization will be formulated in the context of the level set technique, which is a numerical method used to track shapes and locations concurrently. Although the level set technique offers several key benefits, its application, especially in time-varying problems, is somewhat cumbersome. To overcome this issue, a unique programming technique is suggested that utilizes MATLAB© and COMSOL© simultaneously. Different 2D structures will be considered and CLD patches will be optimally located on them to achieve the highest modal loss factor. Optimization will be performed with different amounts of damping patches to check the effectiveness of the technique. In all cases, certain constraints are imposed to ensure that the amount of damping material remains constant and equal to the starting value. Furthermore, different natural frequencies will be targeted in the damping optimization, and their effects will also be explained.
The level set optimization technique will then be expanded to 3D structures, and a novel approach will be presented for defining an efficient 4D level set function to initialize the optimization process. Vibrations of a satellite dish will be optimally suppressed using CLD patches. The dependency of the optimum shape and location of the patches on different model parameters, such as natural frequencies and the initial starting point, will be examined. In another practical example, excessive vibrations of an automotive dash panel will be minimized by adding damping materials, and their optimal distribution will be found. Finally, the accuracy of the proposed method will be experimentally confirmed through lab tests on a rectangular plate with nonsymmetrical boundary conditions. Different damping configurations, including the optimum one, will be tested. It will be shown that the optimum damping configuration found via the level set technique possesses the highest loss factor and reveals the best vibration attenuation. The proposed level set topology optimization method shows a high capability of determining the optimum damping set in structures. The effective coding method presented in this research will make it possible to easily extend this method to other physical problems such as image processing, heat transfer, magnetic fields, etc. Being interconnected, the physical part will be modeled in a finite element package like COMSOL, and the optimization advances by means of the Hamilton-Jacobi partial differential equation. Thus, the application of the proposed method is not confined to damping optimization and can be expanded to many engineering problems.
In summary, this research:
- offers a general solution to 2D and 3D CLD applications and simultaneously finds the best shape and location of the patches for a given surface area (damping material);
- extends the level set technique to concurrent shape and location optimization;
- proposes a new numerical implementation to handle level set optimization problems in any complicated structure;
- makes it possible to perform level set optimization in time-dependent problems;
- extends the level set approach to higher-order problems.
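The level set machinery underlying this approach can be sketched generically (illustrative only, not the thesis implementation): a shape is stored implicitly as the region {φ < 0} and evolved through the Hamilton-Jacobi equation φ_t + V|∇φ| = 0 with a first-order upwind scheme; in the damping application, V would come from shape sensitivities of the modal loss factor rather than being a constant.

```python
import numpy as np

n = 101
x = np.linspace(-1, 1, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.5      # signed distance to a circle, r = 0.5

def evolve(phi, V, dt, steps, h):
    """Advance phi_t + V |grad phi| = 0 with Godunov upwinding
    (constant normal speed V; periodic boundaries via np.roll)."""
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=1)) / h   # backward differences
        dxp = (np.roll(phi, -1, axis=1) - phi) / h  # forward differences
        dym = (phi - np.roll(phi, 1, axis=0)) / h
        dyp = (np.roll(phi, -1, axis=0) - phi) / h
        if V > 0:
            grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                           np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
        else:
            grad = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                           np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
        phi = phi - dt * V * grad
    return phi

area_before = (phi < 0).sum() * h * h
phi = evolve(phi, V=1.0, dt=0.005, steps=20, h=h)  # V > 0 grows the shape
area_after = (phi < 0).sum() * h * h
print(f"area grew from {area_before:.3f} to {area_after:.3f}")
```

The same update loop, with V supplied by a finite element solver at each step, is the pattern the MATLAB/COMSOL coupling described above would follow.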
135

An Approach for the Adaptive Solution of Optimization Problems Governed by Partial Differential Equations with Uncertain Coefficients

Kouri, Drew 05 September 2012 (has links)
Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive due to the large number of PDE solves required at each iteration. In this thesis, I present an adaptive stochastic collocation framework for the discretization and numerical solution of these PDE-constrained optimization problems. This adaptive approach is based on dimension-adaptive sparse grid interpolation and employs trust regions to manage the adapted stochastic collocation models. Furthermore, I prove the convergence of sparse grid collocation methods applied to these optimization problems as well as the global convergence of the retrospective trust region algorithm under weakened assumptions on gradient inexactness. In fact, if one can bound the error between actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. Moreover, I describe a high performance implementation of my adaptive collocation and trust region framework using the C++ programming language with the Message Passing Interface (MPI). Many PDE solves are required to accurately quantify the uncertainty in such optimization problems; it is therefore essential to appropriately choose inexpensive approximate models and large-scale nonlinear programming techniques throughout the optimization routine. Numerical results for the adaptive solution of these optimization problems are presented.
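The collocation idea can be sketched in one stochastic dimension (illustrative only, not the adaptive sparse-grid machinery of the thesis): the expectation of a quantity of interest f(ξ), ξ ~ N(0, 1), is approximated by evaluating f at a handful of collocation points, here Gauss-Hermite nodes. In the PDE setting, each evaluation f(node) would be one deterministic PDE solve, which is why keeping the node count small matters.

```python
import numpy as np

# Gauss-Hermite nodes/weights for weight exp(-x^2); 5 collocation points
# integrate polynomials up to degree 9 exactly.
nodes, weights = np.polynomial.hermite.hermgauss(5)

def expectation(f):
    # Change of variables for a standard normal:
    #   E[f(xi)] = (1/sqrt(pi)) * sum_i w_i f(sqrt(2) x_i)
    return np.sum(weights * f(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

print(expectation(lambda z: z**2))        # E[xi^2] = 1, exact here
print(expectation(lambda z: np.cos(z)))   # E[cos(xi)] = exp(-1/2)
```

Sparse grids extend this to many uncertain coefficients by combining such one-dimensional rules while avoiding the full tensor-product explosion.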
136

Characterization of Rate Region and User Removal in Interference Channels with Constrained Power

Hajar, Mahdavidoost January 2007 (has links)
Channel sharing is a key approach to satisfying the increasing demand for spectrally efficient communication. In the channel sharing technique, several users concurrently communicate through a shared wireless medium. In such a scheme, the interference of users with each other is the main source of impairment. Performance evaluation and signaling design in the presence of such interference are known to be challenging problems. In this thesis, a system including $n$ parallel interfering AWGN transmission paths is considered, where the powers of the transmitters are subject to upper bounds. For such a system, we obtain a closed form for the boundaries of the rate region based on the Perron-Frobenius eigenvalue of certain non-negative matrices. While the boundary of the rate region for the case of unconstrained power is a well-established result, this is the first such result for the case of constrained power. This result is utilized to develop an efficient user removal algorithm for congested networks. In these networks, it may not be possible for all users to attain a required Quality of Service (QoS). In this case, the solution is to remove some of the users from the set of active ones. The problem of finding the set of removed users with minimum cardinality is claimed to be NP-complete. In this thesis, a novel sub-optimal removal algorithm is proposed, which relies on the boundary of the rate region derived in the first part of the thesis. Simulation results show that the proposed algorithm outperforms other known schemes.
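The role of the Perron-Frobenius eigenvalue can be sketched with the classical unconstrained-power feasibility test from interference-limited power control (standard background, simplified from the thesis setting; all gain, noise, and cap numbers below are assumed): SINR targets γ are achievable iff the spectral radius of D·F is below 1, and with power caps one additionally checks the resulting minimal powers against the caps.

```python
import numpy as np

G = np.array([[1.0, 0.1, 0.2],
              [0.1, 1.0, 0.1],
              [0.2, 0.1, 1.0]])       # link gains (illustrative values)
gamma = np.array([2.0, 2.0, 2.0])     # SINR targets
sigma = np.array([0.1, 0.1, 0.1])     # receiver noise
p_max = np.array([1.0, 1.0, 1.0])     # per-transmitter power caps

D = np.diag(gamma / np.diag(G))       # normalized targets
F = G - np.diag(np.diag(G))           # cross-interference gains, F_ii = 0
rho = max(abs(np.linalg.eigvals(D @ F)))   # Perron-Frobenius eigenvalue

if rho < 1:
    # Minimal powers meeting every SINR target with equality
    p = np.linalg.solve(np.eye(3) - D @ F, D @ sigma)
    print(f"rho = {rho:.3f}; feasible under caps: {bool(np.all(p <= p_max))}")
else:
    print(f"rho = {rho:.3f}: targets infeasible even with unbounded power")
```

When ρ approaches 1, the required powers blow up and the caps bind first, which is exactly where the constrained-power boundary characterized in the thesis departs from the classical unconstrained result.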
138

Statistical Learning in Drug Discovery via Clustering and Mixtures

Wang, Xu January 2007 (has links)
In drug discovery, thousands of compounds are assayed to detect activity against a biological target. The goal of drug discovery is to identify compounds that are active against the target (e.g. inhibit a virus). Statistical learning in drug discovery seeks to build a model that uses descriptors characterizing molecular structure to predict biological activity. However, the characteristics of drug discovery data can make it difficult to model the relationship between molecular descriptors and biological activity. Among these characteristics are the rarity of active compounds, the large volume of compounds tested by high-throughput screening, and the complexity of molecular structure and its relationship to activity. This thesis focuses on the design of statistical learning algorithms/models and their applications to drug discovery. The two main parts of the thesis are: an algorithm-based statistical method and a more formal model-based approach. Both approaches can facilitate and accelerate the process of developing new drugs. A unifying theme is the use of unsupervised methods as components of supervised learning algorithms/models. In the first part of the thesis, we explore a sequential screening approach, Cluster Structure-Activity Relationship Analysis (CSARA). Sequential screening integrates high-throughput screening with mathematical modeling to sequentially select the best compounds. CSARA is a cluster-based and algorithm-driven method. To gain further insight into this method, we use three carefully designed experiments to compare its predictive accuracy with Recursive Partitioning, a popular structure-activity relationship analysis method. The experiments show that CSARA outperforms Recursive Partitioning. Comparisons include problems with many descriptor sets and situations in which many descriptors are not important for activity. In the second part of the thesis, we propose and develop constrained mixture discriminant analysis (CMDA), a model-based method.
The main idea of CMDA is to model the distribution of the observations given the class label (e.g. active or inactive) as a constrained mixture distribution, and then use Bayes' rule to predict the probability of being active for each observation in the testing set. Constraints are used to deal with the otherwise explosive growth of the number of parameters with increasing dimensionality. CMDA is designed to address several challenges in modeling drug data sets, such as multiple mechanisms, the rare target problem (i.e. imbalanced classes), and the identification of relevant subspaces of descriptors (i.e. variable selection). We focus on the CMDA1 model, in which univariate densities form the building blocks of the mixture components. Due to the unboundedness of the CMDA1 log-likelihood function, it is easy for the EM algorithm to converge to degenerate solutions. A special multi-step EM algorithm is therefore developed and explored via several experimental comparisons. Using the multi-step EM algorithm, the CMDA1 model is compared to model-based clustering discriminant analysis (MclustDA). The CMDA1 model is either superior to or competitive with the MclustDA model, depending on which model generates the data. The CMDA1 model performs better than the MclustDA model when the data are high-dimensional and unbalanced, an essential feature of the drug discovery problem. An alternative approach to the problem of degeneracy is penalized estimation. By introducing a group of simple penalty functions, we consider penalized maximum likelihood estimation of the CMDA1 and CMDA2 models. This strategy improves the convergence of the conventional EM algorithm and helps avoid degenerate solutions. Extending techniques from Chen et al. (2007), we prove that the PMLEs of the two-dimensional CMDA1 model are asymptotically consistent.
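The degeneracy issue can be illustrated with a generic univariate two-component Gaussian-mixture EM (not the CMDA1 code): the likelihood is unbounded as one component's variance collapses onto a single data point, and flooring the variances each M-step is one crude safeguard, a simpler stand-in for the multi-step EM and penalized-likelihood strategies described above.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

def em_gmm(x, iters=100, var_floor=1e-3):
    """EM for a 2-component univariate Gaussian mixture, with a variance
    floor in the M-step to block degenerate (zero-variance) solutions."""
    mu = np.array([x.min(), x.max()])       # crude spread-out initialization
    var = np.array([1.0, 1.0])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibilities of each component per point
        dens = (pi / np.sqrt(2 * np.pi * var) *
                np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates, with the variance floor applied
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk,
                         var_floor)
    return pi, mu, var

pi, mu, var = em_gmm(data)
print(f"estimated means ~ {np.sort(mu).round(2)}")   # near the true -2 and 3
```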
139

Efficient Cryptographic Algorithms and Protocols for Mobile Ad Hoc Networks

Fan, Xinxin 12 April 2010 (has links)
As the next evolutionary step in digital communication systems, mobile ad hoc networks (MANETs) and specializations such as wireless sensor networks (WSNs) have been attracting much interest in both the research and industry communities. In MANETs, network nodes can come together and form a network without depending on any pre-existing infrastructure or human intervention. Unfortunately, the salient characteristics of MANETs, in particular the absence of infrastructure and the constrained resources of mobile devices, present enormous challenges when designing security mechanisms in this environment. Without the necessary measures, wireless communications are easily intercepted and the activities of users easily traced. This thesis presents our solutions for two important aspects of securing MANETs, namely efficient key management protocols and fast implementations of cryptographic primitives on constrained devices. Due to the tight cost and constrained resources of high-volume mobile devices used in MANETs, it is desirable to employ lightweight and specialized cryptographic primitives for many security applications. Motivated by the design of the well-known Enigma machine, we present a novel ultra-lightweight cryptographic algorithm, referred to as Hummingbird, for resource-constrained devices. Hummingbird provides the designed level of security with a small block size and is resistant to the most common attacks, such as linear and differential cryptanalysis. Furthermore, we also present efficient software implementations of Hummingbird on 4-, 8- and 16-bit microcontrollers from Atmel and Texas Instruments, as well as efficient hardware implementations on low-cost field programmable gate arrays (FPGAs) from Xilinx.
Our experimental results show that, after a system initialization phase, Hummingbird can achieve up to 147 and 4.7 times faster throughput for a size-optimized and a speed-optimized software implementation, respectively, when compared to the state-of-the-art ultra-lightweight block cipher PRESENT on similar platforms. In addition, the speed-optimized Hummingbird encryption core can achieve a throughput of 160.4 Mbps, and the area-optimized encryption core occupies only 253 slices on a Spartan-3 XC3S200 FPGA device. Bilinear pairings on the Jacobians of (hyper-)elliptic curves have received considerable attention as a building block for constructing cryptographic schemes in MANETs with new and novel properties. Motivated by the work of Scott, we investigate how to use efficiently computable automorphisms to speed up pairing computations on two families of non-supersingular genus 2 hyperelliptic curves over prime fields. Our findings lead to new variants of Miller's algorithm in which the length of the main loop can be up to 4 times shorter than that of the original Miller's algorithm in the best case. We also generalize Chatterjee et al.'s idea of encapsulating the computation of the line function with the group operations to genus 2 hyperelliptic curves, and derive new explicit formulae for the group operations in projective and new coordinates in the context of pairing computations. Efficient software implementation of computing the Tate pairing on both a supersingular and a non-supersingular genus 2 curve with the same embedding degree of k = 4 is investigated. Combining the new algorithm with known optimization techniques, we show that pairing computations on non-supersingular genus 2 curves over prime fields use up to 55.8% fewer field operations and run about 10% faster than on supersingular genus 2 curves at the same security level.
As an important part of a key management mechanism, an efficient key revocation protocol, which revokes the cryptographic keys of malicious nodes and isolates them from the network, is crucial for the security and robustness of MANETs. We propose a novel self-organized key revocation scheme for MANETs based on the Dirichlet multinomial model and identity-based cryptography. Firmly rooted in statistics, our key revocation scheme provides a theoretically sound basis for nodes to analyze and predict peers' behavior based on their own observations and other nodes' reports. Considering the differences among malicious behaviors, we propose classifying node behavior into three categories, namely good, suspicious and malicious. Each node in the network keeps track of the three behavior categories and updates its knowledge about other nodes' behavior with a 3-dimensional Dirichlet distribution. Based on its own analysis, each node is able to protect itself from malicious attacks by either revoking the keys of nodes with malicious behavior or ceasing communication with nodes showing suspicious behavior for some time. The attack-resistant properties of the resulting scheme against false accusation attacks launched by independent and collusive adversaries are also analyzed through extensive simulations. In WSNs, broadcast authentication is a crucial security mechanism that allows a multitude of legitimate users to join in and disseminate messages into the networks in a dynamic and authenticated way. During the past few years, several public-key based multi-user broadcast authentication schemes have been proposed in the literature to achieve immediate authentication and to address the security vulnerability intrinsic to μTESLA-like schemes. Unfortunately, the relatively slow signature verification in signature-based broadcast authentication has also incurred a series of problems such as high energy consumption and long verification delay.
We propose an efficient technique to accelerate signature verification in WSNs through cooperation among sensor nodes. By allowing some sensor nodes to release intermediate computation results to their neighbors during signature verification, a large number of sensor nodes can accelerate their signature verification process significantly. When applying our faster signature verification technique to broadcast authentication in a 4×4 grid-based WSN, a quantitative performance analysis shows that our scheme needs 17.7%–34.5% less energy and runs about 50% faster than the traditional signature verification method.
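The Dirichlet-multinomial bookkeeping behind the revocation scheme can be sketched as follows (illustrative only, not the thesis protocol; the prior, the observation sequence, and the revocation threshold are all assumed values): each node keeps Dirichlet pseudo-counts over the three behavior categories, updates them conjugately with observations, and thresholds the posterior probability of malicious behavior.

```python
import numpy as np

CATEGORIES = ("good", "suspicious", "malicious")

class PeerRecord:
    """Dirichlet-multinomial reputation record for one peer."""

    def __init__(self):
        self.alpha = np.ones(3)          # uniform Dirichlet(1, 1, 1) prior

    def observe(self, category):
        """Conjugate update: each observation adds one pseudo-count."""
        self.alpha[CATEGORIES.index(category)] += 1

    def predicted_probs(self):
        """Posterior-mean probability of each behavior category."""
        return self.alpha / self.alpha.sum()

    def should_revoke(self, threshold=0.5):
        """Revoke when the predicted malicious probability exceeds
        the (assumed) threshold."""
        return self.predicted_probs()[2] > threshold

peer = PeerRecord()
for obs in ["good", "malicious", "malicious", "malicious", "suspicious"]:
    peer.observe(obs)
print(peer.predicted_probs(), peer.should_revoke())
```

The actual scheme additionally weighs other nodes' reports and the passage of time into the counts; the conjugate update shown here is the statistical core.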
140

Evaluation and implementation of neural brain activity detection methods for fMRI

Breitenmoser, Sabina January 2005 (has links)
Functional Magnetic Resonance Imaging (fMRI) is a neuroimaging technique used to study brain functionality and enhance our understanding of the brain. The technique is based on MRI, a painless, noninvasive image acquisition method without harmful radiation. Small local blood oxygenation changes, which are reflected as small intensity changes in the MR images, are utilized to locate the active brain areas. Radio frequency pulses and a strong static magnetic field are used to measure the correlation between physical changes in the brain and mental functioning during the performance of cognitive tasks. This master's thesis presents approaches for the analysis of fMRI data. Constrained Canonical Correlation Analysis (CCA), which is able to exploit the spatio-temporal nature of an active area, is presented and tested on real human fMRI data. The actual distribution of active brain voxels is not known in the case of real human data. To evaluate the performance of the diagnostic algorithms applied to real human data, a modified Receiver Operating Characteristic (modified ROC) analysis, which deals with this lack of knowledge, is presented. The tests on real human data reveal better detection efficiency with the constrained CCA algorithm. A second aim of this thesis was to implement the promising technique of constrained CCA in the software environment SPM. To integrate the constrained CCA algorithms into the fMRI part of SPM2, a toolbox containing Matlab functions has been programmed for further use by neuroscientists. The new SPM functionalities to exploit the spatial extent of the active regions with CCA are presented and tested.
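Generic CCA (without the spatial constraints of the thesis) can be computed from the singular values of the whitened cross-covariance, the Björck-Golub QR/SVD formulation; a minimal sketch with synthetic data, where in the fMRI setting X would hold voxel time series and Y the temporal model:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations of two data blocks (rows = observations):
    orthonormalize each centered block with QR (a whitening step), then
    the singular values of Qx^T Qy are the canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

# Synthetic example: one column of X is a noisy copy of a signal in Y,
# so the leading canonical correlation should be close to 1.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)
signal = np.sin(t)
X = np.column_stack([signal + 0.1 * rng.standard_normal(200),
                     rng.standard_normal(200)])
Y = np.column_stack([signal, np.cos(t)])
print(canonical_correlations(X, Y).round(3))   # leading value near 1
```

The constrained variant described above restricts the voxel-side weights (e.g. to nonnegative spatial filters), which is what lets it exploit the spatial extent of an active region.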
