  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

A study of chemical reaction optimization

Xu, Jin, 徐进 January 2012
Complex optimization problems are prevalent in various fields of science and engineering. However, many of them belong to a category of problems called NP-hard (nondeterministic polynomial-time hard). On the other hand, owing to their powerful capability in solving a myriad of complex optimization problems, metaheuristic approaches have attracted great attention in recent decades. Chemical Reaction Optimization (CRO) is a recently developed metaheuristic mimicking the interactions of molecules in a chemical reaction. With its flexible structure and excellent search characteristics, CRO can explore the solution space efficiently and identify the optimal or near-optimal solution(s) within an acceptable time. Our research not only designs different versions of CRO and applies them to tackle various NP-hard optimization problems, but also investigates theoretical aspects of CRO in terms of convergence and finite-time behavior. We first focus on the problem of task scheduling in grid computing, which involves seeking the most efficient strategy for allocating tasks to resources. In addition to makespan and flowtime, we also take resource reliability into account, and task scheduling is formulated as an optimization problem with three objective functions. Four different kinds of CRO are then designed to solve this problem. Simulation results show that the CRO methods generally perform better than existing methods, and the performance improvement is especially significant in large-scale applications. Secondly, we study stock portfolio selection, which pertains to deciding how to allocate investments among a number of stocks. Here we adopt the classical Markowitz mean-variance model and consider an additional cardinality constraint. Thus, stock portfolio optimization becomes a mixed-integer quadratic programming problem. To solve it, we propose a new version of CRO named Super Molecule-based CRO (S-CRO).
Computational experiments suggest that S-CRO is superior to canonical CRO in solving this problem. Thirdly, we apply CRO to the short adjacent repeats identification problem (SARIP), which involves detecting the short adjacent repeats shared by multiple DNA sequences. After proving that SARIP is NP-hard, we test CRO with both synthetic and real data, and compare its performance with that of BASARD, the previous best algorithm for this problem. Simulation results show that CRO performs much better than BASARD in terms of computational time and the ability to find the optimal solution. We also propose a parallel version of CRO (named PCRO) with a synchronous communication scheme. To test its efficiency, we employ PCRO to solve the Quadratic Assignment Problem (QAP), a classical combinatorial optimization problem. Simulation results show that, compared with canonical sequential CRO, PCRO can reduce the computational time as well as improve the solution quality on large QAP instances. Finally, we perform theoretical analysis of the convergence and finite-time behavior of CRO for combinatorial optimization problems. We explore CRO convergence from two aspects, namely, the elementary reactions and the total system energy. Furthermore, we also investigate the finite-time behavior of CRO with respect to convergence rate and first hitting time. / published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
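[Editor's illustration] The elementary-reaction search described in this abstract can be sketched in a few lines. The following is a minimal sketch, not the thesis's algorithm: it keeps only the on-wall ineffective collision (a local perturbation buffered by each molecule's kinetic energy) and omits the decomposition, inter-molecular collision, and synthesis reactions; the test function and all parameter values are assumptions.

```python
import random

def cro_minimize(f, dim, iters=2000, pop=10, seed=0):
    # Each molecule carries a solution, its potential energy PE = f(x),
    # and a kinetic-energy buffer KE that lets it climb out of local
    # minima early on (energy conservation drives the search).
    rng = random.Random(seed)
    mols = []
    for _ in range(pop):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        mols.append([x, f(x), 1.0])
    best = min(m[1] for m in mols)
    for _ in range(iters):
        m = rng.choice(mols)
        # on-wall ineffective collision: perturb the solution locally
        x2 = [xi + rng.gauss(0.0, 0.3) for xi in m[0]]
        pe2 = f(x2)
        if m[1] + m[2] >= pe2:    # total energy can absorb the change
            m[2] = 0.9 * (m[1] + m[2] - pe2)   # 10% of KE leaks away
            m[0], m[1] = x2, pe2
            best = min(best, pe2)
    return best

sphere = lambda x: sum(xi * xi for xi in x)
print(cro_minimize(sphere, dim=3))
```

Because worse moves are accepted only while kinetic energy remains, the search anneals from exploration toward greedy descent.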
252

Fast methods for low-frequency and static EM problems

Ma, Zuhui, 馬祖輝 January 2013
Electromagnetic effects play an important role in many engineering problems. Fast and accurate numerical methods for electromagnetic analysis are highly desired in both low-frequency and static analyses. In the first part of this thesis, a low-frequency stable domain decomposition method, the augmented equivalence principle algorithm (A-EPA) with the augmented electric field integral equation (A-EFIE), is introduced for analyzing electromagnetic problems at low frequencies. The A-EFIE is first employed as an inner current solver for the EPA algorithm, which alleviates the low-frequency inaccuracy issue. This method, however, cannot completely remove the low-frequency breakdown. To overcome this, the A-EPA with A-EFIE is studied and developed so that it can solve low-frequency problems accurately. In the second part, novel Helmholtz-decomposition-based fast Poisson solvers for both 2-D and 3-D problems are introduced. These new methods are implemented through the quasi-Helmholtz decomposition technique, i.e., the loop-tree decomposition. In 2-D cases, the proposed method can achieve O(N) complexity in terms of both computational cost and memory consumption for moderate accuracy requirements. Although the computational cost becomes higher when more accurate results are needed, a multilevel method using hierarchical loop basis functions can attain the desired efficiency. The same idea can be extended to the 3-D case to develop a new generation of fast methods for electrostatic problems. / published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
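[Editor's illustration] The fast-Poisson-solver idea can be made concrete with the classical sine-basis diagonalization of the 5-point Laplacian, shown here for contrast: this is a standard textbook solver (dense transforms for clarity; fast sine transforms would give O(N log N)), not the O(N) loop-tree method developed in the thesis.

```python
import numpy as np

def poisson_sine_solve(f, h):
    # Solve the 5-point discretization of -Lap(u) = f with zero
    # Dirichlet data by diagonalizing the 1-D Laplacian in the sine
    # basis; S is orthogonal and symmetric, so S @ . @ S transforms
    # both grid directions at once.
    n = f.shape[0]
    j = np.arange(1, n + 1)
    S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(j, j) / (n + 1))
    lam = (2.0 / h**2) * (1.0 - np.cos(np.pi * j / (n + 1)))  # eigenvalues
    fhat = S @ f @ S
    uhat = fhat / (lam[:, None] + lam[None, :])
    return S @ uhat @ S

# check with a manufactured solution that is an exact grid eigenfunction
n = 40
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")
U = np.sin(np.pi * X) * np.sin(np.pi * Y)
lam1 = (2.0 / h**2) * (1.0 - np.cos(np.pi * h))
F = 2.0 * lam1 * U            # discrete Laplacian applied to U
err = np.max(np.abs(poisson_sine_solve(F, h) - U))
print(err)
```

Since the manufactured solution is an eigenvector of the discrete operator, the solver recovers it to roundoff.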
253

Analysis of some risk processes in ruin theory

Liu, Luyin, 劉綠茵 January 2013
In the literature of ruin theory, there have been extensive studies trying to generalize the classical insurance risk model. In this thesis, we look into two particular risk processes involving multi-dimensional risk and dependence structures respectively. The first one is a bivariate risk process with a dividend barrier, which concerns a two-dimensional risk model under a barrier strategy. A copula is used to represent the dependence between two business lines when a common shock strikes. Defining the time of ruin as the first time that either of the two lines has its surplus level fall below zero, we derive a discrete approximation procedure to calculate the expected discounted dividends until ruin under such a model. A thorough discussion of applications in proportional reinsurance is provided with numerical examples, as well as an examination of the joint optimal dividend barrier for the bivariate process. The second risk process is a semi-Markovian dual risk process. Assuming that the dependence among innovations and waiting times is driven by a Markov chain, we analyze a quantity resembling the Gerber-Shiu expected discounted penalty function that incorporates random variables defined before and after the time of ruin, such as the minimum surplus level before ruin and the time of the first gain after ruin. General properties of the function are studied, and some exact results are derived under distributional assumptions on either the inter-arrival times or the gain amounts. Applications to a perpetual insurance and the last inter-arrival time before ruin are given along with some numerical examples. / published_or_final_version / Statistics and Actuarial Science / Master / Master of Philosophy
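[Editor's illustration] For the classical risk model that both processes generalize, the ruin probability with exponential claims has a closed form, which makes a Monte Carlo sketch easy to check. The model and all parameters below are illustrative and refer to the classical Cramér-Lundberg process, not to the bivariate or semi-Markovian processes studied in the thesis.

```python
import random, math

def ruin_prob_mc(u, c, lam, mu, horizon=200.0, n_paths=20000, seed=1):
    # Monte Carlo ruin probability for U(t) = u + c*t - S(t), where
    # claims arrive at Poisson rate lam with Exp(mean mu) sizes.  Ruin
    # can only occur at claim instants, so the surplus is tracked
    # there; a long finite horizon approximates infinite time when the
    # safety loading is positive.
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, s = 0.0, u
        while t < horizon:
            w = rng.expovariate(lam)       # waiting time to next claim
            t += w
            s += c * w - rng.expovariate(1.0 / mu)  # premium in, claim out
            if s < 0:
                ruined += 1
                break
    return ruined / n_paths

# exponential claims admit psi(u) = (lam*mu/c) * exp(-R*u),
# with adjustment coefficient R = 1/mu - lam/c
u, c, lam, mu = 2.0, 1.5, 1.0, 1.0
exact = (lam * mu / c) * math.exp(-(1.0 / mu - lam / c) * u)
est = ruin_prob_mc(u, c, lam, mu)
print(est, exact)   # both around 0.34
```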
254

Mean variance portfolio management : time consistent approach

Wong, Kwok-chuen, 黃國全 January 2013
In this thesis, two problems of time-consistent mean-variance portfolio selection are studied: mean-variance asset-liability management with regime switching and mean-variance optimization with state-dependent risk aversion under a short-selling prohibition. Due to the non-linear expectation term in the mean-variance utility, the usual Tower Property fails to hold, and the corresponding optimal portfolio selection problem becomes time-inconsistent in the sense that it does not admit the Bellman Optimality Principle. Because of this, time-consistent equilibrium solutions of two mean-variance optimization problems are established in this thesis via a game-theoretic approach. In the first part, the time-consistent solution of mean-variance asset-liability management (MVALM) is sought. Using the extended Hamilton-Jacobi-Bellman equation for the equilibrium solution, the equilibrium feedback control of the MVALM problem and the corresponding equilibrium value function can be obtained. The equilibrium control is found to be affine in liability. Hence, the time-consistent equilibrium control of this problem is state-dependent in the sense that it depends on the uncontrollable liability process, in substantial contrast with the time-consistent solution of the simple classical mean-variance problem in Björk and Murgoci (2010), in which the control is independent of the state. In the second part, the time-consistent equilibrium strategies for mean-variance portfolio selection with state-dependent risk aversion under a short-selling prohibition are studied in both discrete and continuous time settings. Our motivation is the recent work of Björk et al. (2012), which considered the mean-variance problem with state-dependent risk aversion in the sense that the risk aversion is inversely proportional to the current wealth.
There is no short-selling restriction in their problem, and the corresponding time-consistent control was shown to be linear in wealth. However, we discovered that the counterpart of their continuous-time equilibrium control in the discrete-time framework behaves unsatisfactorily, in the sense that the corresponding “optimal” wealth process can take negative values. This negativity in wealth turns the investor into a risk seeker, which results in an unbounded value function that is economically unsound. Therefore, the discretized version of the problem in Björk et al. (2012) might yield solutions with a possibility of bankruptcy. Furthermore, such a “bankruptcy” solution can converge to the continuous-time solution of Björk et al. (2012). This means that the negative-risk-aversion drawback could appear when the solution in Björk et al. (2012) is implemented discretely in practice. This drawback urges us to prohibit short selling in order to eliminate the chance of obtaining non-positive wealth. Using backward induction, the equilibrium control in the discrete-time setting is explicitly solvable and is shown to be linear in wealth. An application of the extended Hamilton-Jacobi-Bellman equation leads us to conclude that the continuous-time equilibrium control is also linear in wealth. Moreover, the investment-to-wealth ratio satisfies an integral equation which is uniquely solvable. The discrete-time equilibrium controls are shown to converge to their continuous-time counterpart. / published_or_final_version / Mathematics / Master / Master of Philosophy
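[Editor's illustration] The "linear in wealth" form of the equilibrium control can be caricatured in one period: if risk aversion is inversely proportional to current wealth, gamma(x) = gamma/x, the single-period mean-variance dollar allocation (x/gamma) * Sigma^-1 mu scales linearly with x. The myopic one-period rule and the numbers below are illustrative assumptions, not the thesis's equilibrium strategy.

```python
import numpy as np

def mv_dollar_allocation(wealth, mu, sigma, gamma=2.0):
    # One-period mean-variance rule with risk aversion gamma(x) = gamma / x:
    # the dollar allocation (x / gamma) * Sigma^{-1} mu is linear in the
    # wealth x.
    return wealth / gamma * np.linalg.solve(sigma, mu)

mu = np.array([0.05, 0.08])                      # assumed excess returns
sigma = np.array([[0.04, 0.01], [0.01, 0.09]])   # assumed covariance
a1 = mv_dollar_allocation(100.0, mu, sigma)
a2 = mv_dollar_allocation(200.0, mu, sigma)
print(a1, a2)    # a2 is exactly twice a1: the control is linear in wealth
```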
255

Budget-limited data disambiguation

Yang, Xuan, 楊譞 January 2013
The problem of data ambiguity exists in a wide range of applications. In this thesis, we study “cost-aware” methods to alleviate data ambiguity problems in uncertain databases and social-tagging data. In database applications, ambiguous (or uncertain) data may originate from data integration and from the measurement error of devices. These ambiguous data are maintained by uncertain databases. In many situations, it is possible to “clean”, or remove, ambiguities from these databases. For example, the GPS location of a user is inexact due to measurement error, but context information (e.g., what a user is doing) can be used to reduce the imprecision of the location value. In practice, a cleaning activity often involves a cost, may fail, and may not remove all ambiguities. Moreover, the statistical information about how likely database entities can be cleaned may not be precisely known. We model the above aspects with the uncertain database cleaning problem, which requires us to make sensible decisions in selecting entities to clean in order to maximize the amount of ambiguous information removed under a limited budget. To solve this problem, we propose the Explore-Exploit (or EE) algorithm, which gathers valuable information during the cleaning process to determine how the remaining cleaning budget should be invested. We also study how to fine-tune the parameters of EE in order to achieve optimal cleaning effectiveness. Social-tagging data capture web users' textual annotations, called tags, for resources (e.g., webpages and photos). Since tags are given by casual users, they often contain noise (e.g., misspelled words) and may not cover all the aspects of each resource. In this thesis, we design a metric to systematically measure the tagging quality of each resource based on the tags it has received. We propose an incentive-based tagging framework in order to improve the tagging quality.
The main idea is to award users some incentive for giving (relevant) tags to resources. The challenge is: how should we allocate incentives to a large set of resources, so as to maximize the improvement of their tagging quality under a limited budget? To solve this problem, we propose a few efficient incentive allocation strategies. Experiments show that our best strategy provides resources with a close-to-optimal gain in tagging quality. To summarize, we study the problem of budget-limited data disambiguation for uncertain databases and social-tagging data: given a set of objects (entities from uncertain databases or web resources), how can we make sensible decisions about which objects to “disambiguate” (i.e., perform a cleaning activity on an entity or ask a user to tag a resource), in order to maximize the amount of ambiguous information reduced under a limited budget? / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
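[Editor's illustration] The explore/exploit trade-off behind the EE algorithm can be sketched as a bandit-style loop: spend the budget one attempt at a time, balancing exploration of entities whose cleaning success rates are unknown against exploitation of the best rate seen so far. This epsilon-greedy sketch and its made-up success probabilities mirror only the general idea, not EE's actual bookkeeping or parameter tuning.

```python
import random

def ee_clean(true_p, budget, eps=0.2, seed=0):
    # true_p[i]: unknown probability that one cleaning attempt on
    # entity i succeeds.  With probability eps explore a random entity;
    # otherwise exploit the best empirical success rate (untried
    # entities are treated optimistically so each gets a first try).
    rng = random.Random(seed)
    n = len(true_p)
    succ, tried, cleaned = [0] * n, [0] * n, 0
    for _ in range(budget):
        if rng.random() < eps:
            i = rng.randrange(n)
        else:
            i = max(range(n),
                    key=lambda j: succ[j] / tried[j] if tried[j] else 1.0)
        tried[i] += 1
        if rng.random() < true_p[i]:
            succ[i] += 1
            cleaned += 1
    return cleaned

print(ee_clean([0.9, 0.1, 0.2], budget=500))
```

Most of the budget ends up on the entity that is easiest to clean, which is the behavior the EE idea is after.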
256

Spatio-temporal modeling and forecasting of air quality data

Yan, Tsz-leung, 甄子良 January 2014
Respirable Suspended Particulate (RSP) time series sampled in an air quality monitoring network are found to be strongly correlated and to vary in highly similar patterns. This study provides a methodology for spatio-temporal modeling and forecasting of multiple RSP time series, in which the dynamic spatial correlations amongst the series can be effectively utilized. The efficacy of the Spatio-Temporal Dynamic Harmonic Regression (STDHR) model is demonstrated. Based on the decomposition of the observed time series into trend and periodic components, the model is capable of forecasting RSP data series that exhibit varying patterns during air pollution episodes and typhoons with dynamic weather conditions. It is also capable of producing spatial predictions of RSP time series at up to three unobserved sites. The Noise-variance-ratio (NVR) form of the multivariate recursive algorithm (the (M2) algorithm) derived by the author greatly facilitates its practical application in both multivariate and univariate time series analysis. The (M2) algorithm allows the spatial correlations to be specified at the parametric level. The state-space (SS) model formulation can flexibly accommodate the existing inter- or intra-(auto)correlations amongst the parameters of the data series. Applications of variance intervention (VI) are exploited and illustrated with a real-life case study involving the forecasting of RSP data series during an air pollution episode. This illustrates that time series with abrupt changes can be predicted by automatic implementation of the VI approach. The present study also extends the anisotropic Matérn model to estimate the dynamic spatial correlation structure of the air quality data by using mean wind speed and prevailing wind direction to define the spatial anisotropy.
The Anisotropic Matérn model by Mean Wind Speed and Prevailing Wind Direction (AMMP), devised by the author, avoids the huge computational burden of re-estimating the variogram at every variation of the underlying spatial structure. Finally, the findings of this dissertation lay the foundation for further research on multiple time series analysis and the estimation of dynamic spatial structure. / published_or_final_version / Geography / Doctoral / Doctor of Philosophy
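[Editor's illustration] One concrete way to let wind define spatial anisotropy is to rotate separation vectors into the prevailing-wind frame and stretch the along-wind axis before applying a Matérn covariance. The nu = 3/2 Matérn and the rotation-plus-stretch parameterization below are a generic sketch of this idea; the AMMP model's exact parameterization may differ.

```python
import math

def matern32(h, sill=1.0, scale=1.0):
    # Matérn covariance with smoothness nu = 3/2
    a = math.sqrt(3.0) * h / scale
    return sill * (1.0 + a) * math.exp(-a)

def aniso_dist(dx, dy, wind_dir_deg, stretch):
    # Rotate the separation vector into the prevailing-wind frame and
    # stretch the along-wind axis, so correlation decays more slowly
    # along the wind than across it.
    th = math.radians(wind_dir_deg)
    along = math.cos(th) * dx + math.sin(th) * dy
    cross = -math.sin(th) * dx + math.cos(th) * dy
    return math.hypot(along / stretch, cross)

# wind blowing along +x with a 3:1 anisotropy ratio
c_along = matern32(aniso_dist(1.0, 0.0, 0.0, stretch=3.0))
c_cross = matern32(aniso_dist(0.0, 1.0, 0.0, stretch=3.0))
print(c_along, c_cross)   # along-wind correlation is the larger one
```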
257

Nonlinear circuits modeling and analysis by the associated transform of Volterra transfer functions

Zhang, Yang, 張陽 January 2013
Model order reduction (MOR) is one of the general techniques in the fields of computer-aided design (CAD) and electronic design automation (EDA) that accelerate the flow of electronic simulations and verifications. By MOR, the original circuit, which is described by a set of ordinary differential equations (ODEs), can be trimmed into a much smaller reduced-order model (ROM) in terms of the number of state variables, with approximately the same input-output (I/O) characteristics. Hence, simulations using this ROM will be much more efficient and effective than using the original system. In this thesis, a novel and fast approach to computing the projection matrices serving high-order Volterra transfer functions, in the context of weakly and strongly nonlinear MOR, is proposed. The innovation is to carry out an association of multivariate Laplace-domain variables in high-order multiple-input multiple-output (MIMO) transfer functions to generate univariate single-s transfer functions. In contrast to conventional projection-based nonlinear MOR, which finds projection subspaces about every s_i in the multivariate transfer functions, only the subspace about a single s is required in the proposed approach. This translates into much more compact nonlinear ROMs without compromising accuracy. Different algorithms and their extensions are devised in this thesis. Extensive numerical examples are given to validate and verify the algorithms. / published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
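[Editor's illustration] For a linear system, the projection step works as follows: build an orthonormal basis V for a Krylov subspace about one expansion point s0 and project the system matrices, so the reduced model matches transfer-function moments at s0. The sketch below shows only this standard linear, single-point procedure with random illustrative data; the associated transform proposed in the thesis is what extends single-point projection to high-order Volterra transfer functions and is not reproduced here.

```python
import numpy as np

def reduce_model(A, E, b, c, s0, q):
    # Arnoldi basis for the Krylov space of M = (s0*E - A)^{-1} E with
    # start vector r = (s0*E - A)^{-1} b; Galerkin projection with V
    # matches q transfer-function moments at s0.
    n = A.shape[0]
    M = np.linalg.solve(s0 * E - A, E)
    r = np.linalg.solve(s0 * E - A, b)
    V = np.zeros((n, q))
    V[:, 0] = r / np.linalg.norm(r)
    for k in range(1, q):
        w = M @ V[:, k - 1]
        for j in range(k):                 # modified Gram-Schmidt
            w -= (V[:, j] @ w) * V[:, j]
        V[:, k] = w / np.linalg.norm(w)
    return V.T @ A @ V, V.T @ E @ V, V.T @ b, V.T @ c

def tf(A, E, b, c, s):
    # transfer function H(s) = c^T (s*E - A)^{-1} b
    return c @ np.linalg.solve(s * E - A, b)

rng = np.random.default_rng(0)
n = 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
E, b, c = np.eye(n), rng.standard_normal(n), rng.standard_normal(n)
Ar, Er, br, cr = reduce_model(A, E, b, c, s0=1.0, q=8)
err = abs(tf(A, E, b, c, 1.0) - tf(Ar, Er, br, cr, 1.0))
print(err)   # the 50-state model and the 8-state ROM agree at s0
```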
258

Numerical methodologies for electromagnetic parasitic system modeling and simulation

Li, Ping, 李平 January 2014
In this thesis, to efficiently and accurately model the electromagnetic radiation from electronic and antenna systems, and to analyze hybrid electromagnetic (EM)-circuit systems and the interactions between EM waves and multi-physics systems, a plethora of full-wave approaches are developed. Specifically, a set of frequency-domain methods is proposed in the first part of this thesis to characterize the electromagnetic radiation from a device under test (DUT) based on sampled near-field data. In the first approach, the dyadic Green function (DGF) in the presence of a perfectly conducting sphere is expanded in spherical vector wave functions, which is mathematically rigorous. Based on this DGF and the reciprocity theorem, the radiation outside the spherical sampling surface can be accurately predicted with only the tangential components of the electric near field over this sampling surface. For situations where electronic devices are placed in well-shielded conductive enclosures with apertures or ventilation slots, only partial planar electric near-field sampling over the apertures or slots is sufficient, according to Schelkunoff's principle. Due to the unavailability of an analytical DGF and the prohibitive computational cost of a numerical DGF, a novel two-step approach is proposed that treats the radiation problem as a scattering problem, with incident waves given by the equivalent magnetic currents derived from the sampled electric near field. However, the very near-field radiation inside the sampling surface cannot be retrieved with the above two approaches. To overcome this limitation, equivalent-source-reconstruction-based methods are introduced, replacing the radiators with equivalent current sources that are capable of reproducing the original radiation.
Due to the difficulty of acquiring the phase information of near-field data, an entirely new iterative phaseless source reconstruction method (SRM), which needs only the amplitude of the electric field, is developed. To reduce the computational cost of the traditional SRM for broadband radiators, a wideband SRM based on a Stoer-Bulirsch (SB) recursive tableau algorithm is proposed. Enhanced by an adaptive frequency sampling strategy, only a very small number of frequency samples is required. To capture the nonlinear response of EM-circuit systems, transient scattering from penetrable objects, surface plasmon polaritons (SPPs) of graphene below the terahertz range, and the impact of random parameters on the physical behavior of stochastic systems, various novel discontinuous Galerkin time-domain (DGTD) based methods and their extensions are developed. For a practical electronic system, apart from the EM part, the presence of lumped elements must be considered; therefore, a hybrid EM-circuit solver is indispensable. The EM subsystem, governed by Maxwell's equations, is solved by DGTD with an explicit time-marching scheme. For the lumped subsystem, circuit equations are constructed based on either the modified nodal analysis (MNA) derived from Kirchhoff's current law or the basic I-V relations. By introducing a port voltage and current, the EM and circuit solvers are synchronized in time at the lumped port. This synchronized EM-circuit solver is free of instabilities even when nonlinear circuit elements are involved. For open-region scattering analysis, a novel approach integrating the time-domain boundary integral (TDBI) algorithm with DGTD is developed. At the truncation boundary, the fields required for the incoming flux in DGTD are calculated using the TDBI from the equivalent currents over a Huygens surface enclosing the scatterer.
The hybrid DGTD-BI scheme ensures that the radiation condition is mathematically exact, and the resulting computational domain is as small as possible, since the truncation boundary conforms to the scatterer's shape. By treating one-atom-thick graphene as an infinitesimally thin conductive sheet, a surface impedance boundary condition (SIBC) augmented DGTD algorithm is developed to model graphene. With this SIBC, volumetric discretization is avoided, significantly reducing the memory cost and alleviating the restriction on the minimum time-marching step size. Due to the complex relation between the surface conductivity σg (comprising contributions from both intraband and interband transitions) and the angular frequency ω, directly mapping the numerical flux from the frequency domain to the time domain via the inverse Fourier transform is not feasible. To address this issue, a fast-relaxing vector-fitting (FRVF) technique is used to approximate σg by rational functions in the Laplace domain. Via the inverse Laplace transform, the time-domain matrix equations are obtained in forms involving integrals over time t. Resorting to the finite integral technique (FIT), a fully discrete matrix system can be achieved. Finally, to consider the impact of random parameters on realistic electronic systems, a stochastic solver based on DGTD and a sparse-grid collocation method is developed. To reduce the number of supporting points, an adaptive strategy is utilized, using the local hierarchical surplus as an error indicator. To improve the flexibility of the proposed algorithm, both piecewise linear and Lagrange polynomial basis functions are employed to handle different stochastic systems. In particular, piecewise linear basis functions are more efficient for non-smooth observables, while Lagrange polynomials are more suitable for smooth observables. With these strategies, singularities and quick variations can be efficiently captured with a very small number of collocation points.
The above proposed algorithms are demonstrated by various examples; their accuracy, efficiency, and robustness are clearly observed. / published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
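[Editor's illustration] The adaptive-refinement criterion can be caricatured in one dimension: evaluate the hierarchical surplus (the mismatch between the function and the current interpolant at a candidate point) and subdivide only where it is large, so points cluster near kinks and rapid variations. This is a generic 1-D sketch, not the thesis's multi-dimensional sparse-grid implementation.

```python
import bisect

def adaptive_interp(f, tol=1e-3, max_pts=500):
    # Piecewise-linear interpolation of f on [0, 1], refined where the
    # hierarchical surplus (f minus the current interpolant at an
    # interval's midpoint) exceeds tol.
    xs, ys = [0.0, 1.0], [f(0.0), f(1.0)]
    active = [(0.0, 1.0)]
    while active and len(xs) < max_pts:
        a, b = active.pop()
        m = 0.5 * (a + b)
        surplus = f(m) - 0.5 * (ys[xs.index(a)] + ys[xs.index(b)])
        i = bisect.bisect_left(xs, m)
        xs.insert(i, m)
        ys.insert(i, f(m))
        if abs(surplus) > tol:   # refine only where f is not yet linear
            active += [(a, m), (m, b)]
    return xs, ys

# a kink at x = 0.3 attracts points; smooth regions stay coarse
xs, ys = adaptive_interp(lambda x: abs(x - 0.3))
print(len(xs), sum(0.25 <= x <= 0.35 for x in xs))
```

The surplus vanishes wherever the function is already linear, so refinement stops there immediately; only intervals straddling the kink keep subdividing.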
259

An integrated approach to empty container repositioning and vessel routing in marine transportation

Zhang, Lu, 張露 January 2014
abstract / Industrial and Manufacturing Systems Engineering / Doctoral / Doctor of Philosophy
260

An iterative two-stage approach to modeling vacant taxi movements : formulations and implications

Wong, Cheuk-pong, 黃卓邦 January 2014
abstract / Civil Engineering / Doctoral / Doctor of Philosophy
