  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Continuation in US Foreign Policy: An Offensive Realist Perspective

Prifti, Bledar, 20 October 2014
This dissertation studies US foreign policy, which aims at maintaining America's regional hegemonic status and preventing the emergence of another regional hegemon through an offshore balancing strategy. US intervention in the 2003 Iraq War, the strained US-Iran relationship, and the establishment of the Islamic State of Iraq and the Levant (ISIL) in early 2014 compel a reevaluation of US foreign policy. The dissertation makes two major claims: (1) US foreign policy is consistent with the claims of offensive realist theory; and (2) US foreign policy is characterized by continuity on issues related to America's strategic interests. Utilizing case study and comparative case study methodology, this dissertation reaches the following findings. The first finding is that US foreign policy actions under the Bush Doctrine, which led to the 2003 Iraq War, were dictated by the anarchic structure of the international system, Iraq's possession of military capabilities that could harm or destroy America, fear and suspicion of Iraq's intentions, the need to ensure survival in an anarchic system, and the need to maximize relative power vis-à-vis other states. These factors produced three main patterns of behavior: fear, self-help, and power maximization. Because no other regional great power was capable of and willing to balance Iraq, the US was forced to rely on direct balancing by threatening military action against Iraq, creating an anti-Iraqi alliance, and maximizing its relative power by destroying Iraq's military capabilities. Second, US foreign policy under the Bush Doctrine was a continuation of 20th-century foreign policy, which was likewise dictated by the three patterns of fear, self-help, and power maximization. In realizing its foreign policy goals, the US had to rely on buck-passing and balancing strategies.
Whenever no regional great power was able and willing "to carry the buck", the US would rely on direct balancing by threatening the aggressor, creating alliances with other regional states, or committing additional resources of its own. Four major presidential doctrines and related events were used to test this claim: the Roosevelt Corollary, the Truman Doctrine, the Carter Doctrine, and the Reagan Doctrine. The last finding is that US foreign policy toward Iran likewise shows continuity and is dictated by the US need to maintain regional hegemony by acting as an offshore balancer. In addition, the US and Iran have shared strategic interests on several occasions, where a strategic win or loss for one state is a win or loss for the other. Like that of the US, Iran's foreign policy is guided by rationality. The Iran-Contra affair, the Armenia-Azerbaijan conflict, and the Russia-Chechnya conflict support the claim that Iran's foreign policy is based on rationality rather than religious ideology, as many scholars have argued. The 2001 Afghanistan war, the 2003 Iraq war, and the establishment of ISIL support the claim that the US and Iran share strategic interests: cooperation is often desirable and in some cases inevitable. Despite this strong claim, the US-Iran relationship has its limits, because neither the US nor Iran would accept a counterpart powerful enough to establish absolute dominance in the region.

Segmentation of the Brain from MR Images

Caesar, Jenny, January 2005
KTH's Division of Neuronic Engineering has a finite element model of the head. However, this model does not contain detailed modeling of the brain. This thesis project consists of finding a method to extract brain tissues from T1-weighted MR images of the head. The method should be automatic to be suitable for patient-individual modeling.

A summary of the most common segmentation methods is presented and one of the methods is implemented. The implemented method is based on the assumption that the probability density function (pdf) of an MR image can be described by parametric models. The intensity distribution of each tissue class is modeled as a Gaussian distribution; thus, the total pdf is a sum of Gaussians. However, the voxel values are also influenced by intensity inhomogeneities, which affect the pdf. The implemented method is based on the expectation-maximization algorithm and corrects for intensity inhomogeneities. The result of the algorithm is a classification of the voxels; the brain is then extracted from the classified voxels using morphological operations.
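The abstract above describes EM fitting of a sum-of-Gaussians intensity model. As an illustrative sketch only (synthetic 1-D intensities, no inhomogeneity correction or morphological post-processing, both of which the thesis method includes), the E- and M-steps might look like:

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=50):
    """Fit a 1-D Gaussian mixture to voxel intensities x via the EM algorithm.

    Returns means, variances, weights and a hard classification of each
    voxel (the argmax of the posterior responsibilities)."""
    # Deterministic initialization: spread the means over the intensity range.
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(class j | voxel i).
        d = x[:, None] - mu[None, :]
        p = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weight, mean and variance of each Gaussian.
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / n
    return mu, var, w, r.argmax(axis=1)

# Synthetic "image": two tissue classes with well-separated mean intensities.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(40, 5, 500), rng.normal(120, 8, 500)])
mu, var, w, labels = em_gmm_1d(x, k=2)
```

The recovered means land near the two class centers, and the hard labels play the role of the voxel classification that the morphological operations would then refine.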

Advanced control for power density maximization of the brushless DC generator

Lee, Hyung-Woo, 17 February 2005
This dissertation proposes a novel control technique for power density maximization of the brushless DC (BLDC) generator, a nonsinusoidal power supply system. In a generator of given rating, the weight and size of the system directly affect fuel consumption; power density is therefore one of the most important issues in a stand-alone generator. Conventional rectification methods cannot achieve the maximum possible power because of a distorted or unsuitable current waveform. The optimal current waveform for maximizing power density and minimizing machine size and weight in a nonsinusoidal power supply system is derived theoretically and verified by simulation and experimental work. Various attributes of practical interest are also analyzed and simulated to investigate the impact on real systems.
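The optimal waveform itself is derived in the dissertation. As background, here is a hedged numerical sketch of the general principle behind shape-matched currents: for a fixed RMS current (i.e. fixed copper loss), the average power ⟨e·i⟩ is maximized, by the Cauchy-Schwarz inequality, when the current waveform is proportional to the back-EMF waveform. All waveforms and values below are illustrative assumptions, not the dissertation's data:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)

# Idealized trapezoidal back-EMF of a BLDC machine (per-unit amplitude).
e = np.clip(3 * np.sin(t), -1.0, 1.0)

def rms(s):
    return np.sqrt(np.mean(s**2))

# Two candidate phase currents, both normalized to the same RMS value,
# i.e. the same copper loss in the windings.
i_sin = np.sqrt(2) * np.sin(t)   # conventional sinusoidal current, RMS = 1
i_matched = e / rms(e)           # current shaped like the back-EMF, RMS = 1

p_sin = np.mean(e * i_sin)          # average electrical power, sinusoidal case
p_matched = np.mean(e * i_matched)  # average power, shape-matched case
```

At equal copper loss, `p_matched` exceeds `p_sin`, which is the sense in which an unsuitable current waveform leaves power density on the table.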

Deterministic annealing EM algorithm for robust learning of Gaussian mixture models

Wang, Bo Yu, January 2011
University of Macau / Faculty of Science and Technology / Department of Electrical and Electronics Engineering

Statistical Learning in Drug Discovery via Clustering and Mixtures

Wang, Xu January 2007 (has links)
In drug discovery, thousands of compounds are assayed to detect activity against a biological target. The goal of drug discovery is to identify compounds that are active against the target (e.g. inhibit a virus). Statistical learning in drug discovery seeks to build a model that uses descriptors characterizing molecular structure to predict biological activity. However, the characteristics of drug discovery data can make it difficult to model the relationship between molecular descriptors and biological activity. Among these characteristics are the rarity of active compounds, the large volume of compounds tested by high-throughput screening, and the complexity of molecular structure and its relationship to activity. This thesis focuses on the design of statistical learning algorithms and models and their applications to drug discovery. The two main parts of the thesis are an algorithm-based statistical method and a more formal model-based approach. Both approaches can facilitate and accelerate the process of developing new drugs. A unifying theme is the use of unsupervised methods as components of supervised learning algorithms and models. In the first part of the thesis, we explore a sequential screening approach, Cluster Structure-Activity Relationship Analysis (CSARA). Sequential screening integrates high-throughput screening with mathematical modeling to sequentially select the best compounds. CSARA is a cluster-based, algorithm-driven method. To gain further insight into this method, we use three carefully designed experiments to compare its predictive accuracy with Recursive Partitioning, a popular structure-activity relationship analysis method. The experiments show that CSARA outperforms Recursive Partitioning; the comparisons include problems with many descriptor sets and situations in which many descriptors are not important for activity. In the second part of the thesis, we propose and develop constrained mixture discriminant analysis (CMDA), a model-based method.
The main idea of CMDA is to model the distribution of the observations given the class label (e.g. the active or inactive class) as a constrained mixture distribution, and then use Bayes' rule to predict the probability of being active for each observation in the test set. Constraints are used to control the otherwise explosive growth of the number of parameters with increasing dimensionality. CMDA is designed to address several challenges in modeling drug data sets, such as multiple mechanisms, the rare-target problem (i.e. imbalanced classes), and the identification of relevant subspaces of descriptors (i.e. variable selection). We focus on the CMDA1 model, in which univariate densities form the building blocks of the mixture components. Because the CMDA1 log-likelihood function is unbounded, the EM algorithm easily converges to degenerate solutions. A special multi-step EM algorithm is therefore developed and explored via several experimental comparisons. Using the multi-step EM algorithm, the CMDA1 model is compared to model-based clustering discriminant analysis (MclustDA). The CMDA1 model is either superior to or competitive with the MclustDA model, depending on which model generates the data, and it performs better than MclustDA when the data are high-dimensional and unbalanced, an essential feature of the drug discovery problem. An alternative approach to the degeneracy problem is penalized estimation. By introducing a group of simple penalty functions, we consider penalized maximum likelihood estimation of the CMDA1 and CMDA2 models. This strategy improves the convergence of the conventional EM algorithm and helps avoid degenerate solutions. Extending techniques from Chen et al. (2007), we prove that the PMLEs of the two-dimensional CMDA1 model are asymptotically consistent.
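The Bayes'-rule scoring step described above can be sketched in a few lines. Everything here is a hypothetical hand-set illustration (the mixture parameters and the 5% active prior are invented); the actual CMDA1 fitting via the constrained multi-step EM is omitted:

```python
import numpy as np

def mixture_pdf(x, weights, means, vars_):
    """Density of a mixture whose components are products of univariate
    Gaussians (the building blocks of the CMDA1-style model)."""
    p = 0.0
    for w, m, v in zip(weights, means, vars_):
        p += w * np.prod(np.exp(-0.5 * (x - m)**2 / v) / np.sqrt(2 * np.pi * v))
    return p

# Hypothetical fitted class-conditional mixtures over two descriptors.
# Two active components mimic "multiple mechanisms"; one inactive component.
active = dict(weights=[0.5, 0.5],
              means=[np.array([2.0, 2.0]), np.array([4.0, 0.0])],
              vars_=[np.array([0.5, 0.5]), np.array([0.5, 0.5])])
inactive = dict(weights=[1.0],
                means=[np.array([0.0, 0.0])],
                vars_=[np.array([1.0, 1.0])])
prior_active = 0.05  # rare-target problem: actives are scarce

def prob_active(x):
    """Bayes' rule: P(active | x) from the class-conditional mixtures."""
    pa = prior_active * mixture_pdf(x, **active)
    pi = (1 - prior_active) * mixture_pdf(x, **inactive)
    return pa / (pa + pi)
```

A compound near an active mechanism's center scores high even against the small prior, while one near the inactive bulk scores low; ranking test-set compounds by `prob_active` is the screening output.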

Generation Capacity Expansion Planning in Deregulated Electricity Markets

Sharma, Deepak, 20 May 2009
With the increasing demand for electric power in the context of deregulated electricity markets, good strategic planning for the growth of the power system is critical for the future. There is a need to build new resources, in the form of generation plants and transmission lines, while considering the effects of these new resources on power system operations, market economics, and the long-term dynamics of the economy. Under deregulation, the exercise of generation planning has undergone a paradigm shift: the first stage of generation planning is now undertaken by individual investors, who see investment in generation capacity as a growing business opportunity because of rising market prices. The main objective of such a planning exercise, carried out by individual investors, is therefore typically long-term profit maximization. This thesis presents modeling frameworks for generation capacity expansion planning applicable to independent investor firms in the context of power industry deregulation. These frameworks include various technical and financing issues within the process of power system planning; they consider the long-term decision-making process of investor firms, account for the discrete nature of generation capacity additions, and incorporate transmission network modeling. Studies have been carried out to examine the impact of the optimal investment plans on transmission network loadings in the long run by integrating the generation capacity expansion planning framework with a modified IEEE 30-bus transmission system network. The work assesses the importance of arriving at an optimal IRR at which the firm's profit maximization objective attains an extremum. The mathematical model is further improved to incorporate binary variables for discrete unit sizes, and subsequently to include a detailed transmission network representation.
The proposed models are novel in that the planning horizon is split into plan sub-periods so as to minimize the overall risks associated with long-term planning models, particularly in the context of deregulation.
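The discrete, multi-sub-period flavor of this problem can be illustrated with a deliberately tiny sketch: brute-force enumeration of how many identical units an investor builds in each plan sub-period so as to maximize discounted profit. All parameters are hypothetical, and the thesis's actual models (binary variables inside a mathematical program, transmission network constraints, IEEE 30-bus studies) are far richer:

```python
from itertools import product

# Hypothetical parameters for a single investor firm (illustrative only).
unit_mw = 100                 # discrete generation unit size, MW
capex_per_unit = 40e6         # $ investment per unit built
price = 55.0                  # expected market price, $/MWh
op_cost = 30.0                # variable operating cost, $/MWh
hours = 8760 * 0.8            # dispatched hours/year at 80% capacity factor
rate = 0.10                   # discount rate (a proxy for the target IRR)
periods = 3                   # number of plan sub-periods
max_units_per_period = 2      # e.g. a construction/financing limit

def npv(build_plan):
    """Discounted profit of a build plan (units added in each sub-period)."""
    total, capacity = 0.0, 0
    for yr, units in enumerate(build_plan):
        capacity += units * unit_mw
        cash = capacity * hours * (price - op_cost) - units * capex_per_unit
        total += cash / (1 + rate) ** yr
    return total

# Enumerate every discrete plan and keep the most profitable one.
best = max(product(range(max_units_per_period + 1), repeat=periods), key=npv)
```

With these numbers the optimum front-loads capacity (units built late have too few earning years to recover their capex), which is the kind of timing trade-off the sub-period decomposition is meant to capture.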

High Performance Digital Circuit Techniques

Sadrossadat, Sayed Alireza, January 2009
Achieving high performance is one of the most difficult challenges in designing digital circuits. Flip-flops and adders are key blocks in most digital systems and must therefore be designed for the highest performance. In this thesis, a new high-performance serial adder is developed while low power consumption is maintained. A statistical framework for the design of flip-flops is also introduced that ensures such sequential circuits meet timing-yield and performance criteria. First, a high-performance serial adder is developed, based on the idea of a constant delay for the addition of two operands. While conventional adders exhibit logarithmic delay, the proposed adder works at a constant delay order. In addition, the new adder's hardware complexity is linear in the word length, so it exhibits less area and power consumption than conventional high-performance adders. The thesis presents the algorithm underlying the new adder, followed by simulation results. Second, the thesis presents a statistical framework for the design of flip-flops under process variations in order to maximize their timing yield. In nanometer CMOS technologies, process variations significantly impact the timing performance of sequential circuits and may eventually cause them to malfunction; a framework for designing such circuits is therefore essential. Our framework generates the values of the nominal design parameters, i.e., the sizes of the gates and transmission gates of a flip-flop, such that maximum timing yield is achieved. While previous work focused on improving the yield of flip-flops, less research addressed timing yield in the presence of process variations.
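The abstract does not give the proposed adder's algorithm, so as background only, here is a simulation of a conventional bit-serial adder: a single full-adder cell is reused each clock cycle, so the per-cycle combinational delay and the hardware cost are independent of word length (total latency remains one cycle per bit, unlike the constant-total-delay design the thesis proposes):

```python
def serial_add(a_bits, b_bits):
    """Bit-serial addition, LSB first: one full-adder cell reused per cycle.

    Each iteration models one clock cycle; the combinational path is a
    single full adder, so cycle time does not grow with word length."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                  # sum bit
        carry = (a & b) | (carry & (a ^ b))        # carry into next cycle
    out.append(carry)                              # final carry-out
    return out

def to_bits(n, width):
    """Integer -> list of bits, LSB first."""
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    """List of bits, LSB first -> integer."""
    return sum(b << i for i, b in enumerate(bits))

assert from_bits(serial_add(to_bits(13, 8), to_bits(29, 8))) == 42
```

The contrast is the point: here total latency is one cycle per bit, whereas the dissertation's adder targets a constant-order delay for the whole addition with hardware linear in the word length.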

Time-efficient Computation with Near-optimal Solutions for Maximum Link Activation in Wireless Communication Systems

Geng, Qifeng, January 2012
In a generic wireless network where the activation of a transmission link is subject to a signal-to-interference-and-noise ratio (SINR) constraint, one of the most fundamental and yet challenging problems is to find the maximum number of simultaneous transmissions. In this thesis, we consider and study in detail the problem of maximum link activation in wireless networks based on the SINR model. Integer linear programming is the main tool used for algorithm design, and fast algorithms are proposed for delivering near-optimal results time-efficiently. With the state-of-the-art Gurobi optimization solver, both the conventional formulation, which states all SINR constraints explicitly, and a recently developed exact algorithm using cutting planes have been implemented. Based on these implementations, new solution algorithms are proposed for the fast delivery of solutions. Instead of considering interference from all other links, an interference range is proposed, with two scenarios: an optimistic case and a pessimistic case. The optimistic case ignores interference from outside the interference range, while the pessimistic case treats interference from outside the range as a common large value. Together with the algorithms, further enhancement procedures for the data analysis are proposed to facilitate the computation in the solver.
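The optimistic interference-range idea can be sketched with a brute-force toy. The thesis solves this with ILP in Gurobi; everything below (the coordinates, path-loss exponent, and thresholds) is a hypothetical instance, and the exhaustive search only works for a handful of links:

```python
from itertools import combinations
import math

# Hypothetical toy instance: each link is a (transmitter, receiver) pair.
links = [((0, 0), (1, 0)), ((1.5, 0.2), (2.5, 0.2)),
         ((5, 0), (6, 0)), ((0, 5), (1, 5))]
P, alpha, noise, sinr_min = 1.0, 3.0, 1e-3, 2.0

def gain(p, q):
    """Received power at q from a transmitter at p (simple path-loss model)."""
    return P / math.dist(p, q) ** alpha

def feasible(active, interference_range=None):
    """True if every link in `active` meets its SINR threshold simultaneously.

    Optimistic variant: interference from transmitters farther than
    `interference_range` from the receiver is ignored entirely."""
    for i in active:
        tx_i, rx_i = links[i]
        interference = 0.0
        for j in active:
            if j == i:
                continue
            tx_j = links[j][0]
            if (interference_range is None
                    or math.dist(tx_j, rx_i) <= interference_range):
                interference += gain(tx_j, rx_i)
        if gain(tx_i, rx_i) / (noise + interference) < sinr_min:
            return False
    return True

def max_activation(interference_range=None):
    """Largest simultaneously feasible link set (brute force, small n only)."""
    for k in range(len(links), 0, -1):
        for subset in combinations(range(len(links)), k):
            if feasible(subset, interference_range):
                return subset
    return ()
```

In this instance the exact model activates three links (two nearby links conflict), while a very small interference range optimistically admits all four, which is exactly the kind of gap between the fast approximation and the exact cutting-plane solution the thesis examines.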
