651

Near optimal design of fixture layouts in multi-station assembly processes

Kim, Pansoo 15 November 2004 (has links)
This dissertation presents a methodology for the near optimal design of fixture layouts in multi-station assembly processes. An optimal fixture layout improves the robustness of a fixture system, reduces product variability and leads to manufacturing cost reduction. Three key aspects of multi-station fixture layout design are addressed: a multi-station variation propagation model, a quantitative measure of fixture design, and an effective and efficient optimization algorithm. A multi-station design may have a high-dimensional design space, which can contain many local optima. In this dissertation, I investigated two algorithms for optimal fixture layout design. The first algorithm is an exchange algorithm, originally developed in the research on optimal experimental designs. I revised the exchange routine so that it markedly reduces the computing time without sacrificing the optimal values. The second algorithm uses data-mining methods such as clustering and classification. It appears that the data-mining method can find valuable design selection rules that can in turn help to locate the optimal design efficiently. Compared with other non-linear optimization algorithms, such as the simplex search method, simulated annealing and the genetic algorithm, the data-mining method performs best, and the revised exchange algorithm performs comparably to simulated annealing and better than the others. A four-station assembly process for a sport utility vehicle (SUV) side frame is used throughout the dissertation to illustrate the relevant concepts and the resulting methodology.
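The exchange idea can be sketched as a simple coordinate-exchange search over candidate locator positions. The sketch below is purely illustrative and is not the dissertation's implementation: the sensitivity() criterion is a hypothetical placeholder standing in for the multi-station variation propagation model.

```python
# Illustrative coordinate-exchange style search over candidate fixture-locator
# positions (not the dissertation's code). Lower sensitivity = more robust layout.
import itertools
import random

def sensitivity(layout):
    """Hypothetical design criterion: penalises clustered locators."""
    return sum(1.0 / (abs(a - b) + 1e-6) for a, b in itertools.combinations(layout, 2))

def exchange_search(candidates, k, n_restarts=10, seed=0):
    rng = random.Random(seed)
    best_layout, best_val = None, float("inf")
    for _ in range(n_restarts):
        layout = rng.sample(candidates, k)
        improved = True
        while improved:
            improved = False
            for i in range(k):                      # try exchanging each locator
                for c in candidates:
                    if c in layout:
                        continue
                    trial = layout[:i] + [c] + layout[i + 1:]
                    if sensitivity(trial) < sensitivity(layout):
                        layout, improved = trial, True
        val = sensitivity(layout)
        if val < best_val:
            best_layout, best_val = layout, val
    return best_layout, best_val

print(exchange_search(candidates=list(range(50)), k=4))
```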
652

Essays in Dynamic Macroeconometrics

Bańbura, Marta 26 June 2009 (has links)
The thesis contains four essays covering topics in the field of macroeconomic forecasting. The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied to forecasting, structural analysis and the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as the “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis, entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP”, is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how it is related to their timeliness. The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators on real activity measure directly certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge” but can also include, e.g., mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters.
Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) or Watson and Engle (1983), this approach has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation for a large cross-section, Doz, Giannone and Reichlin (2006) propose the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps for the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect on the forecast of quarterly variables and short-history monthly series such as the Purchasing Managers' surveys.

The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can also be relevant for structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but when data are collinear one is still able to extract the relevant sample information. We apply this principle to the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales”, proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) or Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. This chapter investigates empirically which frequency bands, and for which variables, are the most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract different frequency components from a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application in multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
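As a rough illustration of the EM-with-missing-data idea behind the second chapter, the toy sketch below alternates between filling missing cells with the common component and re-extracting factors by principal components. It assumes a standardized panel stored as a NumPy array with NaNs for missing observations and is not the estimator developed in the thesis.

```python
# Toy EM-style factor extraction with missing data (illustrative only).
import numpy as np

def em_pca_factors(X, n_factors=2, n_iter=50):
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    Xf = np.where(mask, 0.0, X)               # start: missing cells at the (zero) mean
    for _ in range(n_iter):
        # Factor extraction step: principal components of the completed panel
        U, S, Vt = np.linalg.svd(Xf, full_matrices=False)
        F = U[:, :n_factors] * S[:n_factors]  # factors
        L = Vt[:n_factors]                    # loadings
        common = F @ L                        # common component
        # Imputation step: fill missing cells with their common-component fit
        Xf = np.where(mask, common, X)
    return F, L.T

# Usage sketch (placeholder name): F, Lambda = em_pca_factors(standardized_panel, n_factors=3)
```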
653

Models and algorithms for network design problems

Poss, Michael 22 February 2011 (has links)
In this thesis, we study different models, deterministic and stochastic, for network design problems. We also examine the stochastic knapsack problem and, more generally, probabilistic capacity constraints. In the first part, we focus on deterministic network design models with numerous technical constraints approaching realistic situations. We begin by studying two telecommunications network models. The first considers multi-layer networks with capacities on the arcs, while the second studies single-layer, uncapacitated networks in which the commodities must be routed over K disjoint paths of length at most L. We solve both problems with a branch-and-cut algorithm based on the Benders decomposition of linear formulations of these problems. The novelty of our approach lies mainly in the empirical study of the optimal frequency of cut generation during the algorithm. We then study an expansion problem for electricity transmission networks. Our work examines different models and formulations for the problem, comparing them on real Brazilian networks. In particular, we show that re-dimensioning allows significant cost reductions. In the second part, we examine stochastic programming models. First, we prove that three special cases of the knapsack problem with simple recourse can be solved by dynamic programming algorithms. We then reformulate the problem as a non-linear integer program and test a branch-and-cut algorithm based on an outer approximation of the objective function. This algorithm is then turned into a branch-and-cut-and-price algorithm, which is used to solve a stochastic network design problem with simple recourse. Finally, we show how to linearize probabilistic capacity constraints with binary variables when the coefficients are random variables satisfying certain properties.
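For readers unfamiliar with the dynamic-programming building block mentioned above, the sketch below shows the textbook deterministic 0/1 knapsack recursion. It is illustrative only; the thesis treats stochastic variants with simple recourse, which require a more elaborate state space.

```python
# Classic 0/1 knapsack dynamic program (deterministic toy version).
def knapsack(values, weights, capacity):
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                       # skip item i
            if weights[i - 1] <= c:                        # or take it
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([10, 7, 12], [3, 2, 4], capacity=5))  # -> 17
```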
654

On the Prediction of Warfarin Dose

Eriksson, Niclas January 2012 (has links)
Warfarin is one of the most widely used anticoagulants in the world. Treatment is complicated by a large inter-individual variation in the dose needed to reach adequate levels of anticoagulation, i.e. an INR of 2.0–3.0. The objective of this thesis was to evaluate which factors, mainly genetic but also non-genetic, affect the response to warfarin in terms of required maintenance dose, efficacy and safety, with special focus on warfarin dose prediction. Through candidate gene and genome-wide studies, we have shown that the genes CYP2C9 and VKORC1 are the major determinants of warfarin maintenance dose. By combining the SNPs CYP2C9*2, CYP2C9*3 and VKORC1 rs9923231 with the clinical factors age, height, weight, ethnicity, amiodarone use and use of enzyme inducers (carbamazepine, phenytoin or rifampicin) into a prediction model (the IWPC model), we can explain 43% to 51% of the variation in warfarin maintenance dose. Patients requiring doses < 29 mg/week and doses ≥ 49 mg/week benefitted the most from pharmacogenetic dosing. Further, we have shown that the difference across ethnicities in the percentage of variance explained by VKORC1 was largely accounted for by the allele frequency of rs9923231. Other novel genes affecting maintenance dose (NEDD4 and DDHD1), as well as the replicated CYP4F2 gene, have small effects on dose predictions and are not likely to be cost-effective unless inexpensive genotyping is available. Three types of prediction models for warfarin dosing exist: maintenance dose models, loading dose models and dose revision models. The combination of these three models is currently being used in the warfarin treatment arm of the European Pharmacogenetics of Anticoagulant Therapy (EU-PACT) study. Other clinical trials aiming to prove the clinical validity and utility of pharmacogenetic dosing are also underway. The future of pharmacogenetic warfarin dosing relies on results from these ongoing studies, the availability of inexpensive genotyping and the cost-effectiveness of pharmacogenetically driven warfarin dosing compared with the new oral anticoagulant drugs.
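To illustrate how a pharmacogenetic dose-prediction model of this kind is typically applied, the sketch below combines clinical and genetic covariates in a linear model on the square-root-of-dose scale. The coefficients and covariate coding are hypothetical placeholders, not the published IWPC values, and the sketch is not intended for clinical use.

```python
# Sketch of a linear pharmacogenetic dose-prediction model.
# All coefficients are HYPOTHETICAL placeholders, not the IWPC model.
def predict_weekly_dose(age_decades, height_cm, weight_kg,
                        vkorc1_ag, vkorc1_aa, cyp2c9_star2, cyp2c9_star3,
                        amiodarone, enzyme_inducer):
    sqrt_dose = (
        5.6                                    # intercept (placeholder)
        - 0.26 * age_decades                   # older patients tend to need less
        + 0.012 * height_cm
        + 0.010 * weight_kg
        - 0.8 * vkorc1_ag - 1.6 * vkorc1_aa    # VKORC1 rs9923231 genotype indicators
        - 0.5 * cyp2c9_star2 - 0.9 * cyp2c9_star3
        - 0.6 * amiodarone                     # concomitant amiodarone
        + 0.7 * enzyme_inducer                 # carbamazepine, phenytoin or rifampicin
    )
    return sqrt_dose ** 2                      # predicted maintenance dose in mg/week

print(predict_weekly_dose(7, 170, 75, 1, 0, 0, 0, 0, 0))
```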
655

Aspects of List-of-Two Decoding

Eriksson, Jonas January 2006 (has links)
We study the problem of list decoding, focusing on the case where the list size is limited to two. Under this restriction, we derive general lower bounds on the maximum possible size of a list-of-2-decodable code. We study the set of correctable error patterns in an attempt to obtain a characterization. For a special family of Reed-Solomon codes - which we identify and name 'class-I codes' - we give a weight-based characterization of the correctable error patterns under list-of-2 decoding. As a tool in this analysis we use the theoretical framework of Sudan's algorithm. The characterization is used in an exact calculation of the probability of transmission error in the symmetric channel when list-of-2 decoding is used. The results from the analysis and complementary simulations for QAM systems show that a list-of-2 decoding gain of nearly 1 dB can be achieved. Further, we study Sudan's algorithm for list decoding of Reed-Solomon codes for the special case of class-I codes. For these codes, algorithms are suggested for both the first and second steps of Sudan's algorithm. Hardware solutions for both steps, based on the derived algorithms, are presented.
656

Algorithms and Protocols Enhancing Mobility Support for Wireless Sensor Networks Based on Bluetooth and Zigbee

García Castaño, Javier January 2006 (has links)
Mobile communication systems are experiencing huge growth. While traditional communication paradigms deal with fixed networks, mobility raises a new set of questions, techniques and solutions. This work focuses on wireless sensor networks (WSNs) where each node is a mobile device. The main objectives of this thesis have been to develop algorithms and protocols enabling mobile WSNs, with special interest in overcoming the mobility support limitations of standards such as Bluetooth and Zigbee. The contributions of this work may be divided into four major parts related to mobility support. The first part describes the implementation of local positioning services in Bluetooth, since local positioning is not supported in Bluetooth v1.1. The obtained results are used in the subsequently implemented handover algorithms to decide when to perform a handover. Moreover, local positioning information may be used in further developed routing protocols. The second part deals with handover as a solution to the out-of-range problem. Algorithms for handover have been implemented, enabling mobility in Bluetooth infrastructure networks. The principal achievement in this part is a significant reduction of handover latency, since sensor cost and quality of service are directly affected by this parameter. The third part solves the routing problems that arise with handovers. The main contribution of this part is the impact of Bluetooth scatternet formation and routing protocols, for multi-hop data transmissions, on the system quality of service. The final part is a comparison between Bluetooth and Zigbee in terms of mobility support. The main outcome of this comparison lies in its conclusions, which can be used as a guide for technology selection. The main scientific contribution lies in the implementation of a mobile WSN with Bluetooth v1.1 within the scope of the “Multi Monitoring Medical Chip (M3C) for Homecare Applications” European Union project (Sixth Framework Program (FP6) Reference: 508291), offering multi-hop routing support and improvements in handover latencies with the aid of local positioning services.
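A minimal sketch of the kind of handover trigger described above, using the local-positioning estimate plus a hysteresis margin to avoid ping-pong handovers; the range and hysteresis parameters are hypothetical, not values from the thesis.

```python
# Illustrative handover decision based on estimated distances to the current and
# candidate masters (hypothetical parameters).
def should_handover(dist_current_m, dist_candidate_m,
                    range_limit_m=8.0, hysteresis_m=1.0):
    nearly_out_of_range = dist_current_m > range_limit_m - hysteresis_m
    candidate_clearly_closer = dist_candidate_m + hysteresis_m < dist_current_m
    return nearly_out_of_range and candidate_clearly_closer

print(should_handover(7.5, 3.2))   # True: current master almost out of range
print(should_handover(4.0, 3.2))   # False: stay put to avoid unnecessary handovers
```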
657

Development of New Methods for Inferring and Evaluating Phylogenetic Trees

Hill, Tobias January 2007 (has links)
Inferring phylogeny is a difficult computational problem. Heuristics are necessary to minimize the time spent evaluating non-optimal trees. In paper I, we developed an approach for heuristic searching using a genetic algorithm. Genetic algorithms mimic natural selection's ability to solve complex problems. The algorithm can reduce the time required for weighted maximum parsimony phylogenetic inference using protein sequences, especially for data sets involving large numbers of taxa. Evaluating and comparing the ability of phylogenetic methods to infer the correct topology is complex. In paper II, we developed software that determines the minimum subtree prune and regraft (SPR) distance between binary trees to ease the process. The minimum SPR distance can be used to measure the incongruence between trees inferred using different methods. Given a known topology, the methods can be evaluated on their ability to infer the correct phylogeny from specific data. The minimum SPR software also determines the intermediate trees that separate two binary trees. In paper III, we developed software that, given a set of incongruent trees, determines the median SPR consensus tree, i.e. the tree that explains the trees with a minimum number of SPR operations. We investigated the median SPR consensus tree and its possible interpretation as a species tree given a set of gene trees. We used a set of α-proteobacteria gene trees to test the ability of the algorithm to infer a species tree and compared it to previous studies. The results show that the algorithm can successfully reconstruct a species tree. Expressed sequence tag (EST) data are important in determining intron-exon boundaries, single nucleotide polymorphisms and the coding sequences of genes. In paper IV, we aligned ESTs to the genome to evaluate the quality of EST data. The results show that many ESTs are contaminated by vector sequences and low quality regions. The reliability of EST data is largely determined by the clustering of the ESTs and the association of the clusters with the correct portion of the genome. We investigated the performance of EST clustering using the genome as template compared to previously existing methods using pair-wise alignments. The results show that using the genome as guidance improves the resulting EST clusters with respect to the extent to which ESTs originating from the same transcriptional unit are separated into disjoint clusters.
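A generic genetic-algorithm skeleton of the kind used in paper I is sketched below for illustration; the tree representation, crossover, mutation and parsimony score are placeholders supplied by the caller, not the thesis's operators.

```python
# Generic GA skeleton: caller supplies the tree encoding and the score/crossover/
# mutate callables (e.g. a weighted parsimony score to minimise).
import random

def genetic_search(init_pop, score, crossover, mutate, generations=200,
                   elite=2, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    pop = list(init_pop)
    for _ in range(generations):
        pop.sort(key=score)                          # lower score (parsimony) is better
        parents = pop[: max(elite, len(pop) // 2)]
        children = []
        while len(children) < len(pop) - elite:
            a, b = rng.sample(parents, 2)
            child = crossover(a, b)
            if rng.random() < mutation_rate:
                child = mutate(child)
            children.append(child)
        pop = pop[:elite] + children                 # elitism keeps the best trees
    return min(pop, key=score)
```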
658

Identification of Driving Styles in Buses

Karginova, Nadezda January 2010 (has links)
It is important to detect faults in bus components at an early stage. Because the driving style affects the wear and breakdown of different components in the bus, identification of the driving style is important to minimize the number of failures in buses. The identification of the driver's driving style was based on input data containing examples of driving runs from each class. K-nearest neighbor and neural network algorithms were used. Different models were tested. It was shown that the results depend on the selected driving runs. A hypothesis was suggested that examples from different driving runs have different parameters, which affects the results of the classification. The best results were achieved by using a subset of variables chosen with the help of a forward feature selection procedure. The percentage of correct classifications is about 89–90% for the k-nearest neighbor algorithm and 88–93% for the neural networks. Feature selection allowed a significant improvement in the results of the k-nearest neighbor algorithm, and in the results of the neural network algorithm for the case when the training and testing data sets were selected from different driving runs. On the other hand, feature selection did not affect the results obtained with the neural networks when the training and testing data sets were selected from the same driving runs. Another way to improve the results is to use smoothing: computing the average class over a number of consecutive examples decreased the error.
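The evaluation setup described above can be sketched with standard library calls: k-nearest-neighbour classification preceded by forward feature selection. The scikit-learn calls are real, but the synthetic data below merely stands in for the driving-run examples.

```python
# k-NN classification with forward feature selection (illustrative setup).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: in the thesis, rows would be examples from driving runs and
# y the driving-style class labels.
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
selector = SequentialFeatureSelector(knn, n_features_to_select=8, direction="forward")
X_sel = selector.fit_transform(X, y)

scores = cross_val_score(knn, X_sel, y, cv=5)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```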
659

An Effective Hybrid Genetic Algorithm with Priority Selection for the Traveling Salesman Problem

Hu, Je-wei 07 September 2007 (has links)
The traveling salesman problem (TSP) is a well-known NP-hard problem, which cannot be solved within polynomially bounded computation time unless P = NP. However, the genetic algorithm (GA) is a familiar heuristic for obtaining near-optimal solutions to TSPs within reasonable time. In TSPs, the geometric properties are problem-specific knowledge that can be used to enhance GAs. Some tour segments (edges) of a TSP are fine, while others may be too long to appear in a short tour. Therefore, this information can help GAs pay more attention to fine tour segments without considering long tour segments as often. Consequently, we propose a new algorithm, called the intelligent-OPT hybrid genetic algorithm (IOHGA), to exploit locally optimal tour segments and enhance the searching process in order to reduce the execution time and improve the quality of the offspring. The locally optimal tour segments are assigned higher priorities in the selection of tour segments to appear in a short tour. In this way, the tour segments of a TSP are divided into two separate sets: a candidate set, which contains the candidate fine tour segments, and a non-candidate set, which contains the remaining tour segments. According to the priorities of the tour segments, we devise two genetic operators, the skewed production (SP) and the fine subtour crossover (FSC). In addition, we combine the traditional GA with the 2-OPT local search algorithm, with some modifications. The modified 2-OPT is named the intelligent OPT (IOPT). A simulation study was conducted to evaluate the performance of the IOHGA. The experimental results indicate that the IOHGA generally obtains near-optimal solutions in less time and with higher accuracy than the hybrid genetic algorithm with simulated annealing and the genetic algorithm using the gene expression algorithm. Thus, the IOHGA is an effective algorithm for solving TSPs. If the exact optimal solution is not required, the IOHGA can provide good near-optimal solutions rapidly. Therefore, the IOHGA could be combined with a clustering algorithm and applied to mobile agent planning (MAP) problems in a real-time environment.
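For reference, the plain textbook 2-OPT local search that the IOPT modifies can be sketched as follows; this is the unmodified routine, not the thesis's IOPT.

```python
# Textbook 2-OPT local search for the TSP (illustrative only).
import random

def tour_length(tour, dist):
    # closed tour: includes the edge from the last city back to the first
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def two_opt(tour, dist):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]   # reverse a segment
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

# Toy usage with random points in the unit square
pts = [(random.random(), random.random()) for _ in range(10)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts] for ax, ay in pts]
print(two_opt(list(range(10)), dist))
```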
660

Partial Volume Correction in PET/CT

Åkesson, Lars January 2008 (has links)
In this thesis, a two-dimensional pixel-wise deconvolution method for partial volume correction (PVC) in combined Positron Emission Tomography and Computed Tomography (PET/CT) imaging has been developed. The method is based on Van Cittert's deconvolution algorithm and includes a noise reduction method based on adaptive smoothing and median filters. Furthermore, a technique to take into account the position-dependent PET point spread function (PSF) and to reduce ringing artifacts is also described. The quantitative and qualitative performance of the proposed PVC algorithm was evaluated using phantom experiments with varying object size, background and noise level. PVC results in increased activity recovery as well as image contrast enhancement. However, the quantitative performance of the algorithm is impaired by the presence of background activity and image noise. When applying the correction to clinical PET images, the result was an increase in standardized uptake values, up to 98% for small tumors in the lung. These results suggest that the PVC described in this work significantly improves activity recovery without producing excessive amounts of ringing artifacts or noise amplification. The main limitations of the algorithm are the restriction to two dimensions and the lack of regularization constraints based on anatomical information from the co-registered CT images.
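A minimal sketch of the basic Van Cittert iteration with a spatially invariant Gaussian PSF is shown below; the thesis's method adds adaptive noise suppression, median filtering and a position-dependent PSF on top of this, and the parameter values here are assumptions.

```python
# Basic Van Cittert deconvolution with a Gaussian PSF (illustrative sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

def van_cittert(image, sigma_psf, n_iter=10, relaxation=1.0):
    estimate = image.copy()
    for _ in range(n_iter):
        blurred = gaussian_filter(estimate, sigma_psf)      # PSF applied to the estimate
        estimate = estimate + relaxation * (image - blurred)  # correct by the residual
        estimate = np.clip(estimate, 0, None)                # suppress negative overshoot
    return estimate

# Usage sketch (placeholder names): corrected = van_cittert(pet_slice, sigma_psf=2.5, n_iter=8)
```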
