  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Calibration of trip distribution by generalised linear models

Shrewsbury, John Stephen January 2012 (has links)
Generalised linear models (GLMs) provide a flexible and sound basis for calibrating gravity models for trip distribution, for a wide range of deterrence functions (from steps to splines), with K factors and geographic segmentation. The Tanner function fitted Wellington Transport Strategy Model data as well as more complex functions did, and was insensitive to the formulation of intrazonal and external costs. Weighting from variable expansion factors and interpretation of the deviance under sparsity are addressed. An observed trip matrix is disaggregated and fitted at the household, person and trip levels with consistent results. Hierarchical GLMs (HGLMs) are formulated to fit mixed logit models, but were unable to reproduce the coefficients of simple nested logit models. Geospatial analysis by HGLM showed no evidence of spatial error patterns, either as random K factors or as correlations between them. Equivalence with hierarchical mode choice, duality with trip distribution, regularisation, lorelograms, and the modifiable areal unit problem are considered. Trip distribution is calibrated from aggregate data by the MVESTM matrix estimation package, incorporating period and direction factors in the intercepts. Counts across four screenlines showed a significance similar to that of a thousand-household travel survey. Calibration was possible only in conjunction with trip end data. Criteria for validation against screenline counts were met, but only if allowance was made for error in the trip end data.
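As a hedged illustration of the kind of calibration this abstract describes, the sketch below fits a Tanner-type gravity model as a Poisson GLM on synthetic data. All zone counts, costs and parameter values are invented for the example, and the IRLS loop is simply what a GLM package performs internally; it is not the thesis's own code.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 8                                   # hypothetical 8-zone example
cost = rng.uniform(1.0, 20.0, (n, n))   # generalised cost c_ij

# Synthetic "observed" trips from a Tanner-type gravity model:
# E[T_ij] = A_i * B_j * c_ij^alpha * exp(beta * c_ij)
alpha, beta = 0.5, -0.3
trips = rng.poisson(50.0 * cost**alpha * np.exp(beta * cost)).ravel().astype(float)

# Design matrix: origin effects, destination effects (one level dropped to
# avoid collinearity), plus log(c) and c, which together give the Tanner
# deterrence function c^alpha * exp(beta * c) on the log-link scale.
o, d = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
X = np.column_stack([
    (o.ravel()[:, None] == np.arange(n)).astype(float),     # balancing A_i
    (d.ravel()[:, None] == np.arange(1, n)).astype(float),  # balancing B_j
    np.log(cost.ravel()),
    cost.ravel(),
])

# Poisson GLM with log link, fitted by iteratively reweighted least squares.
coef = np.linalg.lstsq(X, np.log(trips + 0.5), rcond=None)[0]
for _ in range(25):
    mu = np.exp(X @ coef)
    z = X @ coef + (trips - mu) / mu          # working response
    coef = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

alpha_hat, beta_hat = coef[-2], coef[-1]
print(alpha_hat, beta_hat)    # recovered deterrence parameters
```

The recovered `alpha_hat` and `beta_hat` should land near the generating values, which is the sense in which GLM fitting "calibrates" the deterrence function.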
12

Uma proposta de estimação da matriz OD a partir dos fluxos de tráfego observados nas interseções da rede de transportes / A proposal for OD matrix estimation from traffic flow observed at transportation network intersections

Bertoncini, Bruno Vieira 19 November 2010 (has links)
A meta do trabalho é propor e testar a hipótese que a contagem de tráfego nas interseções da rede de transportes, ao invés de contagem de tráfego nos arcos, reduz o grau de indeterminação e torna mais precisa a matriz OD estimada pelo modelo sintético. Ademais, é proposto e detalhado um método de estimação da matriz OD através de médias sucessivas (MEMS). É apresentada a descrição matemática das propostas e o detalhamento dos experimentos elaborados para testá-las. Três métodos de estimação, QUEENSOD, TransCAD e MEMS, foram utilizados na verificação da hipótese. A inserção de "arcos virtuais" na rede de transportes constituiu um artifício que permitiu aos programas QUEENSOD e TransCAD realizarem a estimação utilizando fluxos observados nas interseções. A utilização de contagens de fluxo nas interseções propiciou à matriz OD estimada melhorias que acarretaram sua aproximação com a matriz OD "real". O experimento mostrou que a matriz OD estimada ao considerar contagens de tráfego nas interseções apresenta melhor desempenho em comparação à matriz estimada ao considerar contagens nos arcos da rede de transportes. A matriz estimada gradativamente aproximou-se da "real" à medida que foi aumentada a quantidade de informação de fluxo e sua distribuição na rede. Assim, a hipótese formulada para este trabalho não pôde ser refutada. / The aim of this work is to propose and test the hypothesis that traffic counts collected at network intersections, instead of traffic counts collected at links, reduce indeterminacy and make the OD matrix estimated by the synthetic model more accurate. Furthermore, a method to estimate the OD matrix based on successive averages (MEMS) is proposed and described in detail. The mathematical formulation of the proposals and a description of the experiments are presented. Three estimation methods, QUEENSOD, TransCAD, and MEMS, were used to verify the hypothesis.
The use of "virtual links" in the network is an artifice that enables QUEENSOD and TransCAD to estimate the OD matrix from flows observed at intersections. Using flow counts collected at intersections improved the estimated OD matrix, bringing it closer to the "real" matrix. The experimental results show that OD matrix estimation based on traffic counts collected at network intersections performs better than estimation based on traffic counts collected on network links. The estimated matrix gradually approaches the "real" matrix as the amount of flow information and its distribution across the network increase. Therefore, the hypothesis formulated in this work could not be refuted.
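A minimal sketch of origin-destination estimation by successive averages, in the spirit of (but far simpler than) the MEMS method described above. The assignment matrix, the "observed" intersection counts, and the per-OD correction rule are all illustrative assumptions, not the thesis's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_od, n_mov = 6, 10              # toy problem: 6 OD pairs, 10 counted movements
# Hypothetical assignment map: fraction of each OD flow crossing each movement
A = rng.uniform(0.0, 1.0, (n_mov, n_od)) * (rng.random((n_mov, n_od)) < 0.5)
A[A.sum(axis=1) == 0, 0] = 1.0   # every counted movement sees some OD pair
x_true = rng.uniform(10.0, 100.0, n_od)
counts = A @ x_true              # "observed" intersection flows

x = np.full(n_od, 50.0)          # flat prior OD estimate
err0 = np.abs(A @ x - counts).max()
for k in range(1, 301):
    ratio = counts / np.maximum(A @ x, 1e-9)
    use = A > 0
    # per-OD correction: average count ratio over the movements it crosses
    g = (use * ratio[:, None]).sum(axis=0) / np.maximum(use.sum(axis=0), 1)
    x += (1.0 / k) * (x * g - x)  # successive-averages step with weight 1/k
err = np.abs(A @ x - counts).max()
print(err0, err)
```

The residual against the counts should shrink substantially, illustrating how successive averaging damps the oscillation of pure multiplicative corrections.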
14

Quelques contributions à l'estimation de grandes matrices de précision / Some contributions to large precision matrix estimation

Balmand, Samuel 27 June 2016 (has links)
Sous l'hypothèse gaussienne, la relation entre indépendance conditionnelle et parcimonie permet de justifier la construction d'estimateurs de l'inverse de la matrice de covariance -- également appelée matrice de précision -- à partir d'approches régularisées. Cette thèse, motivée à l'origine par la problématique de classification d'images, vise à développer une méthode d'estimation de la matrice de précision en grande dimension, lorsque le nombre $n$ d'observations est petit devant la dimension $p$ du modèle. Notre approche repose essentiellement sur les liens qu'entretiennent la matrice de précision et le modèle de régression linéaire. Elle consiste à estimer la matrice de précision en deux temps. Les éléments non diagonaux sont tout d'abord estimés en considérant $p$ problèmes de minimisation du type racine carrée des moindres carrés pénalisés par la norme $\ell_1$. Les éléments diagonaux sont ensuite obtenus à partir du résultat de l'étape précédente, par analyse résiduelle ou maximum de vraisemblance. Nous comparons ces différents estimateurs des termes diagonaux en fonction de leur risque d'estimation. De plus, nous proposons un nouvel estimateur, conçu de sorte à tenir compte de la possible contamination des données par des outliers, grâce à l'ajout d'un terme de régularisation en norme mixte $\ell_2/\ell_1$. L'analyse non-asymptotique de la convergence de notre estimateur souligne la pertinence de notre méthode. / Under the Gaussian assumption, the relationship between conditional independence and sparsity makes it possible to justify the construction of estimators of the inverse of the covariance matrix -- also called precision matrix -- from regularized approaches. This thesis, originally motivated by the problem of image classification, aims at developing a method to estimate the precision matrix in high dimension, that is, when the sample size $n$ is small compared to the dimension $p$ of the model.
Our approach relies essentially on the connection between the precision matrix and the linear regression model. It consists of estimating the precision matrix in two steps. The off-diagonal elements are first estimated by solving $p$ minimization problems of the $\ell_1$-penalized square-root-of-least-squares type. The diagonal entries are then obtained from the result of the previous step, by residual analysis or likelihood maximization. These estimators of the diagonal entries are compared in terms of estimation risk. Moreover, we propose a new estimator, designed to account for the possible contamination of the data by outliers, thanks to the addition of an $\ell_2/\ell_1$ mixed-norm regularization term. The nonasymptotic analysis of the consistency of our estimator points out the relevance of our method.
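The two-step scheme described above can be sketched as follows, with a plain coordinate-descent lasso standing in for the square-root-of-least-squares step, and a tridiagonal precision matrix chosen purely for illustration; this is a generic nodewise-regression sketch, not the thesis's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 10, 400
# Sparse true precision matrix: tridiagonal, i.e. a chain graphical model
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)

def lasso(Z, y, lam, sweeps=100):
    """Coordinate-descent lasso: minimise 0.5*||y - Zb||^2 + lam*||b||_1."""
    b = np.zeros(Z.shape[1])
    ss = (Z ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(Z.shape[1]):
            rho = Z[:, j] @ (y - Z @ b) + ss[j] * b[j]
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / ss[j]
    return b

Omega_hat = np.zeros((p, p))
for j in range(p):
    others = np.arange(p) != j
    # Step 1: l1-penalised regression of X_j on the other columns
    # (ordinary lasso here, standing in for the square-root lasso).
    b = lasso(X[:, others], X[:, j], lam=15.0)
    resid = X[:, j] - X[:, others] @ b
    # Step 2: diagonal entry from the residual variance
    Omega_hat[j, j] = 1.0 / resid.var()
    Omega_hat[j, others] = -b * Omega_hat[j, j]

Omega_hat = 0.5 * (Omega_hat + Omega_hat.T)   # symmetrise
print(np.round(Omega_hat[:3, :3], 2))
```

The estimate should recover the tridiagonal sign pattern: nonzero first off-diagonal, near-zero entries elsewhere.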
15

Optimisation dynamique de réseaux IP/MPLS / Dynamic optimization of IP/MPLS networks

Vallet, Josselin 05 May 2015 (has links)
La forte variabilité des trafics est devenue l'un des problèmes majeurs auxquels doivent faire face les gestionnaires d'infrastructures réseau. Dans ces conditions, l'optimisation du routage des flux en se basant uniquement sur une matrice de trafic moyenne estimée en heure de pointe n'est plus pertinente. Les travaux conduits dans cette thèse visent la conception de méthodes d'optimisation dynamiques du routage, adaptant en temps réel les routes utilisées par les flux aux conditions de trafic dans le réseau. Nous étudions tout d'abord le problème d'optimisation des poids OSPF pour le routage intra-domaine dans les réseaux IP, où le trafic est routé le long de plus courts chemins, en fonction des poids des liens. Nous proposons une approche en ligne permettant de reconfigurer dynamiquement les poids OSPF, et donc les routes utilisées, pour répondre aux variations observées du trafic et réduire ainsi le taux de congestion du réseau. L'approche proposée repose sur l'estimation robuste des demandes en trafic des flux à partir de mesures SNMP sur la charge des liens. Les résultats expérimentaux, aussi bien sur des trafics simulés que réels, montrent que le taux de congestion du réseau peut être significativement réduit par rapport à une configuration statique. Dans la même optique, nous nous intéressons également à l'optimisation des réseaux MPLS, qui permettent de gérer l'utilisation des ressources disponibles en affectant un chemin spécifique à chaque LSP. Nous proposons un algorithme inspiré de la théorie des jeux pour déterminer le placement des LSP optimisant un critère de performance non linéaire. Nous établissons la convergence de cet algorithme et obtenons des bornes sur son facteur d'approximation pour plusieurs fonctions de coût.
L'intérêt principal de cette technique étant d'offrir des solutions de bonne qualité en des temps de calcul extrêmement réduits, nous étudions son utilisation pour la reconfiguration dynamique du placement des LSP. La dernière partie de cette thèse est consacrée à la conception et au développement d'une solution logicielle permettant le déploiement d'un réseau overlay auto-guérissant et auto-optimisant entre différentes plateformes de cloud computing. La solution est conçue pour ne nécessiter aucun changement des applications. En mesurant régulièrement la qualité des liens Internet entre les centres de données, elle permet de détecter rapidement la panne d'une route IP et de basculer le trafic sur un chemin de secours. Elle permet également de découvrir dynamiquement les chemins dans le réseau overlay qui optimisent une métrique de routage spécifique à l'application. Nous décrivons l'architecture et l'implémentation du système, ainsi que les expériences réalisées à la fois en émulation et sur une plateforme réelle composée de plusieurs centres de données situés dans différents pays. / The high variability of traffic has become one of the major problems faced by network infrastructure managers. Under these conditions, flow route optimization based solely on an average busy-hour traffic matrix is no longer relevant. The work done in this thesis aims to design dynamic routing optimization methods that adapt, in real time, the routes used by the flows to the actual network traffic conditions. We first study the problem of OSPF weight optimization for intra-domain routing in IP networks, where the traffic is routed along shortest paths according to link weights. We propose an online scheme to dynamically reconfigure the OSPF weights, and therefore the routes used, to respond to observed traffic variations and reduce the network congestion rate. The proposed approach is based on robust estimation of flow traffic demands from SNMP measurements of link loads.
Experimental results, on both simulated and real traffic data, show that the network congestion rate can be significantly reduced in comparison to a static weight configuration. Along the same lines, we are also interested in optimizing MPLS networks, which manage the available resources by assigning a specific path to each LSP. We propose an algorithm inspired by game theory to determine the LSP placement optimizing a nonlinear performance criterion. We establish the convergence of the algorithm and obtain bounds on its approximation factor for several cost functions. As the main advantage of this technique is to offer good-quality solutions in extremely short computation times, we study its use for dynamic reconfiguration of the LSP placement. The last part of this thesis is devoted to the design and development of a software solution for deploying a self-healing and self-optimizing overlay network between different cloud platforms. The solution is designed so that no change is required to client applications. By regularly measuring the quality of Internet links between data centers, it can quickly detect an IP route failure and switch the traffic to a backup path. It also dynamically discovers the paths in the overlay network that optimize an application-specific routing metric. We describe the system architecture and implementation, as well as experiments carried out both in emulation and on a real platform composed of several data centers located in different countries.
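A toy illustration of why reconfiguring OSPF link weights changes congestion: shortest-path routing over a tiny 3-node network, where raising one weight diverts a demand onto an under-used link. The topology, weights and demands are invented for the example.

```python
import heapq

def shortest_path(adj, src, dst):
    """Dijkstra returning the list of links on the best path src -> dst."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, v = [], dst
    while v != src:                       # walk predecessors back to src
        path.append((prev[v], v))
        v = prev[v]
    return path[::-1]

def max_utilisation(weights, capacity, demands):
    """Route every demand on its shortest path, return the peak link load ratio."""
    adj = {}
    for (u, v), w in weights.items():
        adj.setdefault(u, []).append((v, w))
    load = {e: 0.0 for e in weights}
    for (s, t), vol in demands.items():
        for e in shortest_path(adj, s, t):
            load[e] += vol
    return max(load[e] / capacity[e] for e in weights)

# Hypothetical 3-node network; all capacities 10 units
weights  = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 5.0}
capacity = {e: 10.0 for e in weights}
demands  = {(0, 2): 8.0, (1, 2): 8.0}

u_before = max_utilisation(weights, capacity, demands)
u_after  = max_utilisation({**weights, (0, 1): 10.0}, capacity, demands)
print(u_before, u_after)   # 1.6 then 0.8
```

With the original weights, both demands funnel through link (1, 2), overloading it; raising the weight of (0, 1) pushes the 0-to-2 demand onto the direct link and halves the peak utilisation, the effect a weight-optimisation scheme exploits.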
16

Neural Networks for improved signal source enumeration and localization with unsteered antenna arrays

Rogers, John T, II 08 December 2023 (has links) (PDF)
Direction of arrival estimation using unsteered antenna arrays, unlike mechanically scanned or phased arrays, requires complex algorithms which perform poorly with small-aperture arrays or without a large number of observations, or snapshots. In general, these algorithms compute a sample covariance matrix to obtain the direction of arrival, and some require a prior estimate of the number of signal sources. Herein, artificial neural network architectures are proposed which demonstrate improved estimation of the number of signal sources, the true signal covariance matrix, and the direction of arrival. The proposed number-of-sources estimation network demonstrates robust performance in the case of coherent signals, where conventional methods fail. For covariance matrix estimation, four different network architectures are assessed, and the best-performing architecture achieves a 20-times improvement in performance over the sample covariance matrix. Additionally, this network can achieve performance comparable to the sample covariance matrix with one eighth the number of snapshots. For direction of arrival estimation, preliminary results are provided comparing six architectures, all of which demonstrate high levels of accuracy and demonstrate the benefits of progressively training artificial neural networks by training on a sequence of sub-problems and extending the network to encapsulate the entire process.
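The sample-covariance and source-enumeration steps this abstract refers to can be sketched with the classical MDL criterion, a conventional baseline rather than the proposed neural networks; the array geometry, angles and SNR below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
p, n_src, N = 8, 2, 200          # sensors, true sources, snapshots
angles = np.deg2rad([-20.0, 35.0])
# Steering matrix for a half-wavelength-spaced uniform linear array
A = np.exp(-1j * np.pi * np.outer(np.arange(p), np.sin(angles)))
S = (rng.standard_normal((n_src, N)) + 1j * rng.standard_normal((n_src, N))) / np.sqrt(2)
noise = 0.3 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N))) / np.sqrt(2)
X = A @ S + noise

R = X @ X.conj().T / N            # sample covariance matrix
ev = np.sort(np.linalg.eigvalsh(R))[::-1]

def mdl(k):
    """MDL cost for hypothesis 'k sources': noise eigenvalues should be equal."""
    tail = ev[k:]
    fit = N * (p - k) * (np.log(tail.mean()) - np.log(tail).mean())
    penalty = 0.5 * k * (2 * p - k) * np.log(N)
    return fit + penalty

k_hat = min(range(p), key=mdl)
print(k_hat)   # expected: 2
```

MDL works well here because the two sources are independent; for coherent signals the noise-subspace eigenvalues no longer separate cleanly, which is the failure mode the proposed networks target.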
17

Highly Robust and Efficient Estimators of Multivariate Location and Covariance with Applications to Array Processing and Financial Portfolio Optimization

Fishbone, Justin Adam 21 December 2021 (has links)
Throughout stochastic data processing fields, mean and covariance matrices are commonly employed for purposes such as standardizing multivariate data through decorrelation. For practical applications, these matrices are usually estimated, and often, the data used for these estimates are non-Gaussian or may be corrupted by outliers or impulsive noise. To address this, robust estimators should be employed. However, in signal processing, where complex-valued data are common, the robust estimation techniques currently employed, such as M-estimators, provide limited robustness in the multivariate case. For this reason, this dissertation extends, to the complex-valued domain, the high-breakdown-point class of multivariate estimators called S-estimators. This dissertation defines S-estimators in the complex-valued context, and it defines their properties for complex-valued data. One major shortcoming of the leading high-breakdown-point multivariate estimators, such as the Rocke S-estimator and the smoothed hard rejection MM-estimator, is that they lack statistical efficiency at non-Gaussian distributions, which are common with real-world applications. This dissertation proposes a new tunable S-estimator, termed the Sq-estimator, for the general class of elliptically symmetric distributions—a class containing many common families such as the multivariate Gaussian, K-, W-, t-, Cauchy, Laplace, hyperbolic, variance gamma, and normal inverse Gaussian distributions. This dissertation demonstrates the diverse applicability and performance benefits of the Sq-estimator through theoretical analysis, empirical simulation, and the processing of real-world data. Through analytical and empirical means, the Sq-estimator is shown to generally provide higher maximum efficiency than the leading maximum-breakdown estimators, and it is also shown to generally be more stable with respect to initial conditions. 
To illustrate the theoretical benefits of the Sq-estimator for complex-valued applications, the efficiencies and influence functions of adaptive minimum variance distortionless response (MVDR) beamformers based on S- and M-estimators are compared. To illustrate the finite-sample performance benefits of the Sq-estimator, empirical simulation results of multiple signal classification (MUSIC) direction-of-arrival estimation are explored. Additionally, the optimal investment of real-world stock data is used to show the practical performance benefits of the Sq-estimator with respect to robustness to extreme events, estimation efficiency, and prediction performance. / Doctor of Philosophy / Throughout stochastic processing fields, mean and covariance matrices are commonly employed for purposes such as standardizing multivariate data through decorrelation. For practical applications, these matrices are usually estimated, and often, the data used for these estimates are non-normal or may be corrupted by outliers or large sporadic noise. To address this, estimators should be employed that are robust to these conditions. However, in signal processing, where complex-valued data are common, the robust estimation techniques currently employed provide limited robustness in the multivariate case. For this reason, this dissertation extends, to the complex-valued domain, the highly robust class of multivariate estimators called S-estimators. This dissertation defines S-estimators in the complex-valued context, and it defines their properties for complex-valued data. One major shortcoming of the leading highly robust multivariate estimators is that they may require unreasonably large numbers of samples (i.e., they may have low statistical efficiency) in order to provide good estimates at non-normal distributions, which are common with real-world applications.
This dissertation proposes a new tunable S-estimator, termed the Sq-estimator, for the general class of elliptically symmetric distributions—a class containing many common families such as the multivariate Gaussian, K-, W-, t-, Cauchy, Laplace, hyperbolic, variance gamma, and normal inverse Gaussian distributions. This dissertation demonstrates the diverse applicability and performance benefits of the Sq-estimator through theoretical analysis, empirical simulation, and the processing of real-world data. Through analytical and empirical means, the Sq-estimator is shown to generally provide higher maximum efficiency than the leading highly robust estimators, and its solutions are also shown to generally be less sensitive to initial conditions. To illustrate the theoretical benefits of the Sq-estimator for complex-valued applications, the statistical efficiencies and robustness of adaptive beamformers based on various estimators are compared. To illustrate the finite-sample performance benefits of the Sq-estimator, empirical simulation results of signal direction-of-arrival estimation are explored. Additionally, the optimal investment of real-world stock data is used to show the practical performance benefits of the Sq-estimator with respect to robustness to extreme events, estimation efficiency, and prediction performance.
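As a hedged illustration of robust multivariate scatter estimation, the sketch below runs Tyler's fixed-point M-estimator, a simpler, real-valued relative of the S- and Sq-estimators discussed above, on heavy-tailed synthetic data; the shape matrix and sample sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 4, 2000
shape = np.array([[1.0, 0.6, 0.0, 0.0],
                  [0.6, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.3],
                  [0.0, 0.0, 0.3, 1.0]])
L = np.linalg.cholesky(shape)
# Heavy-tailed samples: multivariate t with 2 degrees of freedom, so the
# ordinary sample covariance has infinite variance but Tyler's does not.
g = rng.standard_normal((n, p)) @ L.T
tau = rng.chisquare(2, n) / 2
X = g / np.sqrt(tau)[:, None]

# Tyler's M-estimator: fixed-point iteration on the scatter shape
Sigma = np.eye(p)
for _ in range(100):
    inv = np.linalg.inv(Sigma)
    d = np.einsum("ij,jk,ik->i", X, inv, X)   # x_i^T Sigma^{-1} x_i
    Sigma_new = (p / n) * (X / d[:, None]).T @ X
    Sigma_new *= p / np.trace(Sigma_new)      # fix the (arbitrary) scale
    if np.abs(Sigma_new - Sigma).max() < 1e-9:
        Sigma = Sigma_new
        break
    Sigma = Sigma_new

print(np.round(Sigma, 2))
```

Because Tyler's estimator depends only on sample directions, it recovers the shape matrix despite the infinite-variance samples; extending such estimators to complex-valued data with better efficiency is the dissertation's contribution.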
18

Efficient formulation and implementation of ensemble based methods in data assimilation

Nino Ruiz, Elias David 11 January 2016 (has links)
Ensemble-based methods have gained widespread popularity in the field of data assimilation. An ensemble of model realizations encapsulates information about the error correlations driven by the physics and the dynamics of the numerical model. This information can be used to obtain improved estimates of the state of non-linear dynamical systems such as the atmosphere and/or the ocean. This work develops efficient ensemble-based methods for data assimilation. A major bottleneck in ensemble Kalman filter (EnKF) implementations is the solution of a linear system at each analysis step. To alleviate it, an EnKF implementation based on an iterative Sherman-Morrison formula is proposed. The rank deficiency of the ensemble covariance matrix is exploited in order to efficiently compute the analysis increments during the assimilation process. The computational effort of the proposed method is comparable to that of the best EnKF implementations found in the current literature. The stability of the new algorithm is theoretically proven based on the positive definiteness of the data error covariance matrix. In order to improve the background error covariance matrices in ensemble-based data assimilation, we explore the use of shrinkage covariance matrix estimators computed from ensembles. The resulting filter has attractive features in terms of both memory usage and computational complexity. Numerical results show that it performs better than traditional EnKF formulations. In geophysical applications, the correlations between errors corresponding to distant model components decrease rapidly with distance. We propose a new and efficient implementation of the EnKF based on a modified Cholesky decomposition for inverse covariance matrix estimation. This approach exploits the conditional independence of background errors between distant model components with regard to a predefined radius of influence.
Consequently, sparse estimators of the inverse background error covariance matrix can be obtained. This implies huge memory savings during the assimilation process under realistic weather forecast scenarios. Rigorous error bounds for the resulting estimator in the context of data assimilation are theoretically proved. The conclusion is that the resulting estimator converges to the true inverse background error covariance matrix when the ensemble size is of the order of the logarithm of the number of model components. We explore high-performance implementations of the proposed EnKF algorithms. When the observational operator can be locally approximated for different regions of the domain, efficient parallel implementations of the EnKF formulations presented in this dissertation can be obtained. The parallel computation of the analysis increments is performed making use of domain decomposition. Local analysis increments are computed on (possibly) different processors. Once all local analysis increments have been computed, they are mapped back onto the global domain to recover the global analysis. Tests performed with an atmospheric general circulation model at a T-63 resolution, varying the number of processors from 96 to 2,048, reveal that the assimilation time can be decreased multiple-fold for all the proposed EnKF formulations. Ensemble-based methods can also be used to reformulate strong-constraint four-dimensional variational data assimilation so as to avoid the construction of adjoint models, which can be complicated for operational models. We propose a trust-region approach based on ensembles, in which the analysis increments are computed in the space spanned by an ensemble of snapshots. The quality of the resulting increments in the ensemble space is compared against the gains in the full space. Decisions on whether to accept or reject solutions rely on trust-region updating formulas.
Results based on an atmospheric general circulation model with a T-42 resolution reveal that this methodology can improve the analysis accuracy. / Ph. D.
19

Bilevel programming

Zemkoho, Alain B. 25 June 2012 (has links) (PDF)
We have considered the bilevel programming problem in the case where the lower-level problem admits more than one optimal solution. It is well known in the literature that in such a situation the problem is ill-posed from the viewpoint of scalar objective optimization. Thus the optimistic and pessimistic approaches have been suggested earlier in the literature to deal with this case. In the thesis, we have developed a unified approach to derive necessary optimality conditions for both the optimistic and pessimistic bilevel programs, which is based on advanced tools from variational analysis. We have obtained various constraint qualifications and stationarity conditions depending on some constructive representations of the solution set-valued mapping of the follower's problem. In the auxiliary developments, we have provided rules for the generalized differentiation and robust Lipschitzian properties of the lower-level solution set-valued map, which are of fundamental interest for other areas of nonlinear and nonsmooth optimization. Some of the results of the aforementioned theory have then been applied to derive stationarity conditions for some well-known transportation problems having the bilevel structure.
20

Bilevel programming: reformulations, regularity, and stationarity

Zemkoho, Alain B. 12 June 2012 (has links)
We have considered the bilevel programming problem in the case where the lower-level problem admits more than one optimal solution. It is well known in the literature that in such a situation the problem is ill-posed from the viewpoint of scalar objective optimization. Thus the optimistic and pessimistic approaches have been suggested earlier in the literature to deal with this case. In the thesis, we have developed a unified approach to derive necessary optimality conditions for both the optimistic and pessimistic bilevel programs, which is based on advanced tools from variational analysis. We have obtained various constraint qualifications and stationarity conditions depending on some constructive representations of the solution set-valued mapping of the follower's problem. In the auxiliary developments, we have provided rules for the generalized differentiation and robust Lipschitzian properties of the lower-level solution set-valued map, which are of fundamental interest for other areas of nonlinear and nonsmooth optimization. Some of the results of the aforementioned theory have then been applied to derive stationarity conditions for some well-known transportation problems having the bilevel structure.
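The optimistic/pessimistic distinction can be illustrated on a classical toy bilevel problem, solved here by grid enumeration with invented objectives: when the follower is indifferent among several optimal responses, the two formulations value the same leader decision very differently.

```python
import numpy as np

xs = np.linspace(-1, 1, 41)       # leader grid (includes x = 0)
ys = np.linspace(-1, 1, 41)       # follower grid

def follower_argmin(x, tol=1e-12):
    """Optimal-response set of the follower for f(x, y) = x * y."""
    vals = x * ys
    return ys[vals <= vals.min() + tol]

def leader(x, y):
    return x ** 2 + y              # leader cost F(x, y)

# Optimistic: the follower breaks ties in the leader's favour (min over argmin).
# Pessimistic: the follower breaks ties against the leader (max over argmin).
opt = min(min(leader(x, y) for y in follower_argmin(x)) for x in xs)
pes = min(max(leader(x, y) for y in follower_argmin(x)) for x in xs)
print(opt, pes)
```

At x = 0 the follower's cost x*y is constant in y, so its solution set is the whole interval; the optimistic value is attained there, while the pessimistic leader must avoid x = 0 entirely, which is exactly the ill-posedness the two approaches are designed to resolve.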
