541

Statistical methods for reconstruction of entry, descent, and landing performance with application to vehicle design

Dutta, Soumyo 13 January 2014 (has links)
There is significant uncertainty in our knowledge of the Martian atmosphere and of the aerodynamics of Mars entry, descent, and landing (EDL) systems. These uncertainties result in conservatism in the design of EDL vehicles, leading to higher system masses and a broad range of performance predictions. Data from flight instrumentation onboard Mars EDL systems can be used to quantify these uncertainties, but the existing dataset is sparse and many parameters of interest have not been previously observable. Many past EDL reconstructions neither utilize statistical information about the uncertainty of the measured data nor quantify the uncertainty of the estimated parameters. Statistical estimation methods can blend disparate data types to improve the reconstruction of parameters of interest for the vehicle. For example, integrating data from aeroshell-mounted pressure transducers, an inertial measurement unit, and a radar altimeter can improve the estimates of the trajectory, atmospheric profile, and aerodynamic coefficients, while also quantifying the uncertainty in these estimates. These same statistical methods can be leveraged to improve current engineering models and thereby reduce conservatism in future EDL vehicle design. This thesis presents a comprehensive methodology for parameter reconstruction and uncertainty quantification that blends dissimilar Mars EDL datasets. The statistical estimators applied include the Extended Kalman Filter, the Unscented Kalman Filter, and an Adaptive Filter, applied in a manner that maximizes the observability of the parameters of interest given the sparse, disparate EDL dataset. The methodology is validated with simulated data and then applied to estimate the EDL performance of the 2012 Mars Science Laboratory. The reconstruction methodology is also utilized as a tool for improving vehicle design and reducing design conservatism: a novel method of optimizing the design of future EDL atmospheric data systems is presented that leverages the reconstruction methodology to identify important design trends and the point of diminishing returns for the atmospheric data sensors that are critical to improving reconstruction performance for future EDL vehicles. The impact of the estimation methodology on aerodynamic and atmospheric engineering models is also studied, and suggestions are made for future EDL instrumentation.
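To make the estimation step concrete, here is a minimal sketch of one Extended Kalman Filter predict/update cycle of the kind such reconstructions rely on, assuming a hypothetical one-dimensional entry model with an exponential atmosphere and a single altimeter measurement; the drag model, constants, and noise levels are illustrative stand-ins, not the thesis's multi-sensor estimator.

```python
import numpy as np

def ekf_step(x, P, z, dt, Q, R, beta=100.0, rho0=0.02, H_s=11100.0, g=3.71):
    """One EKF cycle. State x = [altitude (m), descent rate (m/s)]."""
    h, v = x
    rho = rho0 * np.exp(-h / H_s)            # assumed exponential Mars atmosphere
    a_drag = 0.5 * rho * v**2 / beta          # drag deceleration, ballistic coeff. beta
    # Nonlinear propagation (Euler step)
    x_pred = np.array([h - v * dt, v + (g - a_drag) * dt])
    # Jacobian of the dynamics, for covariance propagation
    drho_dh = -rho / H_s
    F = np.array([[1.0, -dt],
                  [-0.5 * dt * drho_dh * v**2 / beta, 1.0 - dt * rho * v / beta]])
    P_pred = F @ P @ F.T + Q
    # Linear altimeter measurement: z = altitude + noise
    Hm = np.array([[1.0, 0.0]])
    S = Hm @ P_pred @ Hm.T + R                # innovation covariance
    K = P_pred @ Hm.T / S                     # Kalman gain
    x_new = x_pred + (K * (z - x_pred[0])).ravel()
    P_new = (np.eye(2) - K @ Hm) @ P_pred
    return x_new, P_new

# One illustrative step: prior at 40 km altitude, 5.5 km/s descent rate
x, P = np.array([40e3, 5500.0]), np.diag([1e4, 1e2])
x, P = ekf_step(x, P, z=39.4e3, dt=0.1, Q=np.diag([1.0, 0.1]), R=50.0**2)
print(x, np.sqrt(np.diag(P)))
```

The Unscented Kalman Filter variant mentioned in the abstract replaces the Jacobian F with sigma-point propagation, which behaves better when the drag nonlinearity is strong.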
542

Key Agreement Against Quantum Adversaries

Kalach, Kassem H. 08 1900 (has links)
Key agreement is a cryptographic scenario between two legitimate parties, who need to establish a common secret key over a public authenticated channel, and an eavesdropper who intercepts all their messages in order to learn the secret. We work in the query-complexity model, in which we count only the number of evaluations (queries) of a given black-box function, and we assume classical communication channels. Ralph Merkle provided the first unclassified scheme for secure communications over insecure channels. When the legitimate parties are willing to make O(N) queries for some parameter N, any classical eavesdropper needs Omega(N^2) queries before being able to learn their secret, which is optimal. However, a quantum eavesdropper can break this scheme in O(N) queries. Furthermore, it was conjectured that any scheme in which the legitimate parties are classical could be broken in O(N) quantum queries. In this thesis, we introduce protocols à la Merkle that fall into two categories. When the legitimate parties are restricted to classical computers, we offer the first secure classical scheme: it requires Omega(N^{13/12}) queries of a quantum eavesdropper to learn the secret. We give another protocol with security of Omega(N^{7/6}) queries. Furthermore, for any k >= 2, we introduce a classical protocol in which the legitimate parties establish a secret in O(N) queries while the optimal quantum eavesdropping strategy requires Theta(N^{1/2 + k/(k+1)}) queries, approaching Theta(N^{3/2}) as k increases. When the legitimate parties are provided with quantum computers, we present two quantum protocols improving on the best scheme known before this work. Furthermore, for any k >= 2, we give a quantum protocol in which the legitimate parties establish a secret in O(N) queries while the optimal quantum eavesdropping strategy requires Theta(N^{1 + k/(k+1)}) queries, approaching Theta(N^2) as k increases.
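A toy sketch of Merkle's original scheme in this black-box model, assuming the black-box function is modeled by a hash stand-in and that only evaluations of it are counted; the domain size and parameters are illustrative, and the thesis's improved protocols are not reproduced here.

```python
import random

N = 1000                     # query budget of each legitimate party
DOMAIN = N * N               # domain sized so a collision is likely (birthday bound)

def f(x):
    # Stand-in for the black-box random function; each call counts as one query
    return hash(("oracle", x))

def merkle_key_agreement():
    alice = random.sample(range(DOMAIN), N)      # N queries by Alice
    bob = random.sample(range(DOMAIN), N)        # N queries by Bob
    published = {f(x): x for x in alice}         # Alice publishes only the tags f(x)
    # Bob announces the first tag he can also produce; the common preimage,
    # never sent over the channel, becomes the shared secret key.
    for y in bob:
        if f(y) in published:
            return published[f(y)]
    return None                                  # no collision: retry with fresh samples

print("shared key:", merkle_key_agreement())
```

Each party makes N queries, and the birthday bound makes a common point likely over a domain of about N^2 elements. A classical eavesdropper must invert f on the announced tag, costing on the order of N^2 queries, while Grover-type quantum search cuts this to O(N), which is precisely the weakness the protocols in this thesis work around.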
543

Transmitting Quantum Information Reliably across Various Quantum Channels

Ouyang, Yingkai January 2013 (has links)
Transmitting quantum information across quantum channels is an important task. However, quantum information is delicate and easily corrupted. We address the task of protecting quantum information from an information-theoretic perspective -- we encode some message qudits into a quantum code, send the encoded quantum information across the noisy quantum channel, then recover the message qudits by decoding. In this dissertation, we discuss the coding problem from several perspectives. The noisy quantum channel is one of the central aspects of the quantum coding problem, and hence quantifying the noisy quantum channel from the physical model is an important problem. We work with an explicit physical model -- a pair of initially decoupled quantum harmonic oscillators interacting via a spring-like coupling, where the bath oscillator is initially in a thermal-like state. In particular, we treat the completely positive and trace-preserving map on the system as a quantum channel, and study the truncation of the channel by truncating its Kraus set. We thereby derive the matrix elements of the Choi-Jamiolkowski operator of the corresponding truncated channel, which are truncated transition amplitudes. Finally, we give a computable approximation for these truncated transition amplitudes with explicit error bounds, and numerically perform a case study of the oscillators in the off-resonant and weakly-coupled regime. In the context of truncated noisy channels, we revisit the notion of approximate error correction of finite-dimension codes. We derive a computationally simple lower bound on the worst-case entanglement fidelity of a quantum code when the truncated recovery map of Leung et al. is rescaled. As an application, we apply our bound to construct a family of multi-error-correcting amplitude damping codes that are permutation-invariant. This demonstrates an explicit example where the specific structure of the noisy channel allows code design outside the stabilizer formalism via purely algebraic means. We study lower bounds on the quantum capacity of adversarial channels, where we restrict the selection of quantum codes to the set of concatenated quantum codes. The adversarial channel is a quantum channel where an adversary corrupts a fixed fraction of the qudits sent across the channel in the most malicious way possible. The best known rates for communicating over adversarial channels are given by the quantum Gilbert-Varshamov (GV) bound, which is known to be attainable with random quantum codes. We generalize the classical result of Thommesen to the quantum case, thereby demonstrating the existence of concatenated quantum codes that asymptotically attain the quantum GV bound. The outer codes are quantum generalized Reed-Solomon codes, and the inner codes are random, independently chosen stabilizer codes, where the rates of the inner and outer codes lie in a specified feasible region. We next study upper bounds on the quantum capacity of some low-dimension quantum channels. The quantum capacity of a quantum channel is the maximum rate at which quantum information can be transmitted reliably across it, given arbitrarily many uses. While it is known that random quantum codes can be used to attain the quantum capacity, the quantum capacity of many classes of channels is undetermined, even for channels of low input and output dimension. For example, depolarizing channels are important quantum channels, but their capacities do not have tight numerical bounds.
We obtain upper bounds on the quantum capacity of some unital and non-unital channels -- two-qubit Pauli channels, two-qubit depolarizing channels, two-qubit locally symmetric channels, shifted qubit depolarizing channels, and shifted two-qubit Pauli channels -- using the coherent information of certain degradable channels. We make extensive use of channel twirling and of Smith and Smolin's method of constructing degradable extensions of quantum channels. The degradable channels we introduce, study, and use are two-qubit amplitude damping channels. Exploiting the notion of covariant quantum channels, we give sufficient conditions for the quantum capacity of a degradable channel to be the optimal value of a concave program with linear constraints, and show that our two-qubit degradable amplitude damping channels have this property.
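For intuition about the coherent-information computations underlying such bounds, here is a minimal sketch for the single-qubit amplitude damping channel, which is degradable for damping parameter gamma <= 1/2, so maximizing its single-letter coherent information yields the quantum capacity; this is a standard computation, not the thesis's two-qubit constructions.

```python
import numpy as np

def h2(x):
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def coherent_info(gamma, p):
    # For the diagonal input diag(1-p, p), the channel output has excited
    # population (1-gamma)*p and the environment (complementary channel)
    # gamma*p, so I_c = S(output) - S(environment) reduces to binary entropies.
    return h2((1 - gamma) * p) - h2(gamma * p)

for gamma in (0.1, 0.25, 0.4):
    cap = max(coherent_info(gamma, p) for p in np.linspace(0.0, 1.0, 2001))
    print(f"gamma = {gamma}: Q ~= {cap:.4f} qubits per channel use")
```

For gamma > 1/2 the channel is antidegradable and its quantum capacity is zero, which is why the degradable regime is the tractable one.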
544

An Optimizing Approach For Highway Safety Improvement Programs

Unal, Serter Ziya 01 June 2004 (has links) (PDF)
Improvements to highway safety have become a high priority for highway authorities due to increasing public awareness and concern over the high social and economic costs of accidents. However, satisfying this priority in an environment of limited budgets is difficult. It is therefore important to ensure that the funding available for highway safety improvements is efficiently utilized. In an attempt to maximize the overall highway safety benefits, highway professionals usually invoke an optimization process. The objective of this thesis is to develop a model for selecting appropriate improvements on a set of black spots that provides the maximum reduction in the expected number of accidents (total return), subject to the constraint that the money needed to implement these improvements does not exceed the available budget. For this purpose, a computer program, BSAP (Black Spot Analysis Program), was developed. BSAP comprises two separate but integrated programs: the User Interface Program (UIP) and the Main Analysis Program (MAP). The MAP, coded in MATLAB, contains the optimization procedure itself and performs all the necessary calculations using a binary integer optimization model. The UIP, coded in Visual Basic, provides menu-driven data preparation in a user-friendly environment.
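The selection problem BSAP optimizes has the structure of a multiple-choice knapsack: at most one improvement per black spot, maximizing the expected accident reduction within the budget. The thesis solves it with a binary integer optimization model in MATLAB; the following dynamic-programming sketch in Python, with invented costs and accident-reduction figures, just illustrates the problem structure.

```python
def select_improvements(spots, budget):
    """spots: per black spot, a list of (cost, expected accident reduction) options."""
    # dp[b] = (best total reduction spending exactly b, list of (spot, cost) picks)
    dp = [(0.0, [])] + [(float("-inf"), [])] * budget
    for spot, options in enumerate(spots):
        new_dp = list(dp)                         # option: do nothing at this spot
        for cost, reduction in options:
            for b in range(budget - cost + 1):
                val, picks = dp[b]
                if val > float("-inf") and val + reduction > new_dp[b + cost][0]:
                    new_dp[b + cost] = (val + reduction, picks + [(spot, cost)])
        dp = new_dp
    return max(dp)                                # best (reduction, picks) overall

spots = [[(30, 5.2), (50, 7.1)],   # spot 0: two candidate improvements
         [(20, 3.0)],              # spot 1: one candidate
         [(40, 6.5), (60, 8.0)]]   # spot 2: two candidates
print(select_improvements(spots, budget=100))
```

Because each spot's options extend only the states computed before that spot, at most one improvement per black spot enters the solution, which mirrors the constraint structure of the binary integer model.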
545

  • Willingness-to-Pay Analysis under a Bivariate Double-Bounded Dichotomous Choice Model

詹玉葳 Unknown Date (has links)
In a contingent valuation survey, it is quite common for subjects to be asked to respond to more than one WTP (willingness-to-pay) scenario. Under such circumstances, the responses provided by a subject are clearly correlated. Although this issue has long been recognized, a popular strategy in analyzing this sort of data is simply to ignore it and treat the responses as if they were totally uncorrelated. Given that WTP can take only non-negative values, and in view of the possible correlation, we propose an extended bivariate generalized gamma distribution that can deal with data collected under a two-scenario design. Applying it to the CVDFACTS study, in which subjects were asked to evaluate a medication-only weight-loss program as well as a medication-and-exercise program, we found that, other things being equal, female subjects, subjects residing in Chu-Dung County, heavier subjects, and subjects who are younger, have higher incomes, or have more years of schooling are willing to pay more. In addition, those who think obesity would affect their work and social activities also have higher WTP values.
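For context, the double-bounded format yields an interval-censored likelihood: the two yes/no answers bracket the respondent's WTP between bids. A minimal sketch with a univariate lognormal WTP distribution follows; the thesis's extended bivariate generalized gamma model generalizes this to two correlated scenarios, and the data here are invented.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import minimize

# Per respondent: first bid b1, follow-up bid b2, and the two yes/no answers.
bids1 = np.array([100., 100., 200., 200.])
bids2 = np.array([200., 50., 400., 100.])   # raised after "yes", lowered after "no"
ans1 = np.array([1, 0, 1, 0])
ans2 = np.array([0, 1, 1, 0])

def neg_loglik(theta):
    mu, sigma = theta[0], np.exp(theta[1])                    # enforce sigma > 0
    F = lambda x: lognorm.cdf(x, s=sigma, scale=np.exp(mu))   # P(WTP <= x)
    ll = 0.0
    for b1, b2, a1, a2 in zip(bids1, bids2, ans1, ans2):
        if a1 and a2:        p = 1 - F(b2)       # yes-yes: WTP > b2 > b1
        elif a1 and not a2:  p = F(b2) - F(b1)   # yes-no:  b1 < WTP <= b2
        elif not a1 and a2:  p = F(b1) - F(b2)   # no-yes:  b2 < WTP <= b1
        else:                p = F(b2)           # no-no:   WTP <= b2 < b1
        ll += np.log(max(p, 1e-12))
    return -ll

res = minimize(neg_loglik, x0=[np.log(150.0), 0.0], method="Nelder-Mead")
print("mu =", res.x[0], "sigma =", np.exp(res.x[1]))
```

The lognormal support already rules out negative WTP values; the generalized gamma family used in the thesis keeps that property while adding flexibility and, in its bivariate extension, the cross-scenario correlation.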
546

A multi-objective stochastic approach to combinatorial technology space exploration

Patel, Chirag B. 18 May 2009 (has links)
Several techniques were studied for selecting and prioritizing technologies for a complex system. Based on the findings, a method called Pareto Optimization and Selection of Technologies (POST) was formulated to efficiently explore the combinatorial technology space. A knapsack problem was selected as a benchmark for test-running the various algorithms and techniques of POST. A Monte Carlo simulation using surrogate models was used for uncertainty quantification, and concepts from graph theory were used to model and analyze compatibility constraints among technologies. A probabilistic Pareto optimization, based on the concepts of the Strength Pareto Evolutionary Algorithm II (SPEA2), was formulated for Pareto optimization in an uncertain objective space. As a result, multiple Pareto hyper-surfaces were obtained in a multi-dimensional objective space, each hyper-surface representing a specific probability level. These Pareto layers enabled the probabilistic comparison of non-dominated technology combinations. POST was implemented on a technology exploration problem for a 300-passenger commercial aircraft. The problem had 29 identified technologies with uncertain impacts on the system; the distributions for these uncertainties were defined using beta distributions. Surrogate system models in the form of Response Surface Equations (RSEs) were used to map the technology impacts onto the system responses. The computational complexity of the technology graph was evaluated, and an evolutionary algorithm was chosen for the probabilistic Pareto optimization. The dimensionality of the objective space was reduced using a dominance-structure-preserving approach, and the probabilistic Pareto optimization was implemented with the reduced set of objectives. Most of the technologies were found to be active on the Pareto layers. These layers were exported to a dynamic visualization environment built on the statistical analysis and visualization software JMP, where the technology combinations on the Pareto layers were explored using various visualization tools and one combination was selected. The main outcome of this research is a method, based on a consistent analytical foundation, for creating a dynamic tradeoff environment in which decision makers can interactively explore and select technology combinations.
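A compact sketch of the probabilistic Pareto-layer idea, assuming a toy two-objective problem (a fuel-burn metric versus development cost), beta-distributed technology impacts, and a stand-in surrogate; POST's response surface equations, SPEA2 search, and compatibility constraints are omitted.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_tech, n_mc = 5, 500
costs = rng.uniform(0.5, 2.0, n_tech)                  # development cost per technology
benefit = rng.beta(2, 5, size=(n_mc, n_tech)) * 0.08   # uncertain fuel-burn reduction

def objectives(combo, m):
    """Toy surrogate: (fuel-burn metric, total cost), both minimized."""
    idx = list(combo)
    fuel = 1.0 - (benefit[m, idx].sum() if idx else 0.0)
    cost = costs[idx].sum() if idx else 0.0
    return np.array([fuel, cost])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

combos = [c for r in range(n_tech + 1)
          for c in itertools.combinations(range(n_tech), r)]
# Conservative (90th-percentile) objectives define one probability layer
layer = {c: np.percentile([objectives(c, m) for m in range(n_mc)], 90, axis=0)
         for c in combos}
pareto = [c for c in combos
          if not any(dominates(layer[d], layer[c]) for d in combos if d != c)]
print(f"{len(pareto)} of {len(combos)} combinations on the 90% layer")
```

Repeating the non-dominated filter at other percentiles (50%, 99%, and so on) produces the stacked Pareto layers that the thesis exports to JMP for interactive exploration.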
547

Thermo-elasto-plastic uncoupling model of width variation for online application in automotive cold rolling process

Ngo, Quang Tien 30 March 2015 (has links)
In order to improve material yield in the cold rolling process, this thesis aims at developing a predictive width variation model that is both accurate and fast enough for online use. Efforts began in the 1960s with the development of empirical formulas; afterward, the Upper Bound Method (UBM) became more common. [Oh 1975]'s model with the 3D "simple" velocity field estimates the width variation well under finishing-mill rolling conditions. [Komori 2002] proposed a combination of fundamental fields to obtain a computer program depending minimally on the assumed velocity fields; however, only two fundamental fields were introduced, and they form a subset of the "simple" family. [Serek 2008] studied a quadratic velocity family that includes the "simple" one and leads to better results, at the price of a higher computing time. Focusing on the UBM, the first result of the thesis is a 2D model with an oscillating velocity field family. The model yields an optimum velocity that oscillates spatially along the roll-bite, and the optimum power and velocity field are closer to the Lam3-Tec3 results than those of the 2D "simple" field. For 3D modelling, we chose the 3D "simple" UBM and carried out a comparison with experiments performed at Arcelor Mittal on narrow strips [64], obtaining very good agreement. Further, a new UBM model is developed for a crowned strip with cylindrical work-rolls; it shows that the width variation decreases as a function of the strip crown, and the results match those of Lam3-Tec3 well. However, the UBM assumes a rigid-plastic behaviour, while in large strip rolling the elastic and thermal deformations have important impacts on the plastic deformation. Models that account for these phenomena exist [23,64], but they are time-consuming. Thus, the idea is to decompose the plastic width variation into three terms: the total, elastic, and thermal width variations through the plastic zone, each determined by a new simplified model. The simplified roll-bite entry and exit models estimate the elastic and plastic width variations before and after the roll-bite; they also give the longitudinal stresses defining the boundary conditions for the roll-bite model, which is the 3D "simple" UBM approximating the total width variation term. Moreover, with the plastic deformation and friction dissipation powers given by the same model, the thermal width variation term is also obtained. The resulting model, called the combined UBM-Slab model, is very fast (0.05 s) and predicts the width variation accurately compared with Lam3-Tec3 (error < 6%).
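A minimal sketch of the bookkeeping at the heart of the combined UBM-Slab model, expressing the decomposition described above; the function signature and all numerical values are invented for illustration.

```python
def plastic_width_variation(dw_total, dw_elastic, dw_thermal):
    """Plastic width variation across the roll-bite (mm).

    dw_total   -- total width variation, from the 3D "simple" UBM roll-bite model
    dw_elastic -- elastic contribution, from the simplified entry/exit models
    dw_thermal -- thermal contribution, from the deformation and friction
                  dissipation powers returned by the same roll-bite model
    """
    return dw_total - dw_elastic - dw_thermal

# Hypothetical values for one stand of a cold tandem mill
print(plastic_width_variation(dw_total=0.42, dw_elastic=0.10, dw_thermal=0.07))
```

This decoupling of the elastic and thermal terms from the rigid-plastic UBM is what allows the combined model to stay fast enough (0.05 s) for online use.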
548

The Impact of Self-Efficacy and Academic Achievement on Twelfth Grade African-American Male TRIO Program Participants: A Comparison Study of Two TRIO Programs at a Select Urban Institution

Ruffin, Christopher 21 May 2018 (has links)
This qualitative study examined the impact of TRIO Upward Bound and Math-Science programs on 12th-grade African-American male participants. The overall aim was to study their self-efficacy in fulfilling graduation requirements and their academic achievement in preparation for acceptance into a postsecondary institution. Data collection for this study comprised interviews, surveys, and student achievement data. Drawing on the qualitative director interviews, the researcher analyzed the data and presented the impact of the independent variables on the effectiveness of the TRIO Upward Bound program for African-American 12th-grade males. The comparison of two TRIO Upward Bound programs was conducted at a select urban institution in the southern region of Georgia. The results were analyzed to determine whether the academic challenges confronting economically disadvantaged, potential first-generation college students, particularly African-American males, suggest an urgent call to action for an effective intervention strategy.
549

Transmission system expansion planning with FACTS devices and DC links using an adapted branch-and-bound methodology

Klas, Juliana January 2013 (has links)
This work proposes a mathematical model for the transmission expansion planning problem based on the DC power flow model, considering the use of DC links and FACTS devices, solved by a method that enforces Kirchhoff's first and second laws within an enumerative, adapted branch-and-bound process. Two key aspects of the approach stand out: i) it presents a mathematical model that can be used directly on transmission expansion problems involving AC transmission lines, transformers, DC links, and FACTS devices, and ii) it is an exact solution method that guarantees the optimality of the answer and contributes to the traditional branch-and-bound method by including additional relaxations. The method, applied to Garver's six-bus network and to the South-Southeastern Brazilian 46-bus network, provides correct answers, and the mathematical model, tested on a modified Garver network, yields new feasible configurations that reduce the total investment cost.
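For illustration, here is a stripped-down branch-and-bound of the enumerative kind described above, reduced to a toy reinforcement problem: choose candidate lines (binary decisions) at minimum cost so that the added capacity covers a deficit, pruning with a fractional-relaxation bound. The data and the reduction itself are invented; the thesis's model additionally enforces both Kirchhoff laws and handles DC links and FACTS devices.

```python
def branch_and_bound(costs, caps, deficit):
    n = len(costs)
    order = sorted(range(n), key=lambda i: costs[i] / caps[i])  # best cost/MW first
    best = [float("inf"), None]                                  # [cost, chosen lines]

    def bound(k, cap_left):
        # Fractional relaxation of the remaining candidates: a valid lower bound
        b = 0.0
        for i in order[k:]:
            if cap_left <= 1e-9:
                break
            take = min(1.0, cap_left / caps[i])
            b += take * costs[i]
            cap_left -= take * caps[i]
        return b if cap_left <= 1e-9 else float("inf")

    def rec(k, cap_left, cost, chosen):
        if cap_left <= 1e-9:                       # deficit covered: candidate solution
            if cost < best[0]:
                best[0], best[1] = cost, chosen
            return
        if k == n or cost + bound(k, cap_left) >= best[0]:
            return                                 # prune: bound cannot beat incumbent
        i = order[k]
        rec(k + 1, cap_left - caps[i], cost + costs[i], chosen + [i])  # build line i
        rec(k + 1, cap_left, cost, chosen)                             # skip line i

    rec(0, deficit, 0.0, [])
    return best

costs = [50., 80., 40., 65.]    # investment cost per candidate line
caps = [90., 150., 60., 120.]   # capacity each line adds (MW)
print(branch_and_bound(costs, caps, deficit=200.0))
```

The adapted method in the thesis follows the same enumerate-bound-prune pattern, but its relaxations come from the DC power flow model rather than from this toy covering bound.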
550

Coordination of production and distribution scheduling

Fu, Liangliang 02 December 2014 (has links)
In this dissertation, we investigate three supply chain scheduling problems in the make-to-order business model. The first is a production and interstage distribution scheduling problem in a supply chain with a manufacturer and a third-party logistics (3PL) provider. The second is a production and outbound distribution scheduling problem with release dates and deadlines in a supply chain with a manufacturer, a 3PL provider, and a customer. The third is a production and outbound distribution scheduling problem with setup times and delivery time windows in a supply chain with a manufacturer, a 3PL provider, and several customers. For all three problems, we study both the individual scheduling problems and the coordinated scheduling problems: we propose polynomial-time algorithms or prove the intractability of these problems, and we develop exact algorithms or heuristics to solve the NP-hard ones. We establish coordination mechanisms and evaluate the benefits of coordination.
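A toy illustration of how the benefit of coordination can be quantified: one machine, one vehicle delivering finished jobs one at a time, and the sum of delivery completion times as the objective. The instance, the objective, and the uncoordinated rule (the producer sequences by shortest processing time, ignoring delivery) are all invented for this sketch.

```python
import itertools

jobs = [(4, 2), (1, 5), (3, 3)]   # (processing time, one-way delivery time)

def total_delivery_time(seq):
    """Jobs delivered in production order; the vehicle carries one job per trip."""
    t_machine, t_vehicle, total = 0, 0, 0
    for p, d in seq:
        t_machine += p                        # job leaves the machine
        start = max(t_machine, t_vehicle)     # wait for both machine and vehicle
        t_vehicle = start + 2 * d             # deliver, then return empty
        total += start + d                    # job reaches the customer at start + d
    return total

spt = sorted(jobs)                                           # producer's SPT sequence
best = min(itertools.permutations(jobs), key=total_delivery_time)
print("SPT-only objective: ", total_delivery_time(spt))
print("coordinated optimum:", total_delivery_time(best), list(best))
```

On this instance the production-only SPT sequence scores 39 while the coordinated optimum scores 35, roughly a 10% saving; measuring and realizing gaps of this kind, under each problem's release dates, deadlines, setup times, and time windows, is what the coordination mechanisms in the thesis are for.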
