
Robustness of connections to concrete-filled steel tubular columns under fire during heating and cooling

Elsawaf, Sherif Ahmed Elkarim Ibrahim Soliman January 2012 (has links)
Joint behaviour in fire is currently one of the most important topics of research in structural fire resistance. The collapse of the World Trade Center buildings and the results of the Cardington full-scale eight-storey steel-framed building fire tests in the UK have demonstrated that steel joints are particularly vulnerable during the heating and cooling phases of fire. The main purpose of this research is to develop robust joints to CFT columns that are capable of providing very high rotational and tying resistances, making it possible for the connected beam to fully develop catenary action during the heating phase of a fire and to retain integrity during the cooling phase. This research employed the general-purpose finite element software ABAQUS to numerically model the behaviour in fire of restrained structural subassemblies of steel beams, concrete-filled tubular (CFT) columns and their joints. For validation, this research compared the simulation and test results for 10 fire tests previously conducted at the University of Manchester. It was envisaged that catenary action in the connected beams at very large deflections would play an important role in ensuring robustness of steel-framed structures in fire. Therefore, it was vital that the numerical simulations could accurately predict the structural behaviour at very large deflections. In particular, the transitional behaviour of the beam from compression to catenary action presented tremendous difficulties in numerical simulations due to the extremely high rate of deflection increase. This thesis explains the methodology of a suitable simulation method, which introduces a pseudo damping factor. The comparison between the FE and the experimental results demonstrates that the 3-D finite element model is able to successfully simulate the fire tests.
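The role of a pseudo damping factor can be illustrated on a one-degree-of-freedom analogue of snap-through. This is only a sketch of the idea under assumed numbers: the cubic force law, mass, damping coefficient and load below are invented for the demonstration and are not taken from the thesis or from ABAQUS.

```python
def internal_force(u):
    # N-shaped load-displacement curve with a negative-stiffness branch,
    # standing in for the transition from compression to catenary action
    return u**3 - 3.0 * u**2 + 2.5 * u

def relax(p, c=1.5, m=1.0, dt=0.005, steps=40000):
    """Dynamic relaxation: integrate m*u'' + c*u' + f(u) = p to rest.

    The c*u' term is the pseudo damping: it dissipates the kinetic
    energy released during snap-through, so the solution settles on the
    stable post-snap equilibrium instead of diverging numerically."""
    u = v = 0.0
    for _ in range(steps):
        a = (p - internal_force(u) - c * v) / m   # acceleration
        v += dt * a                               # semi-implicit Euler
        u += dt * v
    return u

p = 0.7                       # load just above the limit point (~0.64)
u_end = relax(p)
residual = abs(internal_force(u_end) - p)
print(f"u = {u_end:.4f}, equilibrium residual = {residual:.1e}")
```

A load-controlled Newton iteration would fail at the limit point; the damped dynamic pass lands on the far, stable branch of the curve.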
The validated ABAQUS model was then applied in a thorough set of numerical studies investigating methods of improving the survival temperatures, under heating in fire, of connections between steel beams and concrete-filled tubular (CFT) columns using reverse channel connections. This study investigated five different joint types of reverse channel connection: extended endplate, flush endplate, flexible endplate, hybrid flush/flexible endplate and hybrid extended/flexible endplate. The connection details investigated include reverse channel web thickness, bolt diameter and grade, the use of fire-resistant (FR) steel for different joint components (reverse channel, end plate and bolts), and joint temperature control. The effects of changing the applied beam and column loads were also considered. It is concluded that by adopting some of the joint details to improve the joint tensile strength and deformation capacity, it is possible for the beams to develop substantial catenary action and survive very high temperatures. This thesis also explains the implications for the fire-resistant design of the connected columns, which must resist the additional catenary force in the beam. The validated numerical model was also used to perform extensive parametric studies on steel-framed structures using concrete-filled tubular (CFT) columns with flexible reverse channel connections and fin plate connections, to find means of reducing the risk of structural failure during cooling. The results lead to the suggestion that, in order to avoid connection fracture during cooling, the most effective and simplest method would be to reduce the limiting temperature of the connected beam by less than 50°C from the limiting temperature calculated without considering any axial force in the beam.

Universal Biology

Mariscal, Carlos January 2014 (has links)
Our only example of life is that of Earth, which is a single lineage. We know very little about what life would look like if we found evidence of a second origin. Yet there are some universal features of geometry, mechanics, and chemistry that have predictable biological consequences. The surface-to-volume ratio property of geometry, for example, places a maximum limit on the size of unassisted cells in a given environment. This effect is universal, interesting, not vague, and not arbitrary. Furthermore, there are some problems in the universe that life must invariably solve if it is to persist, such as resistance to radiation, faithful inheritance, and resistance to environmental pressures. At least with respect to these universal problems, some solutions must consistently emerge.

In this dissertation, I develop and defend my own account of universal biology, the study of non-vague, non-arbitrary, non-accidental, universal generalizations in biology. In my account, a candidate biological generalization is assessed in terms of the assumptions it makes. A successful claim is accepted only if its justification necessarily makes reference to principles of evolution and makes no reference to contingent facts of life on Earth. In this way, we can assess the robustness with which generalizations can be expected to hold. I contend that using a stringent-enough causal analysis, we are able to gather insight into the nature of life everywhere. Life on Earth may be our single example of life, but this is merely a reason to be cautious in our approach to life in the universe, not a reason to give up altogether. / Dissertation
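The surface-to-volume argument can be made concrete with a quick calculation (assuming spherical cells, arbitrary units):

```python
# Surface area grows as r^2 but volume as r^3, so the surface-to-volume
# ratio falls as 3/r: membrane transport eventually cannot keep pace
# with the metabolic demand of the interior, capping unassisted cell size.
import math

for r in [1, 2, 4, 8, 16]:
    area = 4 * math.pi * r**2
    volume = (4 / 3) * math.pi * r**3
    print(f"r = {r:2d}  surface/volume = {area / volume:.3f}")  # equals 3/r
```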

Optimization and Robustness in Planning and Scheduling Problems. Application to Container Terminals

Rodríguez Molins, Mario 31 March 2015 (has links)
Despite the continuous evolution of computers and information technology, real-world combinatorial optimization problems remain NP-hard, in particular in the domain of planning and scheduling. Thus, although exact techniques from the Operations Research (OR) field, such as Linear Programming, could in principle solve these optimization problems, they are difficult to apply in real-world scenarios since they usually require too much computational time; that is, an optimized solution is required within an affordable computational time. Furthermore, decision makers often face different and typically opposing goals, resulting in multi-objective optimization problems. Therefore, approximate techniques from the Artificial Intelligence (AI) field are commonly used to solve real-world problems. AI techniques provide richer and more flexible representations of the real world (Gomes 2000), and they are widely used to solve these types of problems. AI heuristic techniques do not guarantee the optimal solution, but they provide near-optimal solutions in a reasonable time. These techniques are divided into two broad classes of algorithms: constructive and local search methods (Aarts and Lenstra 2003). They can guide their search processes by means of heuristics or metaheuristics depending on how they escape from local optima (Blum and Roli 2003). For multi-objective optimization problems, the use of AI techniques becomes paramount due to their complexity (Coello Coello 2006). Nowadays, the point of view on planning and scheduling tasks has changed. Because the real world is uncertain, imprecise and non-deterministic, there may be unknown information, breakdowns, incidences or changes, which render the initial plans or schedules invalid. Thus, there is a new trend to cope with these aspects in optimization techniques and to seek robust solutions (schedules) (Lambrechts, Demeulemeester, and Herroelen 2008).
In this way, these optimization problems become harder since a new objective function (a robustness measure) must be taken into account during the solution search. Therefore, the robustness concept is studied and a general robustness measure is developed that applies to any scheduling problem (such as the Job Shop Problem, the Open Shop Problem, Railway Scheduling or the Vehicle Routing Problem). To this end, in this thesis, techniques have been developed to improve the search for optimized and robust solutions in planning and scheduling problems. These techniques offer assistance to decision makers in planning and scheduling tasks: they determine the consequences of changes, provide support in the resolution of incidents, provide alternative plans, etc. As a case study to evaluate the behaviour of the techniques developed, this thesis focuses on problems related to container terminals. Container terminals generally serve as a transshipment zone between ships and land vehicles (trains or trucks). In (Henesey 2006a), it is shown how this transshipment market has grown rapidly. Container terminals are open systems with three distinguishable areas: the berth area, the storage yard, and the terminal receipt and delivery gate area. Each one presents different planning and scheduling problems to be optimized (Stahlbock and Voß 2008). For example, berth allocation, quay crane assignment, stowage planning, and quay crane scheduling must be managed in the berthing area; the container stacking problem, yard crane scheduling, and horizontal transport operations must be carried out in the yard area; and the hinterland operations must be solved in the landside area. Furthermore, dynamism is also present in container terminals: their tasks take place in an environment susceptible to breakdowns and incidences. For instance, a quay crane engine may stop working and need repair, delaying its tasks by one or two hours.
Thereby, the robustness concept can be included in scheduling techniques to take some incidences into consideration and return a set of robust schedules. In this thesis, we have developed a new domain-dependent planner to obtain more efficient solutions to the generic problem of reshuffling containers. The planning heuristics and optimization criteria developed have been evaluated on realistic problems, and they are applicable to the general problem of reshuffling in blocks-world scenarios. Additionally, we have developed a scheduling model, using constructive metaheuristic techniques, for a complex problem that combines sequences of scenarios with different types of resources (the Berth Allocation, Quay Crane Assignment, and Container Stacking problems). These problems are usually solved separately, and their integration allows more optimized solutions. Moreover, in order to address the impact of the changes that arise in dynamic real-world environments, a robustness model has been developed for scheduling tasks. This model has been applied to metaheuristic schemes based on genetic algorithms. Extending such schemes with the robustness model allows us to evaluate and obtain more robust solutions. This approach, combined with the classical optimality criterion in scheduling problems, allows us to obtain, in an efficient way, optimized solutions able to withstand a greater number of the incidents that occur in dynamic scenarios. Thus, a proactive approach is applied to the problems that arise from the incidences and changes occurring in typical scheduling problems of a dynamic real world. / Rodríguez Molins, M. (2015). Optimization and Robustness in Planning and Scheduling Problems. Application to Container Terminals [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48545 / TESIS
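The optimality/robustness trade-off described above can be sketched with a toy sequential schedule in which slack buffers buy tolerance to delays at the price of a longer makespan. The durations, buffers and disruption model are invented for the illustration and are not the genetic-algorithm model of the thesis.

```python
# Toy sketch of the optimality/robustness trade-off in scheduling.
def makespan(durations, buffers):
    # sequential schedule: every task, then its trailing slack buffer
    return sum(durations) + sum(buffers)

def robustness(buffers, delay=2):
    """Fraction of single-task delay scenarios absorbed: a delay hitting
    task t is absorbed when the buffer placed behind t covers it, so no
    later task starts late."""
    return sum(b >= delay for b in buffers) / len(buffers)

durations = [4, 3, 5, 2]
tight    = [0, 0, 0, 0]      # optimal makespan, no slack
buffered = [2, 2, 2, 2]      # longer, but tolerates 2-unit delays

print(makespan(durations, tight), robustness(tight))        # 14 0.0
print(makespan(durations, buffered), robustness(buffered))  # 22 1.0
```

A robustness measure of this kind can be added to the fitness function alongside makespan, which is the proactive approach the abstract describes.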

Essays on mechanism design under non-Bayesian frameworks

Guo, Huiyi 01 May 2018 (has links)
One important issue in mechanism design theory is to model agents’ behaviors under uncertainty. The classical approach assumes that agents hold commonly known probability assessments towards uncertainty, which has been challenged by economists in many fields. My thesis adopts alternative methods to model agents’ behaviors. The new findings contribute to understanding how the mechanism designer can benefit from agents’ uncertainty aversion and how she should respond to the lack of information on agents’ probability assessments. Chapter 1 of this thesis allows the mechanism designer to introduce ambiguity to the mechanism. Instead of informing agents of the precise payment rule that she commits to, the mechanism designer can tell agents multiple payment rules that she may have committed to. The multiple payment rules are called ambiguous transfers. As agents do not know which rule is chosen by the designer, they are assumed to make decisions based on the worst-case scenario. Under this assumption, this chapter characterizes when the mechanism designer can obtain the first-best outcomes by introducing ambiguous transfers. Compared to the standard approach where the payment rule is unambiguous, first-best mechanism design becomes possible under a broader information structure. Hence, there are cases when the mechanism designer can benefit from introducing ambiguity. Chapter 2 assumes that the mechanism designer does not know agents’ probability assessments about others’ private information. The mechanisms designed to implement the social choice function thus should not depend on the probability assessments, which are called robust mechanisms. Different from the existing robust mechanism design literature where agents are always assumed to act non-cooperatively, this chapter allows them to communicate and form coalitions. 
This chapter provides necessary and almost sufficient conditions for robustly implementing a social choice function as an equilibrium that is immune to all coalitional deviations. As there are social choice functions that are only implementable with coalitional structures, this chapter provides insights on when agents should be allowed to communicate. As an extension, when the mechanism designer has no information on which coalitions can be formed, this chapter also provides conditions for robust implementation under all coalition patterns. Chapter 3 assumes that agents are not probabilistic about others’ private information. Instead, when they hold ambiguous assessments about others’ information, they make decisions based on the worst-case belief. This chapter provides necessary and almost sufficient conditions on when a social choice goal is implementable under such a behavioral assumption. As there are social choice goals that are only implementable under ambiguous assessments, this chapter provides insights on what information structure is desirable to the mechanism designer.
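The behavioral assumption running through Chapters 1 and 3, evaluating each action by its worst case over the ambiguous alternatives, can be sketched in a few lines. The payoff numbers are invented; this is only the maxmin decision rule, not any mechanism from the thesis.

```python
# expected payoff of each report under each payment rule the designer
# may have committed to (the agent does not know which one applies)
payoffs = {
    "truthful":  [3.0, 3.0],   # safe under every rule
    "misreport": [5.0, 0.0],   # attractive under rule 0, punished by rule 1
}

def worst_case(report):
    # an ambiguity-averse agent evaluates a report by its worst rule
    return min(payoffs[report])

best = max(payoffs, key=worst_case)
print(best, worst_case(best))   # truthful 3.0
```

By choosing the set of ambiguous rules so that every deviation is punished under at least one rule, the designer can make truth-telling the maxmin-optimal report, which is the leverage Chapter 1 exploits.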

Contributions to Optimal Experimental Design and Strategic Subdata Selection for Big Data

January 2020 (has links)
In this dissertation two research questions in the field of applied experimental design were explored. First, methods for augmenting the three-level screening designs called Definitive Screening Designs (DSDs) were investigated. Second, schemes for strategic subdata selection for nonparametric predictive modeling with big data were developed. Under sparsity, the structure of DSDs can allow for the screening and optimization of a system in one step, but in non-sparse situations estimation of second-order models requires augmentation of the DSD. In this work, augmentation strategies for DSDs were considered, given the assumption that the correct form of the model for the response of interest is quadratic. Series of augmented designs were constructed and explored, and power calculations, model-robustness criteria, model-discrimination criteria, and simulation study results were used to identify the number of augmented runs necessary for (1) effectively identifying active model effects, and (2) precisely predicting a response of interest. When the goal is identification of active effects, it is shown that supersaturated designs are sufficient; when the goal is prediction, it is shown that little is gained by augmenting beyond the design that is saturated for the full quadratic model. Surprisingly, augmentation strategies based on the I-optimality criterion do not lead to better predictions than strategies based on the D-optimality criterion. Computational limitations can render standard statistical methods infeasible in the face of massive datasets, necessitating subsampling strategies. In the big data context, the primary objective is often prediction but the correct form of the model for the response of interest is likely unknown. Here, two new methods of subdata selection were proposed. The first is based on clustering, the second is based on space-filling designs, and both are free from model assumptions.
The performance of the proposed methods was explored visually via low-dimensional simulated examples; via real data applications; and via large simulation studies. In all cases the proposed methods were compared to existing, widely used subdata selection methods. The conditions under which the proposed methods provide advantages over standard subdata selection strategies were identified. / Dissertation/Thesis / Doctoral Dissertation Statistics 2020
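As a generic illustration of augmentation driven by the D-optimality criterion (not the DSD augmentation series studied in the dissertation; the base design and candidate grid are assumed for the demo), one can greedily add the candidate run that most increases the determinant of the information matrix:

```python
# Greedy D-optimal augmentation for a full quadratic model in 2 factors.
import itertools
import numpy as np

def model_row(x1, x2):
    # full quadratic model: intercept, main effects, squares, interaction
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

base = [(-1, -1), (1, -1), (-1, 1), (1, 1),
        (-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]   # 3^2 factorial
candidates = list(itertools.product([-1.0, -0.5, 0.0, 0.5, 1.0], repeat=2))

X = np.array([model_row(*p) for p in base])
M = X.T @ X                                          # information matrix
dets = [float(np.linalg.det(M))]

for _ in range(3):                                   # augment 3 runs greedily
    best = max(candidates,
               key=lambda c: np.linalg.det(
                   M + np.outer(model_row(*c), model_row(*c))))
    r = np.array(model_row(*best))
    M = M + np.outer(r, r)
    dets.append(float(np.linalg.det(M)))
    print("added run", best, "det(X'X) =", round(dets[-1], 1))
```

Each added run contributes a rank-one, positive-semidefinite update, so the determinant criterion grows monotonically; the greedy step simply picks the largest gain.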

Robustness Evaluation of Long Span Truss Bridge Using Damage Influence Lines / 損傷影響線を用いた長大トラス橋のロバスト性評価

Mya, San Wai 23 March 2020 (has links)
Kyoto University / 0048 / New-system doctoral course / Doctor of Engineering / Thesis No. 甲第22417号 / 工博第4678号 / 新制||工||1730 (University Library) / Department of Civil and Earth Resources Engineering, Graduate School of Engineering, Kyoto University / Examining committee: Prof. 高橋 良和 (chief examiner), Prof. 清野 純史, Prof. 八木 知己 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM

Citlivostní analýza různých typů rekonstruktoru stavu / Sensitivity analysis of different forms of state observers

Kadlec, Milan January 2012 (has links)
This master's thesis focuses on the sensitivity analysis of selected kinds of state reconstructors (observers). They are realized in a general form, via direct and parallel programming. The quantity that determines the sensitivity is the difference between the output signal of the reconstructor and that of the general form of the system. Testing is based on different initial state conditions and on changes in the parameters of the feedback matrix A of the tested reconstructors.
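A minimal discrete-time Luenberger observer sketch shows the kind of comparison described: the reconstructor output is compared with the system output under a parameter change in the A matrix. The matrices and gain below are assumed for the demonstration, not taken from the thesis.

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])          # nominal plant dynamics
C = np.array([[1.0, 0.0]])          # measured output
L = np.array([[0.5],
              [0.1]])               # observer gain (A - L C is stable)

def output_error(A_plant, steps=100):
    """Accumulated |y - y_hat| when the observer uses the nominal A but
    the real plant evolves with A_plant (possibly perturbed)."""
    x = np.array([[1.0], [1.0]])    # true initial state
    xh = x.copy()                   # observer starts matched
    total = 0.0
    for _ in range(steps):
        y = C @ x
        xh = A @ xh + L @ (y - C @ xh)   # reconstructor: nominal model
        x = A_plant @ x                  # plant: true dynamics
        total += float(abs((C @ x - C @ xh)[0, 0]))
    return total

A_bad = A.copy()
A_bad[0, 0] += 0.05                 # parameter change in the A matrix
nominal = output_error(A)           # exact model: difference stays zero
perturbed = output_error(A_bad)     # mismatch shows up in the output
print(f"nominal: {nominal:.1e}, perturbed: {perturbed:.3f}")
```

The accumulated output difference is exactly the kind of sensitivity quantity the abstract describes: zero under a perfect model, nonzero once the plant's A matrix is perturbed.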

Informed Non-Negative Matrix Factorization for Source Apportionment / Factorisation informées de matrice pour la séparation de sources non-négatives

Chreiky, Robert 19 December 2017 (has links)
Source apportionment for air pollution may be formulated as an NMF problem by decomposing the data matrix X into the product of two non-negative factors G and F, respectively the contribution matrix and the profile matrix. Usually, chemical data are corrupted with a significant proportion of abnormal data. Despite the interest of the community in NMF methods, they suffer from a lack of robustness to even a few abnormal data points and to initial conditions, and they generally provide multiple minima. To this end, this thesis is oriented on the one hand towards robust NMF methods and on the other hand towards informed NMF using specific prior knowledge. Two types of knowledge are introduced on the profile matrix F. The first assumption is exact knowledge of some components of the matrix F, and the second is a sum-to-1 constraint on each row of the matrix F. A parametrization able to handle both pieces of information is developed, and update rules are proposed in the space of constraints at each iteration. These formulations have been applied to two kinds of robust cost functions, namely the weighted Huber cost function and the weighted αβ divergence. The target application, identifying the sources of particulate matter in the air of the coastal area of northern France, shows the relevance of the proposed methods. In the numerous experiments conducted on both synthetic and real data, the effect and relevance of the different pieces of information are highlighted, making the factorization results more reliable.
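A simplified sketch of the two kinds of prior information on F — fixed known entries and sum-to-1 rows — grafted onto plain multiplicative Frobenius NMF updates. The thesis instead develops a dedicated parametrization and robust Huber / αβ costs; everything below (data, projection scheme) is assumed demo material.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-12

# synthetic data: 2 sources x 5 chemical species, 50 samples
F_true = np.array([[0.6, 0.2, 0.1, 0.05, 0.05],
                   [0.1, 0.1, 0.3, 0.30, 0.20]])   # rows sum to 1
G_true = rng.uniform(0.5, 2.0, size=(50, 2))
X = G_true @ F_true

known = {(0, 0): 0.6}        # suppose one profile entry is known exactly

def project(F):
    """Impose the two priors: reset fixed known entries, then rescale
    the free entries of each row so the row sums to 1."""
    for (i, j), v in known.items():
        F[i, j] = v
    for i in range(F.shape[0]):
        fixed = sum(v for (r, _), v in known.items() if r == i)
        free = [j for j in range(F.shape[1]) if (i, j) not in known]
        F[i, free] *= (1.0 - fixed) / (F[i, free].sum() + eps)
    return F

G = rng.uniform(0.1, 1.0, size=(50, 2))
F = project(rng.uniform(0.1, 1.0, size=(2, 5)))
err0 = np.linalg.norm(X - G @ F)

for _ in range(200):         # Lee-Seung multiplicative updates + projection
    G *= (X @ F.T) / (G @ F @ F.T + eps)
    F *= (G.T @ X) / (G.T @ G @ F + eps)
    F = project(F)

err = np.linalg.norm(X - G @ F)
print(f"fit error: {err0:.3f} -> {err:.5f}")
```

Because the ground-truth F satisfies both constraints here, the informed iteration converges toward a feasible factorization while keeping the known entry and the row sums exact.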

Approximation robuste de surfaces avec garanties / Robust shape approximation and mapping between surfaces

Mandad, Manish 29 November 2016 (has links)
This thesis is divided into two independent parts. In the first part, we introduce a method that, given an input tolerance volume, generates a surface triangle mesh guaranteed to be within the tolerance, intersection-free and topologically correct. A pliant meshing algorithm is used to capture the topology and discover the anisotropy in the input tolerance volume in order to generate a concise output. We first refine a 3D Delaunay triangulation over the tolerance volume while maintaining a piecewise-linear function on this triangulation, until an isosurface of this function matches the topology sought after. We then embed the isosurface into the 3D triangulation via mutual tessellation, and simplify it while preserving the topology. Our approach extends to surfaces with boundaries and to non-manifold surfaces. We demonstrate the versatility and efficacy of our approach on a variety of data sets and tolerance volumes.

In the second part we introduce a new approach for creating a homeomorphic map between two discrete surfaces. While most previous approaches compose maps over intermediate domains, which results in suboptimal inter-surface mappings, we directly optimize a map by computing a variance-minimizing mass transport plan between two surfaces. This non-linear problem, which amounts to minimizing the Dirichlet energy of both the map and its inverse, is solved using two alternating convex optimization problems in a coarse-to-fine fashion. Computational efficiency is further improved through the use of Sinkhorn iterations (modified to handle minimal regularization and unbalanced transport plans) and diffusion distances. The resulting inter-surface mapping algorithm applies to arbitrary shapes robustly and efficiently, with little to no user interaction.
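The Sinkhorn iteration mentioned above, in its basic entropy-regularized form (without the thesis's modifications for minimal regularization and unbalanced plans), assuming small random point clouds as demo data:

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, iters=500):
    """Entropy-regularized optimal transport: return the plan P with
    row marginals a and column marginals b, via alternating scaling."""
    K = np.exp(-C / reg)               # Gibbs kernel of the cost matrix
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)              # match column marginals
        u = a / (K @ v)                # match row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(1)
x = rng.random((4, 2))                 # two small 2D point clouds
y = rng.random((5, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared distances

a = np.full(4, 1 / 4)                  # uniform mass on each cloud
b = np.full(5, 1 / 5)
P = sinkhorn(a, b, C)
print(P.sum(axis=1))                   # ~ a
print(P.sum(axis=0))                   # ~ b
```

Each iteration is just two matrix-vector products, which is what makes the method attractive inside a coarse-to-fine mapping loop.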

Hledání robustních cest pro více agentů / Robust multi-agent path finding

Nekvinda, Michal January 2020 (has links)
The thesis is devoted to finding robust conflict-free paths in multi-agent path finding (MAPF). We propose several new techniques for constructing such paths and describe their properties. We employ contingency planning, creating a tree plan from which each agent chooses its specific path during execution based on the current delays. Next, we present an algorithm that increases robustness while maintaining the original length of the solution, and we combine it with the previous approach. We then focus on increasing robustness by changing the speeds of agents. Finally, we experimentally verify the applicability of these techniques on different types of graphs. We show that all the proposed methods are significantly more robust than the classic solution and that they also have certain advantages over previously known constructions of robust plans.
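The notion of a plan that tolerates delays can be sketched as a k-robustness check: a solution is k-robust if no vertex conflict appears even when each agent lags up to k steps behind its path. The paths below are invented; this is a verification sketch, not the construction algorithms of the thesis.

```python
from itertools import product

def position(path, t, delay):
    """Where an agent delayed by `delay` steps is at time t (it waits at
    its start, then follows its path, then stays at its goal)."""
    return path[min(len(path) - 1, max(0, t - delay))]

def is_k_robust(paths, k):
    horizon = max(len(p) for p in paths) + k
    for delays in product(range(k + 1), repeat=len(paths)):
        for t in range(horizon):
            spots = [position(p, t, d) for p, d in zip(paths, delays)]
            if len(set(spots)) < len(spots):   # two agents share a vertex
                return False
    return True

def max_robustness(paths, limit=5):
    k = -1
    while k + 1 <= limit and is_k_robust(paths, k + 1):
        k += 1
    return k

plan_tight = [[1, 2, 3, 4], [6, 7, 2, 8]]   # valid, but fails if agent 0 lags
plan_safe  = [[1, 2, 3, 4], [6, 7, 8, 9]]   # vertex-disjoint paths

print(max_robustness(plan_tight))   # 0
print(max_robustness(plan_safe))    # 5 (robust up to the tested limit)
```

Brute-forcing all delay combinations is exponential in the number of agents; it only serves here to make the robustness definition concrete.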
