301

Automated system design optimisation

Astapenko, D. January 2010
The focus of this thesis is to develop a generic approach for solving reliability design optimisation problems which could be applicable to a diverse range of real engineering systems. The basic problem in optimal reliability design of a system is to explore the means of improving the system reliability within the bounds of available resources. Improving the reliability reduces the likelihood of system failure. The consequences of system failure can vary from minor inconvenience and cost to significant economic loss and personal injury. However, any improvements made to the system are subject to the availability of resources, which are very often limited. The objective of the design optimisation problem analysed in this thesis is to minimise system unavailability (or unreliability if an unrepairable system is analysed) through the manipulation and assessment of all possible design alterations available, subject to constraints on resources and/or system performance requirements. This thesis describes a genetic algorithm-based technique developed to solve the optimisation problem. Since an explicit mathematical form cannot be formulated to evaluate the objective function, the system unavailability (unreliability) is assessed using the fault tree method. Central to the optimisation algorithm are newly developed fault tree modification patterns (FTMPs). They are employed here to construct one fault tree representing all possible designs investigated, from the initial system design specified along with the design choices. This is then altered to represent the individual designs in question during the optimisation process. Failure probabilities for specified design cases are quantified by employing Binary Decision Diagrams (BDDs). A computer programme has been developed to automate the application of the optimisation approach to standard engineering safety systems. Its practicality is demonstrated through the consideration of two systems of increasing complexity: first a High Integrity Protection System (HIPS), followed by a Fire Water Deluge System (FWDS). The technique is then further developed and applied to solve problems of multi-phased mission systems. Two systems are considered: first an unmanned aerial vehicle (UAV) and secondly a military vessel. The final part of this thesis focuses on continuing the development process by adapting the method to solve design optimisation problems for multiple multi-phased mission systems. Its application is demonstrated by considering an advanced UAV system involving multiple multi-phased flight missions. The applications discussed demonstrate that the technique progressively developed in this thesis enables design optimisation problems to be solved for systems with different levels of complexity. A key contribution of this thesis is the development of a novel generic optimisation technique, embedding newly developed FTMPs, which is capable of optimising the reliability design for potentially any engineering system. Another key and novel contribution of this work is the capability to analyse and provide optimal design solutions for multiple multi-phased mission systems. Keywords: optimisation, system design, multi-phased mission system, reliability, genetic algorithm, fault tree, binary decision diagram
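As a hedged illustration of the optimisation loop described in this abstract, the sketch below couples a genetic algorithm to a system-unavailability evaluator. It is not the thesis's implementation: a three-subsystem series model stands in for the fault-tree/BDD quantification, and the design options, cost limit, penalty weight and GA settings are all invented for the example.

```python
# Minimal GA sketch for reliability design optimisation (illustrative only).
import random

# Each subsystem offers design options: (component unavailability, cost).
OPTIONS = [
    [(0.05, 10), (0.02, 25), (0.01, 60)],   # subsystem A
    [(0.10, 8),  (0.04, 20), (0.015, 55)],  # subsystem B
    [(0.08, 12), (0.03, 30)],               # subsystem C
]
COST_LIMIT = 90

def unavailability(design):
    # Stand-in for fault-tree/BDD quantification: three subsystems in
    # series, so the system is unavailable if any subsystem is.
    avail = 1.0
    for sub, choice in zip(OPTIONS, design):
        avail *= 1.0 - sub[choice][0]
    return 1.0 - avail

def cost(design):
    return sum(sub[choice][1] for sub, choice in zip(OPTIONS, design))

def fitness(design):
    # Penalise designs that exceed the resource constraint (weight assumed).
    penalty = max(0, cost(design) - COST_LIMIT)
    return unavailability(design) + 0.01 * penalty

def evolve(pop_size=30, generations=50):
    pop = [[random.randrange(len(sub)) for sub in OPTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # ascending: best designs first
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(OPTIONS))
            child = a[:cut] + b[cut:]         # one-point crossover
            if random.random() < 0.2:         # mutation
                i = random.randrange(len(OPTIONS))
                child[i] = random.randrange(len(OPTIONS[i]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, unavailability(best), cost(best))
```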
302

Framework to manage labels for e-assessment of diagrams

Jayal, Ambikesh January 2010
Automatic marking of coursework has many advantages in terms of resource benefits and consistency. Diagrams are quite common in many domains, including computer science, but marking them automatically is a challenging task. There has been previous research to accomplish this, but results to date have been limited. Much of the meaning of a diagram is contained in its labels, and in order to automatically mark diagrams the labels need to be understood. However, the choice of labels used by students in a diagram is largely unrestricted, and the diversity of labels can be a problem when matching. This thesis has measured the extent of the diagram label matching problem, and proposed and evaluated a configurable, extensible framework to solve it. A new hybrid syntax matching algorithm has also been proposed and evaluated; this hybrid approach builds on multiple existing syntax algorithms, as sketched below. Experiments were conducted on a corpus of coursework which was large-scale, realistic and representative of UK HEI students. The results show that diagram label matching is a substantial problem and cannot be easily avoided in the e-assessment of diagrams. The results also show that the hybrid approach was better than the three existing syntax algorithms, and that the framework has been effective, but only to a limited extent: it needs to be further refined for the semantic stage. The framework proposed in this thesis is configurable and extensible. It can be extended to include other algorithms and sets of parameters. The framework uses XML configuration, dynamic loading of classes and two design patterns, namely the strategy and facade design patterns. A software prototype implementation of the framework has been developed in order to evaluate it. Finally, this thesis also contributes the corpus of coursework and an open-source software implementation of the proposed framework. Since the framework is configurable and extensible, its software implementation can be extended and used by the research community.
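A hedged sketch of what such a hybrid syntax matcher can look like follows; this is not the thesis's algorithm, and the three component measures, their weights and the threshold are illustrative assumptions.

```python
# Hybrid label matcher sketch: blend three syntactic similarity measures.
from difflib import SequenceMatcher

def edit_similarity(a: str, b: str) -> float:
    # Ratcliff/Obershelp similarity from the standard library (0..1).
    return SequenceMatcher(None, a, b).ratio()

def token_jaccard(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def bigram_dice(a: str, b: str) -> float:
    ga = {a[i:i + 2] for i in range(len(a) - 1)}
    gb = {b[i:i + 2] for i in range(len(b) - 1)}
    return 2 * len(ga & gb) / (len(ga) + len(gb)) if ga or gb else 1.0

def hybrid_match(student: str, model: str, threshold: float = 0.75) -> bool:
    s, m = student.lower().strip(), model.lower().strip()
    score = (0.4 * edit_similarity(s, m)
             + 0.3 * token_jaccard(s, m)
             + 0.3 * bigram_dice(s, m))
    return score >= threshold

print(hybrid_match("customer orders", "customer order"))  # True: close syntactically
```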
303

Logical aspects of quantum computation

Marsden, Daniel January 2015
A fundamental component of theoretical computer science is the application of logic. Logic provides the formalisms by which we can model and reason about computational questions, and novel computational features provide new directions for the development of logic. From this perspective, the unusual features of quantum computation present both challenges and opportunities for computer science. Our existing logical techniques must be extended and adapted to appropriately model quantum phenomena, stimulating many new theoretical developments. At the same time, tools developed with quantum applications in mind often prove effective in other areas of logic and computer science. In this thesis we explore logical aspects of this fruitful source of ideas, with category theory as our unifying framework. Inspired by the success of diagrammatic techniques in quantum foundations, we begin by demonstrating the effectiveness of string diagrams for practical calculations in category theory. We proceed by example, developing graphical formulations of the definitions and proofs of many topics in elementary category theory, such as adjunctions, monads, distributive laws, representable functors and limits and colimits. We contend that these tools are particularly suitable for calculations in the field of coalgebra, and continue to demonstrate the use of string diagrams in the remainder of the thesis. Our coalgebraic studies commence in chapter 3, in which we present an elementary formulation of a representation result for the unitary transformations, following work developed in a fibrational setting in [Abramsky, 2010]. That paper raises the question of what a suitable "fibred coalgebraic logic" would be. This question is the starting point for our work in chapter 5, in which we introduce a parameterized, duality-based framework for coalgebraic logic. We show sufficient conditions under which dual adjunctions and equivalences can be lifted to fibrations of (co)algebras. We also prove that the semantics of these logics satisfy certain "institution conditions" providing harmony between syntactic and semantic transformations. We conclude by studying the impact of parameterization on another logical aspect of coalgebras, in which certain fibrations of predicates can be seen as generalized invariants. Our focus is on the lifting of coalgebra structure along a fibration from the base category to an associated total category of predicates. We show that given a suitable parameterized generalization of the usual liftings of signature functors, this induces a "fibration of fibrations" capturing the relationship between the two different axes of variation.
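For readers outside the field, the standard definition underlying the coalgebraic chapters (textbook material, not a contribution of the thesis) is:

```latex
% Standard background definition, not specific to this thesis.
\textbf{Definition.} Let $F : \mathcal{C} \to \mathcal{C}$ be an endofunctor.
An \emph{$F$-coalgebra} is a pair $(X, c)$ of an object $X$ and a morphism
$c : X \to FX$. A \emph{homomorphism} $f : (X, c) \to (Y, d)$ is a morphism
$f : X \to Y$ of $\mathcal{C}$ such that $d \circ f = Ff \circ c$.
```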
304

Experimental determination and thermodynamic modelling of the Mo-Ni-Re system / Détermination expérimentale et modélisation thermodynamique du système Mo-Ni-Re

Yaqoob, Khurram 20 December 2012
The Mo-Ni-Re system is one of the important subsystems of the Ni-based superalloys engineered for use in high-temperature applications. Considering the contradictions among previously reported information, the present study was devoted to the complete experimental determination of the phase equilibria in the Mo-Ni-Re system, structural characterization of its intermetallic phases and thermodynamic modeling of the system with the help of the CALPHAD method. The experimental investigation of phase equilibria was carried out with the help of equilibrated alloys, and phase diagrams of the Ni-Re and Mo-Ni-Re systems (at 1200°C and 1600°C) were proposed. In comparison with previous investigations, the Ni-Re phase diagram determined during the present study showed significant differences in terms of homogeneity domains, freezing ranges and peritectic reaction temperature. The 1200°C isothermal section of the Mo-Ni-Re system proposed during the present study showed a large extension of the Mo-Re σ phase and the Mo-Ni δ phase into the ternary region. In addition, the presence of two previously unknown ternary phases was also observed. The isothermal section of the Mo-Ni-Re system at 1600°C also showed a large extension of the σ phase into the ternary region, whereas the extension of the Mo-Re χ phase in both isothermal sections was restricted to a narrow composition range. The ternary phases observed in the 1200°C isothermal section were not evidenced in the 1600°C isothermal section. In addition, partial investigations of the phase boundaries in the Mo-Ni and Mo-Re binary systems and a determination of the liquidus projection of the Mo-Ni-Re system were also carried out. The liquidus projection proposed during the present study showed largely extended primary crystallization fields of the Mo-Re σ phase and the Re solid solution in the ternary region. Since the isothermal sections of the Mo-Ni-Re system showed a largely extended homogeneity domain of the σ phase, structural characterization of the Mo-Ni-Re σ phase, with particular emphasis on the determination of site-occupancy trends as a function of composition, was carried out by combined Rietveld refinement of X-ray and neutron diffraction data. The experimental results gathered during the present study, along with the information available in the literature, were used as input for the thermodynamic modeling of the Mo-Ni-Re system. The thermodynamic description of the Mo-Re system was taken from the literature, whereas the thermodynamic optimizations of the Mo-Ni, Ni-Re and Mo-Ni-Re systems were carried out during the present study with the help of the CALPHAD method. Keywords: Mo-Ni; Mo-Re; Ni-Re; Mo-Ni-Re; phase diagram; isothermal section; structural characterization; thermodynamic modeling; CALPHAD method
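The CALPHAD method mentioned above represents each phase's molar Gibbs energy by a parametric model whose coefficients are optimized against experimental data. As a hedged illustration, the sketch below implements the generic binary substitutional-solution model with Redlich-Kister excess terms; the parameters are invented for a hypothetical A-B phase and do not reproduce the thesis's Mo-Ni-Re description.

```python
# Generic CALPHAD substitutional-solution Gibbs energy (illustrative only).
import math

R = 8.314  # gas constant, J/(mol*K)

def gibbs_energy(x_b, T, g_a, g_b, L):
    """Molar Gibbs energy of a binary (A,B) solution phase.

    g_a, g_b: Gibbs energies of the pure end-members at T (J/mol)
    L: Redlich-Kister interaction parameters [L0, L1, ...] (J/mol)
    """
    x_a = 1.0 - x_b
    reference = x_a * g_a + x_b * g_b
    ideal = R * T * (x_a * math.log(x_a) + x_b * math.log(x_b))
    excess = x_a * x_b * sum(Lv * (x_a - x_b) ** v for v, Lv in enumerate(L))
    return reference + ideal + excess

# Hypothetical numbers, for illustration only (1200°C = 1473.15 K):
print(gibbs_energy(x_b=0.3, T=1473.15, g_a=-50e3, g_b=-45e3, L=[-20e3, 5e3]))
```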
305

Scale dependence and renormalon-inspired resummations for some QCD observables

Mirjalili, Abolfazl January 2001
Since the advent of Quantum Field Theory (QFT) in the late 1940s, perturbation theory has become one of the most successful means of extracting phenomenologically useful information from QFT. In the ever-increasing enthusiasm for new phenomenological predictions, the mechanics of perturbation theory itself have taken a back seat. It is in this light that this thesis aims to investigate some of the more fundamental properties of perturbation theory. In the first part of this thesis, we develop the idea, suggested by C. J. Maxwell, that at any given order of Feynman diagram calculation for a QCD observable all renormalization group (RG)-predictable terms should be resummed to all orders. This "complete" RG-improvement (CORGI) serves to separate the perturbation series into infinite subsets of terms which, when summed, are renormalization scheme (RS)-invariant. Crucially, all ultraviolet logarithms involving the dimensionful parameter Q, on which the observable depends, are resummed, thereby building the correct Q-dependence. We extend this idea, and show for moments of leptoproduction structure functions that all dependence on the renormalization and factorization scales disappears provided that all the ultraviolet logarithms involving the physical energy scale Q are completely resummed. The approach is closely related to Grunberg's method of Effective Charges. In the second part, we perform an all-orders resummation of the QCD Adler D-function for the vector correlator, in which the portion of the perturbative coefficients containing the leading power of b, the first beta-function coefficient, is resummed to all orders. To avoid a renormalization scale dependence when we match the resummation to the exactly known next-to-leading order (NLO) and next-to-NLO (NNLO) results, we employ the Complete Renormalization Group Improvement (CORGI) approach, removing all dependence on the renormalization scale. We can also obtain fixed-order CORGI results. Including suitable weight functions we can numerically integrate these results for the D-function in the complex energy plane to obtain so-called "contour-improved" results for the ratio R and its tau-decay analogue R_τ. We use the difference between the all-orders and fixed-order (NNLO) results to estimate the uncertainty in α_s(M_Z²) extracted from R_τ measurements, and find α_s(M_Z²) = 0.120 ± 0.002. We also estimate the corresponding uncertainty in α(M_Z²) arising from hadronic corrections by considering the uncertainty in R(s) in the low-energy region, and compare with other estimates. Analogous resummations are also given for the scalar correlator. As an adjunct to these studies we show how fixed-order contour-improved results can be obtained analytically in closed form at the two-loop level in terms of the Lambert W-function and hypergeometric functions.
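For orientation, the standard relation behind the "contour-improved" evaluations mentioned above can be stated as follows; this is textbook material under the usual conventions of the tau-decay literature, not the thesis's own resummed expressions.

```latex
% Standard contour relation (not the thesis's resummed D-function): the
% ratio R(s) follows from the Adler D-function by a contour integral in
% the complex energy plane, which on the circle |s'| = s reduces to an
% angular average that can be evaluated numerically:
R(s) \;=\; \frac{1}{2\pi i}\oint_{|s'|=s}\frac{D(s')}{s'}\,\mathrm{d}s'
      \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi} D\!\left(s\,e^{i\theta}\right)\mathrm{d}\theta .
```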
306

The Human Analysis Element of Intrusion Detection: A Cognitive Task Model and Interface Design and Implications

Ellis, Brenda Lee 01 January 2009
The use of monitoring and intrusion detection tools is common in today's network security architecture. The combination of tools generates an abundance of data, which can result in cognitive overload for those analyzing it. Intrusion detection (ID) analysts initially review the alerts generated by intrusion detection systems to determine their validity. Since a large number of alerts are false positives, validating alerts before acting on them can severely reduce the number of unnecessary and unproductive investigations. The problem remains that this process is resource intensive. To date, very little research has been done to clearly determine and document the process of intrusion detection. In order to rectify this problem, research was conducted in several phases. Fifteen individuals were selected to participate in a cognitive task analysis. The results of the cognitive task analysis were used to develop a prototype interface, which was tested by the participants. A test of the participants' knowledge after the use of the prototype revealed an increase in both effectiveness and efficiency in analyzing alerts. Specifically, the findings revealed an increase in effectiveness, as 72% of the participants made better determinations using the prototype interface. The results also showed an increase in efficiency, as 72% of the participants analyzed and validated alerts in less time while using the prototype interface. These findings, based on empirical data, showed that the use of the task diagram and prototype interface helped to reduce the amount of time it previously took to analyze alerts generated by intrusion detection systems.
307

Comfort Climate Evaluation with Thermal Manikin Methods and Computer Simulation Models

Nilsson, Håkan O January 2004
Increasing concern about energy consumption and the simultaneous need for an acceptable thermal environment makes it necessary to estimate in advance what effect different thermal factors will have on the occupants. Temperature measurements alone do not account for all climate effects on the human body and especially not for local effects of convection and radiation. People as well as thermal manikins can detect heat loss changes on local body parts. This fact makes it appropriate to develop measurement methods and computer models with the corresponding working principles and levels of resolution. One purpose of this thesis is to link together results from these various investigation techniques with the aim of assessing different effects of the thermal climate on people. The results can be used to facilitate detailed evaluations of thermal influences both in indoor environments in buildings and in different types of vehicles. This thesis presents a comprehensive and detailed description of the theories and methods behind full-scale measurements with thermal manikins. This is done with new, extended definitions of the concept of equivalent temperature, and new theories describing equivalent temperature as a vector-valued function. One specific advantage is that the locally measured or simulated results are presented with newly developed "comfort zone diagrams". These diagrams provide new ways of taking into consideration both seat zone qualities as well as the influence of different clothing types on the climate assessment with "clothing-independent" comfort zone diagrams. Today, different types of computer programs such as CAD (Computer Aided Design) and CFD (Computational Fluid Dynamics) are used for product development, simulation and testing of, for instance, HVAC (Heating, Ventilation and Air Conditioning) systems, particularly in the building and vehicle industry. Three different climate evaluation methods are used and compared in this thesis: human subjective measurements, manikin measurements and computer modelling. A detailed description is presented of how developed simulation methods can be used to evaluate the influence of thermal climate in existing and planned environments. In different climate situations subjective human experiences are compared to heat loss measurements and simulations with thermal manikins. The calculation relationships developed in this research agree well with full-scale measurements and subject experiments in different thermal environments. The use of temperature and flow field data from CFD calculations as input produces acceptable results, especially in relatively homogeneous environments. In more heterogeneous environments the deviations are slightly larger. Possible reasons for this are presented along with suggestions for continued research, new relationships and computer codes. Keywords: equivalent temperature, subject, thermal manikin, mannequin, thermal climate assessment, heat loss, office environment, cabin climate, ventilated seat, computer model, CFD, clothing-independent, comfort zone diagram.
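The equivalent-temperature principle referred to above can be sketched as follows. This is an assumed, simplified zone-wise formulation in the spirit of EN ISO 14505-2 (measured heat loss converted to an equivalent homogeneous temperature via a calibration coefficient), not the thesis's code, and all zone data are invented.

```python
# Zone-wise equivalent temperature from manikin heat loss (illustrative):
#   t_eq = t_surface - Q / h_cal
# where h_cal is a per-zone calibration coefficient determined in a
# homogeneous reference climate. All numbers below are invented.

ZONES = {
    #  zone        (t_surface [C], Q [W/m^2], h_cal [W/(m^2*K)])
    "head":        (34.0,  95.0,  9.5),
    "chest":       (34.0,  70.0,  8.8),
    "left_hand":   (34.0, 120.0, 11.2),
}

def equivalent_temperature(t_surface, heat_loss, h_cal):
    return t_surface - heat_loss / h_cal

for zone, (t_s, q, h) in ZONES.items():
    print(f"{zone:10s} t_eq = {equivalent_temperature(t_s, q, h):5.1f} C")
```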
308

Advances in Functional Decomposition: Theory and Applications

Martinelli, Andres January 2006
Functional decomposition aims at finding efficient representations for Boolean functions. It is used in many applications, including multi-level logic synthesis, formal verification, and testing. This dissertation presents novel heuristic algorithms for functional decomposition. These algorithms take advantage of suitable representations of the Boolean functions in order to be efficient. The first two algorithms compute simple-disjoint and disjoint-support decompositions. They are based on representing the target function by a Reduced Ordered Binary Decision Diagram (BDD). Unlike other BDD-based algorithms, the presented ones can deal with larger target functions and produce more decompositions without requiring expensive manipulations of the representation, particularly BDD reordering. The third algorithm also finds disjoint-support decompositions, but it is based on a technique which integrates circuit graph analysis and BDD-based decomposition. The combination of the two approaches results in an algorithm which is more robust than a purely BDD-based one, and that improves both the quality of the results and the running time. The fourth algorithm uses circuit graph analysis to obtain non-disjoint decompositions. We show that the problem of computing non-disjoint decompositions can be reduced to the problem of computing multiple-vertex dominators. We also prove that multiple-vertex dominators can be found in polynomial time. This result is important because there is no known polynomial-time algorithm for computing all non-disjoint decompositions of a Boolean function. The fifth algorithm provides an efficient means to decompose a function at the circuit graph level, by using information derived from a BDD representation. This is done without the expensive circuit re-synthesis normally associated with BDD-based decomposition approaches. Finally, we present two publications that resulted from the many detours we have taken along the winding path of our research.
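As a hedged illustration of what a simple disjoint decomposition is, the classical truth-table test can be sketched in a few lines. This brute-force check is exponential in the number of variables and stands in for, rather than reproduces, the dissertation's BDD- and circuit-graph-based algorithms.

```python
# Classical (Ashenhurst) test for a simple disjoint decomposition
# f(A, B) = h(g(A), B): build the decomposition chart with one column per
# assignment of the bound set A and one row per assignment of the free
# set B; the decomposition exists iff there are at most 2 distinct columns.
from itertools import product

def decomposable(f, n_bound, n_free):
    """f: truth function taking (bound_bits, free_bits) tuples to 0/1."""
    columns = set()
    for a in product((0, 1), repeat=n_bound):
        col = tuple(f(a, b) for b in product((0, 1), repeat=n_free))
        columns.add(col)
    return len(columns) <= 2  # column-multiplicity test

# Example: f = (a0 AND a1) XOR b0 decomposes as h(g(a0,a1), b0) with g = AND.
f = lambda a, b: (a[0] & a[1]) ^ b[0]
print(decomposable(f, n_bound=2, n_free=1))  # True

# Counterexample: the inner product (a0&b0) XOR (a1&b1) admits no such split.
g = lambda a, b: (a[0] & b[0]) ^ (a[1] & b[1])
print(decomposable(g, n_bound=2, n_free=2))  # False
```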
309

The Cardiac State Diagram: A new method for assessing cardiac mechanics

Johnson, Jonas January 2015
310

Analýza procesů a návrh jejich optimalizace ve strojírenském podniku / Process analysis and proposal of their optimization in engineering company

Fikar, Vítězslav January 2011
This thesis is focused on a comparison of functional management and process management, especially their advantages and benefits. The use of process management is introduced through a practical example: a process analysis carried out on selected processes in an engineering company, resulting in a detailed description of these processes. The next step is to identify shortcomings and mistakes in these processes and to find solutions for them. The result is an evaluation of these solutions and of their benefits for the company, followed by a proposal for introducing the solutions into the company's processes.
